halid | lang | domain | timestamp | year | url | text
---|---|---|---|---|---|---
01757250 | en | ["spi.meca.geme", "spi.auto"] | 2024/03/05 22:32:10 | 2018 | https://hal.science/hal-01757250/file/ASME_JMR_2018_Hao_Li_Nayak_Caro_HAL.pdf | Guangbo Hao
email: g.hao@ucc.ie
Haiyang Li
Abhilash Nayak
Stéphane Caro
Design of a Compliant Gripper With Multimode Jaws
Introduction
A recent and significant part of the paradigm shift brought forth by the industrial revolution is the miniaturization of electromechanical devices. Along with the reduction of size, miniaturization reduces cost, energy and material consumption. On the other hand, the fabrication, manipulation and assembly of miniaturized components are difficult and challenging. Grippers are tools for grasping various objects and have been extensively used in different fields such as material handling, manufacturing and medical devices [START_REF] Verotti | A Comprehensive Survey on Microgrippers Design: Mechanical Structure[END_REF]. Traditional grippers are usually composed of rigid-body kinematic joints, which have issues associated with friction, wear and clearance/backlash [START_REF] Nordin | Controlling mechanical systems with backlash-a survey[END_REF]. Those issues lead to poor resolution and repeatability of the grasping motion, which makes the high-precision manipulation of miniaturized components challenging. Traditional grippers are not only extremely difficult to use for gripping sub-micrometre objects such as optical fibres and micro-lenses, but also for gripping brittle objects such as powder granules. This is because the minimal incremental motion (i.e., resolution) of the jaw in a traditional gripper is usually larger than the radius of the micro-object, or is already large enough to break the brittle object. Figure 1 shows a parallel-jaw gripper as an example. Although advanced control can be used to improve the gripper's resolution, the improvement is marginal compared to the resulting high complexity and increased cost [START_REF] Nordin | Controlling mechanical systems with backlash-a survey[END_REF].
Figure 1: Comparison of traditional parallel-jaw gripper's resolution and size/deformation of objects
Although mechanisms are often composed of rigid bodies connected by joints, compliant mechanisms that include flexible elements as kinematic joints can be utilised to transmit a load and/or motion. The advances in compliant mechanisms have provided a new direction to address the above rigid-body problems easily [START_REF] Howell | Compliant Mechanisms[END_REF]. Eliminating rigid-body kinematic joints directly removes friction, wear and backlash, enabling very high-precision motions. In addition, compliant mechanisms can be assembly-free, so that miniaturization and monolithic fabrication are easily obtained. There are mainly two approaches to design compliant grippers. The first one is the optimisation method [START_REF] Zhu | Topology optimization of hinge-free compliant mechanisms with multiple outputs using level set method[END_REF] and the second is the kinematic substitution method [START_REF] Hao | Conceptual designs of multi-degree of freedom compliant parallel manipulators composed of wire-beam based compliant mechanisms[END_REF]. The former optimizes the material distribution to meet specified motion requirements, and includes topology optimization, geometry optimization and size optimization. However, the optimization result is often sensitive to manufacturing, so a minor fabrication error can largely change the output motion [START_REF] Wang | Design of Multimaterial Compliant Mechanisms Using Level-Set Methods[END_REF]. Also, the kinematics of a gripper obtained by optimization is not intuitive to engineers. The latter design method is very interesting since it takes advantage of the large number of existing rigid-body mechanisms and the associated knowledge. It provides a very clear kinematic meaning, which helps shorten the design process. Parallel or closed-loop rigid-body architectures gain an upper hand here as their intrinsic properties favour desirable characteristics of compliant mechanisms such as compactness, symmetry to reduce parasitic motions, low stiffness along the desired degrees of freedom (DOF) and high stiffness in the other directions.
Moreover, compliant mechanisms usually work around a given (mostly singular) position for a small range of motions (instantaneous motions). Therefore, parallel singular configurations existing in parallel manipulators may be advantageously exploited [START_REF] Amine | Classification of 3T1R parallel manipulators based on their wrench graph[END_REF][START_REF] Maraje | Operation modes comparison of a reconfigurable 3-PRS parallel manipulator based on kinematic performance[END_REF][START_REF] Nurahmi | Dimensionally homogeneous extended jacobian and condition number[END_REF][START_REF] Rubbert | Using singularities of parallel manipulators for enhancing the rigid-body replacement design method of compliant mechanisms[END_REF][START_REF] Rubbert | Design of a compensation mechanism for an active cardiac stabilizer based on an assembly of planar compliant mechanisms[END_REF][START_REF] Zlatanov | Constraint Singularities as C-Space Singularities[END_REF]. A parallel singularity can be an actuation singularity, a constraint singularity or a compound singularity, as explained in [START_REF] Amine | Classification of 3T1R parallel manipulators based on their wrench graph[END_REF][START_REF] Maraje | Operation modes comparison of a reconfigurable 3-PRS parallel manipulator based on kinematic performance[END_REF][START_REF] Nurahmi | Dimensionally homogeneous extended jacobian and condition number[END_REF]. Rubbert et al. used an actuation singularity to type-synthesize a compliant medical device [START_REF] Rubbert | Using singularities of parallel manipulators for enhancing the rigid-body replacement design method of compliant mechanisms[END_REF][START_REF] Rubbert | Design of a compensation mechanism for an active cardiac stabilizer based on an assembly of planar compliant mechanisms[END_REF]. Another interesting kind of parallel singularity, one that does not depend on the choice of actuation, is a constraint singularity [START_REF] Zlatanov | Constraint Singularities as C-Space Singularities[END_REF]. Constraint singularities may divide the workspace of a parallel manipulator into different operation modes, resulting in a reconfigurable mechanism. Algebraic geometry tools have proved to be efficient in performing global analysis of parallel manipulators and recognizing their operation modes leading to mobility reconfiguration [START_REF] Husty | Algebraic methods in mechanism analysis and synthesis[END_REF][START_REF] Nurahmi | Reconfiguration analysis of a 4-RUU parallel manipulator[END_REF][START_REF] He | Design and Analysis of a New 7R Single-Loop Mechanism with 4R, 6R and 7R Operation Modes[END_REF]. The resulting mobility reconfiguration can enable different modes of grasping in grippers. Thus, the reconfigurable compliant gripper unveils an ability to grasp a plethora of shapes or adapt to specific requirements, unlike other compliant grippers in the literature that exhibit only one (mostly the parallel mode) of these grasping modes [START_REF] Beroz | Compliant microgripper with parallel straight-line jaw trajectory for nanostructure manipulation[END_REF][START_REF] Hao | Design and static testing of a compact distributed-compliance gripper based on flexure motion[END_REF].
Though there are abundant reconfigurable rigid-body mechanisms in the literature, the study of reconfigurable compliant mechanisms is limited. Hao studied the mobility and structure reconfiguration of compliant mechanisms [START_REF] Hao | Mobility and Structure Re-configurability of Compliant Mechanisms[END_REF], while Hao and Li introduced a position-space based structure reconfiguration approach for the reconfiguration of compliant mechanisms and the minimization of parasitic motions [START_REF] Hao | Position space-based compliant mechanism reconfiguration approach and its application in the reduction of parasitic motion[END_REF]. Note that in rigid-body mechanisms, using the underactuated/adaptive grasping method [START_REF] Birglen | Self-adaptive mechanical finger and method[END_REF][START_REF] Laliberté | Underactuation in robotic grasping hands[END_REF], a versatile gripper adapting to different shapes can be achieved. In this paper, one of the simplest yet most ubiquitous parallel mechanisms, a four-bar linkage, is considered at a constraint singularity configuration to design a reconfigurable compliant four-bar mechanism and then to construct a reconfigurable compliant gripper. To the best of our knowledge, this is the first piece of work that considers a constraint singularity to design a reconfigurable compliant mechanism with multiple operation modes, also called motion modes. The remainder of this paper is organised as follows. Section 2 describes the design of a multi-mode compliant four-bar mechanism and conducts the associated kinematic analysis. The multi-mode compliant gripper is proposed in Section 3 based on the work presented in Section 2, followed by the analytical kinetostatic modelling. A case study is discussed in Section 4, which shows the analysis of the gripper under different actuation schemes. Section 5 draws the conclusions.
Design of a multi-mode compliant four-bar mechanism
2.1 Compliant four-bar mechanism at its singularity position
A comprehensive singularity and operation mode analysis of a parallelogram mechanism is reported in [START_REF] Nayak | A Reconfigurable Compliant Four-Bar Mechanism with Multiple Operation Modes[END_REF], using algebraic geometry tools. As a specific case of the parallelogram mechanism, a four-bar linkage with equilateral links, as shown in Fig. 2, is used in this paper, where the link length is l. Link AD is fixed, AB and CD are the cranks and BC is the coupler. The origin of the fixed frame, O0, coincides with the centre of link AD, while that of the moving frame, O1, coincides with the centre of BC. The bar/link BC is designated as the output link, with AB/CD as the input link. The location and orientation of the coupler with respect to the fixed frame can be denoted by (a, b, ϕ), where a and b are the Cartesian coordinates of point O1 attached to the coupler, and ϕ is the orientation of the latter about the z0-axis, i.e., the angle between the x0 and x1 axes. The two constraint singularity positions of the equilateral four-bar linkage are identified in Fig. 3 [START_REF] Nayak | A Reconfigurable Compliant Four-Bar Mechanism with Multiple Operation Modes[END_REF]. At a constraint singularity, the mechanism may switch from one operation mode to another. In the case of the four-bar linkage with equal link lengths, the DOF at a constraint singularity is equal to 2. In this configuration, points A, B, C and D are collinear and the corresponding motion type is a translational motion along the normal to the line ABCD, combined with a rotation about an axis directed along z0 and passing through the line ABCD. Finally, it is noteworthy that two actuators are required to control the end-effector at those constraint singularities and to manage the operation mode change. Based on the constraint singularity configuration of the four-bar rigid-body mechanism represented in Fig. 3, a compliant four-bar mechanism can be designed by kinematically replacing the rigid rotational joints with compliant rotational joints [START_REF] Howell | Compliant Mechanisms[END_REF][START_REF] Hao | Conceptual designs of multi-degree of freedom compliant parallel manipulators composed of wire-beam based compliant mechanisms[END_REF] at the singularity position. Note that the singularity position (Fig. 3(a)) is the undeformed (home) configuration of the compliant mechanism. Each of the compliant rotational joints can be of any type, such as a cross-spring rotational joint, a notch rotational joint or a cartwheel rotational joint [START_REF] Howell | Compliant Mechanisms[END_REF].
In this section, the cross-spring rotational joint is employed as the rotational/revolute joint (RJ) to synthesize the design of a reconfigurable compliant four-bar mechanism based on the identified singularity (Fig. 4). In Fig. 4, RJ-0 and RJ-2 are traditional cross-spring rotational joints, while both RJ-1 and RJ-3 are double cross-spring joints. Each of the two joints, RJ-1 and RJ-3, consists of two traditional cross-spring rotational joints in series with a common rotational axis and a secondary stage (encircled in Fig. 4). This serial arrangement creates symmetry and allows for greater motion and less stress in the mechanism. It should be mentioned that these joints allow large-amplitude motions compared to notch joints, which thus serves to illustrate the present concept easily. Note that these joints are not as compact and simple as circular notch joints (with associated manufacturing and precision issues). In addition, the parasitic rotational shift of these joints is minimized if the beams intersect at an appropriate position along their length [START_REF] Henein | The art of flexure mechanism design[END_REF].
We specify that Bar-0 is fixed to the ground and Bar-2 is the output motion stage, also named the coupler. Bar-1, Bar-2 and Bar-3 correspond to links CD, BC and AB, respectively, in Fig. 3. The link length can be expressed as
l = LB + LR (1)
where LB and LR are the lengths of each bar block and each compliant rotational joint, respectively, as indicated in Fig. 4.
Like the rigid-body four-bar mechanism shown in Fig. 3(a), the output motion stage (Bar-2) of the compliant four-bar mechanism has multiple operation modes under two rotational actuations (controlled by two input angles α and β for Bar-1 and Bar-3, respectively), as shown in Fig. 4. However, the compliant four-bar mechanism has more operation modes than its rigid counterpart, as discussed below. A moving coordinate system (o-xyz) is defined in Fig. 4; it is located on Bar-2 (link BC) and coincides with the fixed frame (O-XYZ) in the singularity position. Based on this assumption, the operation modes of Bar-2 of the compliant four-bar mechanism are listed below:
a) Operation mode I: rotation in the XY-plane about Axis-L, when α ≠ 0 and β = 0.
b) Operation mode II: rotation in the XY-plane about Axis-R, when α = 0 and β ≠ 0. In this mode, the cross-spring joints can tolerate this constrained configuration (close enough to the singularity) thanks to the induced small elastic extension of the joints, but they no longer work as ideal revolute joints.
c) Operation mode III: general rotation in the XY-plane about axes other than Axis-L and Axis-R, when α ≠ β. Similar to the constraint in operation mode II, the cross-spring joints in this mode no longer work as ideal revolute joints.
d) Operation mode IV: pure translational motion in the XY-plane along the X- and Y-axes (mainly along the Y-axis), when α = β.
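For clarity, the mode-selection logic above can be summarised in a short sketch (Python; the zero-angle tolerance is an assumed numerical value and the function name is ours, not from the paper). It simply maps a pair of input angles (α, β) to the operation mode defined in the list above.

```python
def operation_mode(alpha, beta, tol=1e-9):
    """Classify the operation mode of Bar-2 from the input angles (rad).

    The modes follow the definitions in Section 2.1; `tol` is an assumed
    numerical tolerance for treating an angle as zero.
    """
    if abs(alpha) <= tol and abs(beta) <= tol:
        return "home (singular) configuration"
    if abs(beta) <= tol:
        return "I: rotation about Axis-L"              # alpha != 0, beta = 0
    if abs(alpha) <= tol:
        return "II: rotation about Axis-R"             # alpha = 0, beta != 0
    if abs(alpha - beta) <= tol:
        return "IV: pure translation in the XY-plane"  # alpha = beta
    return "III: general rotation about another axis"  # alpha != beta, both nonzero

print(operation_mode(0.02, 0.0))    # -> mode I
print(operation_mode(0.01, 0.01))   # -> mode IV
```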
It is noted that the rotational axes associated with α and β are both fixed axes (as indicated by solid lines in Fig. 4); while Axis-L and Axis-R (as indicated by dashed lines in Fig. 4) are both mobile axes. The rotational axis of α is the rotational axis of joint RJ-0, and the rotational axis of β is the rotational axis of joint RJ-3. Axis-L is the rotational axis of joint RJ-2, which moves as Bar-3 rotates. Axis-R is the rotational axis of joint RJ-1, which moves as Bar-1 rotates. As shown in Fig. 3, in the initial singular configuration, Axis-L overlaps with the axis of α; and Axis-R lies in the plane spanned by Axis-L and the Axis of β.
It should also be pointed out that operation modes II and III can achieve large-range motion thanks to the particular rotational joints used, which may no longer be true for circular notch joints.
These operation modes are also highlighted in Fig. 5, with verification through the printed prototype. In order to simplify the analysis, let α and β be non-negative in Fig. 5. The primary motions of the output motion stage (Bar-2) are the rotation in the XY-plane and the translations along the X- and Y-axes, while the rotations in the XZ and YZ planes and the translational motion along the Z-axis are parasitic motions that are not of interest in this paper.
Kinematic models
The kinematics of the compliant four-bar mechanism is discussed as follows, under the assumption of small angles (i.e., close to the constraint singularity). According to the definition of the location and orientation of Bar-2 (link BC) with respect to the fixed frame, the primary displacements of Bar-2 can be expressed as:
1) Displacement of the centre of Bar-2 along the Y-axis:
b − 0 = l (sin α + sin β)/2 ≈ l (α + β)/2 for small input angles (2)
2) Rotation of Bar-2 about the Z-axis:
ϕ − 0 = α − β (3)
Under the small-angle assumption, the displacement of the centre of Bar-2 along the X-axis is of second order in the rotational angles; it is therefore negligibly small and is neglected in this paper. Note that this small displacement is also affected by the centre drift of the compliant rotational joints [START_REF] Zhao | A novel compliant linear-motion mechanism based on parasitic motion compensation[END_REF].
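As a minimal numerical illustration of Eqs. (2) and (3) (Python; the input angles are illustrative values chosen here, while the link length is the value later given in Table 1), the exact and small-angle forms of the coupler displacement can be compared as follows:

```python
import numpy as np

l = 25e-3                                        # link length in metres (Table 1)
alpha, beta = np.radians(2.0), np.radians(1.0)   # illustrative input angles, not from the paper

# Eq. (2): Y-displacement of the centre of Bar-2, exact and small-angle forms
b_exact = l * (np.sin(alpha) + np.sin(beta)) / 2
b_small = l * (alpha + beta) / 2

# Eq. (3): rotation of Bar-2 about the Z-axis
phi = alpha - beta

print(f"b exact = {b_exact*1e3:.4f} mm, b small-angle = {b_small*1e3:.4f} mm")
print(f"phi = {np.degrees(phi):.2f} deg")
```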
Design of a multi-mode compliant gripper
3.1 Compliant gripper with multiple modes
In this section, a multi-mode compliant gripper using the compliant four-bar mechanism presented in Fig. 4 as a gripper jaw mechanism is proposed (shown in Fig. 7). Instead of the cross-spring joints used in the compliant four-bar mechanism, the commonly used rectangular short beams are adopted as the rotational joints (with the rotation axis approximately at their centre) for the final gripper designed in this paper, as shown in Fig. 7(a). The reason for using the rectangular short beams is mainly twofold. Firstly, compared with the cross-spring joints, the rectangular joints are compact, simple and easy to fabricate. Secondly, the rectangular joints have a larger motion range than the circular notch joints (as used in Appendix A). In addition, using the rectangular short beams allows the joints to deviate from pure rotational behaviour, as discussed in Section 2.1.
In order to make the whole mechanism more compact, the compliant gripper is a two-layer structure with two linear actuators controlling the two rotational displacements (α and β) in each jaw. The top-layer actuator determines β, and the bottom-layer actuator determines α. The design of the compliant gripper is further explained in Fig. 8, with all dominant geometrical parameters labelled except the identical out-of-plane thickness (u) of each layer and the gap (g) between the two layers.
As can be seen in Figs.
Kinetostatic modelling
Under the assumption of small rotations, the relationship between the linear actuation and the rotational actuation in the slider-crank mechanism (the left jaw is taken for study) can be modelled as follows:
α = −ab/r, or ab = −α r (8)
β = −at/r, or at = −β r (9)
where at and ab represent the input displacements of the top and bottom actuators along the X-axis, respectively. The minus sign means that a positive linear actuation causes a negative rotational actuation (based on the coordinate system illustrated in Fig. 8). Here, r is the lever arm shown in Fig. 8.
The rotational displacement of RJ-4 in the added slider-crank mechanism can be approximately obtained as follows. The rotational displacement of RJ-5 in each layer can be ignored due to the specific configuration of the added slider-crank mechanism shown in Fig. 8, where the crank parallel to the Y-axis is perpendicular to the coupler, so that the coupler remains approximately straight over the motion under the condition of small rotations. Combining Eqs. (2)-(9), the input-output kinematic equations of the compliant gripper can be obtained:
b = −l (at + ab)/(2r) (12)
ϕ = (at − ab)/r (13)
As indicated in Eqs. (12) and (13), the amplification ratio is a function of the design parameter r denoted in Fig. 8. Using the above kinematic equations, the kinetostatic models of the compliant gripper can be derived from the principle of virtual work [START_REF] Howell | Compliant Mechanisms[END_REF], with at and ab being the generalised coordinates.
Ft dat + Fb dab = dU = (∂U/∂at) dat + (∂U/∂ab) dab (14)
where Ft and Fb represent the actuation forces of the top and bottom linear actuators along the X-axis, corresponding to at and ab, respectively. U is the total elastic potential energy of the compliant gripper, which is calculated as below:
U = (1/2)k0 θ0² + (1/2)k1 θ1² + (1/2)k2 θ2² + (1/2)k3 θ3² + (1/2)k4 θ4t² + (1/2)k4 θ4b² + (1/2)kp at² + (1/2)kp ab² (15)
where θi denotes the rotation of joint RJ-i (θ4t and θ4b being the rotations of RJ-4 in the top and bottom layers); expressing these joint rotations in terms of at and ab through Eqs. (2)-(13) makes U a quadratic function of the two generalised coordinates at and ab.
where k0, k1, k2, k3 correspond to the rotational stiffnesses of RJ-0, RJ-1, RJ-2, RJ-3 in the compliant four-bar mechanism, respectively. k4 is the rotational stiffness of the RJ-4 in each layer. kp is the translational stiffness of the prismatic joint in each layer.
Note that the reaction forces from gripped objects [START_REF] Hao | Design and static testing of a compact distributed-compliance gripper based on flexure motion[END_REF] can be included in Eq. (14); they are, however, not considered in this paper.
Combining the results of Eqs. (14) and (15) with the substitution of Eqs. (4)-(11), we have
Ft = ∂U/∂at, a linear function of at and ab whose coefficients involve k1, k2, k3, k4, kp and the lever arm r (16)
Fb = ∂U/∂ab, a linear function of at and ab whose coefficients involve k0, k2, k4, kp and the lever arm r (17)
Equations (16) and (17) determine the required forces for given input displacements and can be rearranged in matrix form:
Ft = k11 at + k12 ab,  Fb = k21 at + k22 ab (18)
where the stiffness coefficients of the system associated with the input forces and input displacements are
k11, k12 (= k21) and k22, obtained from Eqs. (16) and (17), are functions of the joint stiffnesses k0-k4, kp and of the lever arm r.
Therefore, the input displacements can be expressed with respect to the input forces as:
at = c11 Ft + c12 Fb,  ab = c21 Ft + c22 Fb, where the compliance coefficients [cij] form the inverse of the stiffness matrix [kij] (19)
We can further obtain the following stiffness equations for all compliant joints used in this paper [START_REF] Hao | Designing a monolithic tip-tilt-piston flexure manipulator[END_REF], which can be substituted into Eqs. (18) and (19) to solve the load-displacement equations.
ki = EI/lb = Eut³/(12 lb) for each rotational joint (k0-k4), where lb is the length of the corresponding short beam (l1 or l2 in Table 1); the translational stiffness kp of the prismatic joint is obtained from the same beam properties E, u, t and beam length (20)
where E is the Young's modulus of the material and I is the second moment of area of the beam cross-section.
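As a rough numerical illustration (a minimal sketch; assigning the beam lengths l1 and l2 of Table 1 to particular joints is an assumption made here for illustration only), the short-beam stiffness Eut³/(12L) can be evaluated with the case-study material and cross-section values:

```python
# Rough numerical check of the short-beam joint stiffness k = E*I/L = E*u*t**3/(12*L).
# E, u, t and the candidate beam lengths are taken from Table 1 of the case study;
# which beam length belongs to which joint is an assumption for illustration only.
E = 2.4e9      # Young's modulus of polycarbonate, Pa
u = 10e-3      # out-of-plane thickness of each layer, m
t = 1e-3       # in-plane thickness of the short beams, m

I = u * t**3 / 12                  # second moment of area of the beam cross-section, m^4
for L in (5e-3, 15e-3):            # candidate beam lengths l1 and l2, m
    k = E * I / L                  # rotational stiffness, N*m/rad
    print(f"L = {L*1e3:.0f} mm -> k = {k:.3f} N*m/rad")
# -> about 0.40 N*m/rad for L = 5 mm and 0.13 N*m/rad for L = 15 mm
```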
With the help of Eqs. (12), (13) and (19), we can obtain the output displacements for given input displacements/forces:
b = −l (at + ab)/(2r),  ϕ = (at − ab)/r, with at and ab given by Eq. (19) for prescribed input forces (21)
Note that in the above kinetostatic modelling, a linear assumption is made. However, in order to capture accurate load-dependent kinetostatic characteristics over a larger range of motion, nonlinear analytical models should be used [START_REF] Ma | Modeling large planar deflections of flexible beams in compliant mechanisms using chained beam-constraint-Model[END_REF].
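To make the input-output workflow of this section concrete, the following minimal sketch (Python/NumPy) chains Eqs. (8)-(9), (12)-(13) and the matrix relations (18)-(19). The numerical entries of the stiffness matrix are placeholders chosen for illustration, not the coefficients derived above, and the input forces are arbitrary example values.

```python
import numpy as np

l = 25e-3     # link length, m (Table 1)
r = 18e-3     # lever arm, m (Table 1)

def outputs_from_inputs(a_t, a_b):
    """Jaw output displacement b and rotation phi from Eqs. (12)-(13)."""
    b = -l * (a_t + a_b) / (2 * r)
    phi = (a_t - a_b) / r
    return b, phi

# Stiffness matrix of Eq. (18); the entries below are placeholders for illustration,
# not the coefficients derived in the paper.
K = np.array([[2.0e3, 0.5e3],
              [0.5e3, 1.5e3]])          # N/m

F = np.array([0.2, 0.1])                # input forces [Ft, Fb], N (illustrative)
a_t, a_b = np.linalg.solve(K, F)        # Eq. (19): input displacements, m

b, phi = outputs_from_inputs(a_t, a_b)
print(f"at = {a_t*1e3:.3f} mm, ab = {a_b*1e3:.3f} mm")
print(f"b  = {b*1e3:.3f} mm, phi = {np.degrees(phi):.3f} deg")
```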
Case study
In this section, a case study with the assigned parameters shown in Table 1 is presented to verify the analytical models in Section 3.2. Finite element analysis (FEA) simulation was carried out to show the four operation modes of the compliant gripper, in comparison to the 3D-printed prototype (Fig. 9). Here, Solidworks 2017, with a meshing size of 1.0 mm and the other settings left at their defaults, was used for FEA. Figure 10 illustrates the comprehensive kinetostatic analysis results of the proposed compliant gripper, including the comparison between the analytical modelling and FEA. It can be observed that linear relations are revealed in all figures, where the lines of either model are parallel to each other. The FEA results have the same changing trends as the analytical models, but deviate from them to a certain degree. The discrepancy between the two theoretical models may be due to the assumptions used in the analytical modelling, such as neglecting the centre drift of the rotational joints. It should be pointed out that, although stress analysis is not the interest of this paper, the maximal stress observed in FEA (29 MPa) is much less than the yield strength of the material in all simulations for the case study.
In Fig. 10(a), with the increase of ab, the difference between the two models (analytical and FEA) goes up if at = 0, while the difference between the two models decreases if at = 0.50 mm and 1.0 mm. This is because of the different line slopes of the two models. Generally speaking, the larger at, the larger the deviation between the two models, where the maximal difference in Fig. 10(a) is about 20%. In Fig. 10(b), the line slopes of the two models are almost the same, meaning that the increase of ab has no influence on the discrepancy between the two models for any value of at. Also, the larger at, the larger the deviation between the two theoretical models. A real prototype made of polycarbonate was fabricated using CNC milling, as shown in Fig. 11(a). Each layer of the compliant gripper (Fig. 7) was made first and then the two layers were assembled together. The gripper prototype was tested in a custom-built testing rig, as shown in Fig. 11(b). Single-axis force loading was implemented for both the top-layer and bottom-layer actuation. The micrometer imposes a displacement on the force sensor, which is in direct contact with the top-layer or bottom-layer input. The force sensor reads the force exerted on one input of the gripper, and the two displacement sensors indicate the displacements of the two inputs. The testing results are illustrated in Fig. 12 and compared with the analytical models and FEA results. It is shown that the analytical displacement result is slightly larger than the experimental result, but slightly lower than the FEA result. The difference among the three models is also reasonable, given that FEA always treats all parts as elastic bodies (i.e., a less rigid system) and the testing is performed on a prototype with fabricated fillets (i.e., a more rigid system).
Conclusions
In this paper, we present the first piece of work that employs a constraint singularity of a planar equilateral four-bar linkage to design a reconfigurable compliant gripper with multiple operation modes. The analytical kinetostatic model of the multi-mode compliant gripper has been derived and verified in the case study. The case study shows that the FEA results agree with the analytical models with an acceptable discrepancy. The proposed gripper is expected to serve a wide range of applications, such as grasping a variety of shapes or adapting to specific requirements.
The design introduced in this paper uses a two-layer configuration for desirable compactness, which adversely results in undesired small out-of-plane rotations. However, the out-of-plane rotations can be reduced by optimising the currently used compliant joints or by employing different compliant joints with higher out-of-plane stiffness. Note that the compliant gripper can be designed in a single layer for monolithic fabrication and elimination of out-of-plane motion, at the cost of a larger footprint. This can be done by using a remote rotation centre, as shown in Appendix A. A single-layer gripper can be more easily designed at micro-scale on a silicon layer for MEMS devices.
Despite the work mentioned above, there are other aspects to be considered in the future, including but not limited to the following: (i) an analytical nonlinear model for more accurate large-range kinetostatic modelling of the compliant gripper; (ii) design optimisation of the compliant gripper for a specific application; (iii) output testing with comparison to the analytical model for a specific application; and (iv) developing a control system to robotise the compliant gripper.
(a) Resolution of a jaw: ∆Jaw; (b) diameter of a micro-object: Dmicro; (c) diameter of a brittle object: Dbrittle, with a small breaking deformation ∆b. Gripping fails when ∆Jaw > Dmicro/2 or ∆Jaw > ∆b/2 (while Dbrittle can be larger than 2∆Jaw).
Figure 2: A planar equilateral four-bar linkage
Figure 3: Constraint singular configuration of the planar equilateral four-bar linkage
Figure 4: A compliant four-bar mechanism at its constraint singular position (as fabricated)
Figure 6: The generic kinematic configuration (close to constraint singularity) of the four-bar linkage
Figure 7: The synthesized multi-mode compliant gripper
Figure 8: Design details of the multi-mode compliant gripper
(a) Top layer with two slider-crank mechanisms (indicated by dashed squares); (b) bottom layer with two slider-crank mechanisms (indicated by dashed squares)
Figure 10(c) reveals the same conclusion as that of Fig. 10(b). Figure 10(d) shows a similar finding to that of Fig. 10(a), except that the larger at, the smaller the deviation between the two models. It is clearly shown that in Figs. 10(b), 10(c) and 10(d) the general discrepancy between the two models is much lower than that in Fig. 10(a).
Figure 9: Gripper operation modes under input displacement control
(a) Monolithic design of the reconfigurable compliant gripper (b) Actuating the linear actuator 1 only in bi-directions
Table 1: Geometrical parameters
The overall nominal dimension of the compliant gripper is 130 mm × 70 mm. The Young's modulus of the compliant gripper material is E = 2.4 GPa, which corresponds to polycarbonate, with a yield strength of σs > 60 MPa and a Poisson's ratio of ν = 0.38.
| l | t | r | h | l1 | l2 | w1 | w2 | u | g |
|---|---|---|---|---|---|---|---|---|---|
| 25 mm | 1 mm | 18 mm | 25 mm | 5 mm | 15 mm | 24 mm | 19 mm | 10 mm | 3 mm |
Acknowledgment
The authors would like to thank Mr. Tim Power and Mr. Mike O'Shea from University College Cork for the excellent prototype fabrication work as well as their kind assistance in the experimental testing. The work in this paper was funded by an IRC Ulysses 2015/2016 grant.
In order to reach the constraint singularity, i.e. to allow the overlapping of two revolute joints, the design uses isosceles trapezoidal flexure joints with a remote centre of rotation. However, this one-layer mechanism has quite a large footprint and requires extra space for incorporating the two actuators. Moreover, due to the use of circular notch joints, operation modes II and III in this gripper may not produce large-range motion. The modelling and analysis of the present monolithic design in this appendix are left for future study.
(e) Fabricated prototype |
01757257 | en | ["spi.nano"] | 2024/03/05 22:32:10 | 2018 | https://laas.hal.science/hal-01757257/file/allsensors_2018_2_30_78013.pdf | Aymen Sendi
email: aymen.sendi@laas.fr
Grérory Besnard
email: gbesnard@laas.fr
Philippe Menini
email: menini@laas.fr
Chaabane Talhi
email: talhi@laas.fr
Frédéric Blanc
email: blanc@laas.fr
Bernard Franc
email: bfranc@laas.fr
Myrtil Kahn
email: myrtil.kahn@lcc-toulouse.fr
Katia Fajerwerg
email: katia.fajerwerg@lcc-toulouse.fr
Pierre Fau
email: pierre.fau@lcc-toulouse.fr
Sub-ppm Nitrogen Dioxide (NO2) Sensor Based on Inkjet Printed CuO on Microhotplate with a Pulsed Temperature Modulation
Keywords: NO2, CuO nanoparticles, temperature modulation, gas sensor, selectivity.
Nitrogen dioxide (NO2), a toxic oxidizing gas, is considered among the main pollutants found in the atmosphere and in indoor air as well. Since long-term or short-term exposure to this gas is deleterious for human health, its detection is an urgent need that requires the development of efficient and cost-effective methods and techniques. In this context, copper oxide (CuO) is a good candidate that is sensitive and selective to NO2 at sub-ppm concentrations. In this work, CuO nanoparticles have been deposited by inkjet printing technology on a micro-hotplate that can be operated up to 500°C at low power consumption (55 mW). The optimum detection capacity is obtained thanks to a temperature modulation (two consecutive temperature steps from 100°C to 500°C), during which the sensing resistance is measured. Thanks to this operating mode, we report in this study a very simple method for data processing and exploitation in order to obtain good selectivity for nitrogen dioxide over a few interfering gases. Only four parameters extracted from the sensor response allow us to make an efficient discrimination between individual or mixed gases in a humid atmosphere.
I. INTRODUCTION
Humans spend more than 90% of their time in closed environments, even though this indoor environment contains a wide variety of pollutants [START_REF] Lévesque | Indoor air quality[END_REF] [START_REF] Namiesnik | Pollutants, their sources, and concentration levels[END_REF].
Indoor air pollution is a real health threat, so measuring indoor air quality is important for protecting health from chemical and gaseous contaminants. Nitrogen dioxide (NO2) is a dangerous pulmonary irritant [START_REF] Lévesque | Indoor air quality[END_REF]. NO2 is generated by multiple combustion sources in indoor air, such as smoking and heaters, but it also comes from outside air (industrial sources, road traffic) [START_REF] Cadiergues | Indoor air quality[END_REF]. NO2 may cause adverse effects such as shortness of breath, asthma attacks and bronchial obstructions [START_REF] Koistinen | The INDEX project: executive summary of a European Union project on indoor air pollutants[END_REF]. It is also classified as toxic by the International Agency for Research on Cancer (IARC) [START_REF] Loomis | The International Agency for Research on Cancer (IARC) evaluation of the carcinogenicity of outdoor air pollution: focus on China[END_REF], hence the development of sensors for accurate NO2 detection is an acute need. Among sensing techniques, metal oxide (MOX) gas sensors are promising candidates because of their high performance in terms of sensitivity on top of their low production cost. The copper oxide (CuO) material is highly studied because of its high sensitivity and its ability to detect oxidizing gaseous compounds, but also other indoor air pollutants, such as acetaldehyde (C2H4O), formaldehyde (CH2O), NO2, CO, etc. However, CuO suffers from a major disadvantage, which is the lack of selectivity with respect to the targeted gas.
In this study, our main objective is to develop an innovative and simple pulsed-temperature operating mode associated with an efficient data processing technique, which enables good selectivity toward NO2 in gas mixtures. This technique is based on a few parameters extracted from the dynamic response of the sensor to temperature changes in a gaseous environment. These parameters are: the normalized sensing resistance and the values of the slope at the origin, the intermediate slope and the final slope of the response to NO2 against different reference gases, such as C2H4O, CH2O and moist air. The selectivity to NO2 was examined with respect to moist air at 30% humidity, C2H4O at 0.5 ppm, CH2O at 0.5 ppm and binary mixtures of these gases with 0.3 ppm of each.
In Section II of the paper, we describe the materials and methods used in our work. Section III presents our results and the discussion. We conclude this work in Section IV.
II. MATERIALS AND METHODS
The sensitive layer, made of CuO nanoparticles, is deposited by inkjet printing on a silicon microhotplate [START_REF] Dufour | Technological improvements of a metal oxide gas multi-sensor based on a micro-hotplate structure and inkjet deposition for an automotive air quality sensor application[END_REF]. The ink is prepared with 5 wt% CuO, dispersed in ethylene glycol in an ultrasonic bath for about one hour. The dispersions obtained were allowed to settle for 24 hours. The final ink was collected and then used for printing with Altadrop equipment, in which the number of deposited ink drops was controlled [START_REF] Dufour | Technological improvements of a metal oxide gas multi-sensor based on a micro-hotplate structure and inkjet deposition for an automotive air quality sensor application[END_REF]. This technique is simple and allowed us to obtain reproducible layer thicknesses of a few micrometers, depending on the number of deposited drops. In addition, this technique permits a precisely localized deposit without the need for additional complex photolithographic steps [START_REF] Morozova | Thin TiO2 films prepared by inkjet printing of the reverse micelles sol-gel composition[END_REF]. The CuO layer is finally annealed in ambient air from room temperature to 500°C (rate 1°C/min), followed by a plateau at 500°C for 1 hour before cooling to room temperature (1°C/min). This initial temperature treatment is necessary because CuO requires operating temperatures between 100°C and 500°C. The thermal pretreatment is needed to generate ionized oxygen species in atomic or molecular form at the oxide surface and therefore to improve the reactivity between the reacting gas and the sensor surface [START_REF] Dufour | Technological improvements of a metal oxide gas multi-sensor based on a micro-hotplate structure and inkjet deposition for an automotive air quality sensor application[END_REF].
In this study, we have used a pulsed temperature profile, presented in a previously published work, which showed that optimized sensitivity can be achieved with the use of two different temperature stages at 100°C and 500°C, respectively. This dual-temperature protocol also reduces the total power consumption of the device (see Figure 1). The CuO sensor was placed in a 250 ml test chamber and the test conditions were as follows:
- A flow rate of 200 ml/min, controlled by a digital flowmeter.
-A relative humidity (RH) level of 30% is obtained by bubbling synthetic air flow controlled by a mass flow controller.
-The measuring chamber is at ambient temperature, controlled by a temperature sensor placed inside the vessel.
-A bias current is applied to the sensitive layer, controlled by a Source Measure Unit (SMU).
We started a test with single gas injections after a two-hour stabilization phase in humid air, followed by injections of binary mixtures, each lasting 16 min. Moist air is injected for 32 min between two successive gas injections. This time is enough to clean the chamber and stabilize the sensor to its baseline.
The gas injection concentrations are summarized in Table 1. A schematic representation of these injections is presented in Figure 2. Throughout the experiment (stabilization phase, gas injections and stages between two successive gases), the sensor is powered by a square voltage signal applied to the heater in order to obtain two temperature steps, as shown in Figure 1. To ensure a constant overall flow, we adapted the duration of the gas injection sequences in correlation with the heating signal period (see Figure 3).
The resistance variation of CuO is measured under a fixed supply current of 100 nA in order to obtain voltage measurements in the range of a few volts, far from the compliance limit of the measuring device (20 V). We also verified that this procedure (temperature cycling) does not affect the sensor reproducibility in terms of baseline or sensitivity.
Under such test conditions, we achieved a continuous 6.5-hour testing period without observing any drift in the raw sensor response. The sampling period is 500 ms, which gives 60 points on a 30-second response step, this acquisition rate being sufficient for accurate data processing. Finally, we analyzed the sensor responses at each step, according to the different gas injections, using a simple data processing method in order to obtain the best selectivity to NO2.
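As an illustration of how such a record can be segmented before analysis, the following minimal sketch (Python/NumPy; the array name, the synthetic data and the helper function are assumptions consistent with the protocol described above) reshapes a resistance trace sampled every 500 ms into 30-second temperature steps and keeps the last cycle of an injection:

```python
import numpy as np

SAMPLE_PERIOD_S = 0.5      # 500 ms sampling period
STEP_DURATION_S = 30.0     # one temperature step (100 or 500 degC)
POINTS_PER_STEP = int(STEP_DURATION_S / SAMPLE_PERIOD_S)   # 60 points per step
STEPS_PER_CYCLE = 2        # one 500 degC step and one 100 degC step
CYCLES_PER_INJECTION = 16  # 16 modulation periods per 16-min injection

def last_cycle_steps(resistance):
    """Split the raw resistance trace of one injection into temperature steps
    and return the two steps of the last (stabilized) cycle.

    `resistance` is a 1-D array of length
    CYCLES_PER_INJECTION * STEPS_PER_CYCLE * POINTS_PER_STEP.
    """
    steps = np.asarray(resistance).reshape(
        CYCLES_PER_INJECTION, STEPS_PER_CYCLE, POINTS_PER_STEP)
    return steps[-1]       # shape (2, 60): last cycle, both steps

# Example with synthetic data (placeholder values, not measurements):
fake_trace = np.random.default_rng(0).normal(
    1e6, 1e4, CYCLES_PER_INJECTION * STEPS_PER_CYCLE * POINTS_PER_STEP)
print(last_cycle_steps(fake_trace).shape)   # -> (2, 60)
```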
III. RESULTS AND DISCUSSION
A. Method of analysis
During each gas injection, 16 periods of temperature modulation are applied. After verifying the reproducibility of the sensor responses along these cycles, we only present here the responses of the last cycle, which is stabilized and reproducible from one cycle to another. As mentioned previously, we used new, simple data treatment methods to obtain the best selectivity toward NO2 with respect to several interfering gases. Among the multiple possible criteria, we chose representative variables that take into account the dynamic sensor behavior during a change of gaseous conditions and during a temperature pulse; these criteria are obtained from the slopes of the sensor resistance during the gas response on each 30-second step. The data analysis relies on the decomposition of the response into three distinct domains (see Figure 4):
- Starting slope: from the 1st point to the 10th point (in yellow),
- Intermediate slope: from the 10th point to the 30th point (in red),
- Final slope: from the 30th point to the 60th point (in black).
The normalized resistance is measured from the last point of each step: it is the difference between the resistance of the sensor under the targeted gas(es) and the resistance of the sensor under a reference gas (like moist air), normalized by the latter and expressed as a percentage, at the final cycle of each injection:
Rn = ((Rgas − Rair) / Rair) × 100 (1)
By treating these four parameters, a good selectivity to NO2 can be obtained with respect to moist air, C2H4O and CH2O.
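A minimal sketch of this four-parameter extraction (Python/NumPy; the variable names and the use of a least-squares fit for the slopes are assumptions, since the fitting method is not specified above) could look as follows:

```python
import numpy as np

SAMPLE_PERIOD_MS = 500.0   # sampling period in ms

def step_features(r_step, r_air_last):
    """Extract the four criteria from one 60-point, 30-s resistance step.

    r_step     : resistances (Ohms) of the 60 samples of the step under gas
    r_air_last : last-point resistance (Ohms) of the same step under moist air
    The slopes are least-squares fits in Ohms/ms over the three domains.
    """
    r_step = np.asarray(r_step, dtype=float)
    t_ms = np.arange(r_step.size) * SAMPLE_PERIOD_MS

    def slope(i0, i1):
        return np.polyfit(t_ms[i0:i1], r_step[i0:i1], 1)[0]

    starting = slope(0, 10)        # 1st to 10th point
    intermediate = slope(9, 30)    # 10th to 30th point
    final = slope(29, 60)          # 30th to 60th point
    r_n = (r_step[-1] - r_air_last) / r_air_last * 100.0   # Eq. (1)
    return starting, intermediate, final, r_n

# Example with synthetic values (placeholders, not measured data):
rng = np.random.default_rng(1)
gas_step = 1.0e6 + 50.0 * np.arange(60) + rng.normal(0, 100, 60)
print(step_features(gas_step, r_air_last=1.0e6))
```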
B. Slope at the origin
The slope at the origin is calculated on the first 10 points of each temperature step of the 8th cycle of each injection. Regarding the reference gas (moist air), we took the response of the 8th cycle of the last sequence under humid air before the injection of the gases. The values of these slopes (in Ohms/ms) are shown in Figure 5; each bar represents the value of the slope at the origin for each gas at 500 and 100°C. Figure 5 clearly shows that this parameter enables us to differentiate NO2 from the other reference gases through measurements on the plateau at 100°C. Regarding the other step at 500°C, we note that the response is almost zero, because the transition from a cold to a hot state decreases the detection sensitivity and therefore reduces the sensor's resistance variations under gas. We also note that, with this criterion, there is a significant difference between the value of the starting slope under NO2 and those under the other gases and mixtures without NO2.
C. Intermediate slope
The intermediate slope is calculated from the 10th point to the 30th point of each temperature step of the last cycle during each gas injection and compared with the value under air. The calculated values are shown in Figure 6. According to Figure 6, the value of the intermediate slope for an individual NO2 injection, or for the gas mixtures containing NO2, is not prominent compared with the other reference gases, even on the plateau at 100°C. This parameter is therefore less effective than the previous one for detecting NO2 in gas mixtures.
D. Final slope
The final slope is the slope of the second half of the gas response; it is calculated between the 30th point and the 60th point. The response for the different temperature steps of the last cycle during the NO2 injection and the reference gas injections is presented in Figure 7. It is worth noting that the values of the final slopes for NO2, or for gaseous mixtures containing NO2, are very different from those of the other reference gases on the two stages at 100°C only. This parameter allows us to discriminate NO2 with respect to the other interfering gases.
E. Normalized resistance
As previously presented, the normalized resistance is calculated with respect to humid air for each last-cycle step. The reference resistance used is the resistance of each stage of the last wet-air sequence before the gas injection. The results obtained from these calculations are shown in Figure 8. Figure 8 shows a slight variation in the values of the normalized resistance between two similar and successive temperature stages for the same gases. This slight variation can be explained by the data dispersion, which is +/-2%, due to the fact that the normalized resistance is calculated from the raw values during the gas injection and the raw resistance values during the injections of humid air, which may be slightly different between two similar and successive temperature stages.
The CuO sensor response to sub-ppm NO2 levels, when injected individually or in combination with another gas, is specific when the measurements are taken on the low-temperature plateau at 100°C.
IV. CONCLUSION
The selectivity and sensitivity of our CuO sensor have been studied with different operating modes and simple analysis methods. A specific temperature modulation was applied to the metal oxide, with the use of temperature steps at 500 and 100°C. The response of the CuO sensitive layer toward gases representative of indoor air pollution (C2H4O, CH2O, NO2, humid air) has been studied. These responses were analysed through several parameters: the slope of the resistance variation at the origin, the intermediate slope, the final slope and the normalized resistance measured at each temperature step. The study of these different parameters shows that the CuO material is able to detect sub-ppm levels of NO2 with a good selectivity compared to different interfering gases. To further improve the selectivity of the gas sensor device to a larger variety of polluting gases, we plan to integrate these CuO sensors in a multichip system, which will allow us to use in parallel new metal oxide layers with specific temperature profiles and data analysis criteria.
Figure 1. CuO temperature profile.
Figure 2. Synoptic representation of a sequence of gas injections.
Figure 3. Diagram of a gas sequence.
Figure 4. Representative diagram of a gas sensor response during an injection cycle, showing the 3 domain slopes.
Figure 5. Representation of the slopes at the origin under different gases at the 8th cycle, in Ohms/ms, according to the temperature of the sensors.
Figure 6. Representation of the intermediate slopes under different gases at the 8th cycle, in Ohms/ms, according to the temperature of the sensors.
Figure 7. Representation of the final slopes under different gases at the 8th cycle, in Ohms/ms, according to the temperature of the sensors.
Figure 8. The normalized resistance of the different gas injections at the 8th cycle according to the temperature.
ACKNOWLEDGMENT
The authors express their gratitude to neOCampus, the university project of the Paul Sabatier University of Toulouse, for the financial support and the Chemical Coordination Laboratory of Toulouse for the preparation of the CuO nanoparticle powder. This work was also partly supported by the French RENATECH network. |
01618268 | en | ["chim.anal", "chim.mate", "spi.gproc"] | 2024/03/05 22:32:10 | 2018 | https://hal.science/hal-01618268/file/M%20Hijazi%20Sensors%20and%20Actuators%20B%202018.pdf | Mohamad Hijazi
Mathilde Rieu
email: rieu@emse.fr
Valérie Stambouli
Guy Tournier
Jean-Paul Viricelle
Christophe Pijolat
Ambient temperature selective ammonia gas sensor
Keywords: SnO2, Functionalization, APTES, Ammonia gas, Room temperature detection
Introduction
Breath analysis is considered a noninvasive and safe method for the detection of diseases [START_REF] Broza | Combined Volatolomics for Monitoring of Human Body Chemistry[END_REF].
Gas sensors have been shown to be promising devices for selective gas detection related to disease diagnosis [START_REF] Adiguzel | Breath sensors for lung cancer diagnosis[END_REF][START_REF] Righettoni | Breath analysis by nanostructured metal oxides as chemo-resistive gas sensors[END_REF]. These sensors can be used to detect the gases emanating from the human body.
For example, ammonia is a disease marker for liver problems. Indeed, ammonia in humans is converted to urea in the liver and then passes into the urine through the kidneys, while unconverted ammonia is excreted in breath at a level of about 10 ppb in healthy subjects [START_REF] Capone | Solid state gas sensors: state of the art and future activities[END_REF]. The ammonia concentration increases in case of malfunctioning of the liver and kidneys, reaching more than 1 ppm in the presence of renal failure [START_REF] Dubois | Breath Ammonia Testing for Diagnosis of Hepatic Encephalopathy[END_REF][START_REF] Güntner | Selective sensing of NH3 by Si-doped α-MoO3 for breath analysis[END_REF].
SnO2 sensors have been well investigated from a very long time [START_REF] Yamazoe | Effects of additives on semiconductor gas sensors[END_REF][START_REF] Lalauze | A new approach to selective detection of gas by an SnO2 solid-state sensor[END_REF][START_REF] Watson | The tin oxide gas sensor and its applications[END_REF], since they can detect many gases with high sensitivity and low synthesis cost [START_REF] Barsan | Metal oxide-based gas sensor research: How to?[END_REF][START_REF] Korotcenkov | Gas response control through structural and chemical modification of metal oxide films: state of the art and approaches[END_REF]. The interactions of SnO2 material with gases have been extensively studied [START_REF] Gong | Interaction between thin-film tin oxide gas sensor and five organic vapors[END_REF][START_REF] Wang | Metal Oxide Gas Sensors: Sensitivity and Influencing Factors[END_REF]. The chemical reactions of target gases with SnO2 particle surface can generate variations in their electrical resistances. SnO2 is n-type semiconductor, in this case, the adsorbed oxygen on particle surface takes electrons from the conduction band at elevated temperature, generating depletion layer between the conduction band and the surface (space-charge region). Reducing gases such as CO are oxidized on the surface, then they consume the adsorbed surface oxygen giving back the electrons to the conduction band. This decrease in the depletion layer decreases the resistance of whole film [START_REF] Barsan | Conduction mechanism switch for SnO2 based sensors during operation in application relevant conditions; implications for modeling of sensing[END_REF]. However, like other metal oxides, SnO2 sensors have lack of selectivity and operate at high temperature (350-500 °C), except if some particular activation with light for example, is carried out [START_REF] Anothainart | Light enhanced NO2 gas sensing with tin oxide at room temperature: conductance and work function measurements[END_REF][START_REF] Comini | UV light activation of tin oxide thin films for NO2 sensing at low temperatures[END_REF]. Many techniques were applied to enhance the selectivity such as (i) the addition of gas filter [START_REF] Tournier | Selective filter for SnO2-based gas sensor: application to hydrogen trace detection[END_REF], or small amount of noble metals [START_REF] Tian | A low temperature gas sensor based on Pd-functionalized mesoporous SnO2 fibers for detecting trace formaldehyde[END_REF][START_REF] Cabot | Analysis of the noble metal catalytic additives introduced by impregnation of as obtained SnO2 sol-gel nanocrystals for gas sensors[END_REF][START_REF] Trung | Effective decoration of Pd nanoparticles on the surface of SnO2 nanowires for enhancement of CO gas-sensing performance[END_REF], (ii) the use of oxides mixture [START_REF] Zeng | Enhanced gas sensing properties by SnO2 nanosphere functionalized TiO2 nanobelts[END_REF][START_REF] Van Hieu | Enhanced performance of SnO2 nanowires ethanol sensor by functionalizing with La2O3[END_REF], or hybrid film of SnO2 and organic polymers [START_REF] Geng | Characterization and gas sensitivity study of polyaniline/SnO2 hybrid material prepared by hydrothermal route[END_REF][START_REF] Bai | Gas sensors based on conducting polymers[END_REF]. 
Since several years, there is a high request to develop analytical tools which are able to work at temperature lower than 200 °C in order to incorporate them in plastic devices and to reduce the power consumption [START_REF] Camara | Tubular gas preconcentrators based on inkjet printed micro-hotplates on foil[END_REF][START_REF] Rieu | Fully inkjet printed SnO2 gas sensor on plastic substrate[END_REF]. For such devices, researchers are now focused on the development of room temperature gas sensors. The first reported metal oxide gas sensor was based on palladium nanowires for the detection H2 [START_REF] Atashbar | Room temperature gas sensor based on metallic nanowires[END_REF]. Concerning SnO2 gas sensors, the more recent studies at room temperature are related to NO2 detection [START_REF] Anothainart | Light enhanced NO2 gas sensing with tin oxide at room temperature: conductance and work function measurements[END_REF][START_REF] Comini | UV light activation of tin oxide thin films for NO2 sensing at low temperatures[END_REF] or to formaldehyde sensors [START_REF] Tian | A low temperature gas sensor based on Pd-functionalized mesoporous SnO2 fibers for detecting trace formaldehyde[END_REF].
As exposed previously, for breath analysis, there is also a high demand for ammonia sensors working at room temperature. Among the current studies, it can be mentioned the sensors using tungsten disulfide (WS2) [START_REF] Li | WS2 nanoflakes based selective ammonia sensors at room temperature[END_REF] or carbon nanotubes (CNTs) [START_REF] Van Hieu | Highly sensitive thin film NH3 gas sensor operating at room temperature based on SnO2/MWCNTs composite[END_REF] as sensing material, and especially the ones based on reduced grapheme oxide (RGO) [START_REF] Sun | Facile preparation of polypyrrole-reduced graphene oxide hybrid for enhancing NH3 sensing at room temperature[END_REF][START_REF] Tran | Reduced graphene oxide as an over-coating layer on silver nanostructures for detecting NH3 gas at room temperature[END_REF][START_REF] Su | NH3 gas sensor based on Pd/SnO2/RGO ternary composite operated at room-temperature[END_REF], or on polyaniline (PANI) [START_REF] Kumar | Flexible room temperature ammonia sensor based on polyaniline[END_REF][START_REF] Bai | Polyaniline@SnO2 heterojunction loading on flexible PET thin film for detection of NH3 at room temperature[END_REF][START_REF] Abdulla | Highly sensitive, room temperature gas sensor based on polyaniline-multiwalled carbon nanotubes (PANI/MWCNTs) nanocomposite for trace-level ammonia detection[END_REF][START_REF] Khuspe | SnO2 nanoparticles-modified polyaniline films as highly selective, sensitive, reproducible and stable ammonia sensors[END_REF].
Molecular modification of metal oxide by organic film is another way to enhance the selectivity and to decrease the sensing temperature (room temperature gas sensors) [START_REF] Matsubara | Organically hybridized SnO2 gas sensors[END_REF][START_REF] Wang | Effect of Functional Groups on the Sensing Properties of Silicon Nanowires toward Volatile Compounds[END_REF][START_REF] Wang | Artificial Sensing Intelligence with Silicon Nanowires for Ultraselective Detection in the Gas Phase[END_REF]. The need of selective sensors with high sensitivity in presence of humidity at low gases concentration pushes the research to modify SnO2 sensing element in order to change its interaction with gases. The modifications with organic functional groups having different polarities could change the sensor response to specific gases (e.g. ammonia) depending on their polarity [START_REF] Wang | Effect of Functional Groups on the Sensing Properties of Silicon Nanowires toward Volatile Compounds[END_REF]. In the literature, a functionalization based on APTES (3-aminopropyltriethoxysilane) combined with hexanoyl chloride or methyl adipoyl chloride was investigated on silicon oxide field effect transistors [START_REF] Wang | Artificial Sensing Intelligence with Silicon Nanowires for Ultraselective Detection in the Gas Phase[END_REF].
These devices were tested under some aliphatic alcohol and alkane molecules. The functionalized field effect transistors showed responses to a wide variety of volatile organic compounds such as alcohols, alkanes, etc. The response of such sensors to gases derives from the change in the electrostatic field of the molecular layer, which can generate charge carriers at the silicon field effect transistor interface. Interactions of volatile organic compounds with the molecular layer can take place in two ways: the first is adsorption on the surface of the molecular layer and the second is diffusion between the molecular layers [START_REF] Wang | Effect of Functional Groups on the Sensing Properties of Silicon Nanowires toward Volatile Compounds[END_REF][START_REF] Wang | Artificial Sensing Intelligence with Silicon Nanowires for Ultraselective Detection in the Gas Phase[END_REF].
However, these field effect transistor sensors is still lacking of selectivity.
The aim of the functionalization performed in this work was to passivate the surface states of the SnO2 sensors with a molecular layer, in order to optimize their interactions with ammonia gas. One SnO2 sensor was coated with molecules bearing mostly nonpolar (functional) side groups (SnO2-APTES-alkyl), and two others were coated with molecules bearing mostly polar side groups (SnO2-APTES and SnO2-APTES-ester), so as to probe their interactions with ammonia, which is a polar molecule. This was explored by testing the different functionalized sensors under ammonia, the target gas. The changes in the response of the modified sensors were compared with pure SnO2. Another objective is to reduce the power consumption by decreasing the operating temperature, since the sensor will later be used in smart devices and portable applications.
In the present work, we focus on the change in sensitivity and selectivity of SnO2 sensors after functionalization with amine (APTES), alkyl (CH3), and ester (COOCH3) end functional groups.
The sensors are first functionalized with APTES, followed by covalent attachment of molecules bearing alkyl or ester end functional groups. The functionalization is characterized by FTIR analysis, and the detection performances of the resulting sensors are then investigated with regard to ammonia detection.
Experimental
Fabrication of SnO2 sensors
Thick SnO2 films were deposited on alumina substrates by screen-printing technology, using a semiautomatic Aurel C890 machine. The procedure for preparing the SnO2 ink and the sensor fabrication parameters have been described elsewhere [START_REF] Tournier | Selective filter for SnO2-based gas sensor: application to hydrogen trace detection[END_REF]. SnO2 powder (Prolabo Company) was first mixed with a solvent and an organic binder. The SnO2 ink was then screen-printed on an alpha-alumina substrate (38×5×0.4 mm 3 ) provided with two gold electrodes deposited by reactive sputtering. The SnO2 material was finally annealed for 10 h at 700 °C in air. The film thickness was about 40 microns. SnO2 particle and agglomerate sizes were between 10 nm and 500 nm. A photographic image of the two sensor faces is presented in Fig. 1.
Molecular modifications of SnO2 sensor
The functionalization was carried out by a two-step process. In the first step, 3-aminopropyltriethoxysilane (APTES, ACROS Organics) molecules were grafted on SnO2 (silanization). Silanization in liquid phase has been described elsewhere [START_REF] Le | A Label-Free Impedimetric DNA Sensor Based on a Nanoporous SnO2 Film: Fabrication and Detection Performance[END_REF]. The SnO2 sensors were immersed in 50 mM APTES dissolved in 95% absolute ethanol and 5% distilled water for 5 h under stirring at room temperature. The hydroxyl groups present on the surface of SnO2 allow the condensation of APTES. To remove the unbound APTES molecules, the sensors were rinsed with absolute ethanol and dried under N2 flow (sensor SnO2-APTES). In a second step, the SnO2-APTES sensors were immersed for 12 h under stirring in a solution of 10 mM hexanoyl chloride (98%, Fluka, alkyl: C6H11ClO) or methyl adipoyl chloride (96%, Alfa Aesar, ester: C7H11ClO3) and 5 µL of triethylamine (Fluka) in 5 mL of chloroform as solvent. The terminal amine groups of APTES allow the coupling reaction with molecules bearing an acyl chloride group.
The sensors were then rinsed with chloroform and dried under N2 flow (sensors SnO2-APTES-alkyl and SnO2-APTES-ester). The functionalization of the SnO2 sensors leads to the covalent attachment of amine, ester, and alkyl end functional groups. A schematic illustration of the two-step functionalization is given in Fig. 2.
Characterization of molecularly modified SnO2
Modified molecular layers were characterized by Attenuated Total Reflectance-Fourier Transform Infrared spectroscopy (ATR-FTIR), the sample being placed face-down on the diamond crystal and a force being applied by a pressure tip. FTIR spectra were recorded in the wavelength range from 400 to 4000 cm -1 with a scanning resolution of 2 cm -1 . All ATR-FTIR spectra were collected using a Golden Gate Diamond ATR accessory (Bruker Vertex 70).
Sensing measurements of modified sensors
In the test bench, the sensor was installed in an 80 cm 3 glass chamber under a constant gas flow of 15 l/h. The bench was equipped with gas mass flow controllers, which allow the concentrations of the different gases to be controlled simultaneously from a PC. A cylinder of NH3 gas (300 ppm, diluted in nitrogen) was purchased from Air Products and Chemicals Company. In addition, the bench was equipped with a water bubbler to test the sensors under different relative humidity (RH) levels balanced with air at 20 °C. Conductance measurements were performed with an electronic unit based on a voltage divider circuit and a 1 V generator, which allows the conductance of the SnO2 film to be measured. Before injecting the target gas into the test chamber, the sensors were kept for 5 h at 100 °C and then stabilized under air flow for 5 h at 25 °C. Gas sensing properties were then measured for different concentrations of ammonia gas balanced with air at different RH. The normalized conductance was plotted to monitor the sensor response as well as to calculate response and recovery times. The normalized conductance is defined as G/G0, where G is the conductance at any time and G0 is the conductance at the beginning of the test (i.e. t = 0).
Response and recovery times are defined hereafter as the times needed to reach 90% of the steady-state sensor response. In addition, the relative response, (GN - GA)/GA (with GN the conductance after 20 min of ammonia injection and GA the conductance under air with 5%RH), was plotted versus ammonia concentration for the different sensors to compare their sensitivity. Sensitivity is defined as the slope of the calibration curve. The limit of detection (LOD) was also evaluated; it corresponds to a signal equal to 3 times the standard deviation of the conductance baseline noise.
Values above the LOD indicate the presence of target gas. The selectivity of SnO2-APTES-ester was tested versus acetone and ethanol gases.
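For clarity, the way these quantities can be extracted from a measured conductance trace is summarized in the short Python sketch below. It is only illustrative: the function name, its interface and the sampling assumptions are ours and are not part of the experimental setup described above.

```python
import numpy as np

def response_metrics(t, G, t_on, t_off, G_air):
    """Compute the quantities defined above from a conductance time series.

    t      : time stamps in seconds (numpy array)
    G      : measured conductance in ohm^-1 (same length as t)
    t_on   : time at which ammonia is injected
    t_off  : time at which ammonia is removed
    G_air  : baseline conductance under air with 5% RH
    """
    G0 = G[0]
    G_norm = G / G0                          # normalized conductance G/G0
    GN = G[t <= t_off][-1]                   # conductance at the end of the exposure step
    rel_resp = (GN - G_air) / G_air          # relative response (GN - GA)/GA

    # response time: time needed to reach 90% of the steady-state change after injection
    target = G_air + 0.9 * (GN - G_air)
    during = (t >= t_on) & (t <= t_off)
    if GN >= G_air:
        reached = np.where(G[during] >= target)[0]
    else:
        reached = np.where(G[during] <= target)[0]
    t_response = t[during][reached[0]] - t_on if reached.size else np.nan
    return G_norm, rel_resp, t_response
```

The recovery time can be computed in the same way on the portion of the trace recorded after t_off.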
Results and Discussions
Characterization of modified molecular layers
Molecular characterization of the grafted films on the SnO2 sensors was carried out by ATR-FTIR. Spectra of SnO2 (red curve), SnO2-APTES (green curve), SnO2-APTES-alkyl (blue curve), and SnO2-APTES-ester (black curve) are presented in Fig. 3 between 800 and 4000 cm -1 . All the sensors showed similar features in the range between 1950 and 2380 cm -1 , which is not exploitable as it corresponds to CO2 contributions from ambient air. Regarding the first step of functionalization, the attachment of APTES on SnO2, the most significant absorption bands were found between 800 and 1800 cm -1 (Fig. 3, green curve). The peak at 938 cm -1 is attributed to the Sn-O-Si bond in stretching mode. The ethoxy groups of APTES hydrolyze and react with the hydroxyl groups present on the surface of the SnO2 grains. In addition, the hydrolyzed ethoxy groups react with hydroxyls of neighboring grafted APTES molecules, which leaves the SnO2 surface covered with a siloxane network [START_REF] Kim | Formation, structure, and reactivity of aminoterminated organic films on silicon substrates[END_REF]. This feature appears as the wide band between 978 cm -1 and 1178 cm -1 , attributed to siloxane groups (Si-O-Si) from polymerized APTES. The -NH3 + and -NH2 vibrational signals of SnO2-APTES are found at 1496 cm -1 and 1570 cm -1 , respectively. Unlike for pure SnO2, CH2 stretch peaks found at 2935 cm -1 for all modified sensors are related to the backbone of the attached molecules. These peaks thus confirm the presence of the characteristic features of APTES on the surface of SnO2.
The second step of the sensor modification was the attachment, on SnO2-APTES, of a film terminated with alkyl or ester groups. These modifications were carried out by reaction of amines with acyl chlorides, which produces one equivalent of acid; this acid forms a salt with the unreacted amine of APTES and diminishes the yield. Triethylamine base was added to neutralize this acid and push the reaction forward. In the FTIR spectra of Fig. 3 (blue and black curves), the alkyl and ester sensors exhibit two peaks at 1547 cm -1 and 1645 cm -1 , which correspond to the carbonyl stretch mode and the N-H bending mode of the amide, respectively. An additional broad peak between ~3000 and ~3600 cm -1 corresponds to the N-H stretch of the amide. These peaks confirm the success of the reaction between the amine group of APTES and the acyl chloride groups.
The asymmetrical C-H stretching mode of CH3 appears at 2965 cm -1 for SnO2-APTES-alkyl and SnO2-APTES-ester. The stretching peak of the carbon double-bonded to oxygen in the ester group of SnO2-APTES-ester is found at 1734 cm -1 (Fig. 3, black curve). These results show that the SnO2 sensors are modified as expected with alkyl and ester end groups.
In conclusion, the FTIR analysis confirms that the functionalization is effectively achieved on SnO2, by showing the presence of attached APTES molecules on SnO2 after silanization, as well as the presence of ester and alkyl molecules on SnO2-APTES after reaction with the acyl chloride products.
Sensing measurements of different functionalized SnO2 sensors
The first part of the tests under gases aimed to show the characteristics of the response of the different sensors to ammonia gas. SnO2, SnO2-APTES, SnO2-APTES-alkyl and SnO2-APTES-ester sensors were tested under 100 ppm of ammonia balanced with 5% RH air at 25 °C. The four sensor responses (normalized conductance) are reported in Fig. 4.

Fig. 4. The sensor response of SnO2 (G0 = 1.4×10 -5 Ω -1 ), SnO2-APTES (G0 = 7.9×10 -6 Ω -1 ), SnO2-APTES-alkyl (G0 = 1.5×10 -5 Ω -1 ), and SnO2-APTES-ester (G0 = 9.5×10 -6 Ω -1 ) to 100 ppm ammonia gas balanced with humid air (5%RH) at 25 °C.
SnO2 sensor
First, it should be mentioned that the conductance of SnO2 at room temperature is measurable, as reported by Yu et al. [START_REF] Yu | Selective CO gas detection of CuO-and ZnO-doped SnO2 gas sensor[END_REF]. Indeed, stoichiometric SnO2 is known to be an insulator at room temperature, but the SnO2 sensitive film used here contains defects, and the experiments were performed at 5%RH, which leads to the formation of hydroxyl groups adsorbed on the SnO2 surface. These two effects explain the measurable conductance baseline of pure SnO2 under air balanced with 5%RH. Furthermore, the conductance remained measurable even when the test was switched to dry air, as the hydroxyl groups stayed adsorbed at room temperature.
The conductance of pure SnO2 decreases upon exposure to ammonia gas (Fig. 4). This type of response was reported by Khun Khun et al. [START_REF] Khun Khun | SnO2 thick films for room temperature gas sensing applications[END_REF] at temperatures between 25 and 200 °C. They proposed that ammonia reacts with molecularly adsorbed oxygen ions (O2 - , created as shown in Eq. (1)), producing nitrogen monoxide gas (NO) according to Eq. (2). In the presence of oxygen and at low temperature, NO is easily transformed into NO2, which is a very good oxidizing agent (Eq. (3)). The reaction of NO2 with SnO2 at ambient temperature causes the decrease of the sensor conductance: NO2 adsorbs on SnO2 surface adsorption sites (s) and withdraws electrons from the conduction band (Eq. (4)) [START_REF] Maeng | SnO2 Nanoslab as NO2 Sensor: Identification of the NO2 Sensing Mechanism on a SnO2 Surface[END_REF]. Thus, according to published results, the overall reaction of ammonia with SnO2 at room temperature could be written as in Eq. (5). Such a mechanism is consistent with a conductance decrease upon ammonia exposure; however, no direct experimental proof of this mechanism is available here.
Eq. (1): O2 + s + e - → s-O2 -
Eq. (2): 4NH3 + 5 s-O2 - → 4NO + 6H2O + 5s + 5e -
Eq. (3): 2NO + O2 ↔ 2NO2 (equilibrium in air)
Eq. (4): NO2 + s + e - → s-NO2 -
Eq. (5): 4NH3 + 4s + 7O2 + 4e - → 4 s-NO2 - + 6H2O
where s is an adsorption site.
SnO2-APTES sensor
When the SnO2-APTES sensor was exposed to ammonia gas, no response was observed. The formation of the APTES film on SnO2 prevents water molecules from adsorbing on the surface, because the active sites of SnO2 are occupied by the O-Si bonds of APTES and because of the hydrophobic nature of the APTES film. Therefore, in the following discussion, the conventional mechanism of interaction of SnO2 with gases cannot be taken into consideration, as no reactive sites are available. SnO2-APTES shows no change in conductance upon exposure to ammonia (Fig. 4), which implies that no significant interactions occur between the grafted APTES and ammonia gas. In terms of polarity and other chemical properties such as acidity, the amine and ammonia groups are very similar, since amines are derivatives of ammonia; such a result was therefore expected for SnO2-APTES. In addition, this behavior indicates that the SnO2 surface is well covered by APTES molecules, because the negative response observed on pure SnO2 is totally inhibited.
SnO2-APTES-alkyl and SnO2-APTES-ester sensors
SnO2-APTES-alkyl and SnO2-APTES-ester exhibit an increase in conductance upon exposure to 100 ppm of ammonia gas, as shown in Fig. 4, the response of SnO2-APTES-ester being larger than that of SnO2-APTES-alkyl. These responses can be related to the different polarities of the attached end groups. Indeed, Wang et al. [START_REF] Wang | Effect of Functional Groups on the Sensing Properties of Silicon Nanowires toward Volatile Compounds[END_REF][START_REF] Wang | Artificial Sensing Intelligence with Silicon Nanowires for Ultraselective Detection in the Gas Phase[END_REF] reported that the response of some functionalized sensitive films derives from the change in the electrostatic field of the attached molecular layer. Ester is a good electron-withdrawing group, while alkyl is mostly nonpolar. The ammonia molecule is a good nucleophile (electron donating), so the interaction with the ester end group takes place between an electron-withdrawing and an electron-donating group: a dipole-dipole interaction occurs. In the case of SnO2-APTES-alkyl, however, the interaction is of the induced-dipole type, because ammonia is polar while the alkyl end group is mostly nonpolar. The adsorption process likely occurs through interaction between the nitrogen of ammonia and the end functional group of the molecular layer (alkyl or ester). As reported by B. Wang et al. [START_REF] Wang | Effect of Functional Groups on the Sensing Properties of Silicon Nanowires toward Volatile Compounds[END_REF], the dipole-dipole interaction is always stronger than the induced-dipole interaction, which explains the difference in response between the SnO2-APTES-alkyl and SnO2-APTES-ester sensors to ammonia gas. As mentioned previously, the interaction can also result from diffusion of the gas into the molecular layer. This type of interaction is favorable only for SnO2-APTES-alkyl: it is difficult for ammonia molecules to diffuse into the molecular layer of SnO2-APTES-ester because of the steric hindrance induced by the ester end groups. This phenomenon can contribute to the response of SnO2-APTES-alkyl to ammonia, in addition to the induced-dipole interaction. These two interactions (dipole-dipole and induced-dipole) modify the dipole moment of the whole film. The variation of the molecular layer's dipole moment affects the electron mobility in the SnO2 film, which modifies the conductance [START_REF] Hoft | Effect of dipole moment on current-voltage characteristics of single molecules[END_REF][START_REF] Young | Effect of group and net dipole moments on electron transport in molecularly doped polymers[END_REF]. Exposure to ammonia leads to an increase in electron mobility (proportional to conductance). The same behavior was found for a selection of polar and non-polar gases, but on ester- and alkyl-functionalized silicon oxide substrates [START_REF] Wang | Effect of Functional Groups on the Sensing Properties of Silicon Nanowires toward Volatile Compounds[END_REF].
According to the above discussion, the response of the molecularly modified sensors does not obey the conventional mechanism of direct interaction with SnO2. The response arises mainly from the interaction of ammonia molecules with the end functional group of the attached layer: the response of SnO2-APTES-ester is generated by a dipole-dipole interaction, while the response of SnO2-APTES-alkyl is produced by an induced-dipole interaction, which has a less significant effect.
Sensors sensitivity
Regarding the sensitivity of the different sensors to ammonia concentration, Fig. 5 shows the relative responses versus ammonia concentration of SnO2-APTES-ester in comparison with the pure SnO2, SnO2-APTES and SnO2-APTES-alkyl sensors. Sensitivity is defined as the slope of the relative response versus ammonia concentration curve, i.e. how large the change in the sensor signal is for a given change in ammonia concentration. Pure SnO2 and SnO2-APTES sensors show almost no sensitivity to the different ammonia concentrations. SnO2-APTES-alkyl gives no significant response between 0.5 ppm and 30 ppm, but its sensitivity starts to increase above 30 ppm of ammonia. SnO2-APTES-ester exhibits a constant sensitivity of about 0.023 ppm -1 between 0.5 ppm and 10 ppm. Above this concentration, the sensor starts to saturate and the sensitivity continuously decreases, down to nearly zero at 100 ppm NH3. The sensitivity of SnO2-APTES-ester at concentrations higher than 30 ppm nevertheless remains larger than that of SnO2-APTES-alkyl. The calculated LOD for the ester-modified SnO2 is 80 ppb. To summarize, the SnO2-APTES-ester sensor shows a good sensitivity to ammonia gas in a concentration range compatible with breath analysis applications (sub-ppm). This sensor is studied in more detail in the following part.
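To illustrate how the sensitivity and the LOD quoted above are obtained in practice, a linear fit of the calibration curve over its linear range can be used. The sketch below uses placeholder data chosen only to be consistent with the reported figures (0.023 ppm -1 and 80 ppb); the concentrations, relative responses and noise level are assumptions, not the measured data of this work.

```python
import numpy as np

conc = np.array([0.5, 1.0, 2.0, 5.0, 10.0])             # ppm, linear range of SnO2-APTES-ester
rel_resp = 0.023 * conc                                   # placeholder relative responses

sensitivity, intercept = np.polyfit(conc, rel_resp, 1)    # sensitivity = slope of the calibration curve
baseline_noise = 6.1e-4                                   # assumed std of the baseline relative signal
lod_ppm = 3 * baseline_noise / sensitivity                # LOD: signal equal to 3x the baseline noise
print(f"sensitivity = {sensitivity:.3f} ppm^-1, LOD = {lod_ppm*1e3:.0f} ppb")
```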
Focus on SnO2-APTES-ester sensor
Effect of humidity on the response
As is well known, human breath contains a high amount of humidity (100% RH at 37 °C). The water molecule is highly polar and can therefore affect the response to ammonia gas. In order to check this effect, SnO2-APTES-ester was tested under ammonia gas at different relative humidity levels ranging from 5 to 50% RH. Figure 6 shows the response of SnO2-APTES-ester to 100 ppm of ammonia in dry and humid air at 25 °C. Upon exposure to ammonia, the sensor conductance increases in the four cases (dry air, 5%RH, 26%RH, and 50%RH) with fast response and recovery times. In dry air and at 5%RH, the sensor shows almost the same response magnitude, 1.46 and 1.45 respectively, which decreases to 1.25 and 1.06 at 26%RH and 50%RH respectively. This means that a small amount of relative humidity does not affect the sensor response, while at higher levels the response becomes less significant. A potential explanation is that the attached ester film is saturated by water molecules or adsorbed hydroxyl groups at high RH; hence, during exposure to ammonia, the response is limited by the adsorption competition between water and ammonia. For the subsequent tests, the humidity was kept at 5%RH. In these conditions, 5%RH at 25 °C, the response and recovery times (as defined in Section 2.4) are 98 s and 130 s respectively, which is quite remarkable for a tin-oxide-based sensor working at room temperature.
Effect of operating temperature
In most cases, increasing the operating temperature of SnO2 sensors increases their sensitivity to gases. In contrast, for SnO2-APTES-ester, increasing the temperature decreases the ammonia response. As shown in Fig. 7, the response of SnO2-APTES-ester to ammonia decreases from 1.45 (25 °C) to 1.18 and 1.06 when the operating temperature is increased to 50 °C and 100 °C, respectively. The interaction of ammonia with the ester grafted on SnO2 is therefore more significant at low temperature (25 °C) than at higher ones (50 and 100 °C). An interesting conclusion is thus that SnO2-APTES-ester can be operated close to room temperature (25 °C), without any heating power consumption, for the sensitive detection of ammonia.
Effect of ammonia concentration on sensor response
The influence of the ammonia concentration on the sensor response was studied under the optimum conditions defined previously, i.e. 5%RH at 25 °C. Figure 8 shows the change in conductance of SnO2-APTES-ester upon exposure to different concentrations of ammonia gas (0.2-100 ppm). The curve at low concentrations (Fig. 8a) was used to calculate the limit of detection (LOD) of the sensor to ammonia gas, which is around 80 ppb. Such a value confirms the potential of this sensor for ammonia detection in breath analysis applications.
Selectivity
Human breath contains a wide variety of volatile organic compounds, which can be polar or nonpolar, oxidizing or reducing. It is well known that SnO2 sensors, which usually operate at temperatures between 350 and 500 °C, respond to most of these gases, unfortunately without distinction (selectivity) or even with compensating effects for oxidizing/reducing gases [START_REF] Pijolat | Application of membranes and filtering films for gas sensors improvements[END_REF]. Field effect transistors functionalized with ester end groups have shown responses to a wide variety of volatile organic compounds such as alcohols and alkanes [START_REF] Wang | Artificial Sensing Intelligence with Silicon Nanowires for Ultraselective Detection in the Gas Phase[END_REF], which implies that these sensors also lack selectivity. To check the selectivity to ammonia of the developed SnO2-APTES-ester, such sensors were tested with respect to ethanol and acetone gases.
Figure 9 shows that the SnO2-APTES-ester sensors exhibit almost no change in conductance upon exposure to 50 ppm of acetone and ethanol at 25 °C. This means that the ester-modified sensor is relatively selective to ammonia, at least with regard to the two tested gases. This particular selectivity derives from the way the grafted layer on SnO2 interacts with ammonia gas. As mentioned previously, interactions occur between the ester end group, which is strongly electron withdrawing, and the ammonia molecule, which is electron donating. Hence, electrons are withdrawn by the attached ester end group from the ammonia molecules adsorbed on it during exposure, and these interactions lead to the significant response of the SnO2-APTES-ester sensor. Other molecules such as ethanol and acetone do not have this strong tendency to donate electrons to SnO2-APTES-ester, which explains the minor changes in conductance upon exposure to these gases.
Conclusion
Molecularly modified SnO2 thick films were produced by screen printing and wet chemical processes. The functionalizations were carried out by grafting 3-aminopropyltriethoxysilane, followed by reaction with hexanoyl chloride or methyl adipoyl chloride. Tests under gases were then performed. The pure SnO2 sensor and the APTES-modified SnO2 did not show any significant sensitivity to NH3 (0.5-100 ppm) at 5%RH and 25 °C, while the sensitivity of the alkyl-modified sensor to NH3 only starts to increase above 30 ppm. On the contrary, the ester-modified sensor exhibits fast response and recovery times to NH3, with a limit of detection estimated at 80 ppb at 5%RH and 25 °C. In addition, this sensor shows a constant sensitivity between 0.5 and 10 ppm of NH3 (0.023 ppm -1 ). Moreover, the ester-modified sensor is selective to NH3 with respect to reducing gases such as ethanol and acetone. However, relative humidity higher than 5% decreases the response. Working at room temperature, the ester-modified sensor may be a good candidate for breath analysis applications for the diagnosis of diseases related to the ammonia gas biomarker. Such a sensor could be coupled with a condenser to reduce the amount of humidity in the analyzed breath sample to 5%RH. The sensing mechanism of ester-modified SnO2 and its selectivity with regard to various volatile organic compounds will be investigated in further work.
Fig. 1. Photograph of the two SnO2 sensor sides deposited by screen printing.
Fig. 2. Schematic illustration of the SnO2-APTES, SnO2-APTES-alkyl, and SnO2-APTES-ester synthesis steps. Hexanoyl chloride corresponds to the molecule C5H8ClOR with R: CH3 and methyl adipoyl chloride to C5H8ClOR with R: COOCH3.
Fig. 3. ATR-FTIR spectra of SnO2 (red curve), SnO2-APTES (green curve), SnO2-APTES-alkyl (blue curve), and SnO2-APTES-ester (black curve) films.
Fig. 5. Relative response of pure SnO2, SnO2-APTES, SnO2-APTES-alkyl, and SnO2-APTES-ester sensors versus ammonia concentration, balanced with 5% RH air at 25 °C.
Fig. 6. Sensor response curves of SnO2-APTES-ester to 100 ppm of ammonia gas balanced with dry air, 5%RH, 26%RH, and 50%RH at 25 °C.
Fig. 7. Sensor response of SnO2-APTES-ester to 100 ppm ammonia gas balanced with humid air (5%RH) at 25 °C, 50 °C, and 100 °C.
Figure 8b also shows the stability of the sensor over time. Stability is a challenge for metal oxides, especially for room-temperature gas sensors. The present results show a fairly good baseline stability, with a drift of about 0.98% over 16 hours.
Fig. 8. Change in conductance of SnO2-APTES-ester upon exposure to different concentrations of ammonia gas in humid air (5%RH) at 25 °C: a) [NH3] ranging from 0.2 to 5 ppm and b) [NH3] ranging from 5 to 100 ppm.
Fig. 9. Sensor response of SnO2-APTES-ester upon exposure to 50 ppm of ammonia, ethanol, and acetone gases in humid air (5%RH) at 25 °C.
01757269 | en | [
"spi.meca.geme",
"spi.auto"
] | 2024/03/05 22:32:10 | 2016 | https://hal.science/hal-01757269/file/MMT_4RUU_Nurahmi_Caro_Wenger_Schadlbauer_Husty.pdf | Latifah Nurahmi
email: latifah.nurahmi@irccyn.ec-nantes.fr
Stéphane Caro
email: stephane.caro@irccyn.ec-nantes.fr
Philippe Wenger
email: philippe.wenger@irccyn.ec-nantes.fr
Josef Schadlbauer
email: josef.schadlbauer@uibk.ac.at
Manfred Husty
email: manfred.husty@uibk.ac.at
Reconfiguration analysis of a 4-RUU parallel manipulator
Introduction
To the best of the authors knowledge, the notion of operation mode was initially introduced by Zlatanov et al. in [START_REF] Zlatanov | Constraint Singularities as C-Space Singularities[END_REF] to explain the behaviour of the three degree-of-freedom (3-dof ) DYMO robot which can undergo a variety of transformation when passing through singular configurations. In [START_REF] Kong | Reconfiguration Analysis of a 3-DOF Parallel Mechanism Using Euler Parameter Quaternions and Algebraic Geometry Method[END_REF], the author analysed the types of operation modes and the transition configurations of the 3-RER 1 Parallel Manipulator (PM) based upon the Euler parameter quaternions. Walter et al. in [START_REF] Walter | A Complete Kinematic Analysis of the SNU 3-UPU Parallel Robot[END_REF] used the Study's kinematic mapping to show that the 3-UPU PM built at the Seoul National University (SNU) has nine different operation modes. Later in [START_REF] Walter | Kinematic Analysis of the TSAI 3-UPU Parallel Manipulator Using Algebraic Methods[END_REF], the authors revealed five different operation modes of the 3-UPU PM proposed by Tsai in 1996 [START_REF] Tsai | Kinematics of a 3-DOF Platform with Three Extensible Limbs[END_REF]. By using same approach, Schadlbauer et al. in [START_REF] Schadlbauer | The 3-RPS Parallel Manipulator from an Algebraic Viewpoint[END_REF] found two distinct operation modes of the 3-RPS PM proposed by Hunt in 1983 [START_REF] Hunt | Structural Kinematics of In-Parallel-Actuated Robot-Arms[END_REF]. Later in [START_REF] Schadlbauer | Operation Modes in Lower-Mobility Parallel Manipulators[END_REF], the authors characterized the motion type in both operation modes by using the axodes. The self-motions of this manipulator were classified in [START_REF] Schadlbauer | Self-Motions of 3-RPS Manipulators[END_REF]. Another PM of the 3-RPS family is the 3-RPS Cube PM and was proposed by Huang et al. in 1995 [10]. Nurahmi et al. in [START_REF] Nurahmi | Kinematic Analysis of the 3-RPS Cube Parallel Manipulator[END_REF][START_REF] Nurahmi | Motion Capability of the 3-RPS Cube Parallel Manipulator[END_REF] found that this manipulator has only one operation mode in which the 3-dof general motion and 1-dof Vertical Darboux Motion occur inside the same operation mode.
Accordingly, a general methodology for the type synthesis of reconfigurable mechanisms has been proposed and several new reconfigurable mechanisms have been generated. In [START_REF] Kong | Type Synthesis of Parallel Mechanisms with Multiple Operation Modes[END_REF][START_REF] Kong | Type Synthesis of Variable Degrees-of-Freedom Parallel Manipulators with Both Planar and 3T1R Operation Modes[END_REF], the authors proposed a general method based upon the screw theory to synthesize a PM that can perform two operation modes. In [START_REF] He | Kinematic Analysis of a Single-Loop Reconfigurable 7R Mechanism with Multiple Operation Modes[END_REF], a novel 1-dof single-loop reconfigurable 7-R mechanism with multiple operation modes based upon the Sarrus mechanism, was proposed. The following year, the reconfiguration analysis of this mechanism based on the kinematic mapping and the algebraic geometry method was presented in [START_REF] Kong | Type Synthesis and Reconfiguration Analysis of a Class of Variable-DOF Single-Loop Mechanisms[END_REF].
By using the theory of displacement groups, lower-mobility PMs with multiple operation modes and different numbers of dof were presented in [START_REF] Fanghella | Parallel Robots that Change Their Group of Motion[END_REF]. Refaat et al. in [START_REF] Refaat | Two-Mode Overconstrained Three DOFs Rotational-Translational Linear-Motor-Based Parallel-Kinematics Mechanism for Machine Tool Applications[END_REF] introduced a family of 3-dof PM that can exhibit two 1T1R modes by using Lie-group theory. In [START_REF] Gogu | Maximally Regular T2R1-Type Parallel Manipulators with Bifurcated Spatial Motion[END_REF], Gogu introduced several PM with two 2T1R modes. In [START_REF] Gan | Mobility Change in Two Types of Metamorphic Parallel Mechanisms[END_REF], a new joint was presented and added to the manipulator architecture, allowing the moving platform to change its motion type. By adding an rTPS limb, which has two phases, a new metamorphic parallel mechanism was introduced in [START_REF] Gan | Unified Kinematics and Singularity Analysis of a Metamorphic Parallel Mechanism with Bifurcated Motion[END_REF]. The link-coincidence-based geometric-constraint method was proposed in [START_REF] Dai | Mobility in Metamorphic Mechanisms of Foldable/Erectable Kinds[END_REF] to obtain reconfigurable mechanisms originating from carton folds and packaging, dating back to 1996. In the same year, Wohlhart [START_REF] Wohlhart | Kinematotropic linkages[END_REF] presented mechanisms that change their mobility through singularities.
In [START_REF] Li | Parallel Mechanisms with Bifurcation of Schoenflies Motion[END_REF], Li and Hervé investigated several PM with two distinct Schönflies modes. The Schönflies motion contains three independent translations and one pure rotation about an axis of fixed direction, namely 3T1R. The authors continued in [START_REF] Lee | Isoconstrained Parallel Generators of Schoenflies Motion[END_REF] to present the systematic approach to synthesize the iso-constrained parallel Schönflies motion generators with two identical 5-dof limbs.
The type synthesis of the 3T1R PM with four identical limb structures was performed in [START_REF] Kong | Type Synthesis of Parallel Mechanisms[END_REF], which leads to a kinematic architecture with four revolute actuators, namely the 4-RUU PM. In [START_REF] Masouleh | Solving the Forward Kinematic Problem of 4-DOF Parallel Mechanisms (3T1R) with Identical Limb Structures and Revolute Actuators Using the Linear Implicitization Algorithm[END_REF], eight solutions of the direct kinematics were enumerated by using the linear implicitization algorithm. Amine et al. in [START_REF] Amine | Singularity Analysis of the 4-RUU Parallel Manipulator Using Grassmann-Cayley Algebra[END_REF][START_REF] Amine | Singularity Conditions of 3T1R Parallel Manipulators with Identical Limb Structures[END_REF] investigated the singularity conditions of the 4-RUU PM by using the Grassmann-Cayley Algebra and the Grassmann Geometry. It is shown that the 4-RUU PM is an over-constrained manipulator and it shares some common properties among the constraint wrenches.
By using an algebraic description of the manipulator and the Study kinematic mapping based upon [START_REF] Husty | Algebraic Methods in Mechanism Analysis and Synthesis[END_REF], a characterization of the operation modes of the 4-RUU PM is discussed in more detail in this paper. Due to the particular topology of the RUU limb, which comprises two links with one revolute actuator attached to the base, the actuated joint angle appears in every constraint equation. This kinematic issue does not allow a primary decomposition to be computed directly, because the constraint equations change with the joint inputs. As a consequence, the 4-RUU PM is decomposed into two iso-constrained 2-RUU PM. The constraint equations of each 2-RUU PM are first derived and the primary decomposition is computed. It turns out that the 2-RUU PM has three 4-dof operation modes. By combining the results of the primary decompositions of both 2-RUU PM, the operation modes of the 4-RUU PM can be characterized. It reveals that the 4-RUU PM has two 4-dof operation modes and one 2-dof operation mode. The singularities are examined by deriving the determinant of the Jacobian matrix of the constraint equations with respect to the Study parameters. It is shown that the manipulator is able to change from one operation mode to another by passing through configurations that belong to both modes. The singularity conditions are mapped onto the joint space. Eventually, the changes of operation modes are illustrated.
This paper is organized as follows: A detailed description of the manipulator architecture is given in Section 2. The constraint equations of the manipulators are expressed in Section 3. These constraint equations are used to identify the operation modes in Section 4. In Section 5, the singularity conditions and self-motions are presented. Eventually, the operation modes changing of the 4-RUU PM is discussed in Section 6.
Manipulator architecture

The 4-RUU PM, shown in Fig. 1, is composed of a square base, a square moving platform, and four identical limbs. The origin O of the fixed frame Σ 0 and the origin P of the moving frame Σ 1 are located at the centers of the square base and of the square moving platform, respectively. Each limb is composed of five R-joints such that the second and the third ones, as well as the fourth and the fifth ones, are built with intersecting and perpendicular axes; they are therefore assimilated to U-joints. The printed model of this manipulator is shown in Fig. 2. The first R-joint is attached to the base and is actuated; its rotation angle is defined by θ1i (i = 1, ..., 4). The axes of the first and the second joints are directed along the Z-axis. The axis of the fifth joint is directed along the z-axis. The second axis and the fifth axis are denoted by vi and ni (i = 1, ..., 4), respectively. The axes of the third and the fourth joints are parallel. The axis of the third joint is denoted by si (i = 1, ..., 4) and changes instantaneously as a function of θ2i (i = 1, ..., 4), as shown in Fig. 3, hence:

si = ( 0, cos(θ2i), sin(θ2i), 0 )ᵀ,  i = 1, ..., 4  (1)

The first R-joint of the i-th limb is located at point Ai, at a distance a from the origin O of the fixed frame Σ 0 . The first U-joint is denoted by point Bi, at a distance l from point Ai. Link AiBi always moves in a plane normal to vi. Hence the coordinates of points Ai and Bi expressed in the fixed frame Σ 0 are:

r0_A1 = ( 1, a, 0, 0 )ᵀ    r0_B1 = ( 1, l cos(θ11) + a, l sin(θ11), 0 )ᵀ
r0_A2 = ( 1, 0, a, 0 )ᵀ    r0_B2 = ( 1, l cos(θ12), l sin(θ12) + a, 0 )ᵀ
r0_A3 = ( 1, -a, 0, 0 )ᵀ   r0_B3 = ( 1, l cos(θ13) - a, l sin(θ13), 0 )ᵀ
r0_A4 = ( 1, 0, -a, 0 )ᵀ   r0_B4 = ( 1, l cos(θ14), l sin(θ14) - a, 0 )ᵀ   (2)
The moving platform is connected to the limbs by four U-joints, of which the intersection point of the R-joint axes is denoted by C i . The length of the moving platform from the origin P of the moving frame Σ 1 to point C i is defined by b. The length of link B i C i is defined by r. The coordinates of point C i expressed in the moving frame Σ 1 are:
r1_C1 = ( 1, b, 0, 0 )ᵀ    r1_C3 = ( 1, -b, 0, 0 )ᵀ
r1_C2 = ( 1, 0, b, 0 )ᵀ    r1_C4 = ( 1, 0, -b, 0 )ᵀ   (3)
As a consequence, there are four design parameters a, b, l, and r; and four joint variables θ 11 , θ 12 , θ 13 , and θ 14 that determine the motions of the 4-RUU PM.
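To make the geometric description concrete, the coordinates of Eqs. (1)-(3) can be evaluated numerically. The following Python sketch is ours (the function name and interface are assumptions) and uses the design values a = 2, b = 1, l = 1 adopted later in the paper.

```python
import numpy as np

def limb_geometry(theta1, theta2, a=2.0, b=1.0, l=1.0):
    """Homogeneous coordinates of A_i and B_i (fixed frame, Eq. (2)), C_i (moving frame,
    Eq. (3)) and the third-joint axes s_i (Eq. (1)) for the four limbs.

    theta1, theta2 : arrays of the four joint angles theta_1i and theta_2i (rad).
    """
    u = np.array([[1, 0], [0, 1], [-1, 0], [0, -1]], float)     # A_i at distance a along +X, +Y, -X, -Y
    A = np.hstack([np.ones((4, 1)), a * u, np.zeros((4, 1))])
    B = A.copy()
    B[:, 1] += l * np.cos(theta1)                               # B_i = A_i + l*(cos, sin, 0)
    B[:, 2] += l * np.sin(theta1)
    C = np.hstack([np.ones((4, 1)), b * u, np.zeros((4, 1))])   # C_i expressed in the moving frame
    s = np.column_stack([np.zeros(4), np.cos(theta2), np.sin(theta2), np.zeros(4)])
    return A, B, C, s

A, B, C, s = limb_geometry(np.zeros(4), np.zeros(4))   # sample evaluation at theta = 0
```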
Constraint equations
In this section, the constraint equations are derived whose solutions illustrate the possible poses of the moving platform (coordinate frame Σ 1 ) with respect to Σ 0 . To obtain the coordinates of points C i and vectors n i expressed in Σ 0 , the Study parametrization of a spatial Euclidean transformation matrix M ∈ SE(3) based on [START_REF] Husty | Algebraic Methods in Mechanism Analysis and Synthesis[END_REF] is used.
M = [ x0² + x1² + x2² + x3²    0ᵀ ]
    [          M_T             M_R ]   (4)
where M T and M R represent the translational and rotational parts of the transformation matrix M, respectively, and are expressed as follows:
M_T = 2 [ -x0y1 + x1y0 - x2y3 + x3y2 ]
        [ -x0y2 + x1y3 + x2y0 - x3y1 ]
        [ -x0y3 - x1y2 + x2y1 + x3y0 ]

M_R = [ x0² + x1² - x2² - x3²    2(x1x2 - x0x3)          2(x1x3 + x0x2)        ]
      [ 2(x1x2 + x0x3)           x0² - x1² + x2² - x3²   2(x2x3 - x0x1)        ]
      [ 2(x1x3 - x0x2)           2(x2x3 + x0x1)          x0² - x1² - x2² + x3² ]   (5)
The parameters x0, x1, x2, x3, y0, y1, y2, y3, which appear in matrix M, are called Study parameters. They make it possible to parametrize SE(3) with dual quaternions. The Study kinematic mapping maps each spatial Euclidean displacement of SE(3), via the transformation matrix M, onto a projective point X [x0 : x1 : x2 : x3 : y0 : y1 : y2 : y3] on the 6-dimensional Study quadric S ∈ P 7 [START_REF] Schadlbauer | The 3-RPS Parallel Manipulator from an Algebraic Viewpoint[END_REF], such that:

SE(3) → X ∈ P 7 ,  (x0 : x1 : x2 : x3 : y0 : y1 : y2 : y3)ᵀ ≠ (0 : 0 : 0 : 0 : 0 : 0 : 0 : 0)ᵀ   (6)
Every projective point X represents a spatial Euclidean displacement if it fulfils the following equation and inequality:

x0y0 + x1y1 + x2y2 + x3y3 = 0,   x0² + x1² + x2² + x3² ≠ 0   (7)
Those two conditions will be used in the following computations to simplify the algebraic expressions. First of all, the tangent half-angle substitution is performed to rewrite the trigonometric functions of θ1i and θ2i (i = 1, ..., 4) as rational functions of a new variable tij. This substitution, however, increases the degree of the variables and makes the computation heavier.

cos(θij) = (1 - tij²)/(1 + tij²),   sin(θij) = 2 tij/(1 + tij²),   i = 1, 2,  j = 1, ..., 4   (8)

where tij = tan(θij/2). The coordinates of points Ci and vectors ni expressed in the fixed frame Σ 0 are obtained by:

r0_Ci = M r1_Ci,   n0_i = M n1_i,   i = 1, ..., 4   (9)
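As a practical illustration of Eqs. (4), (5) and (7), the construction of M from a Study point can be encoded directly. The helper below is our own sketch (its name and normalization choice are assumptions); it returns the affine transformation obtained after dividing by the leading entry, and rejects points that violate Eq. (7).

```python
import numpy as np

def study_matrix(x, y):
    """4x4 transformation matrix M of Eqs (4)-(5) from Study parameters x, y."""
    x0, x1, x2, x3 = x
    y0, y1, y2, y3 = y
    # Eq. (7): Study quadric condition and exclusion of the exceptional generator
    assert abs(np.dot(x, y)) < 1e-12 and np.dot(x, x) > 0, "not a valid Study point"
    d = x0**2 + x1**2 + x2**2 + x3**2
    MT = 2 * np.array([-x0*y1 + x1*y0 - x2*y3 + x3*y2,
                       -x0*y2 + x1*y3 + x2*y0 - x3*y1,
                       -x0*y3 - x1*y2 + x2*y1 + x3*y0])
    MR = np.array([
        [x0**2 + x1**2 - x2**2 - x3**2, 2*(x1*x2 - x0*x3), 2*(x1*x3 + x0*x2)],
        [2*(x1*x2 + x0*x3), x0**2 - x1**2 + x2**2 - x3**2, 2*(x2*x3 - x0*x1)],
        [2*(x1*x3 - x0*x2), 2*(x2*x3 + x0*x1), x0**2 - x1**2 - x2**2 + x3**2]])
    M = np.zeros((4, 4))
    M[0, 0] = d
    M[1:, 0] = MT
    M[1:, 1:] = MR
    return M / d          # normalize so that the leading entry equals 1

# usage: map a moving-frame point C_i (Eq. (9)) to the fixed frame
M = study_matrix(np.array([1.0, 0.0, 0.0, 0.0]), np.array([0.0, 0.0, 0.0, 0.5]))
c1_fixed = M @ np.array([1.0, 1.0, 0.0, 0.0])
```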
The coordinates of all points are now given in terms of the Study parameters and the design parameters. The constraint equations can be obtained by examining the design of the RUU limb. The link connecting points Bi and Ci is coplanar with the vectors vi and n0_i. Accordingly, the scalar triple product of the vectors (r0_Ci - r0_Bi), vi and n0_i vanishes, namely:

(r0_Ci - r0_Bi)ᵀ · (vi × n0_i) = 0,   i = 1, ..., 4   (10)
After computing the corresponding scalar triple product and removing the common denominators, the following constraint equations come out:
g1 : (a t11² - b t11² - l t11² + a - b + l) x0x1 + 2 l t11 x0x2 - (2 t11² + 2) x0y0 + 2 l t11 x3x1 + (-a t11² - b t11² + l t11² - a - b - l) x3x2 + (-2 t11² - 2) y3x3 = 0   (11a)

g2 : (l - l t12²) x0x1 + (a t12² - b t12² + 2 l t12 + a - b) x0x2 - (2 t12² + 2) x0y0 + (a t12² + b t12² + 2 l t12 + a + b) x3x1 + (l t12² - l) x3x2 - (2 t12² + 2) y3x3 = 0   (11b)

g3 : (a t13² - b t13² + l t13² + a - b - l) x0x1 - 2 l t13 x0x2 + (2 t13² + 2) x0y0 - 2 l t13 x1x3 + (-a t13² - b t13² - l t13² - a - b + l) x2x3 + (2 t13² + 2) x3y3 = 0   (11c)

g4 : (l t14² - l) x0x1 + (a t14² - b t14² - 2 l t14 + a - b) x0x2 + (2 t14² + 2) x0y0 + (a t14² + b t14² - 2 l t14 + a + b) x1x3 + (-l t14² + l) x2x3 + (2 t14² + 2) x3y3 = 0   (11d)
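Equations (11a)-(11d) can be reproduced with a computer algebra system by carrying out the scalar triple product of Eq. (10) symbolically. The SymPy sketch below, which is our own and not part of the original derivation, treats only the first limb and assumes the normalization x0² + x1² + x2² + x3² = 1; it should return g1 of Eq. (11a) up to a nonzero constant factor.

```python
import sympy as sp

x0, x1, x2, x3, y0, y1, y2, y3 = sp.symbols('x0 x1 x2 x3 y0 y1 y2 y3')
a, b, l, t11 = sp.symbols('a b l t11')

# Rotational and translational parts of the Study transformation, Eq. (5)
MR = sp.Matrix([
    [x0**2 + x1**2 - x2**2 - x3**2, 2*(x1*x2 - x0*x3), 2*(x1*x3 + x0*x2)],
    [2*(x1*x2 + x0*x3), x0**2 - x1**2 + x2**2 - x3**2, 2*(x2*x3 - x0*x1)],
    [2*(x1*x3 - x0*x2), 2*(x2*x3 + x0*x1), x0**2 - x1**2 - x2**2 + x3**2]])
MT = 2*sp.Matrix([-x0*y1 + x1*y0 - x2*y3 + x3*y2,
                  -x0*y2 + x1*y3 + x2*y0 - x3*y1,
                  -x0*y3 - x1*y2 + x2*y1 + x3*y0])

c1 = MR*sp.Matrix([b, 0, 0]) + MT              # point C1 in the fixed frame, Eq. (9)
n1 = MR*sp.Matrix([0, 0, 1])                   # fifth-joint axis n1 in the fixed frame
cos11 = (1 - t11**2)/(1 + t11**2)              # tangent half-angle substitution, Eq. (8)
sin11 = 2*t11/(1 + t11**2)
b1 = sp.Matrix([l*cos11 + a, l*sin11, 0])      # point B1, Eq. (2)
v1 = sp.Matrix([0, 0, 1])                      # second-joint axis, along Z

triple = (c1 - b1).dot(v1.cross(n1))           # scalar triple product of Eq. (10)
g1 = sp.factor(sp.numer(sp.together(sp.expand(triple))))   # clear the (1 + t11^2) denominator
print(g1)
```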
To derive the constraint equations corresponding to the link length r of link BiCi, the distance equation is formulated as:

(r0_Ci - r0_Bi)² = r²,   i = 1, ..., 4

Expanding these conditions with the Study parametrization yields four equations g5, g6, g7 and g8, one per limb (Eqs. (12a)-(12d)). Each of them is quadratic in the Study parameters, with coefficients that are polynomials in the design parameters a, b, l, r and in the actuated joint variable t1i. For the first limb:

g5 : (a²t11² - 2abt11² - 2alt11² + b²t11² + 2blt11² + l²t11² - r²t11² + a² - 2ab + 2al + b² - 2bl + l² - r²) x0² - 8blt11 x0x3 + (4at11² - 4bt11² - 4lt11² + 4a - 4b + 4l) x0y1 + 8lt11 x0y2 + (a²t11² - 2abt11² - 2alt11² + b²t11² + 2blt11² + l²t11² - r²t11² + a² - 2ab + 2al + b² - 2bl + l² - r²) x1² - 8blt11 x1x2 + (-4at11² + 4bt11² + 4lt11² - 4a + 4b - 4l) x1y0 - 8lt11 x1y3 + (a²t11² + 2abt11² - 2alt11² + b²t11² - 2blt11² + l²t11² - r²t11² + a² + 2ab + 2al + b² + 2bl + l² - r²) x2² - 8lt11 x2y0 + (4at11² + 4bt11² - 4lt11² + 4a + 4b + 4l) x2y3 + (a²t11² + 2abt11² - 2alt11² + b²t11² - 2blt11² + l²t11² - r²t11² + a² + 2ab + 2al + b² + 2bl + l² - r²) x3² + 8lt11 x3y1 + (-4at11² - 4bt11² + 4lt11² - 4a - 4b - 4l) x3y2 + (4t11² + 4) (y0² + y1² + y2² + y3²) = 0   (12a)

The equations g6, g7 and g8 of the other three limbs have a similar structure, each involving the corresponding joint variable t1i; their lengthy expressions are not repeated here.

To derive the constraint equations corresponding to the axes si of each limb, the scalar product of vector BiCi and vector si should vanish: (r0_Ci - r0_Bi)ᵀ si = 0, i = 1, ..., 4. This yields four further equations g9, g10, g11 and g12, which additionally involve the second joint variables t2i. Equation (7) is added since all solutions have to lie on the Study quadric, i.e.:

g13 : x0y0 + x1y1 + x2y2 + x3y3 = 0

To exclude the exceptional generator (x0 = x1 = x2 = x3 = 0), the following normalization equation is added:

g14 : x0² + x1² + x2² + x3² - 1 = 0

It ensures that no point of the exceptional generator appears as a solution.
Operation modes
The 4-RUU PM is an over-constrained mechanism [START_REF] Amine | Singularity Analysis of the 4-RUU Parallel Manipulator Using Grassmann-Cayley Algebra[END_REF][START_REF] Amine | Singularity Conditions of 3T1R Parallel Manipulators with Identical Limb Structures[END_REF], therefore it can be decomposed into two iso-constrained 2-RUU PM as shown in Fig. 4. The printed model of the 4-RUU PM presented in Fig. 2, can also be decomposed into 2-RUU PM as shown in Fig. 5. The first mechanism consists of the 1st and the 3rd limbs, hence it is named the 2-RUU (I) PM. The second mechanism consists of the 2nd and the 4th limbs, hence it is named the 2-RUU (II) PM. The moving platforms of both mechanisms move independently. When the moving frames of both mechanisms are coincident (accordingly P I ≡ P II ), we obtain the 4-RUU PM. As a consequence, the operation modes of the 4-RUU PM are determined by the linear combination of the results of primary decomposition of the 2-RUU (I) and the 2-RUU (II) PM, as presented in the following. The operation modes and the self-motions of the 2-RUU PM are presented in more detail in [START_REF] Nurahmi | Operation Modes and Self-motions of a 2-RUU Parallel Manipulator[END_REF].
The 2-RUU (I) PM
In the 2-RUU (I) PM, the first and the second R-joints in each limb are actuated. The design parameters are assigned as a = 2, b = 1, l = 1, r = 2 (in units that need not be specified). The set of eight constraint equations is written as a polynomial ideal with variables {x0, x1, x2, x3, y0, y1, y2, y3} over the coefficient ring C[t11, t13, t21, t23], defined as: I(I) = g1, g3, g5, g7, g9, g11, g13, g14. At this point, the following ideal is examined:
J (I) = g 1 , g 3 , g 13 .
The primary decomposition is computed to verify if the ideal J(I) is the intersection of several smaller ideals. Indeed, the ideal J(I) is decomposed into three components, J(I) = ∩_{k=1}^{3} Jk(I), with the results of the primary decomposition:

J1(I) = x0, x3, x1y1 + x2y2
J2(I) = x1, x2, x0y0 + x3y3
J3(I) = a third, lengthier component whose generators involve the joint variables t11 and t13 and are not listed here
Accordingly, the 2-RUU (I) PM under study has three operation modes. The computation of the Hilbert dimension of ideal J k(I) with t 11 , t 13 , t 21 , t 23 treated as variables shows that: dim(J k(I) ) = 4 (k = 1, ..., 3). To complete the analysis, the remaining equations are added by writing:
K k(I) : J k(I) ∪ g 5 , g 7 , g 9 , g 11 , g 14 , k = 1, ..., 3 (15)
It follows that the 2-RUU (I) PM has three 4-dof operation modes. This type of manipulator is called invariable-dof PM in [START_REF] Kong | Type Synthesis of Variable Degrees-of-Freedom Parallel Manipulators with Both Planar and 3T1R Operation Modes[END_REF]. Each system K k(I) corresponds to a specific operation mode that will be discussed in the following.
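A full primary decomposition requires a dedicated computer algebra system, but a partial consistency check can still be done in SymPy: every generator of J(I) must reduce to zero modulo a Groebner basis of each component. The sketch below, which is our own verification and not the original computation, performs this check for J1(I) and J2(I) with the actuated joints fixed at the sample values t11 = t13 = 0 and the design values a = 2, b = 1, l = 1; the numeric forms of g1 and g3 follow from Eqs. (11a) and (11c).

```python
import sympy as sp

x0, x1, x2, x3, y0, y1, y2, y3 = sp.symbols('x0 x1 x2 x3 y0 y1 y2 y3')
gens = (x0, x1, x2, x3, y0, y1, y2, y3)

# g1, g3 of Eqs. (11a), (11c) with a=2, b=1, l=1 and t11 = t13 = 0
g1 = 2*x0*x1 - 2*x0*y0 - 4*x3*x2 - 2*y3*x3
g3 = 2*x0*y0 - 2*x2*x3 + 2*x3*y3
g13 = x0*y0 + x1*y1 + x2*y2 + x3*y3
J = [g1, g3, g13]

components = {
    'J1(I)': [x0, x3, x1*y1 + x2*y2],
    'J2(I)': [x1, x2, x0*y0 + x3*y3],
}
for name, comp in components.items():
    G = sp.groebner(comp, *gens, order='grevlex')
    remainders = [G.reduce(p)[1] for p in J]       # reduce returns (quotients, remainder)
    print(name, all(r == 0 for r in remainders))   # True means J(I) is contained in the component
```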
System K 1(I) : 1st Schönflies mode
In this operation mode, the moving platform is reversed about an axis parallel to the XY-plane of Σ 0 by 180 degrees from the "identity condition". The identity condition is when the moving frame and the fixed frame are coincident, i.e. Σ 1 ≡ Σ 0 , and the transformation matrix is an identity matrix. The conditions x0 = 0, x3 = 0, x1y1 + x2y2 = 0 are valid for all poses and are substituted into the transformation matrix M, such that:
M1(I) = [ 1                  0            0            0  ]
        [ 2(x1y0 - x2y3)     x1² - x2²    2x1x2        0  ]
        [ 2(x1y3 + x2y0)     2x1x2        -x1² + x2²   0  ]
        [ -2y2/x1            0            0            -1 ]   (16)
From the transformation matrix M 1(I) , it can be seen that the 2-RUU (I) PM has 3-dof translational motions, which are parametrized by y 0 , y 2 , y 3 and 1-dof rotational motion, which is parametrized by x 1 , x 2 in connection with x 2 1 + x 2 2 -1 = 0 [START_REF] Schadlbauer | Operation Modes in Lower-Mobility Parallel Manipulators[END_REF]. The z-axis of frame Σ 1 attached to the moving platform is always pointing downward in this operation mode and the moving platform remains parallel to the base.
System K 2(I) : 2nd Schönflies mode
In this operation mode, the conditions x1 = 0, x2 = 0, x0y0 + x3y3 = 0 are valid for all poses. The transformation matrix in this operation mode is written as:

M2(I) = [ 1                   0            0           0 ]
        [ -2(x0y1 - x3y2)     x0² - x3²    -2x0x3      0 ]
        [ -2(x0y2 + x3y1)     2x0x3        x0² - x3²   0 ]
        [ -2y3/x0             0            0           1 ]   (17)
From the transformation matrix M2(I), it can be seen that the 2-RUU (I) PM has 3-dof translational motions, which are parametrized by y1, y2, y3, and a 1-dof rotational motion, which is parametrized by x0, x3 in connection with x0² + x3² - 1 = 0 [START_REF] Schadlbauer | Operation Modes in Lower-Mobility Parallel Manipulators[END_REF]. In this operation mode, the z-axis of frame Σ 1 attached to the moving platform is always pointing upward and the moving platform remains parallel to the base.
The systems K1(I) and K2(I) have the same motion type, i.e. Schönflies motion, but they do not have any configuration in common. This occurs because the orientation of the moving platform is not the same in the two operation modes: the z-axis of frame Σ 1 attached to the moving platform in system K1(I) is always pointing downward (the moving platform is always tilted by 180 degrees), while in system K2(I) the z-axis of frame Σ 1 attached to the moving platform is always pointing upward.
System K 3(I) : Third mode
In this operation mode, the moving platform is no longer parallel to the base. The variables x3, y0, y1 can be solved linearly from the ideal J3(I), as shown in Eq. (18). Since solving the inverse kinematics for t11, t13 is quite computationally expensive, the joint variables t11, t13 are considered to be the independent parameters of this mode. The parameters y2, y3 can then be solved in terms of (x0, x1, x2, t11, t13). Substituting the parameters y2, y3 back into Eq. (18), the Study parameters x3, y0, y1, y2, y3 are parametrized by (x0, x1, x2, t11, t13). Accordingly, the 2-RUU (I) PM performs two translational motions, parametrized by the variables t11, t13, and two rotational motions, parametrized by the variables x0, x1, x2 in connection with the normalization equation g14.
In this operation mode, the links BiCi (i = 1, 3) of both limbs thus always remain parallel to the same plane, and the axes si (i = 1, 3) of both limbs remain parallel as well.
The 2-RUU (II) PM
In the 2-RUU (II) PM, the first and the second R-joints in each limb are also actuated. The design parameters are assigned with the same values as a = 2, b = 1, l = 1, r = 2 (in units that need not be specified). The set of eight constraint equations is written as a polynomial ideal with variables {x 0 , x 1 , x 2 , x 3 , y 0 , y 1 , y 2 , y 3 } over the coefficient ring C[t 12 , t 14 , t 22 , t 24 ], defined as: I (II) = g 2 , g 4 , g 6 , g 8 , g 10 , g 12 , g 13 , g 14 . At this point, the following ideal is examined:
J (II) = g 2 , g 4 , g 13 .
The primary decomposition is computed and it turns out that the ideal J(II) is decomposed into three components, J(II) = ∩_{k=1}^{3} Jk(II), with the results of the primary decomposition:

J1(II) = x0, x3, x1y1 + x2y2
J2(II) = x1, x2, x0y0 + x3y3
J3(II) = a third, lengthier component whose generators involve the joint variables t12 and t14 and are not listed here
Accordingly, the 2-RUU (II) PM under study has three operation modes. The computation of the Hilbert dimension of ideal J k(II) with t 12 , t 14 , t 22 , t 24 treated as variables shows that: dim(J k(II) ) = 4 (k = 1, ..., 3). To complete the analysis, the remaining equations are added by writing:
K k(II) : J k(II) ∪ g 6 , g 8 , g 10 , g 12 , g 14 , k = 1, ..., 3 (20)
It follows that the 2-RUU (II) PM has three 4-dof operation modes too. The system K1(II) is identical to the system K1(I) (as explained in Section 4.1.1), in which the moving platform is tilted about an axis parallel to the XY-plane of Σ 0 by 180 degrees and can exhibit the Schönflies motion with a pure rotation about the Z-axis. The system K2(II) is identical to the system K2(I) (as explained in Section 4.1.2), where the moving platform can exhibit 3-dof independent translations and one pure rotation about the Z-axis. In the system K3(II), the moving platform is no longer parallel to the base. The variables x3, y0, y1 can be solved linearly from the ideal J3(II), as shown in Eq. (21). Since solving the inverse kinematics for t12, t14 is quite computationally expensive, the joint variables t12, t14 are considered to be the independent parameters of this third mode. The parameters y2, y3 can then be solved in terms of (x0, x1, x2, t12, t14). Substituting the parameters y2, y3 back into Eq. (21), the Study parameters x3, y0, y1, y2, y3 are obtained and parametrized by (x0, x1, x2, t12, t14). Hence the moving platform of the 2-RUU (II) PM performs two translational motions, parametrized by t12, t14, and two rotational motions, parametrized by x0, x1, x2 in connection with the normalization equation g14.
Under the system K3(II), the joint angles t22 and t24 can be computed from the equations g10 and g12. It turns out that, whatever the value of the first actuated joint (t12, t14) in each limb, these equations vanish for two real solutions, namely (1) t22 = -1/t24 (θ22 = π + θ24) and (2) t22 = t24 (θ22 = θ24). It means that in this operation mode, the links BiCi (i = 2, 4) of both limbs are always parallel to the same plane and the axes si (i = 2, 4) of both limbs are always parallel too.
Noticeably, the third mode of the 4-RUU PM is a 2-dof operation mode, since two input joint angles are sufficient to define the pose of the manipulator. This operation mode was referred to as coupled motion in [START_REF] Amine | Singularity Analysis of the 4-RUU Parallel Manipulator Using Grassmann-Cayley Algebra[END_REF][START_REF] Amine | Singularity Conditions of 3T1R Parallel Manipulators with Identical Limb Structures[END_REF]. Since the system K3 is a lower-dimensional operation mode, namely 2-dof, this type of manipulator is called a variable-dof PM in [START_REF] Kong | Type Synthesis of Variable Degrees-of-Freedom Parallel Manipulators with Both Planar and 3T1R Operation Modes[END_REF].
Singularities and Self-motions
The 4-RUU PM reaches a singularity condition when the determinant of its Jacobian matrix vanishes. The Jacobian matrix is the matrix of all first-order partial derivatives of the constraint equations with respect to the Study parameters. Since the 4-RUU PM has more than one operation mode, the singular configurations can be classified into two different types: the singular configurations that belong to a single operation mode, and those that belong to more than one operation mode. The common configurations that belong to more than one operation mode allow the 4-RUU PM to switch from one operation mode to another, as discussed in Section 6.
The singular poses are examined by taking the Jacobian matrix of each polynomial system and computing its determinant. From a practical point of view, the singularity surface is also desirable in the joint space; hence the expression of the Jacobian determinant is added to the corresponding system and the Study parameters are eliminated. The determinant S2 : det(J2) associated with the 2nd Schönflies mode factors into several terms. The second factor is y3 = 0: when the moving platform is coplanar with the base, the 4-RUU PM is always in a singular configuration. Finally, the last factor of S2 : det(J2) = 0 is analysed. Due to the heavy elimination process, the actuated joint angles are assigned as t11 = 0, t12 = 1, and t13 = 0. The elimination yields a univariate polynomial of degree 18 in t14:

99544625 t14^18 - 1042686200 t14^17 + 4293155895 t14^16 - 9293913184 t14^15 + 10513736564 t14^14 - 1755916864 t14^13 - 14239053636 t14^12 + 24856530336 t14^11 - 20314694418 t14^10 + 4683758224 t14^9 + 9288810578 t14^8 - 13708185120 t14^7 + 10456187332 t14^6 - 5370369152 t14^5 + 1960220428 t14^4 - 507121440 t14^3 + 89099433 t14^2 - 9580248 t14 + 476847 = 0   (31)

One singularity configuration in the 2nd Schönflies mode can be obtained by solving Eq. (31), for example t14 = 1. The direct kinematics of at least one singular pose can then be computed with θ11 = 0°, θ12 = 90°, θ13 = 0°, θ14 = 90°, as shown in Fig. 8.
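The real solutions of Eq. (31) can also be obtained numerically. The short script below simply feeds the integer coefficients listed above to a polynomial root finder; it is only a sketch, and root finding on such a high-degree polynomial can be numerically sensitive. It recovers, in particular, the solution t14 = 1 used above.

```python
import numpy as np

# Coefficients of the degree-18 polynomial of Eq. (31), from t14^18 down to the constant term
coeffs = [99544625, -1042686200, 4293155895, -9293913184, 10513736564,
          -1755916864, -14239053636, 24856530336, -20314694418, 4683758224,
          9288810578, -13708185120, 10456187332, -5370369152, 1960220428,
          -507121440, 89099433, -9580248, 476847]

roots = np.roots(coeffs)
real_t14 = sorted(r.real for r in roots if abs(r.imag) < 1e-6)
theta14_deg = [np.degrees(2*np.arctan(t)) for t in real_t14]   # t14 = tan(theta14/2)
print(real_t14)
print(theta14_deg)
```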
The determinant of the Jacobian matrix det(J 2 ) also vanishes in two particular conditions, namely when all the actuated joint angles have the same values and when the first links of each limb point inward toward the origin O of the fixed frame Σ 0 . In the first condition, when all the actuated joint angles have the same values, the moving platform gains a 1-dof self-motion. During this motion, the first links of each limb stay fixed and the moving platform performs a rotational motion. Let us consider the actuated joint angles t 11 = 0, t 12 = 0, t 13 = 0, t 14 = 0; the 1-dof self-motion is then parametrized by x 3 , as shown in Fig. 9.
5.3 Self-motions in Third mode (K 3 )
Figure 10: First translation of self-motion in K 2 .
Before computing the self-motion of the 4-RUU PM in the third mode K 3 , the singularity conditions of the 2-RUU (I) and 2-RUU (II) PM in K 3(I) and K 3(II) , respectively, are first discussed. The determinants of the Jacobian matrices are computed in each system K 3(I) and K 3(II) . The determinants of these Jacobian matrices consist of eleven factors that are defined as:
S 3(I) : det(J 3(I) ) = (t21 t23 + 1)(-t23 + t21) x0 (t13^2 + 1)^3 (t11^2 + 1)^3 ... = 0
S 3(II) : det(J 3(II) ) = (t22 t24 + 1)(-t24 + t22) x0 (t14^2 + 1)^3 (t12^2 + 1)^3 ... = 0 (32)
Figure 13: Self-motion when θ 11 = θ 14 and θ 12 = θ 13 in K 3 .
Operation mode changing
There exist common configurations where the mechanism, i.e. the 4-RUU PM, can switch from one operation mode to another operation mode. These configurations are well known as transition configurations. Transition configuration analysis is an important issue in the design process and control of the parallel manipulators with multiple operation modes [START_REF] Kong | Reconfiguration Analysis of a 3-DOF Parallel Mechanism Using Euler Parameter Quaternions and Algebraic Geometry Method[END_REF]. However, the 1st Schönflies mode and the 2nd Schönflies mode do not have configurations in common, since the variables x 0 , x 1 , x 2 , x 3 can never vanish simultaneously. It means that the 4-RUU PM cannot switch from the 1st Schönflies mode to the 2nd Schönflies mode directly.
To change from the 1st Schönflies mode to the 2nd Schönflies mode, the 4-RUU PM should pass through the third mode, namely system K 3 . There exist some configurations in which the manipulator can switch from the 1st Schönflies mode to the third mode or vice versa, and these configurations belong to both operation modes. Noticeably, these configurations are also singular configurations since they lie in the intersection of two operation modes.
In the following, the conditions on the actuated joint angles for the 4-RUU PM to change from one operation mode to another are presented. Each pair of ideals {K i ∪ K j } is analysed and the Study parameters are eliminated to find common solutions.
6.1 1st Schönflies mode (K 1 ) ←→ Third mode (K 3 )
To switch from the 1st Schönflies mode (K 1 ) to the third mode (K 3 ) or vice versa, one should find the configurations of the 4-RUU PM that fulfil the conditions of both operation modes, namely (J 1 ∪ J 3 ). All Study parameters are then eliminated to find an equation in terms of the actuated joint angles t 11 , t 12 , t 13 , t 14 , written as: 9t ... = 0 (34). In these transition configurations, the moving platform is twisted about an axis parallel to the XY-plane of Σ 0 by 180 degrees and the actuated joint angles fulfil Eq. (34). The three conditions of the self-motions, (1.) t 11 = t 12 and t 13 = t 14 , (2.) t 11 = -1/t 13 and t 12 = -1/t 14 , and (3.) t 11 = t 14 and t 12 = t 13 , given in Section 5, are contained in Eq. (34). It shows that the moving platform is then in a transition configuration between the 1st Schönflies mode K 1 and the third mode K 3 that amounts to a self-motion.
2nd Schönflies mode (K 2 ) ←→ Third mode (K 3 )
To switch from the 2nd Schönflies mode (K 2 ) to the third mode (K 3 ) or vice versa, one should find the configurations of the 4-RUU PM that fulfil the conditions of both operation modes, namely (J 2 ∪ J 3 ). All Study parameters are then eliminated to find an equation in terms of the actuated joint angles t 11 , t 12 , t 13 , t 14 , written as: 9t ... = 0 (35). The moving platform of the 4-RUU PM is in a transition configuration between K 2 and K 3 when the moving platform is parallel to the base and the actuated joint angles fulfil Eq. (35). The three conditions of the self-motions, (1.) t 11 = t 12 and t 13 = t 14 , (2.) t 11 = -1/t 13 and t 12 = -1/t 14 , and (3.) t 11 = t 14 and t 12 = t 13 , given in Section 5, are contained in Eq. (35). It means that the moving platform is then in a transition configuration between the 2nd Schönflies mode K 2 and the third mode K 3 that amounts to a self-motion. As a consequence, the transition between the systems K 1 and K 2 occurs through the third system K 3 , which is a lower-dimension operation mode and amounts to a self-motion. The transition from K 2 to K 1 through the third mode K 3 , with the condition of the actuated joint angles t 11 = t 12 and t 13 = t 14 , is shown in Fig. 15(a)-15(f).
Conclusions
In this paper, the method of algebraic geometry was applied to characterize the operation modes of the 4-RUU PM. The 4-RUU PM is initially decomposed into two 2-RUU PM. The constraint equations corresponding to the two 2-RUU PM are derived and the primary decomposition is computed. It reveals that each 2-RUU PM has three 4-dof operation modes. However, when they are assembled into the 4-RUU PM, its operation modes consist of two 4-dof Schönflies modes and one 2-dof operation mode.
The singularity conditions were computed and represented in the joint space, i.e., the actuated joint angles (t 11 , t 12 , t 13 , t 14 ). It turns out that every configuration in the 4-dof third modes of both 2-RUU PM is always singular and amounts to a self-motion. However, a configuration in the 2-dof third mode of the 4-RUU PM is not always singular, i.e., a self-motion. The self-motion in this operation mode occurs if the actuated joint angles fulfil particular conditions, namely (1.) t 11 = t 12 and t 13 = t 14 , (2.) t 11 = -1/t 13 and t 12 = -1/t 14 , and (3.) t 11 = t 14 and t 12 = t 13 . The 4-RUU PM is able to switch from the 1st Schönflies mode to the 2nd Schönflies mode by passing through the third mode, which contains the self-motion.
Figure 1: The 4-RUU PM.
Figure 2: Printed model of the 4-RUU PM.
Figure 3: Parametrization of the first two joint angles in each leg from top view.
Figure 4: The 4-RUU PM decomposed into two 2-RUU PM.
Figure 5: Printed model of the 4-RUU PM decomposed into two 2-RUU PM.
Figure 6: The third mode K 3 .
Figure 7: Singularity pose in the 1st Schönflies mode K 1 .
Figure 8: Singularity pose in the 2nd Schönflies mode K 2 .
Figure 9: Self-motion when θ 11 = θ 12 = θ 13 = θ 14 = 0 in K 2 .
Figure 11: Second translation of self-motion in K 2 .
Figure 14: Self-motion when θ 11 = π + θ 13 and θ 12 = π + θ 14 in K 3 .
Figure 15: Transition from the 2nd Schönflies mode to the 1st Schönflies mode via the third mode with θ 11 = θ 12 and θ 13 = θ 14 .
The variables y 0(I) and y 1(I) are obtained as lengthy rational expressions in (t 11 , t 13 , x 0 , x 1 , x 2 , y 2 , y 3 ), as given in Eq. (18). Under this operation mode, the joint angles t 21 and t 23 can be computed from the equations g 9 , g 11 . It turns out that, no matter the value of the first actuated joints (t 11 , t 13 ) in each limb, these equations vanish for two real solutions, namely (1.) t 21 = -1/t 23 (θ 21 = π + θ 23 ) and (2.) t 21 = t 23 (θ 21 = θ 23 ).
It can be seen that the first two factors of S 3(I) and S 3(II) in Eq. (32) are the necessary conditions for the 2-RUU (I) and 2-RUU (II) PM to be in the systems K 3(I) and K 3(II) , respectively (as explained in Sections 4.1 and 4.2). They are (1.) t 21 = -1/t 23 (θ 21 = π + θ 23 ) and (2.) t 21 = t 23 (θ 21 = θ 23 ) for the 2-RUU (I) PM, and (1.) t 22 = -1/t 24 (θ 22 = π + θ 24 ) and (2.) t 22 = t 24 (θ 22 = θ 24 ) for the 2-RUU (II) PM. It means that each configuration in the systems K 3(I) and K 3(II) amounts to a self-motion. However, a configuration in the third mode K 3 of the 4-RUU PM is not always a self-motion: the self-motion in K 3 occurs if and only if the actuated joint angles (t 11 , t 12 , t 13 , t 14 ) fulfil particular conditions.
R, P, E, U, S denote revolute, prismatic, planar, universal and spherical joints, respectively.
The 4-RUU PM
In the 4-RUU PM, the first R-joint in each limb is actuated. The design parameters are assigned with the same values as a = 2, b = 1, l = 1, r = 2. The set of ten constraint equations is written as a polynomial ideal with variables {x 0 , x 1 , x 2 , x 3 , y 0 , y 1 , y 2 , y 3 } over the coefficient ring C[t 11 , t 12 , t 13 , t 14 ], defined as: I = g 1 , g 2 , g 3 , g 4 , g 5 , g 6 , g 7 , g 8 , g 13 , g 14 . At this point, the following ideal is examined: J = g 1 , g 2 , g 3 , g 4 , g 13 . Since the 4-RUU PM can be assembled by the 2-RUU (I) and 2-RUU (II) PM, the ideal J can be written as the linear combination of the results of primary decomposition from the 2-RUU (I) and 2-RUU (II) PM. It is noteworthy that the first and second components of the 2-RUU (I) and 2-RUU (II) PM are identical, so that J 1(I) = J 1(II) and J 2(I) = J 2(II) .
with:
As a consequence, the 4-RUU PM has four operation modes. To complete the analysis, the remaining equations are added by writing:
The systems K 1 and K 2 are 4-dof operation modes, which correspond to the 1st Schönflies mode and the 2nd Schönflies mode, as explained in Sections 4.1.1 and 4.1.2, respectively. However, the characterization of the system K 3 needs to be discussed further, as presented hereafter.
System K 3 : Third mode
The third mode of the 4-RUU PM is characterized by the system K 3 . In this mode, the primary decomposition leads to the ideal J 3 and all polynomial equations in this ideal should vanish simultaneously. Hence the variables x 3 , y 0 , y 1 , y 2 , y 3 can be obtained in cascade from the ideal J 3 , such that:
Not all polynomials in the ideal J 3 vanish; two polynomial equations remain, as follows:
x1^2 - ... = 0 (26) As two iso-constrained 2-RUU PM are assembled into the 4-RUU PM by combining the results of the primary decomposition into J 3 , one of the 2-RUU PM is dependent on the other. Accordingly, one of the 2-RUU PM can be selected to represent the third mode of the 4-RUU PM. The new ideals corresponding to the two 2-RUU parallel manipulators are defined as follows:
Both ideals in Eq. (27) are solved separately to show that they lead to the same results. The variables x 3 , y 0 , y 1 , y 2 , y 3 obtained in Eq. (25) are then substituted into the ideals L I , L II in Eq. (27). The variable x 0 can be solved from g 14 and the equations g 5 , g 6 , g 7 , g 8 split into two factors. The first factors of these equations have the same mathematical expression and lead to the 1-dof self-motion, which will be discussed further in Section 5.3.
The second factors are analysed thereafter. The variable x 1 is solved from each ideal and yields two equations in terms of (x 2 , t 29). Back substitution into Eqs. ( 25)-( 28), all Study parameters can be solved and one of the manipulator poses can be obtained as shown in Fig. 6 with θ 11 = -84
Singularities in 1st Schönflies mode (K 1 )
The determinant of the Jacobian matrix is computed in the system K 1 , which consists of five constraint equations over five variables (x 1 , x 2 , y 0 , y 2 , y 3 ). Hence the 5 × 5 Jacobian matrix can be obtained. The factorization of the determinant of this Jacobian matrix S 1 : det(J 1 ) = 0 yields three factors. The inspection of the first factor shows the singularity configurations that lie in the intersection with the system K 2 . However, this factor is neglected since the systems K 1 and K 2 do not have configurations in common, i.e. x 0 , x 1 , x 2 , x 3 can never vanish simultaneously.
The second factor is y 2 = 0: when the moving platform is coplanar with the base, the 4-RUU PM is always in a singular configuration. Eventually, the third factor of S 1 : det(J 1 ) = 0 is analysed. This factor is added to the system K 1 and the remaining five Study parameters are eliminated. Due to the heavy elimination process, the actuated joint angles are assigned as t 11 = -2, t 12 = -1, and t 13 = 1/2. The elimination yields a univariate polynomial of degree 14 in t 14 as:
Singularities and Self-motions in 2nd Schönflies mode (K 2 )
The determinant of the Jacobian matrix is computed in the system K 2 , which consists of five constraint equations over five variables. Therefore the 5 × 5 Jacobian matrix can be obtained. The determinant of this Jacobian matrix S 2 : det(J 2 ) = 0 consists of three factors too. The investigation of the first factor gives the condition in which the mechanism is in the intersection between the systems K 1 and K 2 . As explained in Section 5.1, this factor is removed. The variable x 1 can be solved and it yields two equations in terms of the actuated joint angles only. The first equation is similar to Eq. (28) and the second equation takes the form: 3t ... = 0 (33). The variable x 2 is an independent parameter. As a consequence, when all the corresponding joints are actuated, there is an additional 1-dof rotational motion exhibited by the moving platform. This motion is a self-motion and it is parametrized by the variable x 2 . The two equations, Eq. (28) and Eq. (33), are solved to find the relations among the actuated joint angles in the self-motion, namely (1.) t 11 = t 12 and t 13 = t 14 , (2.) t 11 = t 14 and t 12 = t 13 , and (3.) t 11 = -1/t 13 and t 12 = -1/t 14 . Since the two 2-RUU PM are assembled perpendicularly in the 4-RUU PM, only one example of the self-motions of solutions 1 and 2 is shown. The example of self-motion of solution 2 is shown in Fig. 13 with θ 11 = θ 14 = 90° and θ 12 = θ 13 = 0°. Figure 14 shows the example of self-motion of solution 3 with θ 11 = 90°, θ 12 = 0°, θ 13 = -90°, θ 14 = 180°. Every configuration in the third modes of the 2-RUU (I) and 2-RUU (II) PM amounts to a self-motion. However, when the 2-RUU (I) and 2-RUU (II) PM are assembled to obtain the 4-RUU PM, a configuration in its third mode is not always a self-motion. |
01757286 | en | [
"spi.meca.geme",
"spi.auto"
] | 2024/03/05 22:32:10 | 2015 | https://hal.science/hal-01757286/file/MMT2015_Wu_Caro_Wang_HAL.pdf | Guanglei Wu
Stéphane Caro
email: stephane.caro@irccyn.ec-nantes.fr
Jiawei Wang
Design and Transmission Analysis of an Asymmetrical Spherical Parallel Manipulator
Keywords: Asymmetrical spherical parallel manipulator, transmission wrench screw, transmissibility, universal joint
This paper presents an asymmetrical spherical parallel manipulator and its transmissibility analysis. This manipulator contains a center shaft to both generate a decoupled unlimited-torsion motion and support the mobile platform for high positioning accuracy. This work addresses the transmission analysis and optimal design of the proposed manipulator based on its kinematic analysis. The input and output transmission indices of the manipulator are defined for its optimum design based on the virtual coefficient between the transmission wrenches and twist screws. The sets of optimal parameters are identified and the distribution of the transmission index is visualized. Moreover, a comparative study regarding to the performances with the symmetrical spherical parallel manipulators is conducted and the comparison shows the advantages of the proposed manipulator with respect to its spherical parallel manipulator counterparts.
Introduction
Three degree-of-freedom (3-DOF) spherical parallel manipulators (SPMs) are most widely used as camera-orientating device [START_REF] Gosselin | The Agile Eye: a high-performance three-degree-of-freedom camera-orienting device[END_REF], minimally invasive surgical robots [START_REF] Li | Design of spherical parallel mechanisms for application to laparoscopic surgery[END_REF] and wrist joints [START_REF] Asada | Kinematic and static characterization of wrist joints and their optimal design[END_REF] because of their large orientation workspace and high payload capacity. Since they can generate three pure rotations, another potential application is that they can work as a tool head for complicated surface machining. However, the general SPM can only produce a limited torsion motion under a certain tilt orientation, whereas an unlimited torsion is necessary in some common material processing such as milling or drilling. The co-axial input SPM reported in [START_REF] Asada | Kinematic and static characterization of wrist joints and their optimal design[END_REF] can achieve unlimited torsion, however, its unique structure leads to a complex input mechanism. Moreover, the general SPMs result in low positioning accuracy [START_REF] Wu | Mobile platform center shift in spherical parallel manipulators with flexible limbs[END_REF] without a ball-and-socket joint as the center of rotation. In this paper, an asymmetrical SPM (AsySPM) is proposed, which can generate unlimited torsion motion with enhanced positioning accuracy. This manipulator adopts a universal joint as the center of rotation supported by an input shaft at the center, which simplifies the manipulator architecture.
The design of 3-DOF SPMs can be based on many criteria, i.e., workspace [START_REF] Gosselin | The optimum kinematic design of a spherical three-degree-offreedom parallel manipulator[END_REF][START_REF] Bai | Optimum design of spherical parallel manipulator for a prescribed workspace[END_REF], dexterity [START_REF] Gosselin | A global performance index for the kinematic optimization of robotic manipulators[END_REF][START_REF] Bai | Modelling of a special class of spherical parallel manipulators with Euler parameters[END_REF][START_REF] Wu | Multiobjective optimum design of a 3-RRR spherical parallel manipulator with kinematic and dynamic dexterities[END_REF], singularity avoidance [START_REF] Bonev | Singularity loci of spherical parallel mechanisms[END_REF], stiffness [START_REF] Wu | Mobile platform center shift in spherical parallel manipulators with flexible limbs[END_REF][START_REF] Bidault | Structural optimization of a spherical parallel manipulator using a two-level approach[END_REF], dynamics [START_REF] Staicu | Recursive modelling in dynamics of Agile Wrist spherical parallel robot[END_REF][START_REF] Wu | Dynamic modeling and design optimization of a 3-DOF spherical parallel manipulator[END_REF], and so on. The prime function of mechanisms is to transmit motion/force between the input joint and the output joint. Henceforth, we will focus on the transmissibility analysis of the proposed SPM. In the design procedure, the performance index is of importance for performance evaluation of the manipulator. A number of transmission indices, such as the transmission angle, the pressure angle, and the transmission factor, have been proposed in the literature to evaluate the quality of motion/force transmission. The transmission angle was introduced by Alt [START_REF] Alt | Der Üertragungswinkel und seine bedeutung für das konstruieren periodischer getriebe[END_REF], developed by Hain [START_REF] Hain | Applied Kinematics[END_REF], and can be applied in linkage synthesis problems [START_REF] Dresner | Definition of pressure and transmission angles applicable to multi-input mechanisms[END_REF][START_REF] Bawab | Rectified synthesis of six-bar mechanisms with welldefined transmission angles for four-position motion generation[END_REF]. Takeda et al. [START_REF] Takeda | A transmission index for in-parallel wire-driven mechanisms[END_REF] proposed a transmission index (TI) for parallel mechanisms based on the minimum value of the cosine of the pressure angle between the leg and the moving platform, where all the inputs but one are fixed. Based on the virtual coefficient between the transmission wrench screw (TWS) and the output twist screw (OTS) introduced by Ball [19], Yuan et al. [START_REF] Yuan | Kinematic analysis of spatial mechanism by means of screw coordinates. part 2-analysis of spatial mechanisms[END_REF] used it as an unbounded transmission factor for spatial mechanisms. Sutherland and Roth [START_REF] Sutherland | A transmission index for spatial mechanisms[END_REF] defined the transmission index using a normalized form of the transmission factor, which depends only on the linkages' geometric properties. Chen and Angeles [START_REF] Chen | Generalized transmission index and transmission quality for spatial linkages[END_REF] proposed a generalized transmission index that is applicable to single-loop spatial linkages with fixed output and single or multiple DOFs. Wu et al. 
[START_REF] Wu | Optimal design of spherical 5R parallel manipulators considering the motion/force transmissibility[END_REF] introduced a frame-free index related to the motion/force transmission analysis for the optimum design of the spherical five-bar mechanism. Wang et al. [START_REF] Wang | Performance evaluation of parallel manipulators: Motion/force transmissibility and its index[END_REF] presented the transmission analysis of fully parallel manipulators based on the transmission indices defined by Sutherland, Roth [START_REF] Sutherland | A transmission index for spatial mechanisms[END_REF] and Takeda [START_REF] Takeda | A transmission index for in-parallel wire-driven mechanisms[END_REF]. Recently, some approaches to identify singularity and closeness to singularity have been reported via transmission analysis [START_REF] Liu | A new approach for singularity analysis and closeness measurement to singularities of parallel manipulators[END_REF][START_REF] Liu | A generalized approach for computing the transmission index of parallel mechanisms[END_REF]. Henceforth, the virtual coefficient based indices will be adopted in this paper for the evaluation of the transmission quality and the optimal design of the proposed manipulator. This paper presents an asymmetrical SPM and its optimum design with regard to its transmission quality. The inverse and forward kinematic problems of the AsySPM are analyzed based on the kinematic analysis of classical SPMs. By virtue of the virtual coefficient between the transmission wrench screw and output twist screw, the input and output transmission indices are defined for the optimum design, of which an optimization problem is formulated to achieve the optimal design of the proposed SPM. The performances of the proposed spherical manipulator are compared with those of its counterparts in order to highlight its advantages and drawbacks. and inner rings connected to each other with a revolute joint, the revolute joint being realized with a revolve bearing. The orientation of the outer ring is determined by two RRR1 legs and constrained in a vertical plane by a fully passive RRS leg or an alternative RPS one. Through a U-joint, the decoupled rotation of the inner ring is driven by the center shaft, which also supports the MP to improve the positioning accuracy. This manipulator can provide an unlimited rotation of the moving-platform, which can be used in milling or drilling operations and among other material processing. It can also be used as the active spherical joint, i.e., wrist or waist joint.
The coordinate system (x, y, z) is denoted in Fig. 1(b), of which the origin is located at the center of rotation, namely, point O. The ith active leg consists of three revolute joints, whose axes are parallel to unit vectors u i , v i , w i . Both of these two legs have the same architecture, defined by α 1 and α 2
angles. The design parameters of the base platform are γ and η. The design parameter of the mobile platform is β. It is noteworthy that the manipulator is symmetrical with respect to the yz plane.
Inverse Geometric Problem
Under the prescribed coordinate system, the unit vector u i is derived as:
u i = [(-1)^(i+1) sin η sin γ, cos η sin γ, -cos γ]^T , i = 1, 2 (1)
The unit vector v i of the axis of the intermediate revolute joint in the ith leg is obtained in terms of the input joint angle θ i following the angle-axis representation [START_REF] Angeles | Fundamentals of Robotic Mechanical Systems: Theory, Methods, and Algorithms[END_REF], namely,
v i = R(u i , θ i )v * i ; R(u i , θ i ) = cos θ i I 3 + sin θ i [u i ] × + (1 -cos θ i )u i ⊗ u i ( 2
)
where I 3 is the identity matrix, [u i ] × is the cross product matrix of u i and ⊗ is the tensor product.
Moreover, v * i is the unit vector of the axis of the intermediate revolute joint in the ith leg at the original configuration:
v * i = [(-1)^(i+1) sin η sin(γ + α 1 ), cos η sin(γ + α 1 ), -cos(γ + α 1 )]^T (3)
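Equations (1)-(3) can be evaluated directly. The following NumPy sketch is an illustration only (not the authors' implementation); rot() is the standard angle-axis rotation of Eq. (2).

```python
# Sketch of the angle-axis rotation of Eq. (2) and the leg vectors of Eqs. (1)-(3).
import numpy as np

def rot(u, theta):
    """Rotation matrix about unit axis u by angle theta (rad), Eq. (2)."""
    u = np.asarray(u, dtype=float)
    ux = np.array([[0.0, -u[2], u[1]],
                   [u[2], 0.0, -u[0]],
                   [-u[1], u[0], 0.0]])               # cross-product matrix [u]x
    return (np.cos(theta) * np.eye(3) + np.sin(theta) * ux
            + (1.0 - np.cos(theta)) * np.outer(u, u))  # tensor product u (x) u

def v_i(i, theta_i, gamma, alpha1, eta):
    """Unit vector v_i of the intermediate joint axis of leg i in {1, 2}, Eqs. (2)-(3)."""
    s = (-1) ** (i + 1)
    u = np.array([s*np.sin(eta)*np.sin(gamma),
                  np.cos(eta)*np.sin(gamma), -np.cos(gamma)])            # Eq. (1)
    v_star = np.array([s*np.sin(eta)*np.sin(gamma + alpha1),
                       np.cos(eta)*np.sin(gamma + alpha1),
                       -np.cos(gamma + alpha1)])                          # Eq. (3)
    return rot(u, theta_i) @ v_star
```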
The unit vector w i of the top revolute joint in the ith leg, is a function of the MP orientation:
w i = [x i , y i , z i ]^T = Q w * i ; w * i = [(-1)^(i+1) sin β, cos β, 0]^T (4)
where w * i is the unit vector of the axis of the top revolute joint of the ith leg when the mobile platform is located in its home configuration. Moreover, Q = R(x, φ x )R(y, φ y ) is the rotation matrix of the outer ring. Hence, the orientation of the inner ring can be described with Cardan angles (φ x , φ y , φ z ) and its output axis is denoted by:
p = Qz; z = [0, 0, 1] T (5)
According to the motion of the U-joint [START_REF] Weisbach | Mechanics of Engineering and of Machinery[END_REF], the input angle θ 3 of the center shaft is derived as:
θ 3 = tan -1 (tan φ z cos φ x cos φ y ) (6)
Referring to the inverse kinematic problem of the general SPMs, the loop-closure equation for the ith RRR leg is expressed as:
A i t 2 i + 2B i t i + C i = 0, i = 1, 2 (7)
with
A i = [(-1)^(i+1) x i sη + y i cη] s(γ - α 1 ) - z i c(γ - α 1 ) - cα 2 (8a)
B i = (x i cη - y i sη) sα 1 (8b)
C i = [(-1)^(i+1) x i sη + y i cη] s(γ + α 1 ) - z i c(γ + α 1 ) - cα 2 (8c)
The input angle displacements can then be solved as:
cos θ i = (1 - t i ^2)/(1 + t i ^2), sin θ i = 2 t i /(1 + t i ^2); t i = tan(θ i /2) = (-B i ± √(B i ^2 - A i C i ))/A i (9)
The inverse geometric problem has four solutions corresponding to the four working modes characterized by the sign "-/+" of u i × v i • w i , i.e., "-+", "--", "+-" and "++" modes. Here, the "-+" working mode is selected.
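A minimal sketch of this inverse geometric solution, Eqs. (7)-(9), is given below for illustration; it uses a sign parameter to select the branch of the square root, which is a simplification of the u i × v i · w i working-mode criterion.

```python
# Sketch of the inverse geometric problem of the two actuated legs, Eqs. (7)-(9).
import numpy as np

def inverse_leg_angle(w, i, alpha1, alpha2, gamma, eta, branch=-1):
    """Input angle theta_i of leg i for a given top-joint unit vector w = [x, y, z].
    branch in {-1, +1} selects one of the two roots of Eq. (9)."""
    x, y, z = w
    s = (-1) ** (i + 1)
    A = (s*x*np.sin(eta) + y*np.cos(eta))*np.sin(gamma - alpha1) \
        - z*np.cos(gamma - alpha1) - np.cos(alpha2)               # Eq. (8a)
    B = (x*np.cos(eta) - y*np.sin(eta))*np.sin(alpha1)            # Eq. (8b)
    C = (s*x*np.sin(eta) + y*np.cos(eta))*np.sin(gamma + alpha1) \
        - z*np.cos(gamma + alpha1) - np.cos(alpha2)               # Eq. (8c)
    disc = B*B - A*C
    if disc < 0:
        raise ValueError("pose outside the leg workspace")
    t = (-B + branch*np.sqrt(disc)) / A
    return 2.0 * np.arctan(t)                                     # theta_i, Eq. (9)
```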
Forward Geometric Problem
The forward geometric problem of the AsySPM can be solved by searching for the angles ϕ i of a spherical four-bar linkage with the given input angles θ i , i = 1, 2, as displayed in Fig. 2, where the input/output (I/O) equation takes the form [START_REF] Yang | Application of dual-number quaternion algebra to the analysis of spatial mechanisms[END_REF][START_REF] Bai | A unified input-output analysis of four-bar linkages[END_REF]:
f (ϕ 1 , ϕ 2 ) = k 1 + k 2 cos ϕ 1 + k 3 cos ϕ 1 cos ϕ 2 -k 4 cos ϕ 2 + k 5 sin ϕ 1 sin ϕ 2 = 0 (10)
with
k 1 ≡ Cα 0 C 2 α 2 -Cβ ; k 2 = k 4 ≡ Sα 0 Sα 2 Cα 2 ; k 3 ≡ Cα 0 S 2 α 2 ; k 5 ≡ S 2 α 2 (11)
where S and C stand for the sine and cosine functions, respectively, and
α 0 = cos -1 (v 1 • v 2 ), β = 2β.
On the other hand, the motion of the unit vector e is constrained in the yz plane due to the passive leg, thus:
g(ϕ 1 , ϕ 2 ) = x 1 + x 2 = 0 (12)
where the unit vector w i can also be represented with the angle-axis rotation matrix, namely,
w i = [x i , y i , z i ]^T = R(v i , ϕ i ) R(v 0 , α 2 ) v i ; v 0 = v 1 × v 2 / ||v 1 × v 2 || (13)
Solving Eqs. (10) and (12) leads to four solutions for the angles ϕ i , i = 1, 2, i.e., the two functions have four common points in the plane z = 0, as shown in Fig. 3. Figure 4 illustrates the four assembly modes corresponding to the four solutions. Then, substituting ϕ i into Eq. (13), the unit vector w i and the output Euler angles φ x and φ y can be obtained, and the output angle φ z can be obtained from Eq. (6) accordingly.
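For illustration, the two equations f = 0 and g = 0 can be solved numerically; the sketch below uses SciPy's fsolve and assumes the helper functions v_i() and rot() from the earlier sketch. It returns only the assembly mode selected by the initial guess, not all four solutions.

```python
# Sketch of the forward geometric problem: solve f = 0 (Eq. 10) and g = 0 (Eq. 12),
# then recover w_i from Eq. (13). Helpers v_i() and rot() are assumed available.
import numpy as np
from scipy.optimize import fsolve

def forward_pose(theta1, theta2, alpha1, alpha2, beta, gamma, eta, guess=(0.5, 0.5)):
    v1 = v_i(1, theta1, gamma, alpha1, eta)
    v2 = v_i(2, theta2, gamma, alpha1, eta)
    v0 = np.cross(v1, v2); v0 /= np.linalg.norm(v0)
    a0 = np.arccos(np.clip(v1 @ v2, -1.0, 1.0))          # alpha_0 = acos(v1 . v2)

    def w(i, phi):                                        # Eq. (13)
        vi = v1 if i == 1 else v2
        return rot(vi, phi) @ rot(v0, alpha2) @ vi

    def equations(p):
        phi1, phi2 = p
        k1 = np.cos(a0)*np.cos(alpha2)**2 - np.cos(2*beta)
        k2 = k4 = np.sin(a0)*np.sin(alpha2)*np.cos(alpha2)
        k3 = np.cos(a0)*np.sin(alpha2)**2
        k5 = np.sin(alpha2)**2
        f = (k1 + k2*np.cos(phi1) + k3*np.cos(phi1)*np.cos(phi2)
             - k4*np.cos(phi2) + k5*np.sin(phi1)*np.sin(phi2))   # Eq. (10)
        g = w(1, phi1)[0] + w(2, phi2)[0]                        # Eq. (12)
        return [f, g]

    phi1, phi2 = fsolve(equations, guess)
    return w(1, phi1), w(2, phi2)
```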
Transmission Index
The main function of the mechanism is to transmit motion from the input element to the output element. As a result, the force applied to the output element is to be transmitted to the input one. The arising internal wrench due to transmission is defined as a transmission wrench, which is characterized by the magnitude of the force and transmission wrench screw (TWS), and the latter is used to evaluate the quality of the transmission. In order to evaluate the transmission performance of the manipulator, some transmission indices (TI) should be defined.
Transmission Wrench and Twist Screw
As shown in Fig. 5(a), the instantaneous motion of a rigid body can be represented by using a twist screw defined by its Plücker coordinates:
$ T = (ω ; v) = ω $T = (L 1 , M 1 , N 1 ; P * 1 , Q * 1 , R * 1 ) ( 14
)
where ω is the amplitude of the twist screw and $T is the unit twist screw. Likewise, a wrench exerted on the rigid body can be expressed as a wrench screw defined by its Plücker coordinates as:
$ W = (f ; m) = f $W = (L 2 , M 2 , N 2 ; P * 2 , Q * 2 , R * 2 ) ( 15
)
where f is the amplitude of the wrench screw and $W is the unit wrench screw.
The reciprocal product between the two screws $ T and $ W is defined as:
$ T • $ W = f • v + m • ω = L 1 P * 2 + M 1 Q * 2 + N 1 R * 2 + L 2 P * 1 + M 2 Q * 1 + N 2 R * 1 ( 16
)
This reciprocal product amounts to the instantaneous power between the wrench and the twist. Subsequently, the transmission index is defined as a dimensionless index [START_REF] Chen | Generalized transmission index and transmission quality for spatial linkages[END_REF]:
TI = | $T • $W | / | $T • $W | max (17)
where | $T • $W | max represents the potential maximal magnitude of the reciprocal product between $T and $W . The larger TI, the more important the power transmission from the wrench to the twist, namely, the better the transmission quality.
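A direct transcription of Eqs. (16)-(17) is straightforward; the snippet below is a generic illustration for screws expressed in axis coordinates.

```python
# Sketch of the reciprocal product of Eq. (16) and the index of Eq. (17).
import numpy as np

def reciprocal_product(twist, wrench):
    """twist = (omega, v), wrench = (f, m), each a pair of 3-vectors: Eq. (16)."""
    omega, v = twist
    f, m = wrench
    return np.dot(f, v) + np.dot(m, omega)

def transmission_index(twist, wrench, max_value):
    """Eq. (17): |reciprocal product| normalised by its potential maximum."""
    return abs(reciprocal_product(twist, wrench)) / max_value
```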
For a planar manipulator, this index corresponds to the transmission angle, which is the smallest angle between the direction of velocity of the driven link and the direction of absolute velocity vector of output link both taken at the common point [START_REF] Hartenberg | Kinematic Synthesis of Linkages[END_REF]. As illustrated in Fig. 5(b), it is the angle σ between the follower link and the coupler of a four-bar mechanism, also known as forward transmission angle.
Conversely, the angle ψ is the inverse transmission angle. Therefore, the input (λ I ) and output (λ O ) transmission can be expressed as:
λ I = | sin ψ| ; λ O = | sin σ| (18)
Input Transmission Index
The wrench applied to a SPM is usually a pure moment, thus, for a spherical RRR leg, the transmission wrench is a pure torque. As the TWS is reciprocal to all the passive joint screws in the leg, the axis of the wrench in the ith leg is perpendicular to the plane OB i C i and passes through point O, as shown in Fig. 6(a). According to Eq. ( 17), the input transmission index of the ith RRR leg is obtained as:
λ Ii = | $Ii • $W i | | $Ii • $W i | max = |u i • τ i | |u i • τ i | max , i = 1, 2 (19)
with $Ii = (u i ; 0);
$W i = (0; τ i ) = (0; v i × w i / ||v i × w i ||) (20)
When $W i lies in the plane OA i B i , i.e., plane OA i B i being perpendicular to plane
OB i C i , |u i • τ i |
reaches its maximum value. This situation occurs when the angle between the wrench screw and the twist screw is equal to ψ 0 or 180 o -ψ 0 , namely,
|u i • τ i | max = | cos ψ 0 | = | sin α 1 | (21)
From Fig. 6(a), Eq. ( 19) is equivalent to
λ Ii = | cos ψ 1 | | cos ψ 0 | = | cos ψ 2 | = | sin ψ i | = 1 -cos 2 ψ i , i = 1, 2 (22)
where ψ i is the inverse transmission angle, i.e., the angle between planes OA i B i and OB i C i , and
cos ψ i = (v i × u i ) • (v i × w i ) / (||v i × u i || ||v i × w i ||) (23)
Finally, the input transmission index of the manipulator is defined as:
λ I = min{λ Ii }, i = 1, 2 (24)
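The input transmission index of Eqs. (22)-(24) only requires the unit vectors u i , v i , w i of the two actuated legs; a minimal illustrative sketch:

```python
# Sketch of the input transmission index, Eqs. (22)-(24).
import numpy as np

def cos_psi(u, v, w):
    """Eq. (23): cosine of the angle between planes OA_iB_i and OB_iC_i."""
    a, b = np.cross(v, u), np.cross(v, w)
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def input_transmission_index(legs):
    """legs: iterable of (u_i, v_i, w_i) triplets for the actuated legs; Eq. (24)."""
    return min(np.sqrt(1.0 - cos_psi(u, v, w)**2) for (u, v, w) in legs)
```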
Output Transmission Index
Referring to the pressure angle at the attachment point of the leg with the moving platform [START_REF] Takeda | A transmission index for in-parallel wire-driven mechanisms[END_REF], the output transmission index of a single leg can be defined by fixing the other input joints, the parallel manipulator thus becoming a 1-DOF system. By fixing the active joint at point A 2 (point B 2 will be fixed) and keeping the joint at point A 1 actuated in Fig. 6(b), the transmission wrench $ W 2 becomes a constraint wrench for the mobile platform. The instantaneous motion of the mobile platform is a rotation about a unique vector constrained by $ W 2 and the vector e in the passive leg, namely,
s 1 = τ 2 × e / ||τ 2 × e||; e = Q j, j = [0, 1, 0]^T (25)
Henceforth, the output twist screw can be expressed as: $O1 = (s 1 ; 0). Based on Eq. ( 17), the output transmission index of the first leg is defined by
λ O1 = | $O1 • $W 1 | | $O1 • $W 1 | max = |s 1 • τ 1 | |s 1 • τ 1 | max (26)
When s 1 and τ 1 are parallel, i.e., both planes OC 1 C 2 and OB 1 C 1 being perpendicular to OB 2 C 2 , |s 1 • τ 1 | reaches its maximum, namely, |s 1 • τ 1 | max = cos(0) = 1. Equation (26) is rewritten as:
λ O1 = |s 1 • τ 1 | = |τ 12 • e| / ||τ 2 × e||; τ 12 = τ 1 × τ 2 (27)
By the same token, the output transmission index of the second leg is derived as:
λ O2 = |τ 12 • e| / ||τ 1 × e|| (28)
Similarly to Eq. ( 24), the output transmission index of the manipulator is defined as
λ O = min{λ Oi }, i = 1, 2 (29)
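Similarly, Eqs. (27)-(29) can be evaluated from the leg vectors and the passive-leg vector e; the sketch below is illustrative only.

```python
# Sketch of the output transmission index, Eqs. (27)-(29).
import numpy as np

def output_transmission_index(v1, w1, v2, w2, e):
    """v_i, w_i: unit vectors of leg i; e: unit vector of the passive leg (e = Q j)."""
    tau1 = np.cross(v1, w1); tau1 /= np.linalg.norm(tau1)   # Eq. (20)
    tau2 = np.cross(v2, w2); tau2 /= np.linalg.norm(tau2)
    tau12 = np.cross(tau1, tau2)
    lam_o1 = abs(np.dot(tau12, e)) / np.linalg.norm(np.cross(tau2, e))   # Eq. (27)
    lam_o2 = abs(np.dot(tau12, e)) / np.linalg.norm(np.cross(tau1, e))   # Eq. (28)
    return min(lam_o1, lam_o2)                                           # Eq. (29)
```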
Transmission Efficiency of the U joint
The output of the inner ring of the mobile platform is driven by the center shaft through a universal joint, consequently, the TI of the U joint (UTI) is defined as:
λ U = |p • z| = | cos φ x cos φ y | (30)
where the vectors p and z are defined in Eq. ( 5).
Local Transmission Index
On the basis of the ITI, OTI and UTI, the local transmission index (LTI) of the manipulator under study, namely, the transmission index at a prescribed orientation, is defined as:
λ = min{λ I , λ O , λ U } (31)
The higher λ, the higher the quality of the input and output transmission. The distribution of the LTI can indicate the workspace (WS) region with a good transmissibility. Thus, this index can be used for either the evaluation of the transmission quality or the design optimization.
Optimal Design and Analysis
The optimum design of SPMs can be based on many aspects, such as workspace, dexterity, singularity, and so on. However, these criteria are usually antagonistic. In order for the proposed AsySPM to achieve a regular workspace (RWS) with a good transmission quality, the following optimization problem is formulated:
maximize f (x) = λ for θ ∈ [0, θ 0 ] (32)
over x = [α 1 ; α 2 ; β; γ; η]
subject to g 1 : 45° ≤ {α 1 , α 2 } ≤ 120°, g 2 : 15° ≤ {β, η} ≤ 60°, g 3 : 30° ≤ γ ≤ 75°
where θ is the tilt angle and θ 0 defines the workspace region, as shown in Fig. 7. Moreover, the lower and upper bounds of the design variables are assigned in order to avoid mechanical collisions. This problem can be solved with the optimization toolbox of the mathematical software at hand. Hereby, it is solved with the genetic algorithm (GA) toolbox in Matlab. When θ 0 = 60 o , it is found that quality transmission [START_REF] Tao | Applied Linkage Synthesis[END_REF], whence the TI is λ = sin 45 o ≈ 0.7, i.e., the manipulator at a configuration with LTI λ ≥ 0.7 has good motion/force transmission. Henceforth, a set of poses in which LTI is greater than 0.7 is identified as high-transmissibility workspace (HTW), such as the blue dashed line enveloped region displayed in Fig. 8(a). The area of HTW can be used to evaluate the manipulator performance. The larger the HTW, the better the transmission quality of the manipulator.
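For illustration only: the paper solves problem (32) with MATLAB's GA toolbox; an equivalent search can be prototyped with SciPy's differential evolution, where lti(x, theta, phi) is a placeholder that evaluates the LTI of Eq. (31) for a candidate design x = [α 1 ; α 2 ; β; γ; η].

```python
# Illustrative optimisation sketch (not the authors' GA setup).
import numpy as np
from scipy.optimize import differential_evolution

deg = np.pi / 180.0
bounds = [(45*deg, 120*deg), (45*deg, 120*deg),                  # alpha1, alpha2
          (15*deg, 60*deg), (30*deg, 75*deg), (15*deg, 60*deg)]  # beta, gamma, eta

def objective(x, theta0=60*deg):
    # minimum LTI over a coarse grid of the prescribed workspace (to be maximised)
    grid = [(t, p) for t in np.linspace(0.0, theta0, 7)
                   for p in np.linspace(0.0, 2*np.pi, 13)]
    return -min(lti(x, t, p) for (t, p) in grid)   # lti(): placeholder evaluator

result = differential_evolution(objective, bounds, seed=1)
```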
When the objective function in the optimization problem (32) is replaced by f (x) = A HTW , where
A HTW is the area of HTW, the optimal parameters for θ 0 = 60 o are found as:
x = [53. ...] (33). The HTW of the manipulator with the geometrical parameters given in Eq. (34) is much larger than the HTW of the manipulator with the geometrical parameters given in Eq. (33).
To evaluate the transmissibility of the manipulator within a designated workspace, a transmission index (WTI) similar to GCI [START_REF] Gosselin | A global performance index for the kinematic optimization of robotic manipulators[END_REF] is defined over the workspace W , which is calculated through a discrete approach in practice, namely,
WTI = ∫ λ dW / ∫ dW or WTI = (1/W) Σ_{i=1..n} λ i ∆W = (1/n) Σ_{i=1..n} λ i (35)
where n is the number of discretized poses. The index obtained through the above equation is an arithmetic mean, which can be replaced with a quadratic mean for a better indication of the transmission; WTI is subsequently redefined as
WTI = √( (1/n) Σ_{i=1..n} λ i ^2 ) (36)
As a consequence, with θ ∈ [0, 60 o ], WTI is equal to 0.66 for the first design and is equal to 0.72 for the second one.
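A direct transcription of Eqs. (35)-(36) over n sampled poses, for illustration:

```python
# Sketch of the workspace transmission index, Eqs. (35)-(36).
import numpy as np

def wti(lti_samples, quadratic=True):
    """lti_samples: LTI values evaluated at n discretized poses of the workspace."""
    lam = np.asarray(lti_samples, dtype=float)
    if quadratic:
        return float(np.sqrt(np.mean(lam**2)))   # Eq. (36), quadratic mean
    return float(np.mean(lam))                   # Eq. (35), arithmetic mean
```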
Comparison with Symmetrical SPMs
In this section, a comparative study is conducted between the asymmetrical and symmetrical SPMs.
A general SPM is shown in Fig. 9(a), which consists of three identical RRR legs connected to the base and the mobile platform. Moreover, β and γ define the geometry of two triangular pyramids on the mobile and the base platforms, respectively. A base coordinate system (x, y, z) is located at point O and the z axis is normal to the bottom surface of the base pyramid and points upwards, while the y axis is located in the plane spanned by the z-axis and vector u 1 . Figure 9(b) illustrates the Agile Wrist [START_REF] Bidault | Structural optimization of a spherical parallel manipulator using a two-level approach[END_REF] while Fig. 9(c) shows a co-axial input SPM (CoSPM) of a special case with γ = 0. Their geometrical parameters are given by Table 1.
LTI distributions
Referring to Eqs. ( 19) and ( 26), the ITI and OTI of each leg for the symmetrical SPMs can be obtained.
The difference lies in the output twist screw of Eq. ( 25) in the calculation of OTI, namely,
s i = (v j × w j ) × (v k × w k ) / ||(v j × w j ) × (v k × w k )||; i, j, k ∈ {1, 2, 3}, i ≠ j ≠ k (37)
Using the index defined in Eq. ( 31 the HTW is extremely small. When α 1 reduces to 47 o which yields better kinematic and dynamic dexterities [START_REF] Wu | Dynamic modeling and design optimization of a 3-DOF spherical parallel manipulator[END_REF], the HTW of full torsion is a spherical cap with θ ∈ [0, 30 o ].
Comparison of Overall Performances
The performance comparison of the asymmetrical SPM with the Agile Wrist and the Co-axial input spherical parallel manipulator is summarized in Table 3, which shows the advantages and drawbacks of the proposed AsySPM with respect to its symmetrical counterparts. The AsySPM can have the advantages of the general and co-axial input SPMs simultaneously except the drawback of evenly distributed power consumption.
Conclusion
This paper introduced an asymmetrical spherical parallel manipulator, whose mobile platform is composed of an inner ring and an outer ring. The orientation of the outer ring is determined by two RRR legs as well as a fully passive leg, and the inner ring can generate a decoupled unlimited-torsion motion thanks to a central input shaft and a universal joint. Moreover, the center shaft can improve the positioning accuracy of the center of rotation of the manipulator. This manipulator can be used as a tool head for complicated surface machining, such as milling or drilling, and can also work as an active spherical joint, i.e., a wrist or waist joint.
Figure 1 :Figure 1
11 Figure 1: The asymmetrical spherical parallel manipulator: (a) CAD model; (b) coordinate system.
Figure 2 :
2 Figure 2: The spherical four-bar linkage.
Figure 3 :
3 Figure 3: Graphical representation (four black points) of the four solutions to the forward geometric problem of the AsySPM.
Figure 4 :
4 Figure 4: The four assembly modes of the AsySPM.
Figure 5 :
5 Figure 5: Transmission wrench and twist screw: (a) the twist screw and wrench screw of a rigid body; (b) a planar four-bar linkage.
Figure 6 :
6 Figure 6: The transmission wrench screw and transmission angle: (a) input transmission; (b) output tranmission.
Figure 7 :
7 Figure 7: The spherical surface of a designated regular workspace.
3 o ; 69.5 o ; 15 o ; 75 o ; 60 o ] (34) The corresponding distribution of performance index is shown in Fig. 8(b), from which it is seen that the manipulator can still reach a large RWS with θ = 75 o , whereas, the minimum LTI within θ ∈ [0, 60 o ] reduces to 0.3 compared to Fig. 8(a). In contrast, the HTW of the manipulator with the
Figure 8 :
8 Figure 8: The isocontours of the transmission index of the asymmetrical SPM for workspace θ 0 = 60 o : (a) max(λ); (b) max(HTW).
Figure 9 :
9 Figure 9: The symmetrical SPMs: (a) general SPM; (b) Agile Wrist; (c) co-axial input SPM.
Figure 10 :
10 Figure 10: The LTI isocontours of the Agile Wrist: (a) φ z = 0; (b) φ z = 30 o .
Figure 11 :
11 Figure 11: The LTI isocontours of the CoSPM with torsion φ z = 0: (a) α 1 = 60 o ; (b) α 1 = 47 o .
Table 1 :
1 Geometrical parameters of the Agile Wrist and CoSPM.
Agile Wrist CoSPM
α 1 , α deg] 90 sin -1 ( √ 6/3) 60(47) 90 90
2 [deg] β, γ [rad] α 1 [deg] α 2 [deg] β [
Table 2 :
2 WTI for the three SPMs.
AsySPM Agile Wirst CoSPM
parameters (33) parameters (34) α 1 = 60 o α 1 = 47 o
θ ∈ [0, 45 o ] 0.68 0.77 0.75 0.58 0.79
θ ∈ [0, 60 o ] 0.66 0.72 0.69 0.54
Table 3 :
3 Performance comparison of the asymmetrical SPM with the Agile Wrist and the Co-axial input SPM.
AsySPM Agile Wrist CoSPM
Throughout this paper, R, U, S and P stand for revolute, universal, spherical and prismatic joints, respectively, and an underlined letter indicates an actuated joint. |
01757303 | en | [
"shs.anthro-bio",
"spi.mat"
] | 2024/03/05 22:32:10 | 2017 | https://hal.science/hal-01757303/file/pan_19793.pdf | Lei Pan
email: panlei@ivpp.ac.cn
John Francis Thackeray
Jean Dumoncel
| Cl Ement Zanolli
Anna Oettl
Frikkie De Beer
Jakobus Hoffman
Benjamin Duployer
Christophe Tenailleau
| Jos E Braga
Intra-individual metameric variation expressed at the enameldentine junction of lower post-canine dentition of South African fossil hominins and modern humans
Objectives: The aim of this study is to compare the degree and patterning of inter-and intraindividual metameric variation in South African australopiths, early Homo and modern humans.
Metameric variation likely reflects developmental and taxonomical issues, and could also be used to infer ecological and functional adaptations. However, its patterning along the early hominin postcanine dentition, particularly among South African fossil hominins, remains unexplored.
Materials and Methods: Using microfocus X-ray computed tomography (mXCT) and geometric morphometric tools, we studied the enamel-dentine junction (EDJ) morphology and we investigated the intra-and inter-individual EDJ metameric variation among eight australopiths and two early Homo specimens from South Africa, as well as 32 modern humans.
Results: Along post-canine dentition, shape changes between metameres represented by relative positions and height of dentine horns, outlines of the EDJ occlusal table are reported in modern and fossil taxa. Comparisons of EDJ mean shapes and multivariate analyses reveal substantial variation in the direction and magnitude of metameric shape changes among taxa, but some common trends can be found. In modern humans, both the direction and magnitude of metameric shape change show increased variability in M 2 -M 3 compared to M 1 -M 2 . Fossil specimens are clustered together showing similar magnitudes of shape change. Along M 2 -M 3 , the lengths of their metameric vectors are not as variable as those of modern humans, but they display considerable variability in the direction of shape change.
Conclusion:
The distalward increase of metameric variation along the modern human molar row is consistent with the odontogenetic models of molar row structure (inhibitory cascade model).
Though much remains to be tested, the variable trends and magnitudes in metamerism in fossil hominins reported here, together with differences in the scale of shape change between modern humans and fossil hominins may provide valuable information regarding functional morphology and developmental processes in fossil species. In mammalian teeth, metameric variations observed among species may reflect differences in developmental processes [START_REF] Braga | The enamel-dentine junction in the postcanine dentition of Australopithecus africanus: Intra-individual metameric and antimeric variation[END_REF][START_REF] Evans | A simple rule governs the evolution and development of hominin tooth size[END_REF][START_REF] Morita | Exploring metameric variation in human molars: A morphological study using morphometric mapping[END_REF][START_REF] Weiss | Duplication with variation: Metameric logic in evolution from genes to morphology[END_REF]. For instance, size-related shape variations in human upper molars are consistent with odontogenetic models of molar row structure and molar crown morphology (inhibitory cascade model) (Kavanagh, [START_REF] Kavanagh | Predicting evolutionary patterns of mammalian teeth from development[END_REF]. Metameric variation could also be used to infer ecological conditions and/or functional adaptations [START_REF] Kavanagh | Predicting evolutionary patterns of mammalian teeth from development[END_REF][START_REF] Polly | Development with a bite[END_REF], as well as to distinguish symplesiomorphic traits from autamorphic traits [START_REF] Hlusko | Identifying metameric variation in extant hominoid and fossil hominid mandibular molars[END_REF]. Dental metameric studies also yield some insights into primate taxonomy (e.g., Bailey, Benazzi, andHublin, 2014, 2016;[START_REF] Braga | The enamel-dentine junction in the postcanine dentition of Australopithecus africanus: Intra-individual metameric and antimeric variation[END_REF][START_REF] Hlusko | Identifying metameric variation in extant hominoid and fossil hominid mandibular molars[END_REF][START_REF] Morita | Exploring metameric variation in human molars: A morphological study using morphometric mapping[END_REF][START_REF] Singleton | Allometric and metameric shape variation in Pan mandibular molars: A digital morphometric analysis[END_REF][START_REF] Weiss | Duplication with variation: Metameric logic in evolution from genes to morphology[END_REF]. However, few analyzes of metameric variation have yet been conducted at the intraindividual level (i.e., strictly based on teeth from the same individuals) [START_REF] Braga | The enamel-dentine junction in the postcanine dentition of Australopithecus africanus: Intra-individual metameric and antimeric variation[END_REF]: most studies were based on teeth representing different sets of individuals [START_REF] Hlusko | Identifying metameric variation in extant hominoid and fossil hominid mandibular molars[END_REF][START_REF] Olejniczak | Morphology of the enamel-dentine junction in sections of anthropoid primate maxillary molars[END_REF]Skinner, Gunz, & Wood, 2008a;[START_REF] Smith | Modern human molar enamel thickness and enamel-dentine junction shape[END_REF]. Besides the potential usefulness of metameric variation for developmental, functional/ecological and taxonomic inferences, it may also help to identify position of isolated teeth among fossil hominin assemblages.
Previous studies of metameric variation mainly focused on the outer enamel surface (OES) [START_REF] Hlusko | Identifying metameric variation in extant hominoid and fossil hominid mandibular molars[END_REF][START_REF] Singleton | Allometric and metameric shape variation in Pan mandibular molars: A digital morphometric analysis[END_REF]. Since the assessment of OES features may be obscured by occlusal wear, more recent analyses have investigated the enamel-dentine junction (EDJ). However, only a few studies have yet dealt with the relevance of the EDJ morphology for assessing metameric variation in hominin teeth [START_REF] Braga | The enamel-dentine junction in the postcanine dentition of Australopithecus africanus: Intra-individual metameric and antimeric variation[END_REF]Skinner et al., 2008a). As noted by [START_REF] Braga | The enamel-dentine junction in the postcanine dentition of Australopithecus africanus: Intra-individual metameric and antimeric variation[END_REF], modern humans exhibit stronger metameric variation between first and second molars, as compared with Au. africanus. Another study based on EDJ shape showed that Au. africanus and P. robustus display similar 3D-EDJ intra-taxon metameric trend along the molar dentition, but P. robustus preserves a marked reduction in the buccolingual breadth of the distal crown between M 2 and M 3 , and a marked interradicular extension of the enamel cap in M 1 and M 2 (Skinner et al., 2008a).
Here we assess the metameric variation within and between individuals and groups at the EDJ along the post-canine dentition in three Plio-Pleistocene hominin taxa (Australopithecus, Paranthropus and early Homo) and in modern humans. We mainly aim to test whether the intra-and inter-individual metameric patterns and scales differ between australopiths and genus Homo. In other words, is the metameric variation observed in modern humans seen in early Homo and/or australopiths? We also test if 3D-EDJ metameric variation is a useful indicator of dental position in lower postcanine teeth.
| MATERIALS A ND METHODS
| Study sample
We selected only the postcanine dentition from mandibular specimens -isolated teeth were excluded-to study the intra-individual metameric variation. Whenever the antimeres were preserved, only the teeth on the better preserved side were used in our analyses. The fossil hominin materials came from collections housed at the Ditsong Museum of Natural History (Pretoria, South Africa). Our fossil sample includes permanent lower post-canine teeth representing P. robustus (N 5 17, six individuals), Au. africanus (N 5 4, one individual) and early Pleistocene Homo specimens, attributed to Homo erectus s.l. [START_REF] Wood | Reconstructing human evolution: Achievements, challenges, and opportunities[END_REF]; but see [START_REF] Clarke | Australopithecus and early Homo in Southern Africa[END_REF][START_REF] Curnoe | Odontometric systematic assessment of the Swartkrans SK 15 mandible[END_REF]. The modern human reference material includes 88 teeth, representing 32 individuals of European, Asian and African origin (Table 1).
Because of unconformable wear stages in many modern specimens that affected the EDJ morphology in molar dentition, we used different individuals for comparisons M 1 -M 2 and M 2 -M 3 (detailed in Table 1). The degree of wear for each fossil specimen is listed in Table 1, according to tooth wear categories proposed by [START_REF] Molnar | Human tooth wear, tooth function and cultural variability[END_REF]. In the majority of cases, our specimens show no dentine horn wear on EDJs, but when necessary the dentine horn tip was reconstructed based on the morphology of the intact cusps.
The specimens were scanned using four comparable X-ray microtomographic instruments: an X-Tek (Metris) XT H225L industrial mXCT Micro-XCT image stacks were imported for semi-automated (modules "magic wand" and "threshold") and automated (module "watershed") segmentations. After the segmentation, the EDJ surface was generated using the "unconstrained smoothing" parameter.
| Analyses
In some cases, especially in fossil specimens, only a tooth from one side is preserved (details in Table 1). Any image data from the left side were flipped to obtain a homogeneous right-sided sample. For each tooth, we defined a set of main landmarks as well as a set of semilandmarks along the marginal ridge between the main dentine horns (DHs), as an approximation of the marginal ridge (Figure 1). Along each semi-landmark section, a smooth curve was interpolated using a Bspline function using the "Create Splines" module. Interpolated curves were then imported into R (R Development Core Team, 2012), and were resampled to collect semi-landmarks that were equally spaced along each section/curve, delimited by traditional landmarks. For premolars, the main landmarks were placed on the tips of the DHs (i.e., 1. protoconid, 2. metaconid), with 55 semi-landmarks in between (15 on the mesial marginal ridge, 30 on the distal marginal ridge and 10 on the essential crest that connects the two DHs). The molar landmark set included four main landmarks on the tips of the DHs (i.e., 1. protoconid, 2. metaconid, 3. entoconid, 4. hypoconid), with 60 semi-landmarks, forming a continuous line, beginning at the tip of the protoconid and moving in a counter-clockwise direction.
In order to investigate intra-taxon metameric variation, the samples were grouped into three pairs, according to tooth position (P 3 -P 4 , M 1 -M 2 , M 2 -M 3 ), and comparisons were performed within each pair.
The landmark sets were imported in R software (R Development Core Team, 2012), and statistical analyses were conducted subsequently. The study of intra-individual metameric variation in shape was completed using R packages ade4 and Morpho; each sample of landmark configurations was registered using generalized Procrustes analysis (GPA; [START_REF] Gower | Generalized procrustes analysis[END_REF], treating semi-landmarks as equally spaced points. The resulting matrix of shape coordinates was analyzed in three ways. First, to visualize the average metameric shape differences for each taxon, mean configurations of each of the tooth classes (except for early Homo and Au. africanus represented by an isolated individual) were created, and superimposed using smooth curves with regard to tooth positons.
A between-group PCA (bgPCA) was performed based on the Procrustes shape coordinates to explore the distribution of each group in shape space [START_REF] Braga | A new partial temporal bone of a juvenile hominin from the site of Kromdraai B (South Africa)[END_REF][START_REF] Gunz | The mammalian bony labyrinth reconsidered, introducing a comprehensive geometric morphometric approach[END_REF][START_REF] Mitteroecker | Linear discrimination, ordination, and the visualization of selection gradients in modern morphometrics[END_REF][START_REF] Pan | Further morphological evidence on South African earliest Homo lower postcanine dentition: Enamel thickness and enamel dentine junction[END_REF][START_REF] Ritzman | Mandibular ramus shape of Australopithecus sediba suggests a single variable species[END_REF]. The bgPCA computes a covariance matrix of the predefined group means and then projects all specimens into the space spanned by the eigenvectors of this covariance matrix. Between-group PCAs were conducted separately for each of the metameric pairs (P 3 -P 4 , M 1 -M 2 , M 2 -M 3 ). Because early Homo and Au. africanus have only one specimen, respectively, representing their groups, they were projected subsequently onto the shape space, ordinated by the modern human and P. robustus tooth means, without being assigned to groups a priori. The first two axes were plotted in order to visualize the trends and vectors of EDJ shape change between metameres of the same individual. As the metameric vectors generated by bgPCA are actually placed in a multidimensional shape space, the bgPC1-bgPC3 axes were also plotted, shown in Supporting Information Figure S1. Our interpretation of the spatial positions and metameric vectors between specimens are mainly based on the first two bgPCs, but we refer to the third bgPC as well. As a complementary method of comparing the magnitude of shape variation between metameres, hierarchical clustering (HC) as well as subsequent dendrograms were We visualized the magnitude of shape variation using dendrograms and a 0 to 25 scale. Ward's minimum variance method was applied in HC, as it aims at finding compact clusters in which individuals are grouped. We chose to use this method because it minimizes the increase of intra-group inertia, and maximizes the inter-group inertia, at each step of the algorithm.
| RESULTS
The within-taxon metameric variation of P3-P4, M1-M2 and M2-M3 mean shapes is illustrated in Figures 2 and 3; bgPCA plots and dendrograms are illustrated in Figure 4, and the bgPC1-bgPC3 axes are plotted in Supporting Information Figure S1, with the max-min values along the first two bgPC axes shown as landmark series in Supporting Information Figure S2. Specimens are marked according to species and dental position.
Our analyses reveal appreciable variation in the metameric relationships within and between taxa, but some common trends in shape change can be found. From P3 to P4, a distalward displacement of the mesial marginal ridge is shared among taxa (Figure 2), and a decrease in the height of the metaconid dentine horn is present in modern humans and in the Au. africanus specimen Sts 52. In the axes represented here (Figure 4A, Supporting Information Figure S1A), the P3s of the early Homo and Au. africanus specimens resemble those of modern humans (note that Sts 52 P3 is placed just between the range of modern humans and that of P. robustus), whereas their P4s are similar to those of P. robustus (Figure 4A, Supporting Information Figure S1A).
However, as our analyses mix interspecific and metameric variation, inferences about EDJ shape differences or similarities between isolated specimens should be made with caution.
Almost all modern human M2s and many M3s lack a hypoconulid, and hence the hypoconulid dentine horn tip was not included in the homologous landmarks (Figure 1D-F). Therefore, differences in the height and relative position of the hypoconulid dentine horn are not recorded in the present study.
| DISCUSSION
Tooth morphology is controlled by the combined effects of biochemical signaling that is graded from mesial to distal, both at the level of the tooth row and at the level of the individual crown [START_REF] Jernvall | Linking development with generation of novelty in mammalian teeth[END_REF][START_REF] Weiss | Duplication with variation: Metameric logic in evolution from genes to morphology[END_REF]. As suggested by the inhibitory cascade model [START_REF] Evans | A simple rule governs the evolution and development of hominin tooth size[END_REF][START_REF] Kavanagh | Predicting evolutionary patterns of mammalian teeth from development[END_REF], the development of each deciduous and permanent molar is controlled by the balance between inhibitor molecules from mesially located tooth germs and activator molecules from the mesenchyme.
The ratio of genetic activation and inhibition during development determines the relative size of the dental elements in the dental row.
However, permanent premolars derive independently from deciduous molars, so the tooth germs of P3 and P4 are not directly connected; caution is therefore needed when linking the inhibitory cascade model to metameric variation in permanent premolars.
Metameric differences are often subtle, and the risk of conflating metameric and taxonomic variation is a general concern [START_REF] Hlusko | Identifying metameric variation in extant hominoid and fossil hominid mandibular molars[END_REF][START_REF] Singleton | Allometric and metameric shape variation in Pan mandibular molars: A digital morphometric analysis[END_REF]. However, dental metameric variation in Plio-Pleistocene hominins remains relatively unexplored, owing to the difficulty of quantifying the complex and subtle shape variation in premolar and molar crowns. But given the small sample size, further investigation will be needed.
In all three pairs of metameres, hierarchical clustering placed fossil specimens together, showing a similar degree of shape change, but the magnitude of metameric variation is quite diversified in modern humans. Moreover, as revealed by our bivariate plots, modern human M3s show a large range of shape variability. This is consistent with previous observations using conventional tools [START_REF] Garn | Third molar agenesis and size reduction of the remaining teeth[END_REF][START_REF] Townsend | Molar intercuspal dimensions: Genetic input to phenotypic variation[END_REF] or geometric morphometrics [START_REF] Morita | Exploring metameric variation in human molars: A morphological study using morphometric mapping[END_REF][START_REF] Pan | Further morphological evidence on South African earliest Homo lower postcanine dentition: Enamel thickness and enamel dentine junction[END_REF]. It has been suggested that in modern humans the inter-individual differences are larger for the M2s than for the M1s [START_REF] Braga | The enamel-dentine junction in the postcanine dentition of Australopithecus africanus: Intra-individual metameric and antimeric variation[END_REF]. This observation is in line with the distalward molar size reduction seen in Pleistocene humans (Bermúdez de Castro [START_REF] Berm Udez De Castro | Posterior dental size reduction in hominids: The Atapuerca evidence[END_REF][START_REF] Brace | Post-Pleistocene changes in the human dentition[END_REF][START_REF] Brace | Gradual change in human tooth size in the late Pleistocene and post-Pleistocene[END_REF][START_REF] Evans | A simple rule governs the evolution and development of hominin tooth size[END_REF]). Since shape stability was considered to increase from the third molar to the first molar [START_REF] Dahlberg | The changing dentition of man[END_REF], it is possible that reduced molar size is associated with a less stable shape. Just as Skinner et al. (2008a) observed a small-scale distal expansion of the marginal ridge in the M2-M3s of P. robustus, we confirm such a trend in all of our P. robustus specimens and in a number of modern humans. This trend is weakly expressed in our Au. africanus and is not present in the early Homo specimen.

FIGURE 4. A-C: Results of bgPCA of the semi-landmark configurations. Axes represent functions of the shape variation; specimens from the same individual are marked using the same symbol (for easier visual inspection, all modern individuals are marked using the same symbol), and each color represents the tooth position for each taxon. Metameres are connected using lines, showing metameric vectors. D-F: Dendrograms of intra-individual metameric variation in EDJ shape yielded by hierarchical clustering (HC), showing the magnitude of metameric variation within and between groups. HC was done between metameric pairs P3-P4 (D), M1-M2 (E), and M2-M3 (F). Scales from 0 to 25 indicate similitude of metameric shape variation between individuals. For easier visual inspection, only fossil specimens are marked.
A previous study observed that in australopiths there is an increase in the height of the mesial dentine horns from M2 to M3 (Skinner et al., 2008a). In contrast, we found a reduction in relative dentine horn height from M1 to M2 and from M2 to M3 among all the taxa examined here, similar to the metameric patterns expressed in Pan [START_REF] Skinner | Discrimination of extant Pan species and subspecies using the enamel-dentine junction morphology of lower molars[END_REF]. We suggest that this difference probably reflects our small sample size and the fact that our study focuses on intra-individual variation, so that only metameres from strictly the same individual were investigated. The early Homo dentition SK 15 displays a similar degree of metameric variation to the other fossil samples, closer to the three P. robustus specimens than to the Au. africanus specimen. In all, the three fossil groups display variable directions of shape change. In modern humans, the direction and magnitude of the metameric vectors show greater variability in the M2-M3 metameres than in M1-M2. However, in P. robustus, the length and direction of the metameric vectors seem more variable in the M1-M2 pairs than in the M2-M3 pairs, which show a consistent metameric shape change. Further studies including more fossil specimens will be necessary to ascertain whether the metameric patterns observed here are characteristic of these groups.
| CONCLUSIONS
While 2D studies based on the OES suggest the existence of a distinctive metameric pattern in modern humans compared with that found in chimpanzees and Au. africanus [START_REF] Hlusko | Identifying metameric variation in extant hominoid and fossil hominid mandibular molars[END_REF], in each comparative pair of EDJs (P3-P4, M1-M2 and M2-M3) that we investigated we do not observe a specific metameric pattern that belongs only to extant humans, but rather a few common trends shared by groups despite a degree of inter- and intra-group variation. As a whole, the EDJ proves to be a reliable proxy for taxonomic identification [START_REF] Pan | Further morphological evidence on South African earliest Homo lower postcanine dentition: Enamel thickness and enamel dentine junction[END_REF]Skinner et al., 2008a, 2008b, 2009b[START_REF] Skinner | A dental perspective on the taxonomic affinity of the Balanica mandible (BH-1)[END_REF][START_REF] Zanolli | Brief communication: Molar crown inner structural organization in Javanese Homo erectus[END_REF][START_REF] Zanolli | Brief communication: Two human fossil deciduous molars from the Sangiran Dome (Java, Indonesia): Outer and inner morphology[END_REF], but further research is needed to determine whether the metameric trends in the 3D EDJ observed here could act as one piece of evidence to identify tooth position from isolated specimens. Moreover, the underlying mechanisms remain to be elucidated. Along the molar dentition, based on the axes examined in this study, our results with regard to modern humans are generally in accordance with morphogenetic models of molar rows and molar crowns (the inhibitory cascade model). In the P. robustus specimens examined here, the trends of mean-shape change from M1 to M2 and from M2 to M3 differed from each other, instead of forming a simple gradation; such differential expression of metamerism has been previously reported in modern human upper molars [START_REF] Morita | Exploring metameric variation in human molars: A morphological study using morphometric mapping[END_REF]. It should be noted that our study focuses only on the EDJ marginal ridges; additional studies of the accessory ridges (e.g., the protostylid), and a more global analysis of shape variation among early hominins based on the whole EDJ surface and diffeomorphisms [START_REF] Braga | In press. The Kromdraai hominins revisited with an updated portray of differences between Australopithecus africanus and Paranthropus robustus[END_REF], will supplement our understanding of the metameric variation in the hominin dentition.
Keywords: Australopithecus africanus, early Homo, Homo sapiens, metamerism, Paranthropus robustus, tooth internal structure
The specimens were scanned using a micro-focus X-ray tomography system at the South African Nuclear Energy Corporation (Necsa) (Hoffman & de Beer, 2012), a Scanco Medical X-Treme micro-XCT scanner at the Institute for Space Medicine and Physiology (MEDES) of Toulouse, a Phoenix Nanotom 180 scanner from the FERMAT Federation at the Inter-university Material Research and Engineering Centre (CIRIMAT, UMR 5085 CNRS), and a 225 kV μXCT scanner housed at the Institute of Vertebrate Paleontology and Paleoanthropology (IVPP, Chinese Academy of Sciences). Isometric voxel size ranged from 10 to 70 μm. Each specimen was segmented in Avizo 8.0 (Visualization Sciences Group, www.vsg3d.com).
FIGURE 1. EDJ surface model of a lower premolar (A-C, SKX 21204 from Swartkrans) and a lower molar (D-F, SKW 5 from Swartkrans), illustrating the landmarks collected on the tips of the dentine horns (red spheres) and semi-landmarks that run between the dentine horns (orange spheres) used to capture EDJ shape. Abbreviations: buccal (B), distal (D), lingual (L) and mesial (M). Numbers on the red spheres stand for landmarks collected on the dentine horn tips; numbers next to the ridge curves stand for the number of semi-landmarks. Note that relative sizes of premolars to molars are not to scale.
In metameres M1-M2, there is a marked reduction in the height of the dentine horns in both P. robustus and modern humans, particularly in the talonid, resulting in a flattened topology in M2 (Figure 3A,B); the entoconid dentine horn is more centrally placed in M2 (a pattern that is more marked in modern humans). Unfortunately, materials from early Homo and Au. africanus are not available for comparison. With regard to the multivariate analyses, a similar magnitude of shape change is seen in both species (Figure 4B,E; Supporting Information Figure S1B). The bgPC1 axis is driven by the presence of the hypoconulid and the position and height of the hypoconid dentine horn, while the bgPC2 axis is driven by the height of the lingual dentine horns (Figure 4B; Supporting Information Figure S2E-H). It is worth noticing that SK 6 M1-M2 show a unique trend of shape change along bgPC1, and SK 63 has a much shorter metameric vector in the bgPC1-bgPC2 plot, increasing the within-group metameric variation in P. robustus. In shape space bgPC1-bgPC3, P. robustus shows vertically oriented metameric vectors of similar lengths, contrasting with modern humans; along bgPC3, the direction of shape change in modern humans is more variable than along bgPC1 (Supporting Information Figure S1B). Our results of hierarchical clustering reveal that, although placed separately from the modern groups, the P. robustus sample displays a closer affinity to a few modern individuals, indicating a similarity in intra-individual metameric distances (Figure 4E).

For metameres M2-M3, a slight distal expansion of the marginal ridges is observed except for the early Homo specimen, SK 15 (Figure 3C-F). A reduction in the height of the dentine horns on the talonid is seen in modern humans and australopiths (a pattern that is more marked in modern humans; Figure 3C,D). Changes in the relative positions of the dentine horns are widely observed: for modern humans, the talonid dentine horns are more centrally placed in M2 than in M3 (buccal view); for australopiths, the hypoconid dentine horn is more centrally placed in M3, more markedly in P. robustus; in addition, the Au. africanus individual displays a more centrally placed entoconid in M2 (Figure 3D,F; occlusal view); for the early Homo specimen SK 15, more centrally placed protoconid and entoconid dentine horns are seen in M3 (Figure 3E; occlusal view). With regard to the multivariate analyses, bgPC1 exhibits shape changes in the EDJ outline (ovoid to elongated; Supporting Information Figure S2I,J) and changes in the
FIGURE 2. Comparison of metameric variation between P3-P4 based on mean shapes of the EDJ, after Procrustes superimposition. EDJ ridge curves are shown in occlusal, buccal and lingual views. Colors indicate different dental positions.

FIGURE 3. Comparison of metameric variation between M1-M2, and between M2-M3, based on mean shapes of the EDJ, after Procrustes superimposition. EDJ ridge curves are shown in occlusal, buccal and lingual views. Colors indicate different dental positions. A-B: M1 (red) compared to M2 (blue); C-F: M2 (red) compared to M3 (blue).
TABLE 1. Composition of the study sample

Specimen | P3 P4 M1 M2 M3 | Provenance | Age | Occlusal wear | Citations(a)

P. robustus
SK 6 | 1 1 1 1 1 | Mb. 1, HR, Swartkrans | 1.80 ± 0.09 Ma to 2.19 ± 0.08 Ma (Gibbon et al., 2014); 2.31-1.64 Ma (Pickering et al., 2011) | 1-3 | 1-2
SK 843 | 1 1 | Mb. 1, HR | | 2-3 | 1
SK 61 | 1 1 | Mb. 1, HR | | 1 | 3
SK 63 | 1 1 1 1 | Mb. 1, HR | | 1 | 1
SKW 5 | 1 1 | Mb. 1, HR | | 1-late 2 | 4
SKX 4446 | 1 1 | Mb. 2 | 1.36 ± 0.69 Ma (Balter et al., 2008) | 1-3 | 5

Au. africanus
Sts 52 | 1 1 1 1 | Mb. 4, Sterkfontein | 3.0-2.5 Ma (White and Harris, 1977; Tobias, 1978; Clarke, 1994); 2.8-2.4 Ma (Vrba, 1985; but see Berger et al., 2002); 2.1 ± 0.5 Ma (Schwarcz et al., 1994) | 2-3 | 1

Early Homo
SKX 21204 | 1 1 | Mb. 1, LB, Swartkrans | 1.80 ± 0.09 Ma to 2.19 ± 0.08 Ma (Gibbon et al., 2014); 2.31-1.64 Ma (Pickering et al., 2011) | 1 | 4
SK 15 | 1 1 | Mb. 2 | 1.36 ± 0.69 Ma (Balter et al., 2008) | 2-3 | 6

Extant H. sapiens | 12 12 14 14/18(b) 18 | South Africa/Europe/East Asia | | 1-late 2 | 7

(a) Citations: (1) [START_REF] Robinson | The dentition of the Australopithecinae[END_REF]; (2) Broom (1949); (3) [START_REF] Broom | Swartkrans ape-man, Paranthropus crassidens[END_REF]; (4) Grine and Daegling (1993); (5) [START_REF] Grine | New hominid fossils from the Swartkrans formation (1979-1986 excavations): craniodental specimens[END_REF]; (6) Broom and Robinson (1949); (7) Rampont (1994).
(b) Owing to different wear stages, 32 M2s were taken into account: 14 of them belong to the same individuals as the M1s and were used in the M1-M2 analyses; the other 18 belong to the same individuals as the M3s and were used in the M2-M3 analyses.
ACKNOWLEDGMENTS
This work was supported by the Centre National de la Recherche Scientifique (CNRS), the French Ministry of Foreign Affairs, the French Embassy in South Africa through the Cultural and Cooperation Services, National Natural Science Foundation of China and the China Scholarship Council. For access to specimens we thank the following individuals and institutions: Stephany Potze (Ditsong National Museum of Natural History, Pretoria), Jean-Luc Kahn (Strasbourg University), Dr. S. Xing (Institute of Vertebrate Paleontology and Paleoanthropology, Beijing), Dr. M. Zhou (Institute of Archeology and Cultural Relics of Hubei Province, Wuhan), and Dr.
C. Thèves (UMR 5288 CNRS). We thank Dr. A. Beaudet for her technical support during the imaging processing of data and the statistical analysis. We are also grateful to the Associated Editor, and two anonymous reviewers of this manuscript, for their insightful comments and suggestions.
SUPPORTING INFORMATION
Additional Supporting Information may be found in the online version of this article at the publisher's website.
01757334 | en | [ "spi.meca.geme", "spi.auto" ] | 2024/03/05 22:32:10 | 2015 | https://hal.science/hal-01757334/file/ASME_JMR_2015_Nurahmi_Schadlbauer_Caro_Husty_Wenger_HAL.pdf | Latifah Nurahmi
email: latifah.nurahmi@irccyn.ec-nantes.fr
Josef Schadlbauer
email: josef.schadlbauer@uibk.ac.at
Stéphane Caro
email: stephane.caro@irccyn.ec-nantes.fr
Manfred Husty
email: manfred.husty@uibk.ac.at
Philippe Wenger
email: philippe.wenger@irccyn.ec-nantes.fr
Kinematic Analysis of the 3-RPS Cube Parallel Manipulator
Keywords: 3-RPS-Cube, parallel manipulators, singularities, operation mode, motion type, Darboux motion
Introduction
Since the development of robot technology, lower-mobility parallel manipulators have been extensively studied. One parallel manipulator of the 3-dof family is the 3-RPS Cube, which was proposed by Huang et al. [START_REF] Huang | Motion Characteristics and Rotational Axis Analysis of Three DOF Parallel Robot Mechanisms[END_REF]. The 3-RPS Cube parallel manipulator, shown in Fig. 1, is composed of a cube-shaped base, an equilateral triangular-shaped platform, and three identical legs. Each leg is composed of a revolute joint, an actuated prismatic joint and a spherical joint mounted in series.
By referring to the design of the 3-RPS Cube manipulator, the type synthesis of 3dof rotational manipulators with no intersecting axes was discussed in [START_REF] Chen | Type Synthesis of 3-DOF Rotational Parallel Mechanisms With No Intersecting Axes[END_REF]. The kinematic characteristics of this mechanism were studied in [3][START_REF] Huang | Identification of Principal Screws of 3-DOF Parallel Manipulators by Quadric Degeneration[END_REF][START_REF] Huang | Analysis of Instantaneous Motions of Deficient Rank 3-RPS Parallel Manipulators[END_REF], by identifying the principal screws, and the authors showed that the manipulator belongs to the general third-order screw system, which can rotate in three dimensions and the axes do not intersect.
In [6], Huang et al. showed that the mechanism is able to perform a motion along its diagonal, which is known as the Vertical Darboux Motion (VDM). Several mechanical generators of the VDM were later revealed by Lee and Hervé [START_REF] Lee | On the Vertical Darboux Motion[END_REF], in which one point in the moving platform is compelled to move in a plane.
Later in [START_REF] Huang | A 3DOF Rotational Parallel Manipulator Without Intersecting Axes[END_REF], the authors showed that the manufacturing errors have little impact on the motion properties of the 3-RPS Cube parallel manipulator. By analysing the Instantaneous Screw Axes (ISA), Chen et al. showed in [START_REF] Chen | Axodes Analysis of the Multi DOF Parallel Mechanisms and Parasitic Motion[END_REF] that this mechanism performs parasitic motions, in which the translations and the rotations are coupled.
By using an algebraic description of the manipulator and the Study kinematic mapping, a characterisation of the operation mode, the direct kinematics, the general motion, and the singular poses of the 3-RPS Cube parallel manipulator are discussed in more detail in this paper, which is based on [10][11][START_REF] Schadlbauer | A Complete Kinematic Analysis of the 3-RPS Parallel Manipulator[END_REF][START_REF] Schadlbauer | Operation Modes in Lower Mobility Parallel Manipulators[END_REF][START_REF] Schadlbauer | The 3-RPS parallel manipulator from an algebraic viewpoint[END_REF]. The derivation of the constraint equations is the first essential step to reveal the existence of only one operation mode and to solve the direct kinematics problem.
In 1897, Darboux [START_REF] Bottema | Theoretical Kinematics[END_REF] studied the 3-dof motion where the vertices of a triangle are compelled to remain in the planes of a trihedron respectively. The three planes are mutually orthogonal and this is the case of the 3-RPS Cube parallel manipulator. Darboux showed that in this 3-dof motion, the workspace of each point of the moving platform is bounded by a Steiner surface, while the vertices of the moving platform remain in the planes.
Under the condition that the prismatic lengths remain equal, the moving platform of the manipulator is able to perform the VDM. It follows from Bottema and Roth [START_REF] Bottema | Theoretical Kinematics[END_REF] that this motion is the result of a rotation about an axis and a harmonic translation along the same axis. In this motion, all points in the moving platform (except the geometric center of the moving platform) move in ellipses and the path of a line in the moving platform is a right-conoid surface.
The singularities are examined in this paper by deriving the determinant of the Jacobian matrix of the constraint equations with respect to the Study parameters. Based on the reciprocity conditions, Joshi and Tsai in [START_REF] Joshi | Jacobian Analysis of Limited-DOF Parallel Manipulators[END_REF] developed a procedure to express the Jacobian matrix J of lower-mobility parallel manipulators that comprises both actuation and constraint wrenches. In this paper, this matrix is named the extended Jacobian matrix (J E ) of the lower-mobility parallel manipulators, as explained in [START_REF] Amine | Singularity Analysis of the H4 Robot Using Grassmann-Cayley Algebra[END_REF][START_REF] Amine | Lower-Mobility Parallel Manipulators: Geometrical Analysis, Singularities and Conceptual Design[END_REF][START_REF] Amine | Singularity analysis of 3T2R parallel mechanisms using Grassmann Cayley algebra and Grassmann geometry[END_REF][START_REF] Amine | Conceptual Design of Schonflies Motion Generators Based on the Wrench Graph[END_REF][START_REF] Amine | Singularity Conditions of 3T1R Parallel Manipulators With Identical Limb Structures[END_REF]. The rows of J E are composed of n linearly independent actuation wrenches plus (6 -n) linearly independent constraint wrenches.
In a general configuration, the constraint wrench system, W c , must be reciprocal to the twist system of the moving platform of the parallel manipulator. A constraint singularity occurs when the (6-n) constraint wrench system W c degenerates. In such a configuration, at least one of the initial constrained motions will no longer be constrained. As a result, the mechanism gains one or several dof . This can lead to a change in the motion pattern of the mechanism, which then can switch to another operation mode.
By locking the actuated joints of the parallel manipulator, the moving platform must be fully constrained, i.e., the system spanned by the actuation wrench system, W a , and constraint wrench system, W c , must span a 6-system. An actuation singularity hence occurs when this overall wrench system of the manipulator degenerates, i.e., is not a 6-system any more, while the manipulator does not reach a constraint singularity. This concept will be applied in this paper to illustrate the singularities of the extended Jacobian matrix (J E ) of the 3-RPS Cube parallel manipulator. It allows us to investigate the actuation and constraint singularities that occur during the manipulator motion. This paper is organized as follows: A detailed description of the manipulator architecture is given in Section 2. The constraint equations of the manipulator are expressed in Section 3. These equations are used to identify the operation mode(s) and the solutions of the direct kinematics of the manipulator in Section 4. In Section 5, the conditions on the leg lengths for the manipulator to reach a singularity configuration are presented. Eventually, the general motion and the Vertical Darboux Motion (VDM) of the manipulator are reviewed in Sections 6 and 7.
Manipulator Architecture
The 3-RPS Cube parallel manipulator shown in Fig. 1, is composed of a cube-shaped base, an equilateral triangular-shaped platform and three identical legs. Each leg is composed of a revolute joint, an actuated prismatic joint and a spherical joint mounted in series.
The origin O of the fixed frame Σ 0 is shifted along σ 0 = [h 0 , h 0 , h 0 ] from the center of the base in order to fulfill the identity condition (when the fixed frame and the moving frame are coincident), as shown by the large and red dashed box in Fig. 1. Likewise, the origin P of the moving frame Σ 1 is shifted along σ 1 = [h 1 , h 1 , h 1 ] as described by the small and blue dashed box in Fig. 1. The revolute joint in the i-th (i = 1 . . . 3) leg is located at point A i , its axis being along vector s i , while the spherical joint is located at point B i , the i-th corner of the moving platform. The distance between the origin O of the fixed frame Σ 0 and point A i is equal to h 0 √ 2. The axes s 1 , s 2 and s 3 are orthogonal to each other. The moving platform has an equilateral triangle shape and its circumradius is equal to
d_1 = h_1√6/3. Each pair of vertices A_i and B_i (i = 1, 2, 3) is connected by a prismatic joint. The prismatic length is denoted by r_i. Since the i-th prismatic length is orthogonal to the revolute axis s_i, the leg A_iB_i moves in a plane normal to s_i.
As a consequence, there are five parameters, namely r 1 , r 2 , r 3 , h 0 , and h 1 . h 0 and h 1 are design parameters, while r 1 , r 2 , and r 3 are joint variables that determine the manipulator motion.
Constraint Equations
In this section, the constraint equations are expressed whose solutions illustrate the possible poses of the moving platform (coordinate frame Σ 1 ) with respect to Σ 0 . In the following, we use projective coordinates to define the position vectors of points A i and B i . The coordinates of points A i and points B i expressed in Σ 0 and Σ 1 are respectively:
r^0_{A_1} = [1, 0, -h_0, -h_0]^T,    r^1_{B_1} = [1, 0, -h_1, -h_1]^T,
r^0_{A_2} = [1, -h_0, 0, -h_0]^T,    r^1_{B_2} = [1, -h_1, 0, -h_1]^T,
r^0_{A_3} = [1, -h_0, -h_0, 0]^T,    r^1_{B_3} = [1, -h_1, -h_1, 0]^T      (1)
To obtain the coordinates of points B 1 , B 2 and B 3 expressed in Σ 0 , the Study parametrization of a spatial Euclidean transformation matrix M ∈ SE(3) is used as follows:
M = [ x_0^2 + x_1^2 + x_2^2 + x_3^2   0^T_{3×1} ;
      M_T                             M_R       ]      (2)
where M T and M R represent the translational and rotational parts of transformation matrix M, respectively, and are expressed as follows:
M_T = [ 2(-x_0y_1 + x_1y_0 - x_2y_3 + x_3y_2),
        2(-x_0y_2 + x_1y_3 + x_2y_0 - x_3y_1),
        2(-x_0y_3 - x_1y_2 + x_2y_1 + x_3y_0) ]^T,

M_R = [ x_0^2 + x_1^2 - x_2^2 - x_3^2   2(x_1x_2 - x_0x_3)              2(x_1x_3 + x_0x_2) ;
        2(x_1x_2 + x_0x_3)              x_0^2 - x_1^2 + x_2^2 - x_3^2   2(x_2x_3 - x_0x_1) ;
        2(x_1x_3 - x_0x_2)              2(x_2x_3 + x_0x_1)              x_0^2 - x_1^2 - x_2^2 + x_3^2 ]      (3)
The parameters x_0, x_1, x_2, x_3, y_0, y_1, y_2, y_3, which appear in matrix M, are called Study parameters. These parameters make it possible to parametrize SE(3) with dual quaternions. The Study kinematic mapping maps each spatial Euclidean displacement of SE(3), via the transformation matrix M, onto a projective point X in the 6-dimensional Study quadric S ∈ P^7 [14], such that:

SE(3) → X = [x_0 : x_1 : x_2 : x_3 : y_0 : y_1 : y_2 : y_3]^T ∈ P^7, with (x_0 : x_1 : x_2 : x_3 : y_0 : y_1 : y_2 : y_3)^T ≠ (0 : 0 : 0 : 0 : 0 : 0 : 0 : 0)^T      (4)
Every projective point X will represent a spatial Euclidean displacement, if it fulfills the following equation and inequality:
x_0y_0 + x_1y_1 + x_2y_2 + x_3y_3 = 0,    x_0^2 + x_1^2 + x_2^2 + x_3^2 ≠ 0      (5)
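As an illustrative symbolic sketch (not the computer-algebra pipeline used by the authors), the Study parametrization of Eqs. (2)-(5) can be checked with SymPy; the check below verifies that M_R/δ is always a proper rotation, while Eq. (5) additionally restricts the point to the Study quadric so that (x, y) encodes a rigid-body displacement:

```python
from sympy import symbols, Matrix, BlockMatrix, zeros, eye, simplify

x0, x1, x2, x3, y0, y1, y2, y3 = symbols('x0 x1 x2 x3 y0 y1 y2 y3')
delta = x0**2 + x1**2 + x2**2 + x3**2

MT = Matrix([2*(-x0*y1 + x1*y0 - x2*y3 + x3*y2),
             2*(-x0*y2 + x1*y3 + x2*y0 - x3*y1),
             2*(-x0*y3 - x1*y2 + x2*y1 + x3*y0)])
MR = Matrix([[x0**2 + x1**2 - x2**2 - x3**2, 2*(x1*x2 - x0*x3), 2*(x1*x3 + x0*x2)],
             [2*(x1*x2 + x0*x3), x0**2 - x1**2 + x2**2 - x3**2, 2*(x2*x3 - x0*x1)],
             [2*(x1*x3 - x0*x2), 2*(x2*x3 + x0*x1), x0**2 - x1**2 - x2**2 + x3**2]])

# 4x4 homogeneous transformation of Eq. (2)
M = Matrix(BlockMatrix([[Matrix([[delta]]), zeros(1, 3)], [MT, MR]]))

print(simplify(MR * MR.T - delta**2 * eye(3)))   # zero matrix: MR/delta is orthogonal
print(simplify(MR.det() - delta**3))             # zero: determinant is +delta^3 (proper rotation)
```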
Those two conditions will be used in the following computations to simplify the algebraic expressions. The coordinates of points B i expressed in Σ 0 are obtained by:
r 0 B i = M r 1 B i i = 0, . . . , 3 (6)
As the coordinates of all points are given in terms of Study parameters, design parameters and joint variables, the constraint equations can be obtained by examining the manipulator architecture. The leg connecting points A i and B i is orthogonal to the axis s i of the i-th revolute joint, expressed as follows:
s 1 = [0, 1, 0, 0] T s 2 = [0, 0, 1, 0] T s 3 = [0, 0, 0, 1] T (7)
Accordingly, the scalar product of vector (r 0 B ir 0 A i ) and vector s i vanishes, namely:
(r 0 B i -r 0 A i ) T s i = 0 (8)
After computing the corresponding scalar products and removing the common denominators (x_0^2 + x_1^2 + x_2^2 + x_3^2), the following three equations come out:
g 1 : -h 1 x 0 x 2 + h 1 x 0 x 3 -h 1 x 1 x 2 -h 1 x 1 x 3 -x 0 y 1 + x 1 y 0 -x 2 y 3 + x 3 y 2 = 0 g 2 : h 1 x 0 x 1 -h 1 x 0 x 3 -h 1 x 1 x 2 -h 1 x 2 x 3 -x 0 y 2 + x 1 y 3 + x 2 y 0 -x 3 y 1 = 0 g 3 : -h 1 x 0 x 1 + h 1 x 0 x 2 -h 1 x 1 x 3 -h 1 x 2 x 3 -x 0 y 3 -x 1 y 2 + x 2 y 1 + x 3 y 0 = 0 (9)
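These orthogonality constraints can be reproduced symbolically. The sketch below (an illustration only, reusing the Study matrices defined above) derives g_1 by expressing B_1 in Σ_0 and projecting the leg vector onto s_1; g_2 and g_3 follow in the same way, and the distance constraints of Eq. (10) follow from (r^0_{B_i} - r^0_{A_i})^2 = r_i^2:

```python
from sympy import symbols, Matrix, expand, cancel

x0, x1, x2, x3, y0, y1, y2, y3, h0, h1 = symbols('x0 x1 x2 x3 y0 y1 y2 y3 h0 h1')
delta = x0**2 + x1**2 + x2**2 + x3**2

MT = Matrix([2*(-x0*y1 + x1*y0 - x2*y3 + x3*y2),
             2*(-x0*y2 + x1*y3 + x2*y0 - x3*y1),
             2*(-x0*y3 - x1*y2 + x2*y1 + x3*y0)])
MR = Matrix([[x0**2 + x1**2 - x2**2 - x3**2, 2*(x1*x2 - x0*x3), 2*(x1*x3 + x0*x2)],
             [2*(x1*x2 + x0*x3), x0**2 - x1**2 + x2**2 - x3**2, 2*(x2*x3 - x0*x1)],
             [2*(x1*x3 - x0*x2), 2*(x2*x3 + x0*x1), x0**2 - x1**2 - x2**2 + x3**2]])

rA1 = Matrix([0, -h0, -h0])          # affine coordinates of A1 in Sigma_0
rB1_moving = Matrix([0, -h1, -h1])   # affine coordinates of B1 in Sigma_1
s1 = Matrix([1, 0, 0])               # axis of the first revolute joint

# B1 expressed in Sigma_0: affine part of M * [1, 0, -h1, -h1]^T
rB1 = (MT + MR * rB1_moving) / delta

# leg 1 must stay orthogonal to s1; the common denominator delta is removed
g1 = expand(cancel(((rB1 - rA1).dot(s1)) * delta) / 2)
print(g1)   # matches g1 in Eq. (9) up to term ordering
```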
To derive the constraint equations corresponding to the leg lengths, the joint variables r_i are given and we assume that the distance between points A_i and B_i is constant, i.e., r_i = const. It follows that point B_i has the freedom to move along a circle of center A_i, and the distance equation can be formulated as (r^0_{B_i} - r^0_{A_i})^2 = r_i^2. As a consequence, the following three equations are obtained:
g 4 : 2h 2 0 x 2 0 + 2h 2 0 x 2 1 + 2h 2 0 x 2 2 + 2h 2 0 x 2 3 -4h 0 h 1 x 2 0 + 4h 0 h 1 x 2 1 + 2h 2 1 x 2 0 + 2h 2 1 x 2 1 + 2h 2 1 x 2 2 + 2h 2 1 x 2 3 -8h 0 h 1 x 2 x 3 -r 2 1 x 2 0 -r 2 1 x 2 1 -r 2 1 x 2 2 -r 2 1 x 2 3 -4h 0 x 0 y 2 -4h 0 x 0 y 3 -4h 0 x 1 y 2 + 4h 0 x 1 y 3 + 4h 0 x 2 y 0 + 4h 0 x 2 y 1 + 4h 0 x 3 y 0 -4h 0 x 3 y 1 + 4h 1 x 0 y 2 + 4h 1 x 0 y 3 + 4y 2 0 -4h 1 x 1 y 2 + 4h 1 x 1 y 3 -4h 1 x 2 y 0 + 4h 1 x 2 y 1 -4h 1 x 3 y 0 + 4y 2 1 -4h 1 x 3 y 1 + 4y 2 2 + 4y 2 3 = 0 g 5 : 2h 2 0 x 2 0 + 2h 2 0 x 2 1 + 2h 2 0 x 2 2 + 2h 2 0 x 2 3 -4h 0 h 1 x 2 0 + 4h 0 h 1 x 2 2 + 2h 2 1 x 2 0 + 2h 2 1 x 2 1 + 2h 2 1 x 2 2 + 2h 2 1 x 2 3 -8h 0 h 1 x 1 x 3 -r 2 2 x 2 0 -r 2 2 x 2 1 -r 2 2 x 2 2 -r 2 2 x 2 3 -4h 0 x 0 y 1 -4h 0 x 0 y 3 + 4h 0 x 1 y 0 -4h 0 x 1 y 2 + 4h 0 x 2 y 1 -4h 0 x 2 y 3 + 4h 0 x 3 y 0 + 4h 0 x 3 y 2 + 4h 1 x 0 y 1 + 4h 1 x 0 y 3 + 4y 2 0 -4h 1 x 1 y 0 -4h 1 x 1 y 2 + 4h 1 x 2 y 1 -4h 1 x 2 y 3 -4h 1 x 3 y 0 + 4y 2 1 + 4h 1 x 3 y 2 + 4y 2 2 + 4y 2 3 = 0 g 6 : 2h 2 0 x 2 0 + 2h 2 0 x 2 1 + 2h 2 0 x 2 2 + 2h 2 0 x 2 3 -4h 0 h 1 x 2 0 + 4h 0 h 1 x 2 3 + 2h 2 1 x 2 0 + 2h 2 1 x 2 1 + 2h 2 1 x 2 2 + 2h 2 1 x 2 3 -8h 0 h 1 x 1 x 2 -r 2 3 x 2 0 -r 2 3 x 2 1 -r 2 3 x 2 2 -r 2 3 x 2 3 -4h 0 x 0 y 1 -4h 0 x 0 y 2 + 4h 0 x 1 y 0 + 4h 0 x 1 y 3 + 4h 0 x 2 y 0 -4h 0 x 2 y 3 -4h 0 x 3 y 1 + 4h 0 x 3 y 2 + 4h 1 x 0 y 1 + 4h 1 x 0 y 2 + 4y 2 0 -4h 1 x 1 y 0 + 4h 1 x 1 y 3 -4h 1 x 2 y 0 -4h 1 x 2 y 3 -4h 1 x 3 y 1 + 4y 2 1 + 4h 1 x 3 y 2 + 4y 2 2 + 4y 2 3 = 0 (10)
The Study equation in Eq. ( 5) is added since all solutions have to be within the Study quadric, i.e.:
g 7 : x 0 y 0 + x 1 y 1 + x 2 y 2 + x 3 y 3 = 0 (11)
Under the condition x_0^2 + x_1^2 + x_2^2 + x_3^2 ≠ 0, we can find all possible points in P^7 that satisfy those seven equations. To exclude the exceptional generator (x_0 = x_1 = x_2 = x_3 = 0), we add the following normalization equation:
g 8 : x 2 0 + x 2 1 + x 2 2 + x 2 3 -1 = 0 (12)
It assures that there is no point of the exceptional generator that appears as a solution.
However, for each projective solution point, we obtain two affine representatives. This has to be taken into account for the enumeration of the number of solutions.
Solving the System
Solving the direct kinematics means finding all possible points in P 7 that fulfill the set of equations {g 1 , ..., g 8 }. Those points are the solutions of the eight constraint equations that represent all feasible poses of the 3-RPS Cube parallel manipulator. They also depend on the design parameters (h 0 , h 1 ) and the joint variables (r 1 , r 2 , r 3 ).
The set of eight constraint equations are always written as a polynomial ideal with variables {x 0 , x 1 , x 2 , x 3 , y 0 , y 1 , y 2 , y 3 } over the coefficient ring C[h 0 , h 1 , r 1 , r 2 , r 3 ]. Although the solutions of the direct kinematics can be complex, they are still considered as solutions.
To apply the method of algebraic geometry, the ideal is now defined as:
I =< g 1 , g 2 , g 3 , g 4 , g 5 , g 6 , g 7 , g 8 > (13)
The vanishing set V(I) of the ideal I comprises all points in P 7 for which all equations vanish, namely all solutions of the direct kinematic problem. At this point, the following ideal is examined, which is independent of the joints variables r 1 , r 2 and r 3 :
J =< g 1 , g 2 , g 3 , g 7 > (14)
The primary decomposition is computed to verify whether the ideal J is the intersection of several smaller ideals. The primary decomposition returns several ideals J_i such that J = ⋂_i J_i.
In other words, the vanishing set is given by V(J) = ⋃_i V(J_i). It expresses that the variety V(J) is the union of other, simpler varieties V(J_i).
The primary decomposition geometrically tells us that the intersection of those equations will split into smaller parts. Indeed, it turns out that the ideal J is decomposed into two components J_i as:
J = ⋂_{i=1}^{2} J_i      (15)
with the results of primary decompositions 1 as follows:
J 1 = < x 0 y 0 + x 1 y 1 + x 2 y 2 + x 3 y 3 , ... > J 2 = < x 0 , x 1 , x 2 , x 3 > (16)
An inspection of the vanishing set V(J_2 ∪ g_8) yields an empty result, since the set of polynomials {x_0, x_1, x_2, x_3, x_0^2 + x_1^2 + x_2^2 + x_3^2 - 1} can never vanish simultaneously over R or C. Therefore, only one component is left and, as a consequence, the manipulator has only one operation mode, which is defined by J_1. To complete the analysis, the remaining equations have to be added by writing:
K_i = J_i ∪ < g_4, g_5, g_6, g_8 >      (17)
Since there is only one component, the vanishing set of I is now defined by:
V(I) = V(K 1 ) (18)
From the primary decomposition, it is shown that the ideal I cannot be split and K 1 is named I hereafter.
Solutions for arbitrary design parameters
The 3-RPS Cube manipulator generally has only one operation mode, which is described by the ideal I. The solutions of the direct kinematic problem in this operation mode will be given for arbitrary values of the design parameters (h_0, h_1). To find out the Hilbert dimension of the ideal I, certain values of the design parameters are chosen, namely h_0 = 2 m and h_1 = 1 m:
dim(I) = 0      (19)
The dim denotes the dimension over C[h 0 , h 1 , r 1 , r 2 , r 3 ] and shows that the number of solutions to the direct kinematic problems is finite in this general mode. The number of solutions and the solutions themselves were computed via an ordered Gröbner basis, which led to a univariate polynomial of degree 32. As two solutions of a system describe the same pose (position and orientation) of the moving platform, the number of solutions has to be halved into 16 solutions.
| V(I) |= 16 (20)
Therefore, there are at most 16 different solutions for given general design parameters and joint variables, i.e., there are theoretically 16 feasible poses of the moving platform for given joint variables. Notably, for arbitrary values of the design parameters and joint variables, some solutions might be complex.
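As a purely numerical cross-check (not the ordered Gröbner-basis computation used here), the real solutions for a given set of parameters can also be approached with a multi-start root finder. The sketch below assumes h_0 = 2 m, h_1 = 1 m and the example leg lengths used later in this paper; it builds the eight constraints directly from the geometry, so the number of distinct poses found depends on the random initial guesses:

```python
import numpy as np
from scipy.optimize import fsolve

h0, h1 = 2.0, 1.0
r = np.array([1.2, 2.0, 1.5])                                           # example joint variables
A  = np.array([[0, -h0, -h0], [-h0, 0, -h0], [-h0, -h0, 0]], float)     # A_i in Sigma_0
Bm = np.array([[0, -h1, -h1], [-h1, 0, -h1], [-h1, -h1, 0]], float)     # B_i in Sigma_1
S  = np.eye(3)                                                          # revolute axes s_i

def pose(q):
    """Rotation matrix and translation encoded by Study parameters q = (x, y)."""
    x0, x1, x2, x3, y0, y1, y2, y3 = q
    R = np.array([[x0*x0+x1*x1-x2*x2-x3*x3, 2*(x1*x2-x0*x3), 2*(x1*x3+x0*x2)],
                  [2*(x1*x2+x0*x3), x0*x0-x1*x1+x2*x2-x3*x3, 2*(x2*x3-x0*x1)],
                  [2*(x1*x3-x0*x2), 2*(x2*x3+x0*x1), x0*x0-x1*x1-x2*x2+x3*x3]])
    t = 2*np.array([-x0*y1+x1*y0-x2*y3+x3*y2,
                    -x0*y2+x1*y3+x2*y0-x3*y1,
                    -x0*y3-x1*y2+x2*y1+x3*y0])
    d = x0*x0 + x1*x1 + x2*x2 + x3*x3
    return R/d, t/d

def residuals(q):
    x, y = q[:4], q[4:]
    R, t = pose(q)
    B = Bm @ R.T + t                                                    # B_i in Sigma_0
    res  = [(B[i]-A[i]) @ S[i] for i in range(3)]                       # g1-g3
    res += [(B[i]-A[i]) @ (B[i]-A[i]) - r[i]**2 for i in range(3)]      # g4-g6
    res += [x @ y, x @ x - 1.0]                                         # g7, g8
    return res

rng = np.random.default_rng(0)
solutions = []
for _ in range(50):                                                     # multi-start
    q0 = rng.normal(size=8) * 0.3
    q0[0] += 1.0                                                        # bias towards x0 near 1
    q, info, ok, _ = fsolve(residuals, q0, full_output=True)
    if ok == 1 and max(abs(np.array(residuals(q)))) < 1e-9:
        if not any(np.allclose(abs(q), abs(s), atol=1e-6) for s in solutions):
            solutions.append(q)
print(len(solutions), "distinct real pose(s) found")
```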
Solutions for equal leg lengths
In the following subsection, it is assumed that all legs have the same length. The corresponding prismatic lengths are r 1 = r 2 = r 3 = r. Similar computations can be performed which were done in the previous subsection to enumerate the Hilbert dimension of the ideal. The Hilbert dimension is calculated and it follows that:
dim(I) = 0      (21)
This shows that the solutions of the direct kinematics problem with equal leg lengths are finite. When the number of solutions is computed for the system, it has to be halved and the following result is obtained:
| V(I) |= 16 (22)
The number of solutions for equal leg lengths is the same number as the solutions for arbitrary design parameters. Due to the fact that there are fewer parameters, the Gröebner basis can be computed without specifying any value. The solutions of Study parameters in the case of equal leg lengths are x 1 = x 2 = x 3 and y 1 = y 2 = y 3 . One manipulator pose with equal prismatic lengths leads to the following solutions of Study parameters:
x_0 = √( -h_1( √(3h_0^2 - 2h_1^2 + 3r^2) - 3h_0 - 2h_1 ) ) / (2h_1)
x_1 = x_2 = x_3 = (√3/6) √( h_1( √(3h_0^2 - 2h_1^2 + 3r^2) - 3h_0 + 2h_1 ) ) / h_1
y_0 = (√3/12) ( h_1( √(3h_0^2 - 2h_1^2 + 3r^2) - 3h_0 + 2h_1 ) )^{3/2} / h_1^2
y_1 = y_2 = y_3 = -(1/(12h_1)) √( -h_1( √(3h_0^2 - 2h_1^2 + 3r^2) - 3h_0 - 2h_1 ) ) ( 2h_1 - 3h_0 + √(3h_0^2 - 2h_1^2 + 3r^2) )      (23)
Operation mode analysis
In the previous section, the joint variables (r 1 , r 2 , r 3 ) were fixed. In this section, they can change, i.e., the behaviour of the mechanism is studied when the prismatic joints are actuated. The joint variables (r 1 , r 2 and r 3 ) are used as unknowns and the computation of the Hilbert dimension shows that:
dim(I) = 3      (24)
where dim denotes the dimension over C[h 0 , h 1 ] and shows that the manipulator has 3 dof in general motion.
The matrix M ∈ SE(3) in Eq. ( 2) represents a discrete screw motion from the pose corresponding to the identity condition, where Σ 0 and Σ 1 are coincident, to the transformed pose of Σ 1 with respect to Σ 0 . A discrete screw motion is the concatenation of a rotation about an axis and a translation along the same axis. The axis A, the translational distance s, and the rotational angle ϕ of the discrete screw motion can be computed from the matrix M. This information can also be obtained directly from the Study parameters, as they contain the information on the transformation. The Plücker coordinates L = (p 0 : p 1 : p 2 : p 3 : p 4 : p 5 ) of the corresponding discrete screw motion are expressed as:
p 0 = (-x 2 1 -x 2 2 -x 2 3 )x 1 , p 1 = (-x 2 1 -x 2 2 -x 2 3 )x 2 , p 2 = (-x 2 1 -x 2 2 -x 2 3 )x 3 , p 3 = x 0 y 0 x 1 -(-x 2 1 -x 2 2 -x 2 3 )y 1 , p 4 = x 0 y 0 x 2 -(-x 2 1 -x 2 2 -x 2 3 )y 2 , p 5 = x 0 y 0 x 3 -(-x 2 1 -x 2 2 -x 2 3 )y 3 . (25)
The unit vector of an axis A of the corresponding discrete screw motion is given by [p 0 , p 1 , p 2 ] T . The Plücker coordinates of a line should satisfy the following condition [START_REF] Pottmann | Computational Line Geometry[END_REF]:
p 0 p 3 + p 1 p 4 + p 2 p 5 = 0 (26)
The rotational angle ϕ can be computed directly from cos(ϕ/2) = x_0, whereas the translational distance s of the transformation can be computed from the Study parameters as follows:

s = 2y_0 / √(x_1^2 + x_2^2 + x_3^2)      (27)
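These relations are easy to evaluate numerically; the following sketch (illustrative only) computes the Plücker coordinates of Eq. (25), the rotation angle and the translation distance from a normalized set of Study parameters. Applied to the first solution of the example below, it reproduces ϕ_1 ≈ 5.72 rad and s_1 ≈ -0.05 m, the small difference from the quoted -0.057 m coming from the rounding of the published Study parameters.

```python
import numpy as np

def screw_from_study(x, y):
    """Rotation angle, translation distance and Plücker coordinates of the
    discrete screw motion encoded by normalized Study parameters (x, y)."""
    x0, x1, x2, x3 = x
    y0, y1, y2, y3 = y
    v = np.array([x1, x2, x3])
    n2 = v @ v                                       # x1^2 + x2^2 + x3^2
    phi = 2.0 * np.arccos(np.clip(x0, -1.0, 1.0))    # cos(phi/2) = x0
    s = 2.0 * y0 / np.sqrt(n2)                       # Eq. (27)
    p_dir = -n2 * v                                  # Eq. (25), direction part
    p_mom = x0 * y0 * v + n2 * np.array([y1, y2, y3])
    return phi, s, np.concatenate([p_dir, p_mom])

# first real solution of the example below (values rounded to three decimals)
x = (-0.961, -0.189, -0.153, 0.128)
y = (-0.007, 0.304, -0.250, 0.089)
phi, s, L = screw_from_study(x, y)
print(round(phi, 3), round(s, 3))                    # about 5.72 rad and -0.05 m
```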
The following example illustrates the manipulator poses obtained by solving the direct kinematic problem. Arbitrary values are assigned to the design parameters and joint variables as follows: h_0 = 2 m, h_1 = 1 m, r_1 = 1.2 m, r_2 = 2 m, and r_3 = 1.5 m. By considering only the real solutions, the manipulator has two solutions for those design parameters and joint variables.

The first solution of the direct kinematics is depicted in Fig. 2(a), with (x_0 : x_1 : x_2 : x_3 : y_0 : y_1 : y_2 : y_3) = (-0.961 : -0.189 : -0.153 : 0.128 : -0.007 : 0.304 : -0.250 : 0.089). The discrete screw motion of the moving platform from the identity into the actual pose in Fig. 2(a) is along the axis A_1. In Plücker coordinates, it is given by (p_0 : p_1 : p_2 : p_3 : p_4 : p_5) = (0.014 : 0.011 : -0.009 : 0.021 : -0.020 : 0.007). The rotational angle and translational distance along the screw axis A_1 are ϕ_1 = 5.725 rad and s_1 = -0.057 m, respectively.

Figure 2(b) illustrates the second solution of the direct kinematic problem, with (x_0 : x_1 : x_2 : x_3 : y_0 : y_1 : y_2 : y_3) = (0.962 : 0.056 : -0.021 : -0.265 : 0.001 : -0.293 : 0.232 : -0.076). The moving platform is transformed from the identity into the final pose via the axis A_2 shown in Fig. 2(b), with rotational angle ϕ_2 = 0.552 rad and translational distance s_2 = 0.01 m. The Plücker coordinate vector of this discrete screw motion is (p_0 : p_1 : p_2 : p_3 : p_4 : p_5) = (-0.004 : 0.001 : 0.019 : -0.021 : 0.017 : -0.006).
Singularity Conditions of the Manipulator
The manipulator reaches a singular configuration when the determinant of the Jacobian matrix vanishes. The Jacobian matrix is the matrix of all first order partial derivatives of eight constraint equations {g 1 , g 2 , g 3 , g 4 , g 5 , g 6 , g 7 , g 8 } with respect to {x 0 , x 1 , x 2 , x 3 , y 0 , y 1 , y 2 , y 3 }.
Since the manipulator has one operation mode, the singular configurations occur within this operation mode only. In the kinematic image space, the singular poses are computed by taking the Jacobian matrix from I:
The vanishing condition det(J) = 0 of the determinant J is denoted by S. The factorization of the equation of the Jacobian determinant splits it into two components, namely S 1 : det 1 (J) = 0 and S 2 : det 2 (J) = 0.
det(J) = 0  ⟺  det_1(J) · det_2(J) = 0      (29)
It shows that the overall determinant will vanish if either S 1 or S 2 vanishes or both S 1 and S 2 vanish simultaneously. By adding the expression of the Jacobian determinant into the system I, the new ideal associated with the singular poses can be defined as:
L i = I ∪ S i i = 1, 2 (30)
The ideals now consist of a set of nine equations L i =< g 1 , g 2 , g 3 , g 4 , g 5 , g 6 , g 7 , g 8 , g 9 >.
The ninth equation is the determinant of the Jacobian matrix.
In mechanics, the singularity surface is desirable also in the joint space R, where R is the polynomial ring over variables r 1 , r 2 and r 3 . To obtain the singularity surface in R, the following projections are determined from ideals L i :
L i → R i = 1, 2 (31)
Algebraically, each projection is an elimination of Study parameters from the ideal L i and is mapped onto one equation generated by r 1 , r 2 and r 3 . It was not possible to compute the elimination in general, thus we assigned some values to the design parameters, namely h 0 = 2 m and h 1 = 1 m. The eight Study parameters x 0 , x 1 , x 2 , x 3 , y 0 , y 1 , y 2 , y 3
were eliminated to obtain a single polynomial in r 1 , r 2 and r 3 .
For the system L 1 , the elimination yields a polynomial of degree four in r 1 , r 2 and r 3 in Eq. ( 32) and its zero set of polynomial is plotted in Fig. 3. By taking a point on this surface, we are able to compute the direct kinematics of at least one singularity pose.
r 4 1 -r 2 1 r 2 2 + r 4 2 -r 2 1 r 2 3 -r 2 2 r 2 3 + r 4 3 -2r 2 1 -2r 2 2 -2r 2 3 -20 = 0 (32)
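Equation (32) is easy to check numerically. The following sketch (illustrative only, with h_0 = 2 m and h_1 = 1 m as assigned above) fixes two prismatic lengths and solves the resulting quartic for the third, giving the leg-length triples that lie on the singularity surface S_1:

```python
import numpy as np

def S1(r1, r2, r3):
    """Left-hand side of the singularity surface of Eq. (32) for h0 = 2 m, h1 = 1 m."""
    return (r1**4 - r1**2*r2**2 + r2**4 - r1**2*r3**2 - r2**2*r3**2 + r3**4
            - 2*r1**2 - 2*r2**2 - 2*r3**2 - 20)

# fix r2 and r3 and find the real positive r1 lying on the surface
r2, r3 = 2.0, 1.5
coeffs = [1.0, 0.0, -(r2**2 + r3**2 + 2.0), 0.0,
          r2**4 + r3**4 - r2**2*r3**2 - 2*r2**2 - 2*r3**2 - 20.0]
roots = np.roots(coeffs)
r1_sing = [z.real for z in roots if abs(z.imag) < 1e-9 and z.real > 0]
print(r1_sing, [S1(z, r2, r3) for z in r1_sing])     # residuals close to zero
```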
Due to the heavy elimination process of Study parameters from ideal L 2 , some arbitrary values have been assigned to the joint variables r 1 = 2 m and r 2 = 1.7 m. Then the elimination can be carried out and the result is a univariate polynomial of degree 64 in
r 3 .
Let us consider one singularity configuration of the manipulator when the moving frame Σ 1 coincides with the fixed frame Σ 0 and all joint variables have the same values.
The system L_2 is now solved by assigning the joint variables as r_1 = r_2 = r_3. The elimination process returns a univariate polynomial of degree 24 in r_3. The real solution for the joint variables in this condition is r_1 = r_2 = r_3 = √2 m.
The coordinates of points B_1, B_2 and B_3 can be determined by solving the direct kinematics. Accordingly, we can form the extended Jacobian matrix (J_E) of the manipulator, which is based on screw theory. The rows of the extended Jacobian matrix J_E are composed of n actuation wrenches W_a and (6 - n) constraint wrenches W_c; for the 3-RPS Cube manipulator, n = 3. By considering that the prismatic joints are actuated, each leg applies one actuation force whose axis is along the direction of the corresponding actuated joint u_i, as follows:
F a1 = [ u 1 , r 0 B 1 × u 1 ] F a2 = [ u 2 , r 0 B 2 × u 2 ] F a3 = [ u 3 , r 0 B 3 × u 3 ] W a = span( F a1 , F a2 , F a3 ) (33)
Due to the manipulator architecture, each leg applies one constraint force, which is perpendicular to the actuated prismatic joint and parallel to the axis s i of the revolute joint, written as:
F c1 = [ s 1 , r 0 B 1 × s 1 ] F c2 = [ s 2 , r 0 B 2 × s 2 ] F c3 = [ s 3 , r 0 B 3 × s 3 ] W c = span( F c1 , F c2 , F c3 ) (34)
By collecting all components of the extended Jacobian matrix, we obtained:
J T E = F a1 F a2 F a3 F c1 F c2 F c3 (35)
The degeneracy of matrix J E indicates that the manipulator reaches a singularity configuration. We can observe the pose of the manipulator when r 1 = r 2 = r 3 = √ 2 m, the matrix J E in this pose is rank deficient, while neither the constraint wrench system nor the actuation wrench system degenerates, i.e. rank(J E ) = 5, rank(W a ) = 3, and rank(W c ) = 3. This means that the manipulator reaches an actuation singularity.
By examining the null space of the degenerate matrix J E , the uncontrolled motion (infinitesimal gain motion) of the moving platform can be obtained. This uncontrolled motion is characterized by a zero-pitch twist that is reciprocal to all constraint and actuation wrenches. It is denoted by s λ and is described in Eq. ( 36). This singularity posture is depicted in Fig. 4, the uncontrolled motion of the moving platform is along the purple line.
s T λ = 1 1 1 0 0 0 (36)
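This actuation singularity can be checked numerically. The sketch below (illustrative, with h_0 = 2 m, h_1 = 1 m) assembles the six wrenches of Eqs. (33)-(35) at the identity pose with r_1 = r_2 = r_3 = √2 m, confirms that J_E has rank 5, and recovers the uncontrolled twist of Eq. (36) as the reciprocal null space:

```python
import numpy as np

h0, h1 = 2.0, 1.0
# identity pose: spherical joint centres at their home positions in Sigma_0
B = np.array([[0, -h1, -h1], [-h1, 0, -h1], [-h1, -h1, 0]], float)
A = np.array([[0, -h0, -h0], [-h0, 0, -h0], [-h0, -h0, 0]], float)
S = np.eye(3)                                         # revolute axes s1, s2, s3

wrenches = []
for i in range(3):                                    # actuation forces, Eq. (33)
    u = (B[i] - A[i]) / np.linalg.norm(B[i] - A[i])
    wrenches.append(np.concatenate([u, np.cross(B[i], u)]))
for i in range(3):                                    # constraint forces, Eq. (34)
    wrenches.append(np.concatenate([S[i], np.cross(B[i], S[i])]))

JE = np.array(wrenches)
print(np.linalg.matrix_rank(JE))                      # 5 -> actuation singularity

# a twist [omega; v] is reciprocal to a wrench [f; m] when f.v + m.omega = 0,
# so swap the two 3-blocks of JE before taking the ordinary null space
swap = np.block([[np.zeros((3, 3)), np.eye(3)], [np.eye(3), np.zeros((3, 3))]])
_, _, Vt = np.linalg.svd(JE @ swap)
twist = Vt[-1]
print(np.round(twist / np.max(np.abs(twist)), 3))     # proportional to [1, 1, 1, 0, 0, 0]
```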
General Motion
The set of eight constraint equations is written as a polynomial ideal I with variables x_0, x_1, x_2, x_3, y_0, y_1, y_2, y_3 over the coefficient ring C[h_0, h_1, r_1, r_2, r_3]:

I = < g_1, g_2, g_3, g_4, g_5, g_6, g_7, g_8 >      (37)
The general motion performed by the 3-RPS Cube parallel manipulator is characterized by solving the ideal I. The equations g 1 , g 2 , g 3 , g 4 , g 5 , g 6 , g 7 from ideal I can be solved linearly for variables y 0 , y 1 , y 2 , y 3 , R 1 , R 2 , R 3 [START_REF] Schadlbauer | A Complete Kinematic Analysis of the 3-RPS Parallel Manipulator[END_REF], R i being the square of the prismatic lengths, i.e., R i = r 2 i , and
δ = x 2 0 + x 2 1 + x 2 2 + x 2 3 .
Hence, the Study parameters become:
y_0 = h_1(x_1^2 x_2 + x_1^2 x_3 + x_1 x_2^2 + x_1 x_3^2 + x_2^2 x_3 + x_2 x_3^2)/δ
y_1 = -h_1(x_0^2 x_2 - x_0^2 x_3 + x_0 x_2^2 + x_0 x_3^2 - x_2^2 x_3 + x_2 x_3^2)/δ
y_2 = h_1(x_0^2 x_1 - x_0^2 x_3 - x_0 x_1^2 - x_0 x_3^2 - x_1^2 x_3 + x_1 x_3^2)/δ
y_3 = -h_1(x_0^2 x_1 - x_0^2 x_2 + x_0 x_1^2 + x_0 x_2^2 - x_1^2 x_2 + x_1 x_2^2)/δ      (38)
The terms R_i are also expressed in terms of x_0, x_1, x_2, x_3. The remaining Study parameters are still linked by equation g_8: x_0^2 + x_1^2 + x_2^2 + x_3^2 - 1 = 0, which amounts to a hypersphere equation in the space (x_0, x_1, x_2, x_3). Accordingly, the transformation matrix is obtained. However, only the translational part of the transformation matrix depends on parameters (x_0, x_1, x_2, x_3).
M_T = [ 2h_1(x_0x_2 - x_0x_3 + x_1x_2 + x_1x_3),
        -2h_1(x_0x_1 - x_0x_3 - x_1x_2 - x_2x_3),
        2h_1(x_0x_1 - x_0x_2 + x_1x_3 + x_2x_3) ]^T      (39)
This parametrization provides us with an interpretation of the general motion performed by the manipulator. The moving platform of the manipulator is capable of all orientations determined by (x 0 , x 1 , x 2 , x 3 ). The translational motion is coupled to the orientations via Eq. (39).
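The orientation-translation coupling of Eq. (39) is easy to explore numerically. The sketch below (illustrative, with h_1 = 1 m) evaluates the translation associated with a normalized orientation quaternion (x_0, x_1, x_2, x_3), showing for instance that the translation vanishes at the identity orientation:

```python
import numpy as np

def coupled_translation(x, h1=1.0):
    """Translational part of Eq. (39) for a unit quaternion (x0, x1, x2, x3)."""
    x0, x1, x2, x3 = np.asarray(x, dtype=float) / np.linalg.norm(x)
    return 2.0 * h1 * np.array([ x0*x2 - x0*x3 + x1*x2 + x1*x3,
                                 -x0*x1 + x0*x3 + x1*x2 + x2*x3,
                                  x0*x1 - x0*x2 + x1*x3 + x2*x3])

print(coupled_translation([1, 0, 0, 0]))          # identity orientation -> zero translation
print(coupled_translation([1, 0.2, -0.1, 0.05]))  # generic orientation -> coupled translation
```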
The position of any point of the moving platform, of coordinates [1, x, y, z]^T in Σ_1, with respect to the fixed frame Σ_0, [1, X, Y, Z]^T, during the motion is determined by applying the transformation matrix M of Eq. (2) with the Study parameters of Eq. (38) (Eq. (40)). Let us first consider the point Q of coordinates r^1_Q = [1, -h_1, -h_1, -h_1]^T, which is a special point in the cube of the moving frame Σ_1, as shown in Fig. 5; its positions with respect to the fixed frame Σ_0 follow from Eq. (40). The coordinates of point Q depend on (x_0, x_1, x_2, x_3). There are four particular positions, obtained when three of the four parameters x_i (i = 0, 1, 2, 3) are equal to zero. These positions of Q are:
x 0 = 1, x 1 = x 2 = x 3 = 0 : r 0 C 0 = [1, -h 1 , -h 1 , -h 1 ] T x 1 = 1, x 0 = x 2 = x 3 = 0 : r 0 C 1 = [1, -h 1 , h 1 , h 1 ] T x 2 = 1, x 0 = x 1 = x 3 = 0 : r 0 C 2 = [1, h 1 , -h 1 , h 1 ] T x 3 = 1, x 0 = x 1 = x 2 = 0 : r 0 C 3 = [1, h 1 , h 1 , -h 1 ] T (42)
C 0 , C 1 , C 2 and C 3 are the vertices of a tetrahedron C as shown in Fig. 5. Those points correspond to the poses of the moving platform subjected to the actuation singularities.
The uncontrolled motions of the moving platform are characterized by zero-pitch twists that intersect the geometric center of the moving platform and the corresponding vertices.
If two parameters are null, for instance x 2 = x 3 = 0, the motion of point Q will be determined by:
X = -h 1 Y = -h 1 (x 2 0 -x 2 1 )/(x 2 0 + x 2 1 ) Z = -h 1 (x 2 0 -x 2 1 )/(x 2 0 + x 2 1 ) (43)
This means that point Q moves along the edge C 0 C 1 , covering the closed interval between the two vertices. If only one parameter is zero, for instance if x 0 = 0, the point Q will occupy the closed triangle C 1 C 2 C 3 . Eventually, if none of the parameters is null, then point Q will move inside the tetrahedron C.
Let us consider an arbitrary point R in the moving platform such that:
(x + h 1 )(y + h 1 )(z + h 1 ) = 0 (44)
For example, take a point at the geometric center of the triangular-shaped platform, of coordinates
r^1_R = [1, -2h_1/3, -2h_1/3, -2h_1/3]^T. If any three of the four parameters x_i are equal to zero, the corresponding positions of point R become:

x_0 = 1, x_1 = x_2 = x_3 = 0 : r^0_{D_0} = [1, -2h_1/3, -2h_1/3, -2h_1/3]^T
x_1 = 1, x_0 = x_2 = x_3 = 0 : r^0_{D_1} = [1, -2h_1/3, 2h_1/3, 2h_1/3]^T
x_2 = 1, x_0 = x_1 = x_3 = 0 : r^0_{D_2} = [1, 2h_1/3, -2h_1/3, 2h_1/3]^T
x_3 = 1, x_0 = x_1 = x_2 = 0 : r^0_{D_3} = [1, 2h_1/3, 2h_1/3, -2h_1/3]^T      (45)

D_0, D_1, D_2 and D_3 are the vertices of a pseudo-tetrahedron D, as shown in Fig. 6, and it was verified that these vertices amount to the singularities of the 3-RPS Cube manipulator. If two parameters are equal to zero, for instance x_2 = x_3 = 0, the point Q moves along the edge C_0C_1, while the path of point R is given by:

X = -2h_1/3
Y = -(2h_1/3)(x_0^2 + x_0x_1 - x_1^2)/(x_0^2 + x_1^2)
Z = -(2h_1/3)(x_0^2 - x_0x_1 - x_1^2)/(x_0^2 + x_1^2)      (46)

Figure 6: Pseudo-tetrahedron D.
This represents an ellipse e 01 that passes through the vertices D 0 and D 1 and lies in the plane X = -2 3 h 1 . Accordingly, the four vertices of the pseudo-tetrahedron D are joined by six ellipses, as shown in Fig. 6.
When only one parameter is equal to zero, for instance x_0 = 0, the trajectory of point R follows a particular surface, called the Steiner surface F_0 (Fig. 7); it passes through the vertices D_1, D_2 and D_3 (an animation of the motion of point R, bounded by the Steiner surface, is available at http://www.irccyn.ec-nantes.fr/~caro/ASME_JMR/JMR_14_1262/animation_steiner.gif). The expressions of the trajectory of point R are then given by:
X = -(2h_1/3)(x_1^2 - x_1x_2 - x_1x_3 - x_2^2 - x_3^2)/(x_1^2 + x_2^2 + x_3^2)
Y = (2h_1/3)(x_1^2 + x_1x_2 - x_2^2 + x_2x_3 + x_3^2)/(x_1^2 + x_2^2 + x_3^2)
Z = (2h_1/3)(x_1^2 + x_1x_3 + x_2^2 + x_2x_3 - x_3^2)/(x_1^2 + x_2^2 + x_3^2)      (47)
Therefore, the trajectory of an arbitrary point of the moving platform forms the shape of pseudo-tetrahedron D and contains four vertices D i (i = 0, 1, 2, 3). These vertices are joined by six ellipses and any three of the vertices are linked by a Steiner surface F j (j = 0, 1, 2, 3). Any two Steiner surfaces (F i and F j ) share one ellipse e ij in common.
Let us analyse the motion of a special point S that does not fulfill Eq. (44). For instance, the point S is at one vertex of the triangular-shaped platform, B 3 (Fig. 1). If three parameters (among four parameters x i , i = 0, 1, 2, 3) are equal to zero, the positions of the point S are determined by:
x 0 = 1, x 1 = x 2 = x 3 = 0 : r 0 E 0 = [1, -h 1 , -h 1 , 0] T x 1 = 1, x 0 = x 2 = x 3 = 0 : r 0 E 1 = [1, -h 1 , h 1 , 0] T x 2 = 1, x 0 = x 1 = x 3 = 0 : r 0 E 2 = [1, h 1 , -h 1 , 0] T x 3 = 1, x 0 = x 1 = x 2 = 0 : r 0 E 3 = [1, h 1 , h 1 , 0] T (48)
Those points are coplanar and are the vertices of a rectangle as shown in Fig. 8. If two parameters are zero, for example x 2 = x 3 = 0, the path of point S is along the edge E 0 E 1 .
Accordingly, in a general configuration the point S always moves in the plane Z = 0.
Another special point which does not fulfill Eq. ( 44) is the origin of the moving frame P . According to Eq. ( 40), the positions of point P are given by:
X = 2h_1(x_0x_2 - x_0x_3 + x_1x_2 + x_1x_3)/δ
Y = 2h_1(-x_0x_1 + x_0x_3 + x_1x_2 + x_2x_3)/δ
Z = 2h_1(x_0x_1 - x_0x_2 + x_1x_3 + x_2x_3)/δ,    with δ = x_0^2 + x_1^2 + x_2^2 + x_3^2      (49)
If three parameters (among the four parameters x_i, i = 0, 1, 2, 3) are equal to zero, the positions of the point P are always coincident with the origin O of the fixed frame.

Vertical Darboux Motion

The condition for the manipulator to generate the VDM is that all prismatic lengths are equal, i.e., r_1 = r_2 = r_3. By solving the direct kinematics of the manipulator with equal prismatic lengths, the Study parameters obtained to perform the VDM yield x_1 = x_2 = x_3 and y_1 = y_2 = y_3, and the remaining parameters are linked by the eighth equation, x_0^2 + 3x_1^2 - 1 = 0, which is simply an ellipse equation in the space (x_0, x_1). This ellipse equation can be parametrized by x_0 = cos(u) and x_1 = (√3/3) sin(u).
As a result, the workspace of the manipulator performing the VDM is parametrized by the parameter u. Hence, the Study parameters are expressed as:
x_0 = c(u),    x_1 = x_2 = x_3 = (√3/3) s(u)
y_0 = (2√3/3) s(u)^3,    y_1 = y_2 = y_3 = -(2/3) c(u) s(u)^2      (52)
where s(u) = sin(u), c(u) = cos(u).
Therefore, the possible poses of the moving platform can be expressed by the following transformation matrix:
T = [ 1   0                    0                    0                  ;
      a   (4c(u)^2 - 1)/3      n(u)                 p(u)               ;
      a   p(u)                 (4c(u)^2 - 1)/3      n(u)               ;
      a   n(u)                 p(u)                 (4c(u)^2 - 1)/3    ]      (53)

where a = (4/3) s(u)^2, n(u) = -(2/3) s(u)(√3 c(u) - s(u)) and p(u) = (2/3) s(u)(√3 c(u) + s(u)).
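The parametrization can be checked numerically. The following sketch (illustrative, with h_1 = 1 m as in Eq. (52)) builds the pose directly from the Study parameters of Eq. (52) and verifies that the vertex B_1 remains in the plane X = 0 while the platform centre R travels along the cube diagonal X = Y = Z:

```python
import numpy as np

def pose_vdm(u, h1=1.0):
    """Rotation and translation of the Vertical Darboux Motion from Eq. (52)."""
    c, s = np.cos(u), np.sin(u)
    x0, x1, x2, x3 = c, s/np.sqrt(3), s/np.sqrt(3), s/np.sqrt(3)
    y0 = 2*np.sqrt(3)*s**3/3 * h1
    y1 = y2 = y3 = -2*c*s**2/3 * h1
    R = np.array([[x0**2+x1**2-x2**2-x3**2, 2*(x1*x2-x0*x3), 2*(x1*x3+x0*x2)],
                  [2*(x1*x2+x0*x3), x0**2-x1**2+x2**2-x3**2, 2*(x2*x3-x0*x1)],
                  [2*(x1*x3-x0*x2), 2*(x2*x3+x0*x1), x0**2-x1**2-x2**2+x3**2]])
    t = 2*np.array([-x0*y1+x1*y0-x2*y3+x3*y2,
                    -x0*y2+x1*y3+x2*y0-x3*y1,
                    -x0*y3-x1*y2+x2*y1+x3*y0])
    return R, t

h1 = 1.0
B1_moving = np.array([0.0, -h1, -h1])                 # vertex B1 in the moving frame
R_moving  = np.array([-2*h1/3, -2*h1/3, -2*h1/3])     # platform centre R

for u in np.linspace(-np.pi/2, np.pi/2, 5):
    Rot, t = pose_vdm(u, h1)
    print(round((Rot @ B1_moving + t)[0], 12),        # X coordinate of B1 stays 0
          np.round(Rot @ R_moving + t, 3))            # R stays on the line X = Y = Z
```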
Trajectory of the moving platform performing the Vertical Darboux Motion
Let us consider the point B_1, which moves in the plane X = 0, and the geometric center R of the moving platform, as shown in Fig. 1. The paths followed by those two points are obtained by setting u = -π/2 ... π/2 and using the transformation matrix T defined in Eq. (53). It appears that those two paths are different, as shown in Fig. 10 (an animation of the trajectories is available at http://www.irccyn.ec-nantes.fr/~caro/ASME_JMR/JMR_14_1262/animation_trajectories.gif). Point R moves along a straight-line segment, whereas point B_1 follows an elliptical path that lies in the plane X = 0. Let us take all segments joining point B_1 to any point of segment B_2B_3 and plot the paths of all points on those segments. All those paths are planar ellipses, except the path followed by point R. Accordingly, the set of all paths forms a ruled surface called a right-conoid surface, which is illustrated in yellow in Fig. 11.
Axodes of the manipulator performing the Vertical Darboux Motion

Having the parametrization of the VDM performed by the 3-RPS Cube parallel manipulator in terms of Study parameters, it is relatively easy to compute the ISA. The possible poses of the moving platform as functions of time in this special motion only allow the orientations that are given by the single parameter u. The ISA are obtained from the entries of the velocity operator:

A = Ṫ T^{-1}      (54)

By setting u = t, matrix A becomes:

A = [ 0                    0        0        0      ;
      (8/3) sin(t)cos(t)   0        -2√3/3   2√3/3  ;
      (8/3) sin(t)cos(t)   2√3/3    0        -2√3/3 ;
      (8/3) sin(t)cos(t)   -2√3/3   2√3/3    0      ]      (55)
The instantaneous screw axis of the moving platform is obtained from the components of matrix A as explained in [START_REF] Schadlbauer | Operation Modes in Lower Mobility Parallel Manipulators[END_REF], after normalization:
ISA = 1 √ 3 1 √ 3 1 √ 3
All twists of the manipulator are collinear. As a consequence, the fixed axode generated by the ISA is a straight line of unit vector [1/ √ 3, 1/ √ 3, 1/ √ 3] T . In the moving coordinate frame, the moving axode corresponding to this motion is congruent with the fixed axode as depicted in Fig. 12. However, the moving axode does not appear clearly as it is congruent with the fixed axode. Indeed, the moving axode internally slides and rolls onto the fixed axode.
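The instantaneous screw axis can also be recovered symbolically. The sketch below (illustrative only) differentiates the VDM pose with respect to u, forms the velocity operator of Eq. (54) and reads the angular velocity off its skew-symmetric rotational block, which stays proportional to [1, 1, 1]^T/√3 for all u:

```python
from sympy import symbols, sin, cos, sqrt, Matrix, BlockMatrix, zeros, simplify, Rational

u = symbols('u', real=True)
c, s = cos(u), sin(u)
x0 = c
x1 = x2 = x3 = s/sqrt(3)
y0 = 2*sqrt(3)*s**3/3
y1 = y2 = y3 = -Rational(2, 3)*c*s**2

R = Matrix([[x0**2+x1**2-x2**2-x3**2, 2*(x1*x2-x0*x3), 2*(x1*x3+x0*x2)],
            [2*(x1*x2+x0*x3), x0**2-x1**2+x2**2-x3**2, 2*(x2*x3-x0*x1)],
            [2*(x1*x3-x0*x2), 2*(x2*x3+x0*x1), x0**2-x1**2-x2**2+x3**2]])
t = 2*Matrix([-x0*y1+x1*y0-x2*y3+x3*y2,
              -x0*y2+x1*y3+x2*y0-x3*y1,
              -x0*y3-x1*y2+x2*y1+x3*y0])

T    = Matrix(BlockMatrix([[Matrix([[1]]), zeros(1, 3)], [t, R]]))
Tinv = Matrix(BlockMatrix([[Matrix([[1]]), zeros(1, 3)], [-R.T*t, R.T]]))

A = simplify(T.diff(u) * Tinv)                            # velocity operator of Eq. (54)
omega = simplify(Matrix([A[3, 2], A[1, 3], A[2, 1]]))     # angular velocity from the skew block
print(omega.T)                                            # constant vector 2/sqrt(3) * [1, 1, 1]
print(simplify(omega / omega.norm()).T)                   # ISA direction [1, 1, 1]/sqrt(3)
```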
Conclusions
In this paper, an algebraic geometry method was applied to analyse the kinematics and the operation mode of the 3-RPS Cube manipulator. Primary decomposition of an ideal of eight constraint equations revealed that the manipulator has only one general operation mode. In this operation mode, the direct kinematics was solved and the number of solutions was obtained for arbitrary values of design parameters and joint variables. The singularity conditions were computed and represented in the joint space. It turns out that the manipulator reaches the singularity when the moving frame coincides with the fixed frame and all joint variables are equal. The uncontrolled motion of the moving platform in this singularity configuration was investigated and geometrically interpreted.
Figure 1: The 3-RPS Cube Parallel Manipulator.
Figure 2: Solutions of the Direct Kinematics.

Figure 3: Singularity Surface of L_1.

Figure 4: Singularity Pose at the Identity Condition.

Figure 5: Tetrahedron C.

Figure 7: Steiner Surface F_0.

Figure 8: Rectangle E.

Figure 9: Steiner Surface G_0.

Figure 10: Trajectories of points B_1 and R.
Note: this type of ruled surface is generated by moving a straight line such that it always intersects a fixed straight line perpendicularly; the fixed line is called the axis of the right-conoid surface. The fixed straight line followed by point R is the axis of the right-conoid surface (an animation of the right-conoid surface is available at http://www.irccyn.ec-nantes.fr/~caro/ASME_JMR/JMR_14_1262/animation_rightconoid.gif).
Figure 11 :
11 Figure 11: Right-conoid Surface of the VDM.
7. 2
2 Axodes of the manipulator performing the Vertical Darboux MotionHaving the parametrization of the VDM performed by the 3-RPS Cube parallel manipulator in terms of Study parameters, it is relatively easy to compute the ISA. The possible poses of the moving platform as functions of time in this special motion only allow the orientations that are given by one parameter u. The ISA are obtained from the entries of the velocity operator:A = Ṫ T -1(54)By setting u = t, matrix A becomes:
Figure 12 :
12 Figure 12: ISA Axodes of VDM.
The parameters x 0 , x 1 , x 2 , x 3 , y 0 , y 1 , y 2 , y 3 , which appear in matrix M, are called Study parameters. These parameters make it possible to parametrize SE(3) with dual quaternions. The Study kinematic mapping maps each spatial Euclidean displacement of SE(3), via transformation matrix M, onto a projective point X = [x 0 : x 1 : x 2 : x 3 : y 0 : y 1 : y 2 : y 3 ] lying on the 6-dimensional Study quadric S ⊂ P 7 [14], such that:
SE(3) → X ∈ P 7
If any of the three parameters is zero, then the corresponding positions of point R become:

x 0 = 1, x 1 = x 2 = x 3 = 0 : r 0 D 0 = [1, -2h 1 /3, -2h 1 /3, -2h 1 /3] T
x 1 = 1, x 0 = x 2 = x 3 = 0 : r 0 D 1 = [1, -2h 1 /3, 2h 1 /3, 2h 1 /3] T
x 2 = 1, x 0 = x 1 = x 3 = 0 : r 0 D 2 = [1, 2h 1 /3, -2h 1 /3, 2h 1 /3] T
x 3 = 1, x 0 = x 1 = x 2 = 0 : r 0 D 3 = [1, 2h 1 /3, 2h 1 /3, -2h 1 /3] T   (45)

D 0 , D 1 , D 2 and D 3 are the vertices of a pseudo-tetrahedron D, as shown in Fig. 6, and it was verified that these vertices correspond to the singularities of the 3-RPS Cube manipulator. If two parameters are equal to zero, for instance x 2 = x 3 = 0, point Q moves along the edge C 0 C 1 , while the path of point R is given by:
The motion animation of point R, which is bounded by the Steiner surface, is shown at: http://www.irccyn.ec-nantes.fr/~caro/ASME_JMR/JMR_14_1262/animation_steiner.gif
http://www.irccyn.ec-nantes.fr/~caro/ASME_JMR/JMR_14_1262/animation_trajectories.gif [START_REF] Huang | Analysis of Instantaneous Motions of Deficient Rank 3-RPS Parallel Manipulators[END_REF]
The animation of the right-conoid surface is shown at: http://www.irccyn.ec-nantes.fr/~caro/ASME_JMR/JMR_14_1262/animation_rightconoid.gif
Acknowledgments
The authors would like to acknowledge the support of the Österreichischer Austauschdienst/OeAD, the French Ministry for Foreign Affairs (MAEE) and the French Ministry for Higher Education and Research (MESR) (Project PHC AMADEUS). Moreover, Prof. Manfred Husty acknowledges the support of FWF grant P 23832-N13, Algebraic Methods in Collision Detection and Path Planning.
T , which is a special point in the cube of the moving frame Σ 1 , as shown in Fig. 5. Then, its positions with respect to the fixed frame Σ 0 , according to Eq. (40), are:
The coordinates of point Q depend on (x 0 , x 1 , x 2 , x 3 ). There are four possible positions, corresponding to three of the four parameters x i , i = 0, 1, 2, 3, being equal to zero. These positions of Q are:
Vertical Darboux Motion
The condition for the manipulator to generate the VDM is that all prismatic lengths are equal, i.e., r 1 = r 2 = r 3 . By solving the direct kinematics of the manipulator with the same prismatic lengths, the Study parameters obtained to perform the VDM yield
By substituting those values into the ideal I, the set of eight constraint equations becomes:
It follows from Eq. (50) that the first three constraint equations are the same. Likewise, the next three equations are identical. Mathematically, one has to find the case of a 1-dof motion, also known as a cylindrical motion, with one parameter that describes the VDM.
Equation (50) can be solved linearly for the variables R i , y 0 , y 1 in terms of x 0 , x 1 , as follows:
From Eq. (51), it is apparent that the manipulator can perform the VDM if and only if all prismatic lengths are the same. The remaining Study parameters x 0 and x 1 are still free.
01757510 | en | [
"spi.meca.geme",
"spi.auto"
] | 2024/03/05 22:32:10 | 2018 | https://hal.science/hal-01757510/file/ARK2018_Rasheed_Long_Marquez_Caro.pdf | Tahir Rasheed
email: tahir.rasheed@ls2n.fr
Philip Long
email: p.long@northeastern.edu
David Marquez-Gamez
email: david.marquez-gamez@irt-jules-verne.fr
Stéphane Caro
email: stephane.caro@ls2n.fr
Stéphane Caro
Kinematic Modeling and Twist Feasibility of Mobile Cable-Driven Parallel Robots
Keywords: Cable-Driven Parallel Robots, Mobile Bases, Kinematic Modeling, Available Twist Set
Introduction
A Cable-Driven Parallel Robot (CDPR) is a type of parallel manipulator whose limbs are cables connecting the moving-platform to a fixed base frame. The platform is moved by appropriately controlling the cable lengths or tensions. CDPRs have numerous advantages over conventional robots, e.g., high accelerations [START_REF] Kawamura | High-speed manipulation by using parallel wire-driven robots[END_REF], large payload capabilities [START_REF] Albus | The nist robocrane[END_REF], and large workspaces [START_REF] Lambert | Implementation of an aerostat positioning system with cable control[END_REF].
However, a major drawback of classical CDPRs with a fixed cable layout, i.e., fixed exit points and cable configuration, is the potential collision between the cables and the surrounding environment, which can significantly reduce the robot workspace. Better performance can be achieved with an appropriate CDPR architecture. Cable robots that can undergo a change in their geometric structure are known as Reconfigurable Cable-Driven Parallel Robots (RCDPRs). Different strategies have been proposed for maximizing the robot workspace or increasing the platform stiffness in recent work on RCDPRs [START_REF] Gagliardini | Discrete reconfiguration planning for cable-driven parallel robots[END_REF]. However, reconfiguration is typically performed manually for most existing RCDPRs.
To achieve autonomous reconfigurability of RCDPRs, a novel concept of Mobile Cable-Driven Parallel Robots (MCDPRs) was introduced in [START_REF] Rasheed | Tension distribution algorithm for planar mobile cable-driven parallel robots[END_REF]. The first MCDPR prototype has been designed and built in the context of the Echord++ FASTKIT project 1 . The targeted application for this MCDPR prototype is logistics. Some papers deal with the velocity analysis of parallel manipulators [START_REF] Merlet | Efficient computation of the extremum of the articular velocities of a parallel manipulator in a translation workspace[END_REF]. However, few focus on the twist analysis of CDPRs [START_REF] Lessanibahri | Twist feasibility analysis of cabledriven parallel robots[END_REF]. This paper deals with the kinematic modeling of MCDPRs, which is required to analyze the kinematic performance of the robot. The paper is organized as follows. Section 2 presents the kinematic model of MCDPRs. Section 3 deals with the determination of the Available Twist Set (ATS) of MCDPRs using the kinematic model of the latter. The ATS can be used to obtain the twist capacities of the moving-platform. Section 4 presents the twist capacities of the moving-platform for the MCDPRs under study. Finally, conclusions are drawn and future work is presented in Section 5.
Kinematic modeling
A MCDPR is composed of a classical CDPR with m cables and an n-degree-of-freedom (DoF) moving-platform mounted on p Mobile Bases (MBs). The jth mobile base is denoted as M j , j = 1, . . . , p. The ith cable mounted onto M j is named C i j , i = 1, . . . , m j , where m j denotes the number of cables carried by M j . u i j denotes the unit vector along the cable C i j . The jth mobile base together with its m j cables is denoted as the jth PD module (pd j ). Each pd j consists of a proximal module (prox j ) and a distal module (dist j ). dist j consists of the m j cables between M j and the moving-platform. In this paper, cables are assumed to be straight and massless, and can thus be modeled as Universal-Prismatic-Spherical (UPS) kinematic chains. MBs are generally four-wheeled planar robots with two translational DoF and one rotational DoF; thus, prox j can be modeled as a virtual RPP kinematic chain between the base frame F 0 and the frame F b j attached to M j . An illustrative example with p = 4 MBs and m = 8 cables is shown in Fig. 1a. A general kinematic architecture of a MCDPR is shown in Fig. 1b.
Kinematics of the Distal Module
A classical CDPR is referred to as a distal module in a MCDPR. The twist 0 t dist P of the moving-platform due to the latter is expressed as [START_REF] Gouttefarde | Interval-analysis-based determination of the wrench-feasible workspace of parallel cable-driven robots[END_REF][START_REF] Roberts | On the inverse kinematics, statics, and fault tolerance of cable-suspended robots[END_REF]:
A 0 t dist P = l̇ ,   (1)
where A is the (m × n) parallel Jacobian matrix, containing the actuation wrenches applied by the cables on the mobile platform. The twist 0 t P = [ω, ṗ] T is composed of the platform angular velocity vector ω = [ω x , ω y , ω z ] T and linear velocity vector ṗ = [ ṗx , ṗy , ṗz ] T , expressed in F 0 . 0 t dist P denotes the platform twist due to the distal module motion. l̇ is an m-dimensional cable velocity vector. Here, Eq. (1) can be expressed as:
[ A 1 ; A 2 ; . . . ; A j ; . . . ; A p ] 0 t dist P = [ l̇1 ; l̇2 ; . . . ; l̇ j ; . . . ; l̇p ] ,   (2)
where l̇ j = [ l̇1 j , l̇2 j , . . . , l̇m j j ] T . A j is expressed as:
A j = [ [( 0 b 1 j -0 p) × u 1 j ] T u T 1 j ; [( 0 b 2 j -0 p) × u 2 j ] T u T 2 j ; . . . ; [( 0 b m j j -0 p) × u m j j ] T u T m j j ] ,   (3)
where the ith row of A j is associated with the actuation wrench of the ith cable mounted onto M j . 0 b i j denotes the Cartesian coordinate vector of the anchor point B i j expressed in F 0 , and 0 p denotes the platform position expressed in F 0 .
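A minimal NumPy sketch of Eq. (3) is given below. It is illustrative only: the anchor-point coordinates and platform position in the usage example are made up, and the twist ordering [ω, ṗ] follows the definition of 0 t P given above.

import numpy as np

def distal_jacobian(anchor_pts, platform_pos, cable_dirs):
    """Rows of A_j as in Eq. (3): [(b_ij - p) x u_ij , u_ij] for each cable."""
    rows = []
    for b, u in zip(anchor_pts, cable_dirs):
        u = u / np.linalg.norm(u)
        rows.append(np.hstack((np.cross(b - platform_pos, u), u)))
    return np.vstack(rows)          # (m_j x 6) for a 6-DoF platform twist [omega, pdot]

# hypothetical numbers, for illustration only
p = np.array([0.0, 0.0, 1.0])
B = [np.array([1.0, 1.0, 2.0]), np.array([-1.0, 1.0, 2.0])]   # made-up anchor points
U = [b - p for b in B]              # cable directions, here taken from the platform towards the anchors
A_j = distal_jacobian(B, p, U)
print(A_j @ np.hstack((np.zeros(3), [0.0, 0.0, 0.1])))        # cable length rates for a pure vertical velocity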
Kinematic modeling of a MCDPR
The twist 0 t j P of the moving-platform due to pd j can be expressed in F 0 as:
0 t j P = 0 t prox j P + 0 t dist j P (4)
where 0 t prox j P ( 0 t dist j P , resp.) is the twist of the moving-platform due to the motion of the proximal (distal, resp.) module of pd j , expressed in F 0 . Equation (4) takes the form:
0 t j P = b j Ad P 0 t prox j b j + 0 R b j b j t dist j P (5)
where b j Ad P is the adjoint matrix, namely the transformation matrix between twists expressed in F b j and twists expressed in F P :

b j Ad P = [ I 3 , 0 3 ; -b j rP , I 3 ]   (6)

where b j rP is the cross-product matrix of the vector from the origin of F b j to point P, expressed in F 0 . b j t dist j P is the moving-platform twist due to dist j expressed in F b j . The augmented rotation matrix 0 R b j is used to express b j t dist j P in F 0 :

0 R b j = [ 0 R b j , 0 3 ; 0 3 , 0 R b j ]   (7)
where 0 R b j is the rotation matrix between frames F b j and F 0 . As the proximal module is modeled as a virtual RPP limb, 0 t prox j b j in Eq. (4) can be expressed as:
0 t prox j b j = J b j q̇ b j   (8)
where J b j is the (6 × 3) serial Jacobian matrix of prox j and q̇ b j is the virtual joint velocity vector of the latter, namely,
0 t prox j b j = [ k 0 , 0 3 , 0 3 ; k 0 × 0 p , 0 R b j i 0 , 0 R b j j 0 ] [ θ̇ j , ρ̇1 j , ρ̇2 j ] T   (9)
where i 0 , j 0 and k 0 denote the unit vectors along x 0 , y 0 and z 0 , respectively. Multiplying Eq. (5) by A j yields:
A j 0 t j P = A j b j Ad P J b j q̇ b j + A j 0 R b j b j t dist j P .   (10)
As A j 0 R b j b j t dist j P represents the cable velocities of dist j (see Eq. (2)), Eq. (10) can also be expressed as:
A j 0 t j P = A j b j Ad P J b j q̇ b j + l̇ j .   (11)
The twist of the moving-platform t P and the twists generated by the limbs are the same, namely,

0 t 1 P = 0 t 2 P = 0 t 3 P = . . . = 0 t j P = . . . = 0 t p P = t P   (12)
Thus, the twist of the moving-platform in terms of all p limbs can be expressed as:
[ A 1 ; A 2 ; . . . ; A p ] t P = [ A 1 b1 Ad P J b1 , 0 , . . . , 0 ; 0 , A 2 b2 Ad P J b2 , . . . , 0 ; . . . ; 0 , 0 , . . . , A p bp Ad P J bp ] q̇ b + l̇   (13)
where q̇ b = [ q̇ b1 , q̇ b2 , . . . , q̇ bp ] T and l̇ = [ l̇1 , l̇2 , . . . , l̇p ] T . Equation (13) can be expressed in matrix form as:
A t P = B b q̇ b + l̇   (14)
A t P = B q̇   (15)
where B = [B b I m ] is an (m × (3p + m)) matrix and q̇ = [ q̇ b , l̇ ] T is the (3p + m)-dimensional vector containing all joint velocities. Equation (15) represents the first-order kinematic model of MCDPRs.
Available Twist Set of MCDPRs
This section aims at determining the set of twists that the platform of a MCDPR can generate. For a classical CDPR, this set is known as the Available Twist Set (ATS). A CDPR pose is called twist-feasible if all the twists within a given set can be produced at the platform within the given joint velocity limits [START_REF] Lessanibahri | Twist feasibility analysis of cabledriven parallel robots[END_REF].
According to [START_REF] Lessanibahri | Twist feasibility analysis of cabledriven parallel robots[END_REF], the ATS of a CDPR corresponds to a convex polytope that can be represented as the intersection of the half-spaces bounded by its hyperplanes, which can be obtained with the Hyperplane-Shifting Method (HSM) [START_REF] Bouchard | On the ability of a cable-driven robot to generate a prescribed set of wrenches[END_REF][START_REF] Gouttefarde | Characterization of parallel manipulator available wrench set facets[END_REF]. Although the HSM can be utilized to determine the ATS of MCDPRs, the approach in [START_REF] Lessanibahri | Twist feasibility analysis of cabledriven parallel robots[END_REF] is not directly applicable because of the difference between the kinematic models of a CDPR (Eq. 1) and of a MCDPR (Eq. 15), matrix B not being an identity matrix.
The kinematic model of MCDPRs is used to determine the ATS of the moving-platform. If m = n, A is square and Eq. (15) can be expressed as:
t P = A -1 B q̇ ⟹ t P = J q̇   (16)
where J is a Jacobian matrix mapping the joint velocities onto the platform twist. The ATS will correspond to a single convex polytope, constructed under the mapping of Jacobian J.
If m ≠ n, matrix A is not square; however, there exist in total C n m (n × n) square sub-matrices of matrix A, denoted by A k , k = 1, . . . , C n m , obtained by removing m - n rows from A. For each sub-matrix we can write:
t k P = A k -1 B k q̇ ⟹ t k P = J k q̇ , k = 1, . . . , C n m   (17)
where t k P is the twist generated by the kth sub-matrix A k out of the C n m (n × n) square sub-matrices of matrix A, and B k is the sub-matrix of B formed from the rows corresponding to those kept in A k . The HSM in [START_REF] Bouchard | On the ability of a cable-driven robot to generate a prescribed set of wrenches[END_REF][START_REF] Gouttefarde | Characterization of parallel manipulator available wrench set facets[END_REF] is directly applicable to compute all the hyperplanes of the C n m convex polytopes knowing the minimum and maximum joint velocity limits. Thus, the ATS of a MCDPR is the region bounded by all of the foregoing hyperplanes.
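For low-dimensional cases, the construction of Eq. (17) can be illustrated with the following brute-force Python sketch, which maps the vertices of the joint-velocity box through every J k and takes the convex hull of the images instead of applying the HSM. The matrices A and B in the usage example are random placeholders, not the manipulators studied below.

import numpy as np
from itertools import combinations, product
from scipy.spatial import ConvexHull

def twist_polytopes(A, B, qdot_min, qdot_max):
    """For every n x n sub-matrix A_k of A, map the joint-velocity box through
    J_k = A_k^{-1} B_k and return the vertices of the resulting convex polytope."""
    m, n = A.shape
    box = np.array(list(product(*zip(qdot_min, qdot_max))))   # vertices of the velocity box
    polytopes = []
    for rows in combinations(range(m), n):
        A_k, B_k = A[list(rows), :], B[list(rows), :]
        if abs(np.linalg.det(A_k)) < 1e-9:
            continue
        V = box @ np.linalg.solve(A_k, B_k).T                  # t_P = J_k qdot for each box vertex
        polytopes.append(V[ConvexHull(V).vertices])
    return polytopes

# illustrative planar example: m = 4 cables, n = 2 platform DoF, 6 joint rates in total
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 2))
B = np.hstack((rng.standard_normal((4, 2)), np.eye(4)))
polys = twist_polytopes(A, B, -np.ones(6), np.ones(6))
print(len(polys), [P.shape for P in polys])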
Results
This section deals with the twist feasibility analysis of two different case studies. From the ATS acquired using the kinematic model of a given MCDPR configuration, we aim to study the difference in the moving platform twist considering fixed and moving MBs. The first case study is a planar MCDPR with a point mass end-effector shown in Fig. 2a. The MBs have only one degree of freedom along i 0 . The joint velocity limits are defined as:
-0.8 m.s -1 ≤ ρ̇1 j ≤ 0.8 m.s -1 , -2 m.s -1 ≤ l̇i j ≤ 2 m.s -1 , i = 1, 2, j = 1, 2,   (18)
Matrix A has six 2 × 2 sub-matrices. Thus, the ATS is the region bounded by the hyperplanes formed by these six convex polytopes. The difference in ATS between fixed MBs (corresponding to a classical CDPR) and moving MBs can be observed in Figs. 2b and 2c. To illustrate the difference, a Required Twist Set (RTS) equal to [1.15 m.s -1 , 1.675 m.s -1 ] T is considered, depicted by a red point in Figs. 2b and 2c. For fixed MBs, it should be noted that the RTS is outside the ATS. When the MBs are moving, the RTS is within the ATS. The same approach is adopted to determine the ATS for a given FASTKIT configuration in Fig. 3a. The joint velocity limits are defined as:
-0.2 m.s -1 ≤ θ̇ j , ρ̇1 j , ρ̇2 j ≤ 0.2 m.s -1 , j = 1, 2,   (19)
-2 m.s -1 ≤ l̇i j ≤ 2 m.s -1 , j = 1, 2, i = 1, . . . , 4,   (20)
The maximum absolute twist that the platform can achieve in each Cartesian direction, considering fixed and moving MBs, is illustrated in Figs. 3b and 3c.
The maximum absolute wrench of the moving-platform is illustrated in red, where f x , f y , f z and m x , m y , m z represent the forces and the moments that can be generated by the cables onto the moving-platform. For the analysis, the cable tensions are bounded between 0 N and 20 N. It can be observed that the twist capacity of the moving-platform is increased when the MBs are moving. On the contrary, a high velocity capability of the moving-platform in certain directions also results in a much lower wrench capability in the respective directions. Thus, this velocity is unattainable outside certain dynamic conditions.
Conclusion
This paper dealt with the kinematic modeling of Mobile Cable-Driven Parallel Robots (MCDPRs), which can be used to analyze their kinematic performance. The developed kinematic model was utilized to determine the Available Twist Set (ATS) of MCDPRs, considering the joint velocity limits of both the cables and the Mobile Bases (MBs). Using the ATS, the twist capacities of the moving-platform were determined. Two case studies were used to illustrate the effect of the moving MBs on the platform twist. Future work will focus on the trajectory planning of MCDPRs and on experimental validations with the FASTKIT prototype.
Fig. 1: (a) MCDPR parameterization (b) Kinematic architecture of MCDPRs; active joints are highlighted in gray, passive joints in white

4.1 Case study: p = 2, m = 4 and n = 2-DoF MCDPR

Fig. 2: (a) Configuration under study of the p = 2, m = 4 and n = 2 MCDPR (b) ATS in green for fixed MBs (c) ATS in green for moving MBs

4.2 Case study: p = 2, m = 8 and n = 6-DoF MCDPR

Fig. 3: (a) A FASTKIT configuration (b, c) Maximum absolute twists and wrenches that the FASTKIT platform can generate about each Cartesian direction
https://www.fastkit-project.eu/
Acknowledgements This research work is part of the European Project ECHORD++ "FASTKIT" dealing with the development of collaborative and mobile cable-driven parallel robots for logistics.
01757514 | en | [
"spi.meca.geme",
"spi.auto"
] | 2024/03/05 22:32:10 | 2018 | https://hal.science/hal-01757514/file/3RUUanalysis_final.pdf | Thomas Stigger
email: thomas.stigger@uibk.ac.at
Abhilash Nayak
email: abhilash.nayak@ls2n.fr
Stéphane Caro
email: stephane.caro@ls2n.fr
Philippe Wenger
email: philippe.wenger@ls2n.fr
Martin Pfurner
email: martin.pfurner@uibk.ac.at
Manfred Husty
email: manfred.husty@uibk.ac.at
Algebraic Analysis of a 3-RUU Parallel Manipulator
Keywords: 3-RUU, kinematic analysis, direct kinematics, algebraic geometry
Introduction
For theoretical and practical purposes, the kinematic analysis of a parallel manipulator (PM) is essential to understand its motion behavior. Kinematic constraints can be transformed via Study's kinematic mapping into algebraic constraint equations. Every configuration of the PM is thereby mapped to a point in a projective space, P 7 [START_REF] Husty | 21st Century Kinematics, chap. Kinematics and Algebraic geometry[END_REF][START_REF] Husty | Algebraic methods in mechanism analysis and synthesis[END_REF]. Consequently, well developed concepts of algebraic geometry [START_REF] Cox | Ideals, Varieties, and Algorithms: An Introduction to Computational Algebraic Geometry and Commutative Algebra[END_REF] can be used to interpret the algebraic constraint equations to obtain necessary information about the PM.
In that vein, many PMs were investigated using algebraic geometry concepts. Resultant methods were adopted to solve the direct kinematics of Stewart-Gough platforms [START_REF] Husty | An algorithm for solving the direct kinematics of general Stewart-Gough platforms[END_REF]. A complete kinematic analysis including the characterization of operation modes, solutions to direct kinematics and determination of singular poses was performed for the 3-RPS PM [START_REF] Schadlbauer | Operation Modes in Lower-Mobility Parallel Manipulators[END_REF][START_REF] Schadlbauer | The 3-RPS parallel manipulator from an algebraic viewpoint[END_REF], the 3-RPS cube PM [START_REF] Nurahmi | Kinematic analysis of the 3-RPS Cube Parallel Manipulator[END_REF] and 3-PRS PMs with different arrangements of prismatic joints [START_REF] Nurahmi | Operation modes and singularities of 3-PRS parallel manipulators with different arrangements of P-joints[END_REF]. In the foregoing papers, the prismatic joints were considered to be actuated, which makes the analysis inherently algebraic. A more challenging kinematic analysis of an over-constrained 4-RUU PM with square base and moving platform was accomplished by decomposing it into two 2-RUU PMs [START_REF] Nurahmi | Reconfiguration analysis of a 4-RUU parallel manipulator[END_REF]. The constraint equations of a 3-RUU PM are derived in this paper and its direct kinematics problem is solved. Nevertheless, a complete characterization of the manipulator operation modes has not been obtained yet.
The paper is organized as follows: Section 2 describes the manipulator architecture. Section 3 deals with the derivation of algebraic constraint equations with two approaches and their comparison. Section 4 presents the solutions to direct kinematics for arbitrary design parameters and hints the recognition of a translational operation mode.
Manipulator Architecture
The 3-RUU PM is shown in Figure 1a. Each limb consists of a revolute joint and two universal joints mounted in series with the first revolute joint as the active joint. The moving platform and the fixed base form equilateral triangles with vertices C i and A i , respectively, i = 1, 2, 3. The unit vectors of the revolute joint axes within the i-th limb are denoted as s i j , i = 1, 2, 3; j = 1, ..., 5. s i5 and s i1 are tangent to the circumcircles (with centers P and O) of the moving platform and the base triangles, respectively. Vectors s i1 and s i2 are always parallel, so are vectors s i3 and s i4 . The origin of the fixed coordinate frame, F O is at O and the z O -axis lies along the normal to the base plane whereas the origin of the moving coordinate frame F P is at P and the z P -axis lies along the normal to the moving platform plane. x O and x P axes are directed along OA 1 and PC 1 , respectively. r 0 and r 1 are the circumradii of base and the moving platform, respectively. a 1 and a 3 are the link lengths. θ i1 is the angle of rotation of the first revolute joint about the axis represented by vector s i1 measured from the base plane whereas θ i2 is the angle of rotation of the second revolute joint about the axis represented by vector s i2 measured from the first link.
Fig. 1: (a) The 3-RUU PM in a general configuration; (b) A RUU limb
Constraint Equations
The constraint equations of the 3-RUU PM are derived using a geometrical approach and the Linear Implicitization Algorithm (LIA) [START_REF] Walter | On implicitization of kinematic constraint equations[END_REF]. First, canonical constraint equations for a limb of the PM are derived by attaching fixed and moving coordinate frames to the two extreme joints of a RUU limb as shown in Fig. 1b. Each U-joint is characterized by two revolute joints with orthogonal and intersecting axes and Denavit-Hartenberg (DH) convention is used to parameterize each limb. F 0 and F 1 are the fixed and the moving coordinate frames with their corresponding z-axes along the first and the last revolute joint axes, respectively. Later on, general constraint equations are derived for the whole manipulator.
Derivation Using a Geometrical Approach
Canonical Constraints In order to derive the geometric constraints for a RUU limb, the homogeneous coordinates4 of points A, B,C (a, b, c, respectively) and vectors s j , j = 1, ..., 5, shown in Fig. 1b are expressed as follows:
0 a = [1, 0, 0, 0] T ,  0 b = [1, a 1 cos(θ 1 ), a 1 sin(θ 1 ), 0] T ,  1 c = [1, 0, 0, 0] T ,
0 s 1 = [0, 0, 0, 1] T ,  0 s 2 = [0, 0, 0, 1] T ,
0 s 3 = [0, cos(θ 1 + θ 2 ), sin(θ 1 + θ 2 ), 0] T ,  0 s 4 = [0, cos(θ 1 + θ 2 ), sin(θ 1 + θ 2 ), 0] T ,
1 s 5 = [0, 0, 0, 1] T    (1)
where θ 1 and θ 2 are the angles of rotation of the first and the second revolute joints. Study's kinematic mapping is used to express the vectors c and s 5 in the fixed coordinate frame F 0 , using the transformation matrix 0 T 1 consisting of Study parameters x i and y i , i = 0, 1, 2, 3:
0 c = 0 T 1 1 c and 0 s 5 = 0 T 1 1 s 5 ,
where 0 T 1 = (1/∆) [ ∆ , 0 , 0 , 0 ;
                     d 1 , x 0 ² + x 1 ² - x 2 ² - x 3 ² , -2 x 0 x 3 + 2 x 1 x 2 , 2 x 0 x 2 + 2 x 1 x 3 ;
                     d 2 , 2 x 0 x 3 + 2 x 1 x 2 , x 0 ² - x 1 ² + x 2 ² - x 3 ² , -2 x 0 x 1 + 2 x 2 x 3 ;
                     d 3 , -2 x 0 x 2 + 2 x 1 x 3 , 2 x 0 x 1 + 2 x 2 x 3 , x 0 ² - x 1 ² - x 2 ² + x 3 ² ]   (2)
with
∆ = x 0 ² + x 1 ² + x 2 ² + x 3 ² ≠ 0 and d 1 = -2 x 0 y 1 + 2 x 1 y 0 - 2 x 2 y 3 + 2 x 3 y 2 , d 2 = -2 x 0 y 2 + 2 x 1 y 3 + 2 x 2 y 0 - 2 x 3 y 1 , d 3 = -2 x 0 y 3 - 2 x 1 y 2 + 2 x 2 y 1 + 2 x 3 y 0 .
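As a sanity check of Eq. (2), the following SymPy sketch (illustrative only) rebuilds 0 T 1 from the Study parameters and verifies that the rotational block divided by ∆ is a rotation matrix.

import sympy as sp

x0, x1, x2, x3, y0, y1, y2, y3 = sp.symbols('x0 x1 x2 x3 y0 y1 y2 y3')
Delta = x0**2 + x1**2 + x2**2 + x3**2
d1 = -2*x0*y1 + 2*x1*y0 - 2*x2*y3 + 2*x3*y2
d2 = -2*x0*y2 + 2*x1*y3 + 2*x2*y0 - 2*x3*y1
d3 = -2*x0*y3 - 2*x1*y2 + 2*x2*y1 + 2*x3*y0
R = sp.Matrix([
    [x0**2 + x1**2 - x2**2 - x3**2, -2*x0*x3 + 2*x1*x2,  2*x0*x2 + 2*x1*x3],
    [ 2*x0*x3 + 2*x1*x2, x0**2 - x1**2 + x2**2 - x3**2, -2*x0*x1 + 2*x2*x3],
    [-2*x0*x2 + 2*x1*x3,  2*x0*x1 + 2*x2*x3, x0**2 - x1**2 - x2**2 + x3**2]])
top = sp.Matrix([[Delta, 0, 0, 0]])
bottom = sp.Matrix.hstack(sp.Matrix([d1, d2, d3]), R)
T01 = sp.Matrix.vstack(top, bottom) / Delta          # transformation matrix of Eq. (2)
print(sp.expand(R*R.T - Delta**2*sp.eye(3)))         # zero matrix: R/Delta is orthogonal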
All vectors are now expressed in the base coordinate frame F 0 and hence the geometric constraints can be derived. The following constraints are already satisfied:
1. The first and the second revolute joint axes are parallel: s 1 = s 2
2. The third and the fourth revolute joint axes are parallel: s 3 = s 4
3. Vector AB is perpendicular to the first and the second revolute joint axes: (b - a) T s 1 = 0
4. The second revolute joint axis is perpendicular to the third revolute joint axis: s 2 T s 3 = 0
5. The length of link AB is a 1 : ||b - a|| 2 = a 1
The remaining geometric constraints are derived as algebraic equations 5 : The second revolute joint axis, the fifth revolute joint axis and link BC lie in the same plane. In other words, the scalar triple product of the corresponding vectors is null:
g 1 : (b -c) T (s 2 × s 5 ) = 0 (3)
Vector -→ BC is perpendicular to the third and the fourth revolute joint axes:
g 2 : (b -c) T s 4 = 0 (4)
The fourth and the fifth revolute joint axes are perpendicular:
g 3 : s T 4 s 5 = 0 (5)
Length of the link BC is a 3 :
g 4 : ||b -c|| -a 3 = 0 (6)
Furthermore, Study's quadric equation S : x 0 y 0 + x 1 y 1 + x 2 y 2 + x 3 y 3 = 0 must be taken into account. The five geometric relations g 1 , g 2 , g 3 , g 4 , S describe the RUU limbs of the PM under study. As a matter of fact, when the first revolute joint is actuated, each limb has four DoF and it should be possible to describe it by only two constraint equations. Eqs. (4) and (5) contain the passive joint variable v 2 along with the active joint variable v 1 . Eliminating v 2 from g 2 and g 3 results in an equation that amounts to g 1 . Therefore, the two constraint equations, in addition to the Study quadric, describing a RUU limb are g 1 and g 4 , namely Eqs. (3) and (6). The polynomials g 1 , g 4 and S define an ideal, which is a subset of all polynomials in the Study parameters:
I 1 = ⟨ g 1 , g 4 , S ⟩ ⊆ k[x 0 , x 1 , x 2 , x 3 , y 0 , y 1 , y 2 , y 3 ].   (7)
Explicitly these polynomials take the form:
g 1 := [(x 0 x 1 - x 2 x 3 )(v 1 ² - 1) + (-2x 0 x 2 - 2x 1 x 3 ) v 1 ](x 0 ² + x 1 ² + x 2 ² + x 3 ²) a 1 - 2((x 0 ² + x 3 ²)(x 1 y 1 + x 2 y 2 ) + 2(x 1 ² + x 2 ²)(x 0 y 0 + x 3 y 3 ))(v 1 ² - 1) = 0,   (8)
g 4 := -(x 0 ² + x 1 ² + x 2 ² + x 3 ²)(v 1 ² + 1) a 1 ² + (4(y 1 x 0 - y 0 x 1 + y 3 x 2 - y 2 x 3 ) v 1 ² + 8(-x 0 y 2 + x 1 y 3 + x 2 y 0 - x 3 y 1 ) v 1 + 4(y 2 x 3 - y 3 x 2 - y 1 x 0 + y 0 x 1 )) a 1 + (x 0 ² + x 1 ² + x 2 ² + x 3 ²) a 3 ² - 4(y 0 ² + y 1 ² + y 2 ² + y 3 ²)(v 1 ² + 1) = 0.   (9)
General Constraints g 1 and g 4 are the constraint equations of an RUU limb with specially adapted coordinate systems. To assemble the PM, one has to transform these equations so that the limbs get into the positions of Fig. 1a. It is well known [9] that the necessary transformations are linear in the image space coordinates. Due to lack of space, these transformations are only shown for the derivation of the constraint equations using the LIA in Sec. 3.2 (Eq. 14). One ends up with six constraint equations g i1 , g i4 , i = 1, 2, 3, which form, together with S = 0 and the normalization condition N : x 0 ² + x 1 ² + x 2 ² + x 3 ² - 1 = 0, an ideal

I = ⟨ g 11 , g 14 , g 21 , g 24 , g 31 , g 34 , S , N ⟩ ⊆ k[x 0 , x 1 , x 2 , x 3 , y 0 , y 1 , y 2 , y 3 ]   (10)

5 The cosine and sine of the angles are substituted by tangent half-angles to render the equations algebraic: cos(θ i ) = (1 - v i ²)/(1 + v i ²), sin(θ i ) = 2 v i /(1 + v i ²), where v i = tan(θ i /2), i = 1, 2.
Derivation Using a Linear Implicitization Algorithm
Canonical Constraints The canonical pose of a RUU limb is chosen such that the rotation axes coincide with the z-axes and the common normals of these axes are in the directions of the x-axes of the coordinate systems in order to derive the canonical constraint equations using LIA. It computes implicit equations of lowest possible degree out of parametric equations by comparing coefficients with an arbitrary system of implicit equations with the same degree. An extended explanation is given in [START_REF] Walter | On implicitization of kinematic constraint equations[END_REF]. To describe the RUU kinematic chain using the usual Denavit-Hartenberg (DH) parameters, the following 4 × 4 matrices are defined: T = M i .G i , i = 1, . . . , 5, where the M i -matrices describe a rotation about the z-axis with u i as the rotation angle. The G i -matrices describe the transformation of one joint coordinate system to the next.
M i = [ 1 , 0 , 0 , 0 ;  0 , cos(u i ) , -sin(u i ) , 0 ;  0 , sin(u i ) , cos(u i ) , 0 ;  0 , 0 , 0 , 1 ] ,
G i = [ 1 , 0 , 0 , 0 ;  a i , 1 , 0 , 0 ;  0 , 0 , cos(α i ) , -sin(α i ) ;  d i , 0 , sin(α i ) , cos(α i ) ] .   (11)
The parameters in G i are DH parameters encoding the distance along x-axis a i , the offset along z-axis d i and the twist angle between the axes α i . The DH parameters for the RUU limb are
α 2 = π/2, α 4 = -π/2, d 1 = a 2 = d 2 = d 3 = a 4 = d 4 = α 1 = α 3 = 0.
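A short SymPy sketch of the composition of the matrices of Eq. (11) for the RUU limb is given below; it is illustrative only, and taking the last link transformation as the identity is an assumption of this sketch.

import sympy as sp

def M(u):
    """Rotation about the joint axis, Eq. (11); the homogeneous coordinate comes first."""
    return sp.Matrix([[1, 0, 0, 0],
                      [0, sp.cos(u), -sp.sin(u), 0],
                      [0, sp.sin(u),  sp.cos(u), 0],
                      [0, 0, 0, 1]])

def G(a, d, alpha):
    """Link transformation, Eq. (11): offset a along x, offset d along z, twist alpha."""
    return sp.Matrix([[1, 0, 0, 0],
                      [a, 1, 0, 0],
                      [0, 0, sp.cos(alpha), -sp.sin(alpha)],
                      [d, 0, sp.sin(alpha),  sp.cos(alpha)]])

u1, u2, u3, u4, u5, a1, a3 = sp.symbols('u1 u2 u3 u4 u5 a1 a3')
# DH parameters of the RUU limb as listed above (alpha2 = pi/2, alpha4 = -pi/2, the rest zero)
T = (M(u1)*G(a1, 0, 0) * M(u2)*G(0, 0, sp.pi/2) * M(u3)*G(a3, 0, 0)
     * M(u4)*G(0, 0, -sp.pi/2) * M(u5))
print(T.subs({u1: 0, u2: 0, u3: 0, u4: 0, u5: 0, a1: 3, a3: 5}))   # pose for a sample configuration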
Computing the Study-Parameters based on the transformation matrix T yields the parametric representation of the limb [START_REF] Husty | Algebraic methods in mechanism analysis and synthesis[END_REF]. Applying LIA yields the following quadratic canonical constraint equations S , f 1 and f 2 :
J = ⟨ f 1 , f 2 , S ⟩ ⊆ k[x 0 , x 1 , x 2 , x 3 , y 0 , y 1 , y 2 , y 3 ],   (12)
where
f 1 := [(x 0 x 1 - x 2 x 3 )(v 1 ² - 1) - (2x 0 x 2 + 2x 1 x 3 ) v 1 ] a 1 + 2(v 1 ² + 1)(x 0 y 0 + x 3 y 3 ) = 0

f 2 := -(x 0 ² + x 1 ² + x 2 ² + x 3 ²)(v 1 ² + 1) a 1 ² + (4(y 1 x 0 - y 0 x 1 + y 3 x 2 - y 2 x 3 ) v 1 ² + 8(-x 0 y 2 + x 1 y 3 + x 2 y 0 - x 3 y 1 ) v 1 + 4(y 2 x 3 - y 3 x 2 - y 1 x 0 + y 0 x 1 )) a 1 + (x 0 ² + x 1 ² + x 2 ² + x 3 ²) a 3 ² - 4(y 0 ² + y 1 ² + y 2 ² + y 3 ²)(v 1 ² + 1) = 0   (13)
General Constraints To obtain the constraint equations of the whole mechanism from the canonical constraint equations, coordinate transformations are applied in the base and moving platform. To facilitate the comparison of the constraint equations derived by two different approaches, the coordinate transformations should be consistent with the global frames F O and F P as shown in Fig. 1a. The necessary transformations can be done directly in the image space P 7 [START_REF] Pfurner | Analysis of spatial serial manipulators using kinematic mapping[END_REF] by the mapping
(x 0 , x 1 , x 2 , x 3 , y 0 , y 1 , y 2 , y 3 ) →
( 2(v 0 ² + 1) x 0 ,
  -2 v 0 ² x 1 + 4 v 0 x 2 + 2 x 1 ,
  2(v 0 ² + 1) x 3 ,
  2 v 0 ² x 2 + 4 v 0 x 1 - 2 x 2 ,
  ((r 0 - r 1 ) x 1 + 2 y 0 ) v 0 ² - 2 x 2 (r 0 - r 1 ) v 0 + (-r 0 + r 1 ) x 1 + 2 y 0 ,
  ((r 0 - r 1 ) x 0 - 2 y 1 ) v 0 ² + 4 v 0 y 2 + (r 0 - r 1 ) x 0 + 2 y 1 ,
  ((-r 0 - r 1 ) x 2 + 2 y 3 ) v 0 ² - 2 (r 0 + r 1 ) x 1 v 0 + (r 0 + r 1 ) x 2 + 2 y 3 ,
  ((r 0 + r 1 ) x 3 + 2 y 2 ) v 0 ² + 4 v 0 y 1 + (r 0 + r 1 ) x 3 - 2 y 2 ),   (14)
where
v 0 = tan(γ i ), i = 1, 2, 3, with γ 1 = 0, γ 2 = 2π/3 and γ 3 = 4π/3. The general constraint equations are obtained by transforming the f i of Eq. (12) with Eq. (14). The transformed equations are denoted f i1 = f i2 = 0, i = 1, 2, 3, and determine, together with S = 0 and N = 0, the ideal J :
J = ⟨ f 11 , f 12 , f 21 , f 22 , f 31 , f 32 , S , N ⟩ ⊆ k[x 0 , x 1 , x 2 , x 3 , y 0 , y 1 , y 2 , y 3 ]
(15)
Ideal Comparison
A careful observation of the ideals spanned by the canonical constraint polynomials of both approaches reveals that g 4 = f 2 and g 1 = f 1 (x 0 ² + x 1 ² + x 2 ² + x 3 ²) - 2(x 0 ² + x 2 ²)(v 1 ² + 1) S . Since x 0 ² + x 1 ² + x 2 ² + x 3 ² cannot be null, these ideals are the same. Thus, it follows that the ideals I and J spanned by the constraint equations of the whole manipulator are also contained in each other: I ⊆ J ⊆ I . Since I and J determine the same ideal, the varieties of the constraint polynomials must be the same [START_REF] Cox | Ideals, Varieties, and Algorithms: An Introduction to Computational Algebraic Geometry and Commutative Algebra[END_REF]. Therefore, the set of constraint equations derived in Section 3.2 is used for further computations, as it contains only quadratic equations.
Direct Kinematics: Numerical Examples
Because of the complexity of the manipulator, it is not possible to compute the direct kinematics without using some numerical values. In the following subsections, arbitrary values are assigned to the design parameters of the manipulator: a 1 = 3, a 3 = 5, r 0 = 11, r 1 = 7.
Identical Actuated Joints Assuming for simplicity that the actuated joint angles are equal, θ i1 = π/2, i = 1, 2, 3, the system of constraint equations in Eq. (15) yields the following real solutions; the corresponding manipulator poses are shown in Fig. 2.
(a) x 0 = √23023/154, y 3 = -(3/2) x 0 , x 3 = -3√77/154, y 0 = (3/2) x 3 , x 1 = x 2 = y 1 = y 2 = 0 ,
(b) x 0 = √23023/154, y 3 = -(3/2) x 0 , x 3 = 3√77/154, y 0 = (3/2) x 3 , x 1 = x 2 = y 1 = y 2 = 0 ,
Fig. 2: A numerical example: solutions to direct kinematics corresponding to (16)
(c) {x 0 = 1, x 1 = x 2 = x 3 = y 0 = y 1 = y 2 = y 3 = 0} , (d) {x 0 = 1, x 1 = x 2 = x 3 = y 0 = y 1 = y 2 = 0, y 3 = -3} . (16)
Different Actuated Joints Substituting distinct arbitrary inputs, setting x 0 = 1 and computing a Groebner basis of the resulting polynomials with pure lexicographic ordering yields a univariate polynomial
x 3 • P(x 3 ) = 0, where degree(P(x 3 )) = 80.
Translational Operation Mode The univariate polynomial of the previous section shows that this manipulator exhibits two operation modes. The one corresponding to x 3 = 0 yields pure translational motions of the moving platform with the identity as the orientation, similar to the motion of the famous Delta robot [START_REF] Clavel | Delta, a fast robot with parallel geometry[END_REF]. From S , it also follows that y 0 = 0. The set of original constraint equations reduces to
[ 3y 3 -y 1 2 -y 2 2 -y 3 2 -4y 1 t 1 2 -6 (y 1 + 2)t 1 -y 1 2 -y 2 2 -y 3 2 -4y 1 -3y 3 , -2t 2 2 + 3t 2 + 2 y 2 √ 3 + -y 1 2 -y 2 2 -y 3 2 + 2y 1 + 3y 3 t 2 2 + (3y 1 -12)t 2 -y 1 2 -y 2 2 -y 3 2 + 2y 1 -3y 3 , 2t 3 2 + 3t 3 + 2 y 2 √ 3 + -y 1 2 -
This system of equations yields a quadratic univariate in one of the y i variables, which gives a parametrization of the motion as a function of the input variables v i1 = tan(θ i1 /2), i = 1, 2, 3.
Conclusion
In this paper, the constraint equations of a 3-RUU PM were derived by two different approaches: a geometrical approach, where all possible constraints were listed based on the geometry of the manipulator, and the LIA, which yields the constraints by specifying the parametric equations and the desired degree. Both approaches have benefits and drawbacks: it is possible to miss a constraint by merely observing the manipulator geometry, while it is hard to interpret the physical meaning of the equations derived through the LIA. However, it turns out that the ideals spanned by the constraint polynomials from both approaches are the same. As a result, the simplest set of equations was chosen for further analysis. Due to the complexity of the mechanism, a primary decomposition of these ideals is not possible and therefore a final answer on the possible operation modes cannot be given. However, the factorization of the final univariate polynomial of the direct kinematics algorithm gives strong evidence that this manipulator has a translational and a general three-DoF operation mode.
A left superscript k denotes a vector expressed in coordinate frame F k , k ∈ {0, 1}
Acknowledgements
This work was supported by the Austrian Science Fund (FWF I 1750-N26) and the French National Research Agency (ANR Kapamat #ANR-14-CE34-0008-01). |
01757535 | en | [
"spi.meca.geme",
"spi.auto"
] | 2024/03/05 22:32:10 | 2018 | https://hal.science/hal-01757535/file/ROMANSY_2018_Baklouti_Caro_Courteille.pdf | Sana Baklouti
email: sana.baklouti@insa-rennes.fr
Stéphane Caro
email: stephane.caro@ls2n.fr
Eric Courteille
email: eric.courteille@insa-rennes.fr
Elasto-Dynamic Model-Based Control of Non-Redundant Cable-Driven Parallel Robots
This paper deals with a model-based feed-forward torque control strategy of non-redundant cable-driven parallel robots (CDPRs). The proposed feed-forward controller is derived from an inverse elastodynamic model of the CDPR to compensate for the dynamic and oscillatory effects due to cable elasticity. A PID feedback controller ensures stability and disturbance rejection. Simulations confirm that tracking errors can be reduced by the proposed strategy compared to conventional rigid body model-based control.
Introduction
Cable-driven parallel robots (CDPRs) contain a set of flexible cables that connect a fixed frame to an end-effector (EE), with a coiling mechanism for each cable. They have been used in many applications such as pick-and-place [START_REF] Dallej | Towards vision-based control of cable-driven parallel robots[END_REF], rehabilitation [START_REF] Hernandez | Design Optimization of a Cable-Driven Parallel Robot in Upper Arm Training-Rehabilitation Processes[END_REF], and painting and sandblasting of large structures [START_REF] Gagliardini | Discrete reconfiguration planning for cable-driven parallel robots[END_REF]. Thanks to their low inertia, CDPRs can reach high velocities and accelerations in large workspaces [START_REF] Lamaury | Dualspace adaptive control of redundantly actuated cable-driven parallel robots[END_REF]. Several controllers have been proposed in the literature to improve CDPR accuracy locally or along trajectories [START_REF] Jamshidifar | Adaptive Vibration Control of a Flexible Cable Driven Parallel Robot[END_REF], [START_REF] Zi | Dynamic modeling and active control of a cable-suspended parallel robot[END_REF], [START_REF] Pott | IPAnema: a family of cable-driven parallel robots for industrial applications[END_REF]. In [START_REF] Cuevas | Assumed-Mode-Based Dynamic Model for Cable Robots with Non-straight Cables[END_REF], the control of a CDPR in the operational space is presented, where the CDPR model is derived using Lagrange equations of motion for constrained systems, while considering non-elastic but sagging cables through the Assumed Mode Method. In [START_REF] Merlet | Simulation of Discrete-Time Controlled Cable-Driven Parallel Robots on a Trajectory[END_REF], a discrete-time control strategy is proposed to estimate the position accuracy of the EE by taking into account the actuator model and the kinematic and static behavior of the CDPR. Several papers deal with the problem of controlling CDPRs while considering cable elongations and their effect on the dynamic behavior. A robust H ∞ control scheme for CDPRs is described in [START_REF] Laroche | A Preliminary Study for H∞ Control of Parallel Cable-Driven Manipulators[END_REF], which takes the cable elongations into account in the dynamic model of the EE as well as the cable tension limits. A control strategy is proposed for CDPRs with elastic cables in [START_REF] Khosravi | Dynamic modeling and control of parallel robots with elastic cables: singular perturbation approach[END_REF], [START_REF] Khosravi | Stability analysis and robust PID control of cable driven robots considering elasticity in cables[END_REF], [START_REF] Khosravi | Dynamic analysis and control of cable driven robots with elastic cables[END_REF]. It consists in adding an elongation compensation term to the control law of a CDPR with rigid cables, using singular perturbation theory. It requires the measurement of the cable lengths and real-time knowledge of the EE pose through exteroceptive measurements.
Feed-forward model-based controllers improve accuracy by using a CDPR reference model. The latter predicts the mechanical behavior of the robot and then generates an adequate reference signal to be followed by the CDPR. This type of control compensates for undesirable effects without exteroceptive measurements. A model-based control scheme for a CDPR used as a high rack storage and retrieval machine is presented in [START_REF] Bruckmann | Design and realization of a high rack and retrieval machine based on wire robot technology[END_REF]. This research work takes into account the mechanical properties of the cables, namely their elasticity. This strategy, integrating the mechanical behavior of the cables in the reference signal, enhances the CDPR performance. However, it only compensates for the EE positioning errors with respect to the rigid-body behavior: the mechanical response of the robot is predicted while the mechanical behavior of the cables is assumed not to be influenced by their interaction with the whole system, namely, only the cable elongation is estimated. As a consequence, the main contribution of this paper deals with the coupling of a model-based feed-forward torque control scheme for CDPRs with a PID feedback controller. The feed-forward controller is based on the elasto-dynamic model of the CDPR to predict the full dynamic and oscillatory behavior of the CDPR and to generate the adequate reference signal for the control loop.
This paper is organized as follows: Section 2 presents the feed-forward modelbased control strategy proposed in this paper in addition to the existing rigid and elasto-static models. In Section 3, the proposed control strategy is implemented for a CDPR with three cables, a point-mass EE and three Degree-Of-Freedom (DOF) translational motions. Simulation results are presented to confirm the improvement of trajectory accuracy when using the proposed control strategy compared to conventional approaches. Conclusions and future work are drawn in Section 4.
Feed-forward model-based controller
The control inputs are mostly obtained by combining feed-forward inputs, calculated from a reference trajectory using a reference model of the CDPR, with a feedback control law, as in [START_REF] Lamaury | Control of a large redundantly actuated cable-suspended parallel robot[END_REF], [START_REF] Bayani | On the control of planar cable-driven parallel robot via classic controllers and tuning with intelligent algorithms[END_REF]. The control scheme used here, shown in Fig. 1, is composed of a feed-forward block in which the inverse kinematic model is determined based on a CDPR reference model (red block in Fig. 1). The latter is a predictive model of the dynamic behavior of the mechanism. Its input is the motor torque vector ζ rg ∈ R n , and its output is the reference winch rotation angle vector q ref ∈ R n , n being the number of actuators.
The relationship between q ref and the cable length vector l ref ∈ R n is expressed as:
l ref -l 0 = R (q 0 -q ref ) , (1)
where R ∈ R n×n is a diagonal matrix that is a function of the gear-head ratios and winch radii. l 0 ∈ R n is the cable length vector at the static equilibrium and q 0 ∈ R n is an offset vector corresponding to the cable lengths when q ref = 0. This offset is compensated for at rest. The unwound length of the ith cable is calculated using the CDPR inverse geometric model.
Fig. 1: Feed-forward model-based PID control

ζ rg is calculated using the following dynamic model of the CDPR, which depends on the desired EE pose x rg and acceleration ẍrg :
ζ rg = R τ rg , τ rg = W -1 rg ( w g + w e -M ẍrg ) , (2)
where τ rg ∈ R n is a set of positive tensions. W rg ∈ R m×n is the CDPR wrench matrix, m being the number of DOF of its moving-platform. It is a function of x rg and maps the EE velocities to the cable velocity vector. M ∈ R m×m is the EE mass matrix, w g ∈ R m is the wrench vector due to gravity acceleration and w e ∈ R m denotes the external wrench.
In each drive, a feedback PID controller sets the vector of corrected motor torques ζ corr ∈ R n . The latter is added to ζ rg to get the motor torque ζ m ∈ R n , which results in the measured winch angular displacements q m ∈ R n . The PID feedback control law is expressed as follows:
ζ m = ζ rg + K p e q + K d ėq + K i ∫_{t i}^{t i + t} e q (t) dt,   (3)
where K p ∈ R n×n is the proportional gain matrix, K d ∈ R n×n is the derivative gain matrix and K i ∈ R n×n is the integral gain matrix. e q = q ref - q m is the error to be minimized, leading to the correction torque vector:
ζ corr = K p e q + K d ėq + K i ∫_{t i}^{t i + t} e q (t) dt.   (4)
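The control law of Eqs. (2)-(4) can be summarized by the following minimal Python sketch; the discrete-time PID form, the gain values and the dimensions are placeholders and do not reproduce the actual implementation.

import numpy as np

def feedforward_torque(W_rg, M, x_ddot_rg, w_g, w_e, R):
    """Eq. (2): cable tensions from the rigid-body inverse dynamics, then motor torques."""
    tau_rg = np.linalg.solve(W_rg, w_g + w_e - M @ x_ddot_rg)   # non-redundant case: W_rg square
    return R @ tau_rg, tau_rg

class PID:
    """Discrete version of the feedback law of Eqs. (3)-(4), applied per drive."""
    def __init__(self, Kp, Ki, Kd, dt):
        self.Kp, self.Ki, self.Kd, self.dt = Kp, Ki, Kd, dt
        self.integral, self.prev = 0.0, None
    def correction(self, e_q):
        self.integral += e_q * self.dt
        deriv = 0.0 if self.prev is None else (e_q - self.prev) / self.dt
        self.prev = e_q
        return self.Kp * e_q + self.Ki * self.integral + self.Kd * deriv

# motor torque sent to the drives, per Eq. (3): zeta_m = zeta_rg + zeta_corr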
It is noteworthy that ζ corr depends on the CDPR reference model used to calculate the vector q ref . To the best of our knowledge, two CDPR models have been used in the literature for the feed-forward model-based control of CDPRs with non-sagging cables: (i) the rigid model and (ii) the elasto-static model. As a consequence, one contribution of this paper deals with the determination of the elasto-dynamic model of CDPRs to be used for feed-forward control.
Rigid model
The CDPR rigid model considers the cables as rigid links. It is assumed that, while applying the motor torque ζ rg , the cable tension vector is equal to τ rg ; the reference signal q ref anticipates neither the cable elongations nor the oscillatory motions of the EE. The PID feedback controller uses the motor encoder response q m , which is related to the rolled or unrolled cable length l rg corresponding to the winch angular displacement q rg . It should be noted that the cable elongations and EE oscillatory motions are not detected here and, as a consequence, cannot be rejected.
Elasto-static model
The CDPR elasto-static model integrates a feed-forward cable elongation compensation [START_REF] Bruckmann | Design and realization of a high rack and retrieval machine based on wire robot technology[END_REF]. It amounts to solving a static equilibrium at each EE pose while considering cable elasticity. Assuming that each cable is isolated, the cable elongation vector δl es is calculated knowing the cable tension vector τ rg . The elasto-static cable tension vector τ es is equal to τ rg . The relationship between δl i es and τ i es for the ith cable takes the following form with a linear elastic cable model:
τ i es = τ i rg = ES δl i es / (δl i es + l i rg ),   (5)
where E is the cable modulus of elasticity and S is its cross-section area.
When q rg is used as a reference signal in the feedback control scheme, the EE displacement δx es is obtained from cable elongation vector δl es . To compensate for the cable elongation effects, δl es is converted into δq es , which corrects the angular position q rg . Thus, the elasto-static reference angular displacement q es ref becomes:
q es ref = q rg -δq es . (6)
As the CDPR cable tensions are always positive, δl es > 0, which corresponds to δq es < 0. The reference signal q es ref corresponds to a fictitious position of the EE used for cable elongation compensation: under the effect of the cable elongations, this corrected reference leads the EE to reach the desired pose. Although the elasto-static reference model takes the cable elongations into account, the EE pose errors due to the dynamic and elasto-dynamic behavior of the mechanism are not compensated for.
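A minimal Python sketch of the elasto-static correction of Eqs. (5)-(6) is given below; it assumes the diagonal winch matrix of Eq. (1) and linear elastic cables, and the numerical values in the usage example are illustrative only.

import numpy as np

def elastostatic_reference(q_rg, tau_rg, l_rg, E, S, R_winch):
    """Solve Eq. (5) for the elongation, then apply the correction of Eq. (6)."""
    delta_l = tau_rg * l_rg / (E * S - tau_rg)     # from tau = E*S*dl/(dl + l_rg)
    delta_q = np.linalg.solve(R_winch, -delta_l)   # Eq. (1): dl = -R dq, hence dq < 0 for dl > 0
    return q_rg - delta_q

# illustrative numbers: 1 mm cable, E = 70 GPa, 5 m of cable under 100 N
E, S = 70e9, np.pi * (0.5e-3)**2
print(elastostatic_reference(np.zeros(3), np.full(3, 100.0), np.full(3, 5.0), E, S, 0.05*np.eye(3)))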
Elasto-dynamic model
The CDPR elasto-dynamic model takes into account the oscillatory and dynamic behavior of the EE due to cable elongations. Here, the cables are no longer isolated and are affected by the EE dynamic behavior. Cable elongations make the EE deviate from its desired pose x rg . The real EE pose is expressed as x ed = x rg + δx ed . The EE displacement leads to variations in both the cable lengths and the cable tensions. Indeed, the ith cable tension τ i ed obtained from the elasto-dynamic model differs from τ i rg :
τ i ed = τ i rg + δτ i ed = ES δl i ed / (δl i ed + l i rg ),   (7)
where δl i ed is the ith cable elongation assessed by considering cable elasticity and oscillations. The CDPR elasto-dynamic model takes the form:
M ẍed = W ed τ ed + w g + w e , (8)
where W ed is the CDPR wrench matrix expressed at the EE pose x ed . Once x ed and the cable tension vector τ ed are calculated, the cable elongation vector δl ed can be determined. The latter is converted into δq ed , which corrects the angular position vector q rg . The reference angular displacement q ed ref becomes:
q ed ref = q rg -δq ed . (9)
The proposed control strategy based on the elasto-dynamic model leads to a feed-forward controller that compensates for the EE oscillatory motions, in addition to the conventional rigid-body feedback based on the motor encoder measurements. It should be noted that this feed-forward controller does not disrupt the stability of the rigid-body feedback.
3 Control of a spatial CDPR with a point-mass end-effector A spatial CDPR with three cables and three translational-DOF is considered in this section. This CDPR is composed of a point-mass EE, which is connected to three massless and linear cables. A configuration of the CREATOR prototype (Fig. (2a)), being developed at LS2N, is chosen such that the cables are tensed along a prescribed trajectory. The Cartesian coordinate vectors of the cable
exit points B i are: b 1 = [2.0, -2.0, 3.5] T m, b 2 = [-2.0, -2.0, 3.5] T m, b 3 = [0.0, 2.0, 3.5] T m.
The EE mass is equal to 20 kg. The cable diameter is equal to 1 mm and the cable modulus of elasticity is equal to 70 GPa.
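To illustrate how the elasto-dynamic behavior of Eqs. (7)-(8) can be simulated for this point-mass case, a simplified Python sketch is given below. The cables are modeled as massless unilateral linear springs attached to the exit points listed above; the unstretched cable lengths are made-up inputs and the integration settings are illustrative, so this sketch is not the reference model used in the simulations reported below.

import numpy as np
from scipy.integrate import solve_ivp

B = np.array([[2.0, -2.0, 3.5], [-2.0, -2.0, 3.5], [0.0, 2.0, 3.5]])   # exit points (m)
m, g = 20.0, 9.81
E, S = 70e9, np.pi * (0.5e-3)**2

def ee_dynamics(t, state, l0):
    """Point-mass elasto-dynamic model in the spirit of Eq. (8), with unilateral springs."""
    x, v = state[:3], state[3:]
    f = np.array([0.0, 0.0, -m * g])
    for b, l_un in zip(B, l0):
        vec = b - x
        L = np.linalg.norm(vec)
        tau = E * S * max(L - l_un, 0.0) / L      # tension per Eq. (7); slack cable carries no load
        f += tau * vec / L
    return np.hstack((v, f / m))

x0 = np.array([0.5, -1.0, 0.25])
l0 = np.linalg.norm(B - x0, axis=1) * 0.9995      # made-up unstretched lengths (slight pre-tension)
sol = solve_ivp(ee_dynamics, (0.0, 2.0), np.hstack((x0, np.zeros(3))), args=(l0,), max_step=1e-3)
print(sol.y[:3, -1])                              # EE position after 2 s of free oscillation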
Trajectory generation
A circular helical trajectory, shown in Fig. (2b), is used from static equilibrium to steady state to evaluate the efficiency of the feed-forward model-based controller while considering three CDPR reference models. The EE moves from point P 1 of Cartesian coordinate vector p 1 = [0.5, -1.0, 0.25] T m to point P 2 of Cartesian coordinate vector p 2 = [0.5, -1.0, 1.5] T m along a circular helix. The latter is defined by the following equations: The coefficients of the five-order polynomial t α are chosen in such a way that the EE Cartesian velocities and accelerations are null at the beginning and the end of the trajectory. R is the helix radius, p t is helix pitch. β 0 , β 1 and β 2 are constants. Here,
x(t) = R cos(t α ) + β 0 , y(t) = R sin(t α ) + β 1 , z(t) = p t t α + β 2 , (10)
a 5 = 24π, a 4 = -60π, a 3 = 40π, a 2 = a 1 = a 0 = 0, p t = 0.1 m, β 0 = 0.5 m, β 1 = -1.0 m, β 2 = 0.
25 m, R = 0.5 m and t sim = 15 s. The velocity maximum value is 0.8 m/s. The acceleration maximum value is 1.2 m/s 2 .
Controller tuning
The PID feedback controller is tuned using the Matlab3 PID tuning tool. This latter aims at finding the values of proportional, integral, and derivative gains of a PID controller in order to minimize error e q and to reduce the EE oscillations.
In the PID tuner work-flow, a plant model is defined from the simulation data, where the input is e q and the output is ζ corr . The gains obtained for the three control schemes are the following:
• Rigid model based: K p =3, K i =1.5 and K d =1.5
• Elasto-static model based: K p =0.53, K i =0.2 and K d =0.18
• Elasto-dynamic model based: K p =0.33, K i =0.16 and K d =0. [START_REF] Lamaury | Control of a large redundantly actuated cable-suspended parallel robot[END_REF] It is noteworthy that the gains decrease from the rigid model to the elasto-dynmic reference model.
End-effector position errors
The EE position error is defined as the difference between its desired position x rg and its real one. The latter should normally be determined experimentally. As experiments have not been carried out yet, a good CDPR predictive model should be used to estimate the real EE pose. The CDPR elasto-dynamic model is the closest to the real CDPR with non-sagging cables, so it is used to predict the real behavior of the CDPR. The input of this model is ζ m , which leads to x m ed . The position error is defined as δp = x rg - x m ed . To analyze the relevance of the proposed control strategy, the three control schemes under study were simulated in Matlab-Simulink. Figure 3a shows the norm of the EE position error δp when the proposed feed-forward control law is applied while using successively the three CDPR models to generate the reference signal.
Figure 3b illustrates the EE position error along the z-axis, δz, which is the main one as the CDPR under study is assembled in a suspended configuration. The red (green, blue, resp.) curve depicts the EE position error when the elasto-dynamic (elasto-static, rigid, resp.) model is used as a reference. The root-mean-square (RMS) of δp is equal to 8.27 mm when the reference signal is generated
Conclusions and future work
This paper proposed a model-based feed-forward control strategy for non-redundant CDPRs. An elasto-dynamic model of the CDPR was proposed to anticipate the full dynamic behavior of the mechanism. Accordingly, a contribution of this paper is a good simulation model of the CDPR, including the vibratory effects, the cable elongations and their interaction with the whole system, used as a reference control model. The comparison between the position errors obtained when using the proposed elasto-dynamic model or the classical rigid and elasto-static ones as control references shows meaningful differences. These differences reveal that the proposed control strategy guarantees better trajectory tracking when the proposed elasto-dynamic model is adopted to generate the reference control signal for a non-redundant CDPR. Experimental validations will be carried out later on. Future work will also deal with the elasto-dynamic model-based control of redundantly actuated CDPRs.
Fig. 2: (a) CREATOR prototype CAD diagram (b) End-effector desired path
Fig. 3: (a) Position error norm (b) Position error along the z-axis of the end-effector
Fig. 4: Histogram of the RMS of δp and δz
with t α = a 5 (t/t sim )⁵ + a 4 (t/t sim )⁴ + a 3 (t/t sim )³ + a 2 (t/t sim )² + a 1 (t/t sim ) + a 0 .
www.mathworks.com/help/slcontrol/ug/introduction-to-automatic-pid-tuning.html
01757541 | en | [
"spi.meca.geme",
"spi.auto"
] | 2024/03/05 22:32:10 | 2018 | https://hal.science/hal-01757541/file/Romansy2018_Wu_Caro.pdf | Guanglei Wu
Stéphane Caro
email: stephane.caro@ls2n.fr
Torsional Stability of a U-joint based Parallel Wrist Mechanism Featuring Infinite Torsion
Keywords: dynamic stability, parallel wrist mechanism, monodromy matrix, Floquet theory, torsional vibrations
In this paper, the dynamic stability problem of a parallel wrist mechanism is studied by means of the monodromy matrix method. This manipulator adopts a universal joint as the ball-and-socket mechanism to support the mobile platform and to transmit the motion/torque between the input shaft and the end-effector. The linearized equations of motion of the mechanical system are established to analyze its stability according to the Floquet theory. The unstable regions are presented graphically in various parametric charts.
Introduction
Parallel wrist mechanisms are intended for camera-orienting devices [START_REF] Gosselin | The Agile Eye: a high-performance three-degree-of-freedom camera-orienting device[END_REF], minimally invasive surgical robots [START_REF] Li | Design of spherical parallel mechanisms for application to laparoscopic surgery[END_REF] and robotic joints [START_REF] Asada | Kinematic and static characterization of wrist joints and their optimal design[END_REF], thanks to their large orientation workspace and high payload capacity. Another potential application is to function as a tool head for complex surface machining [START_REF] Wu | Design and transmission analysis of an asymmetrical spherical parallel manipulator[END_REF], where an unlimited torsional motion is desired to drive the cutting tool in common material processing operations such as milling or drilling. For this purpose, the wrist mechanism [START_REF] Wu | Design and transmission analysis of an asymmetrical spherical parallel manipulator[END_REF] shown in Fig. 1 was proposed, with a number of advantages over its symmetrical counterparts, such as enhanced positioning accuracy [START_REF] Wu | Mobile platform center shift in spherical parallel manipulators with flexible limbs[END_REF], infinite rotation [START_REF] Asada | Kinematic and static characterization of wrist joints and their optimal design[END_REF], structural compactness and low dynamic inertia [START_REF] Wu | Dynamic modeling and design optimization of a 3-DOF spherical parallel manipulator[END_REF]. The design of the manipulator is simplified by using a universal (U) joint supported by an input shaft to generate infinite input/output rotational motion. On the other hand, the U joint suffers from one major problem: it transforms a constant input speed into a periodically fluctuating one, which may induce vibrations and wear. This paper investigates the dynamic stability problem, focusing on torsional stability.
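The speed fluctuation mentioned above follows from the classical single-Cardan-joint kinematic relation; the following small Python sketch (illustrative, with β denoting the joint bend angle) quantifies it over one input revolution.

import numpy as np

def ujoint_speed_ratio(theta_in, beta):
    """Classical Hooke's/Cardan joint relation: omega_out/omega_in for bend angle beta."""
    return np.cos(beta) / (1.0 - np.sin(beta)**2 * np.sin(theta_in)**2)

theta = np.linspace(0.0, 2*np.pi, 721)
ratio = ujoint_speed_ratio(theta, np.deg2rad(20.0))
print(ratio.min(), ratio.max())   # fluctuation bounds over one input revolution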
To the best of the authors' knowledge, Porter [START_REF] Porter | A theoretical analysis of the torsional oscillation of a system incorporating a hooke's joint[END_REF] was the first to investigate this problem, where a single-degree-of-freedom linearized model was built to plot the stability chart by using the Floquet theory [START_REF] Floquet | Sur les équations différentielles linéaires à coefficients périodiques[END_REF]. Later, similar modeling approaches were adopted to derive the nonlinear equations for the stability analysis of U joint [START_REF] Porter | Non-linear torsional oscillation of a system incorporating a hooke's joint[END_REF][START_REF] Éidinov | Torsional vibrations of a system with hooke's joint[END_REF][START_REF] Asokanthan | Torsional instabilities in a system incorporating a hooke's joint[END_REF][START_REF] Chang | Torsional instabilities and non-linear oscillation of a system incorporating a hooke's joint[END_REF][START_REF] Bulut | Dynamic stability of a shaft system connected through a hooke's joint[END_REF]. Moreover, multi-shaft system consisting of multiple shafts interconnected via Hooke's joints can also be handled using the previous various approaches [START_REF] Zeman | Dynamik der drehsysteme mit kardagelenken[END_REF][START_REF] Kotera | Instability of torsional vibrations of a system with a cardan joint[END_REF]. Besides, lateral and coupled stability problem of the universal joint were studied [START_REF] Ota | Lateral vibrations of a rotating shaft driven by a universal joint: 1st report, generation of even multiple vibrations by secondary moment[END_REF][START_REF] Saigo | Self-excited vibration caused by internal friction in universal joints and its stabilizing method[END_REF][START_REF] Desmidt | Coupled torsion-lateral stability of a shaft-disk system driven through a universal joint[END_REF], too. According to the literature, the previous studies focus on single or multiple Ujoint mechanisms. On the other hand, a U joint working as a transmitting mechanism in a parallel mechanism has not received the attention, which will be the subject in this work. From the reported works, common approaches to analyze the stability problem of the linear/nonlinear dynamic model of the system include Floquet theory, Krylov-Bogoliubov method, Poincaré-Lyapunov method, etc.. As the relationship between the input and output shaft rotating speeds of the U joint is periodic, the Floquet theory will be an effective approach to analyze the stability problem, which will be adopted in this work.
This paper investigates the dynamic stability analysis problem of the wrist mechanism by means of a monodromy matrix method. To this end, a linear model consisting of input and output shafts interconnected via a Hooke's joint is considered, and the linearized equations of motion of the system are obtained. A numerical study is carried out to assess the system stability and the effects of the parameters. Unstable regions are identified from various parametric charts.
Wrist Mechanism Architecture
Figure 1 depicts the wrist mechanism, an asymmetrical spherical parallel manipulator (SPM). The mobile platform is composed of an outer ring and an inner ring connected to each other by a revolute joint, the revolute joint being realized with a revolving bearing. The orientation of the outer ring is controlled by two limbs in parallel, and it is constrained by a fully passive leg that is offset from the center of the mobile platform to eliminate the rotational motion around the vertical axis. Through a universal joint, the decoupled rotation of the inner ring is generated by the center shaft, which also supports the mobile platform to improve the positioning accuracy.
The architecture of the wrist mechanism is displayed in Fig. 2. Splitting off the outer ring, the two parallel limbs and the passive one, the remaining parts of the manipulator are equivalent to a U-joint mechanism. The center shaft is treated as the driving shaft and the inner ring is treated as a driven disk. The bend angle, i.e., the misalignment angle, is denoted by β, and the input/output angles are denoted γ_1 and γ_2, respectively.
Equation of Motion of Torsional Vibrations
The equations of motion for the U-joint mechanism shown in Fig. 2 are deduced via a synthetic approach [START_REF] Bulut | Dynamic stability of a shaft system connected through a hooke's joint[END_REF]. Accordingly, the driving shaft and the driven disk are considered as two separate parts, as displayed in Fig. 3, where the cross piece connecting the input/output elements is considered massless.
The equation of motion of torsional vibrations of the driving part can be written as
J_I \ddot{γ}_1 = -c_1 \dot{γ}_1 - k_1 γ_1 + M_I   (1)
where γ_1 is the rotational coordinate of J_I, and M_I is the reaction torque of the input part of the Hooke's joint. Moreover, k_1 and c_1 denote the torsional stiffness and viscous damping of the driving shaft, respectively. On the other hand, the driven part is under the effect of the reaction torque M_O, for which the dynamic equation is written as

c_2 \dot{γ}_2 + k_2 γ_2 = M_O = -J_O (\ddot{γ}_2 + \dot{ω}_o)   (2)

where γ_2 is the rotational coordinate of J_O, and k_2, c_2 stand for the torsional stiffness and viscous damping of the driven shaft. Moreover, the relationship between the input torque and the output torque of the Hooke's joint can be written as

M_O = M_I / η(t),   η(t) = cos β / (1 - sin²β sin²(Ω_0 t + γ_1))   (3)
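Since η and its derivative enter all of the coefficient matrices derived below, it is convenient to have them as numerical helpers. The following sketch (Python/NumPy; the function names are ours, chosen for illustration only) evaluates the transmission ratio of Eq. (3) and its derivative with respect to the input angle:

```python
import numpy as np

def eta(tau, beta):
    """Hooke's-joint transmission ratio: eta = cos(beta) / (1 - sin(beta)^2 * sin(tau)^2)."""
    return np.cos(beta) / (1.0 - np.sin(beta) ** 2 * np.sin(tau) ** 2)

def eta_prime(tau, beta):
    """Derivative of eta with respect to tau, obtained by differentiating the expression above."""
    s2 = np.sin(beta) ** 2
    denom = 1.0 - s2 * np.sin(tau) ** 2
    return np.cos(beta) * 2.0 * s2 * np.sin(tau) * np.cos(tau) / denom ** 2
```

Both functions are π-periodic in τ, which is what makes the Floquet machinery used below applicable.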
where Ω 0 denotes the constant velocity of the driving shaft, henceforth, the following equations of motion are derived
J_I \ddot{γ}_1 + c_1 \dot{γ}_1 + k_1 γ_1 - η(t) c_2 \dot{γ}_2 - η(t) k_2 γ_2 = 0   (4)
J_O (\ddot{γ}_2 + \dot{ω}_o) + c_2 \dot{γ}_2 + k_2 γ_2 = 0   (5)
with
\dot{ω}_o = η(t) \ddot{γ}_1 + \dot{η}(t) (Ω_0 + \dot{γ}_1)   (6)
Let τ be equal to Ω 0 t + γ 1 . Some dimensionless parameters are defined as follows:
Ω = Ω_0 / \sqrt{k_1/J_I},  ζ = c_1 / (2\sqrt{k_1 J_I}),  µ = c_2/c_1,  ν = J_O/J_I,  χ = k_2/k_1 = 1/η(τ)²   (7)
Equations (4) and (5) can be linearized and cast into a matrix form by discarding all the nonlinear terms, namely,
[γ_1''; γ_2''] + [2ζ/Ω, -(2µζ/Ω) η(τ); 2η'(τ) - (2ζ/Ω) η(τ), (2µζ/Ω)(1/ν + η²(τ))] [γ_1'; γ_2'] + [1/Ω², -(χ/Ω²) η(τ); η'(τ) - (1/Ω²) η(τ), (χ/Ω²)(1/ν + η²(τ))] [γ_1; γ_2] = [0; -η'(τ)]   (8)
where primes denote differentiation with respect to τ, thus, Eq. ( 8) consists of a set of linear differential equations with π-periodic coefficients.
Dynamic Stability Analysis
The homogeneous parts of Eq. (8) should be considered sequentially to analyze the dynamic stability of the manipulator. Equation (8) can be expressed as:
γ'' + D γ' + E γ = 0   (9)
with
γ'' = [γ_1'', γ_2'']^T,  γ' = [γ_1', γ_2']^T,  γ = [γ_1, γ_2]^T   (10a)
D = [2ζ/Ω, -(2µζ/Ω) η(τ); 2η'(τ) - (2ζ/Ω) η(τ), (2µζ/Ω)(1/ν + η²(τ))]   (10b)
E = [1/Ω², -(χ/Ω²) η(τ); η'(τ) - (1/Ω²) η(τ), (χ/Ω²)(1/ν + η²(τ))]   (10c)
which can be represented by a state-space formulation, namely,
ẋ(t) = A(t)x(t) (11)
with
x(t) = [γ, γ']^T,   A(t) = [0_2, I_2; -E, -D]   (12)
whence A(t) is a 4 × 4 π-periodic matrix. According to Floquet theory, the solution to the equation system (11) can be expressed as
Φ(τ) = P(τ) e^{τR}   (13)
where P(τ) is a π-periodic matrix and R is a constant matrix, which is related to another constant matrix H, referred to as the monodromy matrix, with R = ln H/π. If the fundamental matrix is normalized so that Φ(0) = I_4, then H = Φ(π). The eigenvalues λ_i, i = 1, 2, 3, 4, of matrix H, referred to as Floquet multipliers, govern the stability of the system. The system is asymptotically stable if and only if all the eigenvalues λ_i have moduli smaller than one [START_REF] Chicone | Ordinary Differential Equations with Applications[END_REF]. Here, the matrix H is obtained numerically with the improved Runge-Kutta method [START_REF] Szymkiewicz | Numerical Solution of Ordinary Differential Equations[END_REF] with a step size equal to 10^-6, and its eigenvalues are calculated to assess the stability of the system. The monodromy matrix method is a simple and reliable method to determine the stability of parametrically excited systems.
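The procedure just described can be reproduced with a general-purpose ODE integrator: propagate the state equation (11) over one period π starting from each column of the identity matrix, collect the end states into H, and inspect the moduli of its eigenvalues. A minimal sketch in Python/SciPy is given below; the coefficient-matrix builder A(τ) is assumed to be supplied by the user (e.g., assembled from Eqs. (10b) and (10c)), and the adaptive integrator replaces the fixed-step Runge-Kutta scheme used in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

def monodromy_matrix(A, period=np.pi, n=4):
    """Monodromy matrix of x' = A(tau) x: integrate each unit initial vector over one period."""
    H = np.zeros((n, n))
    for k in range(n):
        x0 = np.zeros(n)
        x0[k] = 1.0
        sol = solve_ivp(lambda tau, x: A(tau) @ x, (0.0, period), x0,
                        rtol=1e-10, atol=1e-12)
        H[:, k] = sol.y[:, -1]
    return H

def is_stable(A):
    """Floquet criterion: every multiplier (eigenvalue of H) must lie inside the unit circle."""
    multipliers = np.linalg.eigvals(monodromy_matrix(A))
    return bool(np.all(np.abs(multipliers) < 1.0))
```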
Numerical Study on Torsional Stability
This section is devoted to the numerical stability analysis, where the stability charts are constructed on the Ω_0-β and k_1-β parametric planes to study the effect of the parameters on the system stability. From the CAD model of the robotic wrist, µ = 1, ν = 10, J_I = 0.001 kg·m², c_1 = 0.001 Nm/(rad/s).
Figure 4 depicts the stability chart in the Ω_0-β plane used to detect the instability of the U-joint mechanism, with a constant stiffness k_1 = 10 Nm/rad, where the dotted zones represent the unstable parametric regions. When the rotating speed Ω_0 of the driving shaft is lower than 11π rad/s, the system is always stable for a misalignment angle β between 0 and 30°. On the contrary, the angle β should be smaller than 5° to guarantee the dynamic stability of the parallel wrist mechanism when Ω_0 is equal to 19π rad/s. Similarly, the influence of the torsional stiffness of the driving shaft and of the misalignment angle on the stability is illustrated in Fig. 5, with the driving shaft speed Ω_0 = 19π rad/s. It is apparent that the higher the torsional stiffness of the input shaft, the more stable the parallel robotic wrist. The system is stable when k_1 > 25 Nm/rad.
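A chart such as Fig. 4 or Fig. 5 amounts to sweeping two parameters on a grid and applying the Floquet criterion at every grid point. The sketch below reuses is_stable() from the previous snippet and assumes a user-supplied function build_A(Omega0, beta, k1) that assembles the state matrix A(τ) for one parameter set; that builder, the grid resolution and the parameter ranges are illustrative choices rather than the paper's actual implementation.

```python
import numpy as np

def sweep_chart(build_A, omega_values, beta_values, k1=10.0):
    """Return the (Omega0, beta) pairs found to be dynamically unstable."""
    unstable = []
    for Omega0 in omega_values:
        for beta in beta_values:
            A = build_A(Omega0=Omega0, beta=beta, k1=k1)
            if not is_stable(A):
                unstable.append((Omega0, beta))
    return unstable

# Grid roughly matching Fig. 4: Omega0 up to 20*pi rad/s, beta from 0 to 30 degrees.
# unstable_points = sweep_chart(build_A,
#                               np.linspace(1.0, 20.0 * np.pi, 60),
#                               np.radians(np.linspace(0.0, 30.0, 60)))
```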
Conclusion
This paper dealt with the dynamic torsional stability analysis of a parallel wrist mechanism that contains a universal joint. Differing from the symmetrical counterparts, the asymmetrical architecture of this robotic wrist ensures an infinite torsional movement of the end-effector under a certain tilt angle. This unique feature allows the wrist mechanism to function as an active spherical joint or machine tool head, with a simple architecture.
The stability problem of the wrist mechanism due to the nonlinear input-output transmission of the universal joint is studied, where a linear model consisting of input and output shafts interconnected via a Hooke's joint is considered. The linearized equations of motion of the system are obtained, for which the stability problem is investigated by resorting to a monodromy matrix method. The approach used to analyze the torsional stability of the parallel robotic wrist is numerically illustrated, wherein the unstable regions are presented graphically. Moreover, some critical parameters, such as torsional stiffness and rotating speeds, are identified. Future work includes the complete parametric stability analysis of the system as well as its lateral stability.
Fig. 1. CAD model of the parallel wrist mechanism.
Fig. 2. Kinematic architectures of the wrist mechanism and its U joint.
Fig. 3. The driving and driven parts of the U-joint mechanism.
Fig. 4. Effects of the driving shaft speed Ω_0 and bend angle β onto the torsional stability of the parallel wrist with stiffness k_1 = 10 Nm/rad (blue point means torsional dynamic instability).
Fig. 5. Effects of the driving shaft stiffness k_1 and bend angle β onto the dynamic torsional stability with speed Ω_0 = 19π rad/s.
Acknowledgement
The reported work is supported by the Doctoral Scientific Research Foundation of Liaoning Province (No. 20170520134) and the Fundamental Research Funds for the Central Universities (No. DUT16RC(3)068). |
01757553 | en | [
"spi.meca.geme",
"spi.auto"
] | 2024/03/05 22:32:10 | 2018 | https://hal.science/hal-01757553/file/ReMAR2018_Nayak_Caro_Wenger.pdf | Abhilash Nayak
Stéphane Caro
Philippe Wenger
A Dual Reconfigurable 4-rRUU Parallel Manipulator
Abhilash Nayak, Stéphane Caro and Philippe Wenger

Abstract — The aim of this paper is to introduce the use of a double Hooke's joint linkage to reconfigure the base revolute joints of a 4-RUU parallel manipulator (PM). It leads to an architecturally reconfigurable 4-rRUU PM whose platform motion depends on the angle between the driving and driven shafts of the double-Hooke's joint linkage in each limb. Even when the angle is fixed, the manipulator is reconfigurable by virtue of the different operation modes it can exhibit. By keeping the angle as a variable, the constraint equations of the 4-rRUU PM are derived using Study's kinematic mapping. Subsequently, the ideal of constraint polynomials is decomposed as an intersection of primary ideals to determine the operation modes of the 4-rRUU PM for intersecting and parallel revolute joint axes in the base and the moving platform.
I. INTRODUCTION
Reconfigurability in a PM extends its employability for a variety of applications. A lower mobility PM with dof< 6 is reconfigurable if it has different configuration space regions with possibly different type or number of degrees of freedom. These regions are known as the operation modes of the PM and were first exemplified by Zlatanov et al. [START_REF] Zlatanov | Constraint Singularities as C-Space Singularities[END_REF] for the 3-URU DYMO robot exhibiting five different types of platform motion. Its closely related SNU 3-UPU [START_REF] Walter | A complete kinematic analysis of the SNU 3-UPU parallel robot[END_REF] and Tsai 3-UPU [START_REF] Walter | Kinematic analysis of the TSAI 3-UPU parallel manipulator using algebraic methods[END_REF] PMs were analyzed by Walter et al. with a complete characterization of their operation modes along with the transition poses. Most of the operation modes of the 3-URU or the 3-UPU PMs are physically distinguishable unlike the 3-[PP]S PMs for which the first two joints in each limb generate a motion equivalent to two coplanar translations followed by a spherical joint. 3-RPS [START_REF] Schadlbauer | The 3-rps parallel manipulator from an algebraic viewpoint[END_REF], 3-PRS [START_REF] Nurahmi | Operation modes and singularities of 3-PRS parallel manipulators with different arrangements of p-joints[END_REF] and 3-SPR PMs [START_REF] Nayak | Comparison of 3-RPS and 3-SPR Parallel Manipulators Based on Their Maximum Inscribed Singularity-Free Circle[END_REF] are such manipulators that exhibit two operation modes each with coupled motion. Other reconfigurable PMs include the 3-RER PM (E denotes a planar joint) found to have 15 3-dof operation modes [START_REF] Kong | Reconfiguration analysis of a 3-DOF parallel mechanism using euler parameter quaternions and algebraic geometry method[END_REF] and the 4-RUU PM with vertical base and platform revolute joint axes possessing three operation modes [START_REF] Nurahmi | Reconfiguration analysis of a 4-RUU parallel manipulator[END_REF].
Besides, a PM can be reconfigurable also by changing the position and/or orientation of one or more of its constituent joints. This type of reconfigurability is named architectural reconfigurability in this paper. MaPaMan [START_REF] Srivatsan | On the position kinematic analysis of mapaman: a reconfigurable three-degrees-of-freedom spatial parallel manipulator[END_REF] is one such manipulator in which the strut can be oriented to obtain two different architectures of the same PM, allowing it to transition between roll-pitch-heave and roll-pitch-yaw degrees of freedom. Gan et al. [START_REF] Gan | Optimal design of a metamorphic parallel mechanism with reconfigurable 1T2R and 3R motion based on unified motion/force transmissibility[END_REF] introduced a novel reconfigurable revolute joint and proposed a metamorphic 3-rRPS PM. In this paper, a double Hooke's joint linkage is used as a reconfigurable revolute joint to demonstrate different types of reconfigurability of a 4-RUU PM.
The double Hooke's joint linkage is a well-known special over-constrained 6R-mechanism, where the first three and the last three joint axes are mutually perpendicular. It is also known as a Double Cardan Joint and is ubiquitous as a steering column in automobiles. Its architecture is fairly simple compared to a general over-constrained 6R mechanism which makes it easier to derive the input-output relations. There have been different approaches in the literature to derive its input-output relations proving that it is a constant velocity transmitter when the angle between the input shaft and the central yoke is equal to the angle between the central yoke and the output shaft [START_REF] Baker | Displacementclosure equations of the unspecialised doublehooke's-joint linkage[END_REF], [START_REF] Dietmaier | Simply overconstrained mechanisms with rotational joints[END_REF], [START_REF] Mavroidis | Analysis of overconstrained mechanisms[END_REF]. The constant-velocity transmission property of a double-Hooke's joint is exploited in this paper to reconfigure the first revolute joint axis in each limb of a 4-RUU PM.
The organization of the paper is as follows: Section II presents the architecture of the dual reconfigurable 4-rRUU PM along with the architecture of its constituent double-Hooke's joint linkage. Section III deals with the derivation of constraint equations and the determination of operation modes of 4rRUU PMs for some specific orientations of the base and revolute joint axes. Section IV concludes the paper and puts forth some open issues associated with the construction of a functional 4-rRUU PM prototype.
II. THE 4-rRUU PARALLEL MANIPULATOR A. Manipulator Architecture
The architecture of the dual reconfigurable 4-rRUU PM with a square base and a platform is shown in Fig. 1 and its constituent double-Hooke's joint linkage is shown in Fig. 2.
A reconfigurable revolute joint (rR) and two universal joints (UU) mounted in series constitute each limb of the 4-rRUU PM. Point L i , i = 1, 2, 3, 4 lies on the pivotal axis of the double-Hooke's joint linkage as shown in Fig. 2. Point A i lies on the first revolute joint axis of the 4-rRUU PM and it can be obtained from point L i by traversing a horizontal distance of l i along the first revolute joint axis. Points B i and C i are the geometric centers of the first and the second universal joints, respectively. Points L i and C i form the corners of the square base and the platform, respectively. F O and F P are the coordinate frames attached to the fixed base and the moving platform such that their origins O and P lie at the centers of the respective squares. The revolute-joint axes vectors in i-th limb are marked s ij , i = 1, 2, 3, 4; j = 1, ..., 5. Vectors s i1 and s i2 are
always parallel, so are vectors s_i3 and s_i4. For simplicity, it is assumed that the orientation of vector s_i1 expressed in coordinate frame F_O is the same as that of s_i5 expressed in coordinate frame F_P. The position vectors of points L_i, A_i, B_i and C_i expressed in frame F_k, k ∈ {O, P}, are denoted as ^k l_i, ^k a_i, ^k b_i and ^k c_i, respectively. r_0 and r_1 are half the diagonals of the base and the moving platform squares, respectively. p and q are the link lengths.

Fig. 2: Double Hooke's joint linkage
B. Double-Hooke's joint linkage
The double Hooke's joint linkage is shown in Fig. 2. The first three and the last three revolute joint axes intersect at points O 0 and O 6 , respectively. The first revolute joint is driven by a motor with an input angle of φ 1 and the last revolute joint rotates with an output angle of φ 6 and their axes intersect at point L i , i = 1, 2, 3, 4. It is noteworthy that for a constant-velocity transmission, the triangle O 0 O 6 L i must be isosceles with O 0 L i = O 6 L i . The angle between the input and the output shafts is denoted as β ∈ [0, π]. Since the double-Hooke's joint is known to be a constantvelocity transmitter, the following input-output relationship holds [START_REF] Baker | Displacementclosure equations of the unspecialised doublehooke's-joint linkage[END_REF], [START_REF] Dietmaier | Simply overconstrained mechanisms with rotational joints[END_REF], [START_REF] Mavroidis | Analysis of overconstrained mechanisms[END_REF]:
φ 6 = -φ 1 (1)
Figure 3 shows the top view of the 4-rRUU PM without the links. For architectural reconfigurability, the reconfigurable
revolute joint axis in the base is allowed to have a horizontal orientation β_i, i = 1, 2, 3, 4. It is noteworthy that β_i will be changed manually in the prototype under construction.

Fig. 3: Possible orientations of the base revolute joint
III. OPERATION MODE ANALYSIS A. Constraint Equations
Since the reconfigurable revolute joint is actuated, a RUU limb must satisfy the following two constraints:
1) The second revolute joint axis, the fifth revolute joint axis and link BC must lie in the same plane. In other words, the scalar triple product of the corresponding vectors must be null:
g i : (b i -c i ) T (s i2 × s i5 ) = 0, i = 1, 2, 3, 4 (2)
2) The length of link BC must be q:
g i+4 : ||b i -c i || -q = 0, i = 1, 2, 3, 4 (3)
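Both conditions are easy to check numerically for a candidate limb posture, which is useful when validating forward-kinematics solutions. A small illustrative sketch (Python/NumPy; the function is ours, not part of the paper):

```python
import numpy as np

def limb_constraint_residuals(b_i, c_i, s_i2, s_i5, q):
    """Residuals of Eqs. (2) and (3) for one rRUU limb.
    b_i, c_i: positions of B_i and C_i; s_i2, s_i5: revolute-axis unit vectors; q: link length."""
    g_plane = np.dot(b_i - c_i, np.cross(s_i2, s_i5))   # scalar triple product, Eq. (2)
    g_length = np.linalg.norm(b_i - c_i) - q            # distal link length, Eq. (3)
    return g_plane, g_length
```

Both residuals vanish simultaneously when the limb geometry is consistent.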
Since the length of link BC does not affect the operation modes of the 4-rRUU PM, only the principal geometric constraint from Eq. ( 2) is considered. To express it algebraically, the homogeneous coordinates of the necessary vectors are listed below:
^0 l_i = R_z(λ_i) [1, r_0, 0, 0]^T   (4a)
^0 a_i = ^0 l_i + R_z(λ_i + β_i) [0, 0, l_i, 0]^T   (4b)
^0 b_i = ^0 a_i + R_z(λ_i + β_i) [0, p cos(θ_i), 0, p sin(θ_i)]^T   (4c)
^0 c_i = F R_z(λ_i) [1, r_1, 0, 0]^T   (4d)
^0 s_i2 = R_z(λ_i + β_i) [0, 0, 1, 0]^T   (4e)
^0 s_i5 = F R_z(λ_i + β_i) [0, 0, 1, 0]^T,  i = 1, 2, 3, 4   (4f)
where R_z(•) is the homogeneous rotation matrix about the z-axis, λ_i for the i-th limb is given by
λ_1 = 0, λ_2 = π/2, λ_3 = π, λ_4 = 3π/2
and θ i is the actuated joint angle. F is the transformation matrix consisting of Study parameters x j and y j , j = 0, 1, 2, 3:
F = (1/∆) [∆, 0, 0, 0; d_1, r_11, r_12, r_13; d_2, r_21, r_22, r_23; d_3, r_31, r_32, r_33]   (5)
with
∆ = x_0² + x_1² + x_2² + x_3² ≠ 0 and
r_11 = x_0² + x_1² - x_2² - x_3²,  r_12 = -2 x_0 x_3 + 2 x_1 x_2,  r_13 = 2 x_0 x_2 + 2 x_1 x_3,
r_21 = 2 x_0 x_3 + 2 x_1 x_2,  r_22 = x_0² - x_1² + x_2² - x_3²,  r_23 = -2 x_0 x_1 + 2 x_2 x_3,
r_31 = -2 x_0 x_2 + 2 x_1 x_3,  r_32 = 2 x_0 x_1 + 2 x_2 x_3,  r_33 = x_0² - x_1² - x_2² + x_3²,
d_1 = -2 x_0 y_1 + 2 x_1 y_0 - 2 x_2 y_3 + 2 x_3 y_2,
d_2 = -2 x_0 y_2 + 2 x_1 y_3 + 2 x_2 y_0 - 2 x_3 y_1,
d_3 = -2 x_0 y_3 - 2 x_1 y_2 + 2 x_2 y_1 + 2 x_3 y_0.
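Since every entry of F is written out above, the Study-parameter-to-pose map can be coded directly; this is handy for visualizing the operation modes found below. The sketch (Python/NumPy, for illustration only) builds the 4×4 homogeneous matrix and normalizes by ∆:

```python
import numpy as np

def study_to_matrix(x, y):
    """Homogeneous transformation F from Study parameters x = (x0..x3), y = (y0..y3)."""
    x0, x1, x2, x3 = x
    y0, y1, y2, y3 = y
    delta = x0**2 + x1**2 + x2**2 + x3**2
    if delta == 0.0:
        raise ValueError("Study parameters must satisfy Delta != 0")
    R = np.array([
        [x0**2 + x1**2 - x2**2 - x3**2, -2*x0*x3 + 2*x1*x2,             2*x0*x2 + 2*x1*x3],
        [ 2*x0*x3 + 2*x1*x2,             x0**2 - x1**2 + x2**2 - x3**2, -2*x0*x1 + 2*x2*x3],
        [-2*x0*x2 + 2*x1*x3,             2*x0*x1 + 2*x2*x3,              x0**2 - x1**2 - x2**2 + x3**2],
    ])
    d = np.array([
        -2*x0*y1 + 2*x1*y0 - 2*x2*y3 + 2*x3*y2,
        -2*x0*y2 + 2*x1*y3 + 2*x2*y0 - 2*x3*y1,
        -2*x0*y3 - 2*x1*y2 + 2*x2*y1 + 2*x3*y0,
    ])
    F = np.eye(4)
    F[1:, 0] = d / delta
    F[1:, 1:] = R / delta
    return F
```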
Thus, Eq. (2) is derived for each limb algebraically by substituting t_i = tan(θ_i/2) and w_i = tan(β_i/2), i = 1, 2, 3, 4. The constraint polynomials g_i, i = 1, 2, 3, 4 form the following ideal¹:

I = ⟨g_1, g_2, g_3, g_4⟩ ⊆ k[x_0, x_1, x_2, x_3, y_0, y_1, y_2, y_3]   (6)

¹The ideal generated by the given polynomials is the set of all combinations of these polynomials using coefficients from the polynomial ring k[x_0, x_1, x_2, x_3, y_0, y_1, y_2, y_3] [14].
To simplify the determination of the operation modes, the 4-rRUU PM is split into two 2-rRUU PMs [START_REF] Nurahmi | Reconfiguration analysis of a 4-RUU parallel manipulator[END_REF] by considering two ideals:
I_(I) = ⟨g_1, g_3⟩   (7a)
I_(II) = ⟨g_2, g_4⟩   (7b)
Even after substituting the design parameters, it was impossible to calculate the primary decomposition of ideals I (I) and I (II) for a general β i and it remains an open issue. Consequently, some special configurations of the 4-rRUU PM are considered and their operation modes are determined as follows:
B. Operation Modes of some Specific 4-RUU PMs

1) β_1 = β_2 = β_3 = β_4 = π/2: For the PM shown in Fig. 4, the constraint equations are derived from Eqs. (2) and (4) as
g_1 := (-p t_1² + p) x_0 x_2 + 2 p t_1 x_0 x_3 + 2 p t_1 x_1 x_2 + (p t_1² - p) x_1 x_3 + (2 t_1² + 2) x_2 y_2 + (2 t_1² + 2) x_3 y_3 = 0   (8a)
g_2 := (-p t_2² + r_0 t_2² - r_1 t_2² + p + r_0 - r_1) x_0 x_2 + 2 t_2 p x_0 x_3 + 2 t_2 p x_1 x_2 + (p t_2² - r_0 t_2² - r_1 t_2² - p - r_0 - r_1) x_1 x_3 + (2 t_2² + 2) x_2 y_2 + (2 t_2² + 2) x_3 y_3 = 0   (8b)
g_3 := (-p t_3² + p) x_0 x_2 + 2 p t_3 x_0 x_3 + 2 p t_3 x_1 x_2 + (p t_3² - p) x_1 x_3 + (2 t_3² + 2) x_2 y_2 + (2 t_3² + 2) x_3 y_3 = 0   (8c)
g_4 := (-p t_4² - r_0 t_4² + r_1 t_4² + p - r_0 + r_1) x_0 x_2 + 2 t_4 p x_0 x_3 + 2 t_4 p x_1 x_2 + (p t_4² + r_0 t_4² + r_1 t_4² - p + r_0 + r_1) x_1 x_3 + (2 t_4² + 2) x_2 y_2 + (2 t_4² + 2) x_3 y_3 = 0   (8d)
The primary decomposition of ideals I (I) and I (II) shown in Eq. ( 7) leads to three sub-ideals each. Among them, the third sub-ideals I 3(I) and I 3(II) correspond to a mixed mode and are of little importance in this context. The other two sub-ideals I k(I) and I k(II) , k = 1, 2 are as follows:
I_(I) = I_1(I) ∩ I_2(I) ∩ I_3(I), where I_1(I) = ⟨x_0, x_1, x_2 y_2 + x_3 y_3⟩, I_2(I) = ⟨x_2, x_3, x_0 y_0 + x_1 y_1⟩   (9a)
I_(II) = I_1(II) ∩ I_2(II) ∩ I_3(II), where I_1(II) = ⟨x_0, x_1, x_2 y_2 + x_3 y_3⟩, I_2(II) = ⟨x_2, x_3, x_0 y_0 + x_1 y_1⟩   (9b)
As a result, the first two operation modes of the 4-rRUU PM shown in Fig. 4 are:
I_1 = I_1(I) ∪ I_1(II) = ⟨x_0, x_1, x_2 y_2 + x_3 y_3⟩   (10a)
I_2 = I_2(I) ∪ I_2(II) = ⟨x_2, x_3, x_0 y_0 + x_1 y_1⟩   (10b)
Substituting the condition x 0 = x 1 = x 2 y 2 + x 3 y 3 = 0 in the transformation matrix in Eq. ( 5) yields:
F_1 = [1, 0, 0, 0; -2y_3/x_2, -1, 0, 0; 2(x_2 y_0 - x_3 y_1), 0, x_2² - x_3², 2 x_2 x_3; 2(x_2 y_1 + x_3 y_0), 0, 2 x_2 x_3, -x_2² + x_3²]   (11)
From the transformation matrix, it can be observed that the operation mode is a 4-dof Schönflies mode in which the translational motions are parametrized by y 0 , y 1 and y 3 and the rotational motion is parametrized by x 2 , x 3 and x 2 2 + x 2 3 = 1. In this operation mode, the platform is upside down with the z P -axis pointing in a direction opposite to the z O -axis. The rotational motion is about x O -axis.
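This can be double-checked symbolically: substituting the generators of I_1 into the constraint polynomials must make them vanish identically. A quick verification on g_1 of Eq. (8a) with sympy (used here only as an illustration; the primary decomposition itself requires a computer-algebra system with that capability):

```python
import sympy as sp

x0, x1, x2, x3, y2, y3, p, t1 = sp.symbols('x0 x1 x2 x3 y2 y3 p t1')

g1 = ((-p*t1**2 + p)*x0*x2 + 2*p*t1*x0*x3 + 2*p*t1*x1*x2
      + (p*t1**2 - p)*x1*x3 + (2*t1**2 + 2)*x2*y2 + (2*t1**2 + 2)*x3*y3)

# First operation mode I1: x0 = x1 = 0 and x2*y2 + x3*y3 = 0 (solve the last condition for y3).
residual = g1.subs({x0: 0, x1: 0, y3: -x2*y2/x3})
print(sp.simplify(residual))   # prints 0: g1 vanishes on the first operation mode
```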
Similarly, substituting the condition x_2 = x_3 = x_0 y_0 + x_1 y_1 = 0 in the transformation matrix in Eq. (5) yields:
F_2 = [1, 0, 0, 0; -2y_1/x_0, 1, 0, 0; -2(x_0 y_2 - x_1 y_3), 0, x_0² - x_1², -2 x_0 x_1; -2(x_0 y_3 + x_1 y_2), 0, 2 x_0 x_1, x_0² - x_1²]   (12)

In this case, the operation mode is also a 4-dof Schönflies mode in which the translational motions are parametrized by y_1, y_2 and y_3 and the rotational motion is parametrized by x_0, x_1 and x_0² + x_1² = 1. The platform is in upright position with rotational motion about the x_O-axis.

2) β_1 = β_3 = π/2, β_2 = β_4 = 0: For the PM shown in Fig. 5 with w_1 = w_3 = 1 and w_2 = w_4 = 0, the constraint equations are derived from Eqs. (2) and (4) as follows:
g_1 := (-p t_1² + p) x_0 x_2 + 2 p t_1 x_0 x_3 + 2 p t_1 x_1 x_2 + (p t_1² - p) x_1 x_3 + (2 t_1² + 2) x_2 y_2 + (2 t_1² + 2) x_3 y_3 = 0   (13a)
g_2 := (-p t_2² + p) x_0 x_1 + 2 p t_2 x_0 x_3 - 2 p t_2 x_1 x_2 + (-p t_2² + p) x_2 x_3 + (2 t_2² + 2) x_1 y_1 + (2 t_2² + 2) x_3 y_3 = 0   (13b)
g_3 := (-p t_3² + p) x_0 x_2 + 2 p t_3 x_0 x_3 + 2 p t_3 x_1 x_2 + (p t_3² - p) x_1 x_3 + (2 t_3² + 2) x_2 y_2 + (2 t_3² + 2) x_3 y_3 = 0   (13c)
g_4 := (-p t_4² + p) x_0 x_1 + 2 p t_4 x_0 x_3 - 2 p t_4 x_1 x_2 + (-p t_4² + p) x_2 x_3 + (2 t_4² + 2) x_1 y_1 + (2 t_4² + 2) x_3 y_3 = 0   (13d)
The primary decomposition of ideals I (I) and I (II) shown in Eq. ( 7) leads to three sub-ideals each, of which the two sub-ideals I k(I) and I k(II) , k = 1, 2 are as follows:
I_(I) = I_1(I) ∩ I_2(I) ∩ I_3(I), where I_1(I) = ⟨x_0, x_1, x_2 y_2 + x_3 y_3⟩, I_2(I) = ⟨x_2, x_3, x_0 y_0 + x_1 y_1⟩   (14a)
I_(II) = I_1(II) ∩ I_2(II) ∩ I_3(II), where I_1(II) = ⟨x_0, x_2, x_1 y_1 + x_3 y_3⟩, I_2(II) = ⟨x_1, x_3, x_0 y_0 + x_2 y_2⟩   (14b)
As a result, the first two operation modes of the 4-RUU PM are:
I_1 = I_1(I) ∪ I_1(II) = ⟨x_0, x_1, x_2, y_3⟩   (15a)
I_2 = I_2(I) ∪ I_2(II) = ⟨x_1, x_2, x_3, y_0⟩   (15b)
Substituting the condition x 0 = x 1 = x 2 = y 3 = 0 in the transformation matrix in Eq. ( 5) yields:
F_1 = [1, 0, 0, 0; 2y_2/x_3, -1, 0, 0; -2y_1/x_3, 0, -1, 0; 2y_0/x_3, 0, 0, 1]   (16)
From the transformation matrix, it can be deduced that the operation mode is a 3-dof pure translational mode parametrized by y 0 , y 1 and y 2 when x 3 = 1. In this operation mode, the platform is upside down with the z P -axis pointing downwards.
Similarly, substituting the condition x 1 = x 2 = x 3 = y 0 = 0 in the transformation matrix in Eq. ( 5) yields:
F_2 = [1, 0, 0, 0; -2y_1/x_0, 1, 0, 0; -2y_2/x_0, 0, 1, 0; -2y_3/x_0, 0, 0, 1]   (17)
In this case, the operation mode is also a 3-dof translational mode parametrized by y_1, y_2 and y_3 when x_0 = 1. Since the rotation matrix is identity, the platform is in upright position with the z_P-axis pointing upwards.

Fig. 6: A dual reconfigurable 4-rRUU PM with vertical base revolute joint axes
3) Vertical base and platform revolute joint axes: The double-Hooke's joint allows a planar transmission and hence the 4-rRUU PM can have any orientation of the base revolute joints such that β_i ∈ [0, π]. Additionally, with the help of an L-fixture, it is possible to have a vertical orientation of the base revolute joint axes as shown in Fig. 6.
Reconfiguration analysis of this mechanism already exists in the literature [START_REF] Nurahmi | Reconfiguration analysis of a 4-RUU parallel manipulator[END_REF], where it was shown to have three operation modes. The first operation mode is a 4-dof Schönflies mode in which the platform is upside down and the rotational axis is parallel to the z_O-axis. The second operation mode is a 4-dof Schönflies mode with the rotational axis parallel to the z_O-axis, but in this case, the posture of the platform is upright. The third operation mode is a 2-dof coupled motion mode and is less relevant from a practical viewpoint.
IV. CONCLUSIONS AND FUTURE WORK
A dual reconfigurable 4-rRUU PM consisting of a reconfigurable revolute joint based on the double-Hooke's joint linkage was presented in this paper. It was shown how a double-Hooke's joint linkage can be exploited to impart an architectural reconfigurability to a 4-RUU PM. The resulting dual reconfigurable 4-rRUU PM was shown to exhibit at least the following operation modes: a pure translational operation mode and Schönflies motion modes with different axes of rotation depending on the orientation of base revolute joint axes.
As a part of the future work, the operation modes will be determined as a function of the angle β which will assist in recognizing all possible platform motion types of the 4-rRUU PM for β ∈ [0, π]. Furthermore, a detailed design of the 4-rRUU PM will be performed in order to construct a functional prototype exhibiting different types of reconfigurability. It should be noted that in the prototype, the orientation of the revolute joint axes in the moving platform will have to be changed manually and the choice of a joint to have all planar orientations is still an open issue.
Fig. 1: A 4-rRUU parallel manipulator
Fig. 4: A 4-rRUU PM with horizontal and intersecting base revolute joint axes
Fig. 5: A 4-RUU PM with horizontal and parallel base revolute joint axes
Abhilash Nayak is with École Centrale de Nantes, Laboratoire des Sciences du Numérique de Nantes (LS2N), 1 rue de la Noë, UMR CNRS 6004, 44321 Nantes, France abhilash.nayak@ls2n.fr
Stéphane Caro and Philippe Wenger are with CNRS, Laboratoire des Sciences du Numérique de Nantes (LS2N), École Centrale de Nantes, 1 rue de la Noë, UMR CNRS 6004, 44321 Nantes, France
ACKNOWLEDGMENT
This work was conducted with the support of both Centrale Nantes and the French National Research Agency (ANR project "Kapamat" #ANR-14-CE34-0008-01). We would also like to show our gratitude to Rakesh Shantappa, a former Master's student at Centrale Nantes for his help with the CAD models. |
01757577 | en | [
"info"
] | 2024/03/05 22:32:10 | 2015 | https://inria.hal.science/hal-01757577/file/370579_1_En_15_Chapter.pdf | Renan Sales Barros
Jordi Borst
Steven Kleynenberg
Céline Badr
Rama-Rao Ganji
Hubrecht De Bliek
Landry-Stéphane Zeng-Eyindanga
Henk Van Den Brink
Charles Majoie
Henk Marquering
Sílvia Delgado Olabarriaga
Remote Collaboration, Decision Support, and On-Demand Medical Image Analysis for Acute Stroke Care
Keywords: Acute Care, Cloud Computing, Decision Support, High Performance Computing, Medical Image Analysis, Remote Collaboration, Stroke, Telemedicine
Introduction
Acute ischemic stroke is the leading cause of disability and fourth cause of death [START_REF] Go | Heart disease and stroke statistics -2013 update: a report from the American Heart Association[END_REF].
In acute ischemic stroke, a blood clot obstructs blood flow in the brain causing part of the brain to die due to the lack of blood supply. The amount of brain damage and the patient outcome are highly related to the duration of the lack of blood flow ("time is brain"). Therefore, fast diagnosis, decision making, and treatment are crucial in acute stroke management.
Medical data of a stroke patient is collected during the transport by ambulance to the hospital (e.g. vital signs, patient history, and medication). At arrival, various types of image data are acquired following protocols that involve opinions and decisions from various medical experts. Sometimes, a patient needs to be transferred to a specialized hospital and, in this case, it is important that all the data collected in the ambulance and at the referring hospital is available to the caregivers that will continue the treatment. Often, various medical specialists need to collaborate based on available information for determining the correct diagnosis and choosing the best treatment. Usually, this collaboration is based on tools that are not connected to each other and, because of that, they may not deliver the necessary information rapidly enough.
In addition to these challenges, the amount of patient medical data is growing fast [START_REF] Hallett | Cloud-based Healthcare: Towards a SLA Compliant Network Aware Solution for Medical Image Processing[END_REF]. This fast increase is especially observed in radiological image data, which is also a consequence of new medical imaging technologies [START_REF] Alonso-Calvo | Cloud computing service for managing large medical image data-sets using balanced collaborative agents[END_REF][START_REF] Shini | Cloud based medical image exchange-security challenges[END_REF]. The management, sharing, and processing of medical image data is a great challenge for healthcare providers [START_REF] Alonso-Calvo | Cloud computing service for managing large medical image data-sets using balanced collaborative agents[END_REF][START_REF] Shini | Cloud based medical image exchange-security challenges[END_REF] and they can be greatly improved by the usage of cloud technologies [START_REF] Kagadis | Cloud computing in medical imaging[END_REF]. Cloud technologies also enable collaboration and data exchange between medical experts in a scalable, fast, and cost-effective way [START_REF] Kagadis | Cloud computing in medical imaging[END_REF]. Mobile devices, remote collaboration tools, and on-demand computing models and data analysis tools supported by cloud technologies may play an important role to help in optimizing stroke treatment and, consequently, improve outcome of patients suffering from stroke.
In this paper, we present a cloud-based platform for Medical Distributed Utilization of Services & Applications (MEDUSA). This platform aims at improving current acute care settings by allowing fast medical data exchange, advanced processing of medical image data, automated decision support, and remote collaboration between physicians through a secure responsive virtual space. We discuss a case study implemented using the MEDUSA platform for supporting the treatment of acute stroke patients, presenting the technical details of the prototype implementation and commenting on its initial evaluation.
Related Work
The development of cloud-based platforms for collaboration and processing of medical data is a challenging task. Many authors [START_REF] Shini | Cloud based medical image exchange-security challenges[END_REF][START_REF] Kagadis | Cloud computing in medical imaging[END_REF][START_REF] Jeyabalaraja | Cloud Computing in Medical Diagnosis for improving Health Care Environment[END_REF][START_REF] Pino | A Survey of Cloud Computing Architecture and Applications in Health[END_REF] put forward that these platforms hold the potential to define the future of healthcare services. Also, the analysis of medical data can be an important way to improve quality and efficiency in healthcare [START_REF] Jee | Potentiality of big data in the medical sector: focus on how to reshape the healthcare system[END_REF][START_REF] Murdoch | The inevitable application of big data to health care[END_REF]. The work presented in [START_REF] Kanagaraj | Proposal of an open-source cloud computing system for exchanging medical images of a hospital information system[END_REF][START_REF] Yang | Implementation of a medical image file accessing system on cloud computing[END_REF] focuses on the development of a cloud-based solution aimed at only the storage and sharing of medical data. In other words, they propose solutions based on cloud infrastructures to facilitate medical image data exchange between hospitals, imaging centers, and physicians. A similar solution is presented in [START_REF] Koufi | Ubiquitous access to cloud emergency medical services[END_REF], however focusing on medical data sharing during emergency situations. A cloudbased system is presented in [START_REF] Zhuang | Efficient and robust large medical image retrieval in mobile cloud computing environment[END_REF] for storage of medical data with an additional functionality that enables content-based retrieval of medical images. Still focusing on cloud-based data storage and sharing, [START_REF] Hua | A Cloud Computing Based Collaborative Service Pattern of Medical Association for Stroke Prevention and Treatment[END_REF] presents a solution to help managing medical resources for the prevention and treatment of chronic stroke patients.
In addition to storage and sharing, some studies also include the possibility of using the cloud infrastructure for processing of medical data. A simple cloud-based application is presented in [START_REF] Sharieh | Using cloud computing for medical applications[END_REF] to monitor oxygenated hemoglobin and deoxygenated hemoglobin concentration changes in different tissues. Cloud computing is also used in [START_REF] Parsonson | A cloud computing medical image analysis and collaboration platform[END_REF] not only to support data storage and sharing, but also to visualize and render medical image data. In [START_REF] Dorn | A cloud-deployed 3D medical imaging system with dynamically optimized scalability and cloud costs[END_REF] the authors also propose a cloud application for rendering of 3D medical imaging data. This application additionally manages the cloud deployment by considering scalability, operational cost, and network quality.
Complete cloud-based systems for medical image analysis are presented in [START_REF] Chiang | Bulding a cloud service for medical image processing based on service-orient architecture[END_REF][START_REF] Huang | Medical information integration based cloud computing[END_REF][START_REF] Ojog | A Cloud Scalable Platform for DICOM Image Analysis as a Tool for Remote Medical Support[END_REF]. However, in these systems, image upload and download is manually performed by the user, while the system focuses on the remote processing, storage, and sharing of medical image data. The MEDUSA platform not only provides cloud-based storage, sharing, and processing of medical image data, but also real-time communication between medical experts, real-time collaborative interaction of the medical experts with the medical data, and a real-time decision support system that continuously processes patient data and displays relevant notifications about the patient condition.
The MEDUSA platform also includes a cloud management layer that coordinates the use of resources in the cloud infrastructure. Other studies also present some cloud management features. In [START_REF] Ahn | Autonomic computing architecture for real-time medical application running on virtual private cloud infrastructures[END_REF] the authors propose a cloud architecture that reserves network and computing resources to avoid problems regarding load-balancing mechanisms of cloud infrastructures and to reduce the processing delays for the medical applications. Also, [START_REF] Hallett | Cloud-based Healthcare: Towards a SLA Compliant Network Aware Solution for Medical Image Processing[END_REF] proposes an algorithm to optimize the organization of medical image data and associated processing algorithms in cloud computing nodes to increase the computing performance. Finally, [START_REF] Alonso-Calvo | Cloud computing service for managing large medical image data-sets using balanced collaborative agents[END_REF] presents a cloud-based multi-agent system for scalable management of large collections of medical image data.
The project presented in [START_REF] Holtmann | Medical Opportunities by Mobile IT Usage-A Case Study in the Stroke Chain of Survival[END_REF] tries to speed up current stroke care by integrating and sharing data from stroke patients using mobile networks. In this scenario, a hospital can, for instance, be prepared with the right resources before the arrival of the patient. This project also includes decision support, which suggests a predefined path through the emergency procedures according to the structure of mandatory and other supplementary healthcare protocols. However, unlike MEDUSA, this project does not include any image-processing-based features.
Acute Stroke Care
Currently, treatment decisions for stroke patients are increasingly driven by advanced imaging techniques. These imaging techniques consist of non-contrast computed tomography (ncCT), computed tomography angiography (CTA), and computed tomography perfusion (CTP). Because of the extensive usage of imaging techniques, it is common to produce gigabytes of image data per patient. The primary treatment for patients with acute ischemic stroke is intravenous administration of alteplase (thrombolysis). Patients who are not eligible for treatment with alteplase or do not respond to the treatment can be treated by mechanical removal of the blood clot via the artery (thrombectomy). Thrombectomy is only available in specialized hospitals and often a patient must be transferred for treatment.
This transfer is arranged via telephone, and the imaging data created in the initial hospital is not available to the caregivers in the specialized hospital until the patient and the imaging data arrive via the ambulance. On a regular basis it happens that the imaging data was wrongly interpreted in the initial hospital and that the patient is not eligible for thrombectomy. Also, imaging acquisitions often have to be redone due to broken DVDs, wrong data, or insufficient quality. These problems result in futile transfers and loss of valuable time.
MEDUSA Platform
The MEDUSA platform was designed to support remote collaboration and high performance processing of medical data for multiple healthcare scenarios. The platform is accessible to end users through the MEDUSA Collaboration Framework (MCF), which is a web application that is compatible with any web browser that supports HTML5. The MCF is a special type of MEDUSA application that provides users with an entry point to access other MEDUSA applications. A cloud management layer controls the deployment and execution of all MEDUSA applications in one or more cloud providers. Figure 1 illustrates the architectural design of the MEDUSA platform.
MEDUSA Cloud Applications
The MEDUSA platform has a number of cloud applications that are available in all healthcare scenarios: Audit Trail, which reports the events generated by the other MEDUSA applications; User Manager, which allows assigning roles to users and defining which MEDUSA applications they can use; and Video Call, which allows communication between users of the MEDUSA platform.
(Fig. 1 diagram: MEDUSA Collaboration Framework; MEDUSA cloud applications — User Manager, Video Call, Audit Trail, among others; cloud management layer; cloud provider.)
The MEDUSA applications are started as part of a MEDUSA session. Multiple users in a session can interact with these applications, and these interactions are visible to all the users in the session. The handling of multiple user interactions is done by each MEDUSA application. The applications in the MEDUSA platform can be web applications or regular desktop applications. The desktop applications are integrated in the MEDUSA platform through a virtualization server that uses the technologies described in [START_REF] Joveski | Semantic multimedia remote display for mobile thin clients[END_REF] and [START_REF] Joveski | MPEG-4 solutions for virtualizing RDP-based applications[END_REF]. The multi-user interaction of the desktop applications is handled by the virtualization server.
Cloud Provider
The MEDUSA applications can be deployed on different cloud providers. Currently, these applications are deployed on the High Performance Real-time Cloud for Computing (HiPeRT-Cloud) of Bull. The HiPeRT-Cloud is mainly designed for real-time computationally-intensive workloads. This solution is fully compatible with the Cloud Computing Reference Architecture of the National Institute of Standards and Technology (NIST) and provides infrastructure services under any cloud broker solution. The HiPeRT-Cloud is used in the MEDUSA platform because it provides solutions for handling complex applications in the field of real-time computational and data-intensive tasks in the cloud.
Cloud Management Layer
In order to take advantage of the on-demand, flexible, high-performance, and cost-effective options that cloud providers can offer, the cloud management layer, implemented by Prologue, manages the cloud deployment in the MEDUSA platform. This layer orchestrates the allocation and release of resources on the cloud provider's infrastructure. It also oversees the lifecycle of the deployed resources, ensures their availability and scalability, and links the desktop applications from the virtualization server back to the MCF. The cloud management layer is designed according to the Service-Oriented Architecture model and its functionalities are accessible through a Representational State Transfer Application Programming Interface (REST API). The cloud management layer also incorporates a monitoring service that operates by accessing directly the deployed virtual machines (VMs). The technology behind the cloud management layer is aligned with the NIST architecture and based on the Open Cloud Computing Interface specifications.
In the MEDUSA context, technical requirements for computing, storage, network, and security resources have been identified for each MEDUSA application to be deployed. All requirements are then translated into machine-readable code that is used to provision the cloud resources.
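For illustration only, a provisioning request to such a REST API might look like the sketch below. The endpoint, payload fields and authentication scheme are hypothetical placeholders — the actual MEDUSA cloud-management interface is not documented here — and the snippet only shows how machine-readable resource requirements could be submitted over HTTP.

```python
import json
import requests  # third-party HTTP client

# Hypothetical resource description for one MEDUSA application VM.
vm_request = {
    "application": "advanced-medical-image-processing",
    "vcpus": 8,
    "memory_gb": 32,
    "gpu": True,
    "network": {"bandwidth_mbps": 1000},
    "security": {"server_certificate": "generate-on-deploy"},
}

# Placeholder endpoint and token, not the real MEDUSA API.
response = requests.post(
    "https://cloud-manager.example.org/api/v1/deployments",
    headers={"Authorization": "Bearer <token>", "Content-Type": "application/json"},
    data=json.dumps(vm_request),
    timeout=30,
)
response.raise_for_status()
print(response.json())
```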
The components of the MEDUSA platform are hosted on the cloud through a security-aware, need-based provisioning process. By supporting on-demand hybrid and multi-cloud deployments, as well as monitoring, load balancing, and auto-scaling services through an agent embedded in each VM, the cloud management layer thus ensures a high resilience of the MEDUSA platform.
Security
The security of the MEDUSA platform is currently mainly based in the use of digital certificates, which are used to authenticate MEDUSA applications (VMs), to secure the data exchanges through the network, and to provide strong authentication of MEDUSA users.
The VMs containing the applications are deployed dynamically, and thus server certificates need to be created dynamically, during the deployment. A web service was developed to provide dynamic generation of server certificates for the different VMs in the MEDUSA platform. These server certificates must be created during the deployment of the VMs and there must be one certificate per application and VM (identified by the IP address).
Regarding the user authentication, an authentication module is called when a user opens a MEDUSA session. This module authenticates a user by checking the provided credentials against the user management component, which has access to a special internal directory containing the certificates used for strong authentication of MEDUSA users.
The MEDUSA platform also uses robust image watermarking and fingerprinting methods to prevent and detect unauthorized modification and leaking of medical images by authorized users. However, due to legal regulations, an important requirement when dealing with medical images is the capability of reconstructing the original image data. Because of this, reversible or semantic-sensitive techniques for watermarking and fingerprinting can be used in the MEDUSA platform. These techniques enable the complete recovery of the original image data, or at least the recovery of the regions of these images that are relevant for the user or application.
MEDUSA Stroke Prototype
The MEDUSA platform was designed to support various medical scenarios. Here, we focus on a prototype for supporting acute stroke care. The MEDUSA Stroke Prototype (MSP) is built by combining the default MEDUSA applications with three applications specifically configured to support the treatment of stroke patients: Advanced Medical Image Processing, Decision Support System, and 3D Segmentation Renderer. All the applications of the MSP are executed in VMs running on the HiPeRT-Cloud. The cloud management layer is in charge of the deployment of these VMs.
Advanced Medical Image Processing
For supporting the assessment of the severity of a stroke, several medical image processing algorithms (MIPAs) have been developed. These algorithms perform quantitative analysis of the medical image data and the results of these analyses can be used to support the treatment decisions. The outputs of these algorithms are, for example, the segmentation of a hemorrhage in the brain [START_REF] Boers | Automatic Quantification of Subarachnoid Hemorrhage on Noncontrast CT[END_REF], the segmentation of a blood clot [START_REF] Santos | Development and validation of intracranial thrombus segmentation on CT angiography in patients with acute ischemic stroke[END_REF], and the segmentation of the infarcted brain tissue [START_REF] Boers | Automated cerebral infarct volume measurement in follow-up noncontrast CT scans of patients with acute ischemic stroke[END_REF]. The MIPAs are linked together into processing pipelines with well-defined inputs, outputs, and policies that control their execution. The execution of these pipelines is automatically orchestrated to deliver the lowest execution time based on a set of optimization strategies (e.g. task parallelism, data parallelism, and GPU computing).
The MIPAs are implemented as plugins for the IntelliSpace Discovery (ISD) platform, an enterprise solution for research developed by Philips Healthcare. Figure 2 shows the output of the plugin for infarct volume calculation in the ISD. The collection of MIPAs specially developed to support acute stroke care that are included in the ISD constitutes the Advanced Medical Image Processing application of the MSP. The ISD is a Windows desktop application developed using the .NET Framework. The development of the MIPAs is also based on the .NET Framework. For GPU-based computations, OpenCL 1.1 was used. OpenCL is a framework for the development and execution of programs across platforms consisting of different types of processors such as CPUs, GPUs, etc. OpenCL.NET was used to integrate OpenCL with the .NET Framework.
The data generated by the MIPAs are exported to the DSS by using JavaScript Object Notation (JSON) files through WebSockets. (Anonymized) Patient information is sent to the MIPAs by using the tags of the medical image data used as input. The information about the current session is directly sent to the ISD and forwarded to the MIPAs.
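Conceptually, each MIPA result is a small JSON document pushed to the DSS over a WebSocket connection. The sketch below (Python, using the websocket-client package) illustrates that path; the message fields and the DSS address are assumptions made for this example, not the actual MEDUSA message schema.

```python
import json
from websocket import create_connection  # pip install websocket-client

# Illustrative output of an infarct-volume MIPA; field names are assumed for this example.
mipa_result = {
    "patient_id": "anonymized-0042",
    "session_id": "demo-session",
    "measurement": "infarct_volume_ml",
    "value": 80.0,
}

ws = create_connection("ws://dss.example.org:8080/mipa")  # placeholder DSS endpoint
try:
    ws.send(json.dumps(mipa_result))
finally:
    ws.close()
```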
Decision Support System
The Decision Support System (DSS) by Sopheon provides real-time process support to medical professionals collaborating on the stroke case. The DSS is rule-based: the rules specify the conditions under which actions are to be advised (delivered as notifications). The Decision Support rules are part of a medical protocol and thus defined and approved by medical professionals.
In the MSP, the DSS runs a set of rules specifically designed for dealing with stroke patients. It gathers real-time input from vital sign sensors and MIPAs. For instance, a rule could state that an infarct volume larger than 70 milliliters is associated with a poor outcome for the patient. When the DSS detects an infarct volume value of, e.g., 80 milliliters, it will display the notification associated with this condition. The DSS also selects relevant information from the data generated by the MIPAs and forwards it to the audit trail and to the 3D Segmentation Renderer.
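Such a rule boils down to a threshold check on an incoming measurement. Its structure can be pictured as follows (written in Python for consistency with the other sketches, although the actual DSS is a rule engine running on Node.js; the rule representation is an assumption of this example):

```python
# Each rule pairs a condition on the incoming measurements with a notification text.
RULES = [
    {
        "name": "large-infarct-volume",
        "condition": lambda m: m.get("infarct_volume_ml", 0.0) > 70.0,
        "notification": "Infarct volume exceeds 70 ml: associated with a poor patient outcome.",
    },
]

def evaluate_rules(measurements, rules=RULES):
    """Return the notifications triggered by the current set of measurements."""
    return [r["notification"] for r in rules if r["condition"](measurements)]

# An infarct volume of 80 ml triggers the notification, as in the example above.
print(evaluate_rules({"infarct_volume_ml": 80.0}))
```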
The DSS runs on Node.js, which is a platform built on Google Chrome's JavaScript runtime. The DSS is deployed on Fedora, which is an operating system based on the Linux kernel.
3D Segmentation Renderer
The 3D Segmentation Renderer by Sopheon is responsible for displaying 3D segmentations generated by the MIPAs. This application was developed using the WebGL library, which enables rendering 3D graphics in the browser without installing additional software. Figure 3 shows the GUI of this application rendering the segmentation of brain tissue (in green and blue) and the segmentation of the infarcted region (in red).
Initial Evaluation
As this is an on-going project, the discussion presented below is based upon an evaluation of the first fully-integrated prototype.
The MSP integrates very heterogeneous applications, which run on different operating systems (Windows, Linux) and use different development technologies (Java, OpenCL, C#, C++). These applications are seamlessly available to the user from a single interface. Also, the deployment of the applications is transparently handled by the platform. This solution is provided in a smooth and transparent manner, hiding the complex details from the user.
In the MEDUSA platform, the data and user input need to cross several software layers, which might introduce overhead and decrease performance. However, no such performance degradation was noticed in the initial MSP prototype. For instance, the Advanced Medical Image Processing application, which requires data exchange between different architectural components, was almost instantaneously ready for use without noticeable interaction delays.
The MSP implements a complete acute stroke use case, which has been demonstrated live in various occasions. Impressions have been collected informally to assess the potential value of this prototype system. Table 1 compares the current stroke care situation in the Netherlands versus the stroke care that could be supported by the MEDUSA platform based on the functionalities currently present in the MSP.
Because of its complexity, a detailed and quantitative evaluation of the MEDUSA platform involves several software components and requires a careful planning. The design of this evaluation was already defined in the first year of the project. It is scheduled to take place during the last 6 months of the MEDUSA project (end of 2015). Concerning the image processing functionality, most of the MIPAs included in the MSP are too computationally expensive to be executed on a local machine according to the time constraints of an acute stroke patient. HPC capabilities delivered by cloud computing were crucial to improve the processing of these algorithms from hours to minutes, making them suitable for acute stroke care. For instance, the time to run the method used to reduce noise in CTP data was reduced from more than half an hour to less than 2 minutes [START_REF] Barros | High Performance Image Analysis of Compressed Dynamic CT Perfusion Data of Patients with Acute Ischemic Stroke[END_REF].
Discussion and Conclusion
The development of the MEDUSA platform started in 2013. Back then, this kind of cloud-based solution was not common. Today, however, there is a clear trend in the healthcare industry towards the usage of cloud computing, collaboration, and automated analyses of medical data. In addition, when dealing with processing of medical data constrained by the requirements of acute care situations, many benefits can be derived from the use of cloud computing: scalability, a pay-per-use model, high performance computing capabilities, remote access, etc. There are numerous technical challenges in enabling the execution and communication of software components in a platform like MEDUSA. Regarding stroke care, the software components execute on different computing devices (CPUs, GPUs, etc.) and are based on different software platforms (web, Linux, Windows, etc.). In the MEDUSA platform these challenges are tackled using an SOA approach and a virtualized infrastructure. Because of the variety of application types, a uniform way of establishing communication between the MEDUSA applications has not been developed yet. Nevertheless, the direct communication between applications based on the exchange of well-defined file formats through WebSockets was demonstrated to be effective, without a negative impact on the development and integration of these applications. The current functionalities present in the MSP have the potential to improve several aspects of current stroke care.
The MEDUSA platform is still under development. Thus, most of the components implementing security are not yet completely integrated into the platform. Defining and developing the security aspects of a platform like MEDUSA is also a very challenging task, since it is necessary to cope with different legal constraints, in particular across countries. The development process of the MEDUSA platform includes the implementation and validation of the platform in three different hospitals. This validation is currently being carried out in one hospital. A preliminary evaluation of the platform indicates that the solution is promising and has large potential value for improving the treatment of these patients.
Fig. 1. The MEDUSA platform architecture.
Fig. 2. Plugin for automated measurement of the cerebral infarct volume in the ISD.
Fig. 3. 3D segmentation renderer showing the segmentation of brain tissue (green and blue) and the infarction in the brain (red).
Table 1. Current stroke care vs. stroke care with MEDUSA.

|                                                                  | current                                                | with MEDUSA                                                       |
| Data availability                                                | images are not available                               | images are available online                                       |
| Time to access data                                              | transport by car of physical media (minutes to hours)  | online data transfer (few seconds)                                |
| Potential value of automated quantitative analysis for decision  | not used yet for clinical decision                     | results of MIPAs readily available as decision parameters         |
| Infrastructure                                                   | static, proprietary, fixed scale                       | pay-per-use, scalable, and portable to different cloud providers  |
| Remote collaboration                                             | by phone                                               | by video-conference with access to the patient data               |
Acknowledgments. This work has been funded by ITEA2 10004: MEDUSA. |
01757656 | en | [
"info.info-es"
] | 2024/03/05 22:32:10 | 2018 | https://cea.hal.science/cea-01757656/file/main.pdf | Maha Kooli
Henri-Pierre Charles
Clement Touzet
Bastien Giraud
Jean-Philippe Noel
Smart Instruction Codes for In-Memory Computing Architectures Compatible with Standard SRAM Interfaces
This paper presents the computing model of an In-Memory Computing architecture based on an SRAM memory that embeds computing abilities. This memory concept offers significant performance gains in terms of energy consumption and execution time. To handle the interaction between the memory and the CPU, new memory instruction codes were designed. These instructions are communicated by the CPU to the memory using standard SRAM buses. This implementation allows (1) embedding In-Memory Computing capabilities in a system without Instruction Set Architecture (ISA) modification, and (2) finely interleaving CPU instructions and in-memory computing instructions.
I. INTRODUCTION
In-Memory Computing (IMC) represents a new concept of data computation that has been introduced to overcome the von Neumann bottleneck in terms of data transfer rate. This concept aims to reduce the traffic of data between the memory and the processor. Thus, it offers a significant reduction of energy consumption and execution time compared to the conventional computer system, where the computation units (ALU) and the storage elements are separated. Hardware security improvements can also be expected from this system architecture (e.g., mitigation of side-channel attacks).
The IMC concept has only recently become a focus of research. The objective of our research work is to address the different technological layers of an IMC system: silicon design, system architecture, compilation and programming flow. This makes it possible to build a complete IMC system that can then be industrialized. In previous publications, we introduced our novel In-Memory Power Aware CompuTing (IMPACT) system. In [START_REF] Akyel | DRC 2: Dynamically Reconfigurable Computing Circuit based on memory architecture[END_REF], we presented the IMPACT concept based on an SRAM architecture and the possible in-memory arithmetic and logic operations. In [START_REF] Kooli | Software Platform Dedicated for In-Memory Computing Circuit Evaluation[END_REF], we proposed a dedicated software emulation platform to evaluate the IMPACT system performance. The results achieved in these papers show a significant performance improvement of the IMPACT system compared to conventional systems. In the present research work, we focus on a new important step of the design of a complete IMPACT system, namely the communication protocol between the memory and the Central Processing Unit (CPU).
Fig. 1 presents a comparison of the communication protocol of a conventional system, of a GPU-based system and of our IMPACT system. In a conventional system based on the von Neumann architecture, the traffic between the memory and the CPU is heavy: several instruction fetches and data transfers occupy the system buses during the computation (Fig. 1.a). In systems integrating accelerators (e.g., GPUs), the computation is performed in parallel and only a single instruction fetch is needed; however, data transfers are still required over the data bus (Fig. 1.b). The traffic of our IMPACT system is completely different from the previous systems: no data transfer over the system buses is required since the computation is performed inside the memory, and only one instruction transfer towards the memory is required (Fig. 1.c). Indeed, the IMPACT system presents a new concept that completely changes the memory features by integrating computation abilities inside the memory boundary. Therefore, the usual communication protocol between the memory and the CPU is not fully compatible with the specific IMPACT system architecture and has to be redefined to manage the new process of instruction execution.
In this paper, we push our research work on the IMPACT system one step further by:
• introducing a novel communication protocol between the CPU and the memory that is able to manage the transfer of the IMPACT instructions to the memory;
• defining the ISA that corresponds to this protocol.
The remainder of this paper is organized as follows. Section II provides a summary of the architecture and the communication protocol used in a conventional system. Section III discusses related works. Section IV introduces the IMPACT instruction codes and the communication protocol. In Section V, we provide a possible solution to integrate the proposed IMPACT instructions inside an existing processor ISA. Finally, Section VI concludes the paper.
II. BACKGROUND
In most traditional computer architectures, the memory and the CPU are tightly connected. Conventionally, a microprocessor has a number of electrical connections on its pins dedicated to selecting an address in the main memory, and another set of pins to read/write the data stored at that location. The buses connecting the CPU and the memory are one of the defining characteristics of a system: they handle the communication protocol between the memory and the CPU and transfer different types of information between components. In particular, we distinguish, as shown in Fig. 2, three types:
• Data bus: It is bidirectional. It enables the transfer of data stored in the memory towards the CPU, or vice versa.
• Address bus: It is a unidirectional bus that enables the transfer of addresses from the CPU to the memory. When the CPU needs a data word, it sends the corresponding memory location via the address bus, and the memory then sends back the data via the data bus. When the processor wants to store data in the memory, it sends the memory location where the data will be stored via the address bus, and the data via the data bus.
• Control bus: It is a set of additional signals defining the operating mode, read/write, etc.
When a program is executed, the processor proceeds through the following steps for each instruction:
1) Fetch the instruction from memory: The CPU transmits the instruction address via the address bus; the memory then forwards the instruction stored at that location via the data bus.
2) Decode the instruction using the decoder: The decoding process allows the CPU to determine which instruction will be performed. It consists in extracting the opcode and the input operands and moving them to the appropriate registers in the register file of the processor.
3) Access memory (in case of read/write instructions): For a 'read' instruction, this step consists in sending a memory address on the address bus and receiving the value on the data bus. A 'write' instruction consists in sending data on the data bus, which is then copied into a memory address sent on the address bus. The control bus is used to activate the read or write mode.
4) Execute the instruction.
5) Write-back (in case of arithmetic/logic instructions): the ALU performs the computation and writes the result back into the corresponding register.
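As a rough illustration of these steps, the toy accumulator machine below (Python; the three-instruction program and its encoding are invented for illustration and do not reflect any real ISA) walks through fetch, decode, memory access, execute and write-back:

```python
# Toy illustration of the conventional fetch/decode/execute cycle described above.
memory = {0: ("LOAD", 10), 1: ("ADD", 11), 2: ("STORE", 12)}   # code segment: address -> instruction
data   = {10: 5, 11: 7, 12: 0}                                  # data segment
acc, pc = 0, 0
while pc in memory:
    opcode, operand = memory[pc]        # 1) fetch via the address/data buses, 2) decode
    if opcode == "LOAD":                # 3) memory access (read)
        acc = data[operand]
    elif opcode == "ADD":               # 4) execute in the ALU, 5) write-back to the accumulator
        acc = acc + data[operand]
    elif opcode == "STORE":             # 3) memory access (write)
        data[operand] = acc
    pc += 1
print(data[12])                         # -> 12
```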
III. RELATED WORKS A. In-Memory Computing
Processing in-Memory (PiM), Logic in-Memory (LiM) and IMC architectures have been widely investigated in the context of integrating processor and memory as close as possible, in order to reduce memory latency and increase the data transfer bandwidth. All these architectures attempt to reduce the physical distance between the processor and the memory.
In Fig. 3, we represent the main differences between PiM, LiM and IMC architectures. PiM [START_REF] Gokhale | Processing in memory: The Terasys massively parallel PIM array[END_REF] [4] [START_REF]UPMEM[END_REF] consists in putting the computation unit next to the memory while keeping the two dissociated. It is generally implemented in stand-alone memories fabricated with a DRAM process. LiM and IMC architectures are based on embedded memories fabricated with a CMOS process. LiM [START_REF] Matsunaga | MTJ-based nonvolatile logic-in-memory circuit, future prospects and issues[END_REF] enables distributing non-volatile memory elements over a logic-circuit plane. IMC consists in integrating computation units inside the memory boundary, and represents a different concept that completely changes the memory behavior by integrating some in-situ computation functions located either before or after the sense amplifier circuits. As a result, the communication protocol between the memory and the processor has to be redefined. Compared to LiM, IMC enables non-destructive computing in the memory, i.e., the operand data are not lost after computation. Recent research works have started to explore and evaluate the performance of this concept. It has been applied both on volatile memories [START_REF] Jeloka | A 28 nm configurable memory (TCAM/BCAM/SRAM) using push-rule 6T bit cell enabling logic-in-memory[END_REF] [8] [START_REF] Aga | Compute caches[END_REF] and non-volatile memories [START_REF] Wang | DW-AES: A Domain-Wall Nanowire-Based AES for High Throughput and Energy-Efficient Data Encryption in Non-Volatile Memory[END_REF] [START_REF] Li | Pinatubo: A processing-in-memory architecture for bulk bitwise operations in emerging non-volatile memories[END_REF].
Most of the existing IMC studies focus on the IMC hardware design. The system buses have never been presented, nor have the interactions between the CPU and the memory. Moreover, no ISA to implement the IMC system architecture has been defined yet. All these points clearly limit the design of a complete IMC system. In this paper, we focus on the communication protocol between the memory and the CPU for the IMPACT system and we define the ISA. This study is a basic step in the development of a complete IMC system across different technological layers, from the hardware to the software. In addition, the IMPACT system is able to support operations with multi-operand selection. Thus, the classic instruction format (opcode + two source operand addresses + a destination address) cannot be used due to limitations in the bus size.
B. Communication Protocols in Computing Systems
The communication protocol between the memory and CPU has been widely presented and discussed in different works [START_REF] Vahid | Embedded system design: a unified hardware/software introduction[END_REF] [13] [START_REF] Null | The essentials of computer organization and architecture[END_REF]. This protocol is implemented and managed using buses. Existing system buses (data bus, address bus, control bus) are generally used to transfer the data or the addresses, but not the instructions. For the IMPACT system, the CPU should communicate the instruction to the memory so that the memory executes/computes this instruction. In the existing computer architecture, the buses are designed to enable only read and write operations, but not arithmetic and logic operations. This paper introduces a new communication protocol that is able to efficiently manage the interaction between the CPU and the IMPACT memory with a full compatibility with the existing buses.
IV. IMPACT MEMORY INSTRUCTION CODES
In this Section, we define the different IMPACT memory instruction codes that will be transmitted by the processor to the memory via the system buses. The challenge is to keep the IMPACT system architecture as close as possible to a conventional system architecture, i.e., we aim to bring as few changes as possible to the conventional system implementation in order to facilitate the integration of the IMPACT system into existing system architectures. In addition, this makes it possible to propose a system that is able to interleave the execution of conventional CPU instructions and IMPACT instructions.
A. IMPACT Specific Operations
The IMPACT system is based on an SRAM architecture and performs operations inside the memory macro thanks to an array composed of bitcells with dedicated read ports. The IMPACT system circuitry enables the same logic and arithmetic operations as a basic ALU. It also presents new specific features:
• Multi-Operand Operations: The IMPACT system circuitry is able to perform logic and memory operations not only on two operands, as in conventional systems, but on multiple operands. This feature is achieved thanks to the multi-row selector, which generates a defined selection pattern (e.g., one line out of four, the top half of the rows, etc.).
• Long-Word Operations: The IMPACT system circuitry is able to perform arithmetic/logic/memory operations on words whose size can be up to the physical row size of the memory. The operand size is no longer limited by the register size (which is much smaller than the maximum memory row size).
B. IMPACT Instruction Formats
Given these specific features of the IMPACT operations, we propose two new formats from which the IMPACT memory instruction codes are built.
(a) Multi-Operand Instruction Format: Opcode | Address | Mask | SP | Output | SI
(b) Two-Operand Instruction Format: Opcode | Address 1 | Address 2 | SP | Output | SI
1) Multi-Operand Instruction Format:
The multi-operand format defines the structure of an instruction that performs a multi-operand operation. In conventional system architectures, the instruction size is usually 32 bits [START_REF]MIPS32 Architecture[END_REF], which does not allow encoding the addresses of all the operands. Therefore, we propose to define a pattern that selects the lines storing the operand data of a given operation. This pattern is built from a pattern code (defined by both an address and a mask) driving a specific row selector.
To implement this instruction, we propose a multi-operand format, as shown in Fig. 4.a, that encodes:
- The opcode of the operation. In Fig. 6, we provide the list of the logic multi-operand operations that the IMPACT system is able to execute.
- The row selector address.
- The row selector mask.
- The output address, where the computation result is stored.
- A Smart Instruction (SI) bit to inform the memory about the instruction type: an IMPACT or a conventional instruction.
- A Select Pattern (SP) bit to enable/disable the pattern construction using the row selector.
In Fig. 5, we provide an example of the operating mode of the IMPACT system when it is executing a logic 'OR' operation with multiple operands. As input, the system takes the instruction components (opcode, pattern code, etc.). Based on the bits of the pattern code address and mask, the specific row selector of the IMPACT memory builds a regular pattern to select the multiple memory lines. In this row selector, we create a sort of path (i.e., a tree) filled regularly with '0' and '1' bits. The rule then consists in looking at the bits of the mask: if the bit is '1', the two branches of the tree are selected; if the bit is '0', only the branch corresponding to the address bit is selected. This method thus makes it possible to build regular output patterns. These patterns can then be refined by adding/deleting a specific line; for that, we define specific IMPACT operations ('PatternAdd' and 'PatternSub'). As shown in Fig. 5, the pattern can also be stored in the pattern register in case we need to refine it or to reuse it in the future. We assume that this refinement process could take some additional clock cycles to build the required pattern; however, for applications where the pattern is used several times, the gain would be considerable. Once the pattern is built, the last step of the instruction execution consists in selecting the lines of the SRAM memory array that correspond to a '1' in the pattern, and performing the operation. The advantage of this format consists in not explicitly encoding all the operand addresses inside the instruction. To the best of our knowledge, there is no computer architecture that defines such instructions using this pattern methodology.
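A functional sketch of this selection rule is given below (Python; the row indexing and the tree-walk order are assumptions, since they are not fixed by the description above): a mask bit at '1' keeps both branches, a mask bit at '0' follows the corresponding address bit, and the selected rows are then combined, here with a multi-operand OR.

```python
def expand_pattern(address, mask, n_bits):
    """Rows selected by an (address, mask) pattern code.
    Mask bit = 1: keep both branches of the tree (wildcard).
    Mask bit = 0: follow the corresponding address bit."""
    rows = [0]
    for bit in reversed(range(n_bits)):            # walk the selection tree from MSB to LSB
        a = (address >> bit) & 1
        if (mask >> bit) & 1:                      # wildcard bit: both branches kept
            rows = [r << 1 for r in rows] + [(r << 1) | 1 for r in rows]
        else:                                      # constrained bit: follow the address
            rows = [(r << 1) | a for r in rows]
    return sorted(rows)

def multi_operand_or(memory_rows, address, mask, n_bits):
    """Multi-operand OR over every row selected by the pattern."""
    result = 0
    for r in expand_pattern(address, mask, n_bits):
        result |= memory_rows[r]
    return result

# Example: with 4-bit row addresses, address=0b0000 and mask=0b1100
# selects one row out of four: [0, 4, 8, 12].
print(expand_pattern(0b0000, 0b1100, 4))
```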
2) Two-Operand Instruction Format: The two-operand instruction format is the conventional format for instructions with at most two source addresses. This format is used for the long-word operations. Each source address is the address of the memory row on which the operation will be performed. As shown in Fig. 4.b, the two-operand instruction format encodes:
- The opcode of the operation. In Fig. 6, we provide the list of all the operations that the IMPACT system is able to execute.
- The addresses of the first and second operands.
- The output address, where the computation result is stored.
- The SI bit to inform the memory about the instruction type: an IMPACT or a conventional instruction.
- The SP bit to activate/deactivate the pattern construction using the row selector. In this format the pattern construction should be disabled for all the instructions.
C. IMPACT Communication Protocol
In mainstream memories, the system buses are used to communicate between the memory and the CPU during the execution of the different instructions of the program (which are stored in the code segment of the memory). In the IMPACT system, the instructions are built on the fly at compilation time by the processor, respecting the formats defined in Subsection IV-B. They are then transferred from the processor to the memory via the standard SRAM buses (data and address buses). The compilation aspect and the communication between the program code and the processor are not detailed in the present paper. In this Section, we present the implementation of the IMPACT instructions on the data and address buses.
For the proposed communication protocol, we consider a 32-bit data bus and a 32-bit address bus. We then propose a specific encoding of the instruction elements over the two buses. As shown in Fig. 7, we make use of the data and address buses to encode the opcode, the source addresses, the output address, and the additional one-bit SP and SI signals.
This implementation does not change the implementation of a conventional system, and the communication protocol is able to address both the IMPACT and the SRAM memories.
1) Data Bus: The data bus encodes the opcode on 7 bits. In fact, the IMPACT operations are hierarchically ranked as shown in Fig. 6. Then, the data bus encodes the source addresses, each over 12 bits, leading to a maximum 4096 words of IMPACT memory. In case of two-operand format, we encode successively the two operand addresses. In case of multi-operand format, we encode the address and the mask of the pattern code. The last bit of the data bus is occupied by the SP signal as described in Subsection IV-B.
2) Address Bus: The address bus encodes the output address over the 12 least significant bits (LSBs). It also reserves one bit, the most significant bit (MSB), for the smart instruction signal in order to inform the memory of the arrival of an IMPACT instruction.
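The field packing can be sketched as follows (Python). Only the field widths (7-bit opcode, two 12-bit source addresses, 1-bit SP and SI, 12-bit output address) come from the description above; the exact bit positions inside each 32-bit word and the example opcode value are assumptions.

```python
def pack_impact_instruction(opcode, addr1, addr2_or_mask, out_addr, sp, si):
    """Pack an IMPACT instruction onto the 32-bit data and address buses.
    Field widths follow the text; the bit ordering is an assumed layout."""
    assert opcode < (1 << 7) and addr1 < (1 << 12) and addr2_or_mask < (1 << 12)
    assert out_addr < (1 << 12)
    # Data bus word: [opcode:7 | addr1:12 | addr2 or mask:12 | SP:1]
    data_word = (opcode << 25) | (addr1 << 13) | (addr2_or_mask << 1) | (sp & 1)
    # Address bus word: SI flag in the MSB, output address in the 12 LSBs
    addr_word = ((si & 1) << 31) | (out_addr & 0xFFF)
    return data_word, addr_word

# e.g. a two-operand operation writing its result to row 0x00A (opcode value is arbitrary)
data_w, addr_w = pack_impact_instruction(0x12, 0x001, 0x002, 0x00A, sp=0, si=1)
print(hex(data_w), hex(addr_w))
```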
V. IMPLEMENTATION AT ISA LEVEL
We propose, in this Section, a possible solution to build the proposed IMPACT instruction codes from the instruction set architecture (ISA) of a given processor. The solution consists in using an existing processor ISA without integrating new instructions or opcodes. In particular, we use the store instruction to carry the IMPACT operations (arithmetic/logic). The IMPACT opcode, as well as its operands, is encoded inside the operands (i.e., registers) of the conventional instruction. In Fig. 8, we provide, for an IMPACT addition operation, the compilation process used to create the corresponding IMPACT assembly code. First, the compiler generates instructions that load the IMPACT opcode and operand addresses into specific registers. These instructions are transferred via the system buses respecting the conventional communication protocol. Then, the compiler generates the store instruction with the previously assembled specific registers. This store instruction is then transferred to the memory through the system buses respecting the IMPACT communication protocol defined in Subsection IV-C.
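A sketch of this lowering step is given below (Python emitting illustrative assembly text). The mnemonics, register names, bit layout and opcode value are placeholders chosen for illustration, not those of a specific processor ISA.

```python
def lower_impact_add(addr1, addr2, out_addr):
    """Lower an in-memory addition into conventional instructions: two register
    loads carrying the packed IMPACT words, then one store whose data and address
    operands drive the SRAM buses."""
    OP_ADD = 0x12                                              # placeholder opcode value
    data_word = (OP_ADD << 25) | (addr1 << 13) | (addr2 << 1)  # SP = 0 (no pattern)
    addr_word = (1 << 31) | (out_addr & 0xFFF)                 # SI = 1 (IMPACT instruction)
    return [
        f"li   r1, {hex(data_word)}    # packed opcode + operand addresses",
        f"li   r2, {hex(addr_word)}    # packed output address + SI flag",
        "sw   r1, 0(r2)               # the store transfers the IMPACT instruction",
    ]

for line in lower_impact_add(0x001, 0x002, 0x00A):
    print(line)
```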
The advantage of this solution is its compatibility with the processor ISA (no problem in case of version changes). However, the compilation process is more complex, since it requires generating a preliminary list of instructions before generating the IMPACT instruction with the store instruction. Other solutions to integrate the proposed IMPACT memory instruction codes in the ISA are possible; however, they require changing the processor ISA by integrating one or more new instructions (e.g., 'IMPACTAdd'). The compilation process would then be simpler since it would not require generating the preliminary list of instructions, but this solution could have compatibility problems with future ISA versions.
VI. CONCLUSION
This paper discusses the integration of In-Memory Computing capabilities into a system composed of a processor and a memory without changing the processor implementation and instruction set. This is achieved by inverting the von Neumann model: instead of reading instructions from memory, the CPU communicates the instructions, in specific formats, to the IMPACT memory via the standard SRAM buses.
The proposed approach makes it possible to benefit from the large speedup in terms of execution time and energy consumption offered by the IMPACT system, but also to easily interleave conventional CPU instructions and in-memory computing instructions. One main advantage of this approach is to have a similar data layout view on both the CPU and the IMPACT side, whereas other conventional approaches (e.g., GPUs) need to copy data and change the layout at run-time.
As future work, we aim to continue characterizing applications at a high level, and to develop the compiler for this system. High-level optimizations of classical programming languages or new programming paradigms are also under investigation.
Fig. 1. Comparison between the communication protocols of (a) a conventional system (von Neumann), (b) a system with GPU accelerators (von Neumann with Single Instruction Multiple Data (SIMD)) and (c) the IMPACT system (non-von Neumann).
Fig. 2. Conventional computing system architecture.
Fig. 3. Comparison between IMC, LiM and PiM architectures.
Fig. 4. Description of the IMPACT instruction formats.
Fig. 5. Illustration of a multi-operand instruction (OR) requiring a pattern code.
Fig. 7. Example of data and address bus use to encode IMPACT and conventional instructions.
Fig. 8. Implementation of the IMPACT store instruction. |
01757665 | en | [
"info.info-es",
"info.info-ar",
"info"
] | 2024/03/05 22:32:10 | 2018 | https://cea.hal.science/cea-01757665/file/DATE-18-HPC.pdf | Henri-Pierre Charles
CONCLUSION & PERSPECTIVES

BRING THE COMPUTATION INTO MEMORY
When data start to look like motorists in a traffic jam during rush hour (data@memory ↔ data@comp_unit), it is time to consider teleworking, in other words in-memory computing. The largest part of the power consumption of logic and arithmetic operations is due to memory access (the power wall): the way basic operations are performed has to be restudied, and many applications could gain in performance.

Computing in dedicated units:
• High data transfer between the ALU and the memory
• Power hungry
• Interconnect and memory security issues

In-memory computing:
• Reduced data transfer
• Energy-efficient, execution time acceleration
• Security reinforcements (limitation of side-channel attacks on buses)

Enabling in-memory operations reduces the latency and energy consumption due to data transfer. The SRAM bit cell array allows:
• Long-word arithmetic/logic operations, not limited by the register size but by the memory line size
• Multi-row selection for some logic operations
• Simultaneous storing in different addresses
(K. C. Akyel, et al., ICRC, 2016)
IMPACT ROADMAP
• A new communication protocol between the CPU and the IMPACT memory: compatible with the existing system architecture (conventional system buses); enables interleaving the CPU and the in-memory instruction execution
• Work on the compiler: generate the assembly code respecting the communication protocol; interleave the IMC and CPU instruction execution
• Based on the performance evaluation, optimize the data set-up in the memory: data alignment in the IMC; data interleaving
• Evaluate high-level benchmarks (ongoing)

IMPACT OPERATIONS AKA OPCODES (maximum two input operands)
• Memory operation (memory-line-size words): Set, Reset
• Logic & memory operation (memory-line-size words): Not, And, Or, Nand, Nor, Xor, Nxor
• Logic shift (memory-line-size words): Shift Left, Shift Right
• Arithmetic operation (8-, 16-, 32- and 64-bit words): Addition, Subtraction, Increment, Decrement, Comparison
DATA INTERLEAVING: A TEST CASE
/* One image line stored as a wide vector type (OpenCL/Clang vector extension) */
typedef unsigned char ImgLine __attribute__((ext_vector_type(N)));
typedef ImgLine Image[M];
SI = 1: if IMPACT instr.
SP: a select pattern bit to enable/disable the pattern |
01757390 | en | [
"spi.gproc",
"chim.poly"
] | 2024/03/05 22:32:10 | 2009 | https://hal.science/hal-01757390/file/Kamar-Arcachon-2009-HAL.pdf | Martial SAUCEAU* Karl Kamar
Martial Sauceau
email: martial.sauceau@mines-albi.fr
Élisabeth Rodier
Jacques Fages
Elisabeth Rodier
Biopolymer foam production using a supercritical CO2 (SC CO2)-assisted extrusion process
INTRODUCTION
Polymers are widely used in several areas. However, due to their slow degradation and the predicted exhaustion of the world petroleum reserves, significant environmental problems have arisen. Therefore, it is necessary to replace them with bioplastics that degrade in a short time when exposed to a biologically active environment [START_REF] Bucci | PHB packaging for the storage of food products[END_REF]. Biopolymers like PHAs (polyhydroxyalkanoates) are marketed as the natural substitutes for common polymers, as they are 100% biodegradable polymers. PHAs are polyesters of various HAs which are synthesised by numerous microorganisms as energy reserve materials in the presence of excess carbon source. Poly(3-hydroxybutyrate) (PHB) and its copolymers with hydroxyvalerate (PHB-HV) are the most widely found members of this biopolymer group and were also the first to be discovered, and most widely studied PHA [START_REF] Khanna | Recent advances in microbial poly-hydroxyalkanoates[END_REF]. They possess properties similar to various synthetic thermoplastics like polypropylene and hence can be used alternatively. Specifically, PHB exhibits properties such as melting point, a degree of cristallinity and glass transition temperature, similar to polypropylene (PP). Although, PHB is stiffer and more brittle than PP, the copolymerization of PHB with 3-hydroxyvalerate (PHB-HV) produces copolymers which are less stiff and tougher. That is to say that there is a wide range of applications for these copolymers [START_REF] Gunaratne | Multiple melting behaviour of poly(3-hydroxybutyrate-cohydroxyvalerate) using step scan DSC[END_REF]. The properties of this copolymer depend on the HV content, which determines the polymer crystallinity [START_REF] Peng | Isothermal crystallization of poly(3hydroxybutyrate-co-hydroxyvalerate)[END_REF].
Extrusion is a process converting a raw material into a product of uniform shape and density by forcing it through a die under controlled conditions [START_REF] Rauwendaal | Polymer Extrusion[END_REF]. It has extensively been applied in the plastic and rubber industries, where it is the most important manufacturing process. A particular application concerns the generation of polymeric foams. Polymeric foams are expanded materials with large applications in the packaging, insulating, pharmaceutical and car industries because of their high strength/weight ratio or their controlled release properties.
Conventional foams are produced using either chemical or physical blowing agents. Various chemical blowing agents, which are generally low molecular weight organic compounds, are mixed with a polymer matrix and decompose when heated beyond a threshold temperature. This results in the release of a gas, and thus the nucleation of bubbles. This implies however the presence of residues in the porous material and the need for an additional stage to eliminate them.
Injection of scCO 2 in extrusion process modifies the rheological properties of the polymer in the barrel of the extruder and scCO 2 acts as a blowing agent during the relaxation when flowing through the die [START_REF] Sauceau | Improvement of extrusion processes using supercritical carbon dioxide[END_REF]. The pressure drop induces a thermodynamic instability in the polymer matrix, generating a large number of bubbles. The growth of cells continues until the foam is rigidified (when T<T g ). Moreover, its relatively high solubilisation in the polymer results in extensive expansion at the die. The reduction of viscosity decreases the mechanical constraints and the operating temperature within the extruder. Thus, coupling extrusion and scCO 2 would allow the use of fragile or thermolabile molecules, like pharmaceutical molecules. The absence of residues in the final material is also an advantage for a pharmaceutical application.
Our lab has developed a scCO 2 -assisted extrusion process that leads to the manufacturing of microcellular polymeric foams and already elaborated microcellular foams using a biocompatible amorphous polymer [START_REF] Sauceau | Effect of supercritical carbon dioxide on polystyrene extrusion[END_REF][START_REF] Nikitine | Residence time distribution of a pharmaceutical grade polymer/supercritical CO2 melt in a single screw extrusion process[END_REF][START_REF] Nikitine | Controlling the structure of a porous polymer by coupling supercritical CO2 and single screw extrusion process[END_REF]. In this work, this process has been applied to PHB-HV. Foam production of semi-crystalline polymer is less frequent in the literature. Crystallinity hinders the solubility and diffusion of CO 2 into the polymer and leads consequently to less uniform porous structure [START_REF] Doroudiani | Processing and characterization of microcellular foamed high-density polyethylene/isotactic polypropylene blends[END_REF]. Moreover, it has been shown that a large volume expansion ratio could be achieved by freezing the extrudate surface of the polymer melt at a reasonably low temperature [START_REF] Park | Low density microcellular foam processing in extrusion using CO2[END_REF]. Thus, in this work, in order to control and improve the porous structure of the PHB-HV, the influence of melt and die temperatures have been studied.
MATERIALS AND METHODS
PHB-HV (Mw = 600 kDa), with a HV content of 13 % and plasticized with 10 % of a copolyester, was purchased from Biomer (Germany). The melting temperature was measured at 159°C by DSC (ATG DSC 111, Setaram). The solid density ρ_P, determined by helium pycnometry (Micromeretics, AccuPYC 1330), is about 1216 kg.m-3. A rheological study at atmospheric pressure has been performed in oscillatory mode (MARS, Thermo Scientific). The polymer viscosity decreases when temperature and shear rate increase, which is a characteristic behaviour of a pseudoplastic fluid (Figure 1). This characterization step helped in choosing the operating conditions to process PHB-HV by extrusion. These conditions have to ensure that the polymer flows well enough through the barrel without being thermally degraded.
Figure 2 shows the experimental set up, which has previously been detailed elsewhere [START_REF] Nikitine | Residence time distribution of a pharmaceutical grade polymer/supercritical CO2 melt in a single screw extrusion process[END_REF][START_REF] Nikitine | Controlling the structure of a porous polymer by coupling supercritical CO2 and single screw extrusion process[END_REF]. The single-screw extruder has a 30 mm-screw diameter and a length to diameter ratio (L/D) of 35 (Rheoscam, SCAMEX). A great L/D ratio generally indicates a good capacity of mixing and melting but important energy consumption. The screw is divided into three parts. The first one has a length to diameter ratio of 20 and the two others have a length to diameter ratio of 7.5. Between each part, a restriction ring has been fitted out in order to obtain a dynamic gastight which prevents scCO 2 from backflowing. The first conical part allows the transport of solid polymers and then, their melting and plasticizing. Then, the screw has a cylindrical geometry from the first gastight ring to the die. This die has a diameter of 1 mm and a length of 11.5 mm. The temperature inside the barrel is regulated at five locations: T a and T b before the CO 2 injection, T c and T d after the injection and T e in the die.
Figure 1: Evolution of viscosity with pulsation
There are three pressure and two temperature sensors: P_1 after the CO2 injector, P_2 and T_1 before the second gastight ring and P_3 and T_2 by the die. This allows measuring the temperature and the pressure of the polymer inside the extruder. Errors associated with the pressure and temperature sensors were about 0.2 MPa and 3.3°C respectively. CO2 (N45, Air liquide) is pumped from a cylinder by a syringe pump (260D, ISCO) and then introduced at a constant volumetric flow rate. The pressure in the CO2 pump is kept slightly higher than the pressure P_1. The CO2 injector is positioned at a length-to-diameter ratio of 20 from the feed hopper. It corresponds to the beginning of the metering zone, that is to say the part where the channel depth is constant and equal to 1.5 mm. The pressure, the temperature and the volumetric CO2 flow rate are measured within the syringe pump. The CO2 density, obtained from the NIST website using the Span and Wagner equation of state [START_REF] Span R | A New Equation of State for Carbon Dioxide Covering the Fluid Region from the Triple-Point Temperature to 1100 K at Pressures up to 800 MPa[END_REF], is used to calculate the mass flow rate and thus the CO2 mass fraction w_CO2. For each experiment, only the temperature of the metering zone T_d and of the die T_e were changed. The three other temperatures T_a, T_b and T_c were kept constant at 160°C. The CO2 mass fraction w_CO2 was also kept constant at 1 %, which is much less than the solubility [START_REF] Cravo | Solubility of carbon dioxide in a natural biodegradable polymer: determination of diffusion coefficientsJ[END_REF]. Three series of experiments were carried out. T_d was fixed at 140°C, 135°C and 130°C respectively and T_e varied from 140 down to 110°C. At lower values of T_e, the extruder stopped because the pressure P_3 exceeded the established alarm value.
Once steady-state conditions were reached, extrudates were collected and water-cooled at ambient temperature in order to freeze the extrudate structure. Several samples were collected during each experiment in order to check the homogeneity of the extrudates. To determine the apparent density ρ_app, samples were weighed and their volumes were evaluated by measuring their diameter and length with a vernier caliper (Facom). To obtain this apparent density with good enough precision, the mean of 6 measurements was used. The porosity, defined as the ratio of the pore volume to the total volume, is calculated by equation 1:
ε = 1 - ρ_app / ρ_P    (1)
ρ_P is the PHB-HV density and ρ_app the apparent density of the extrudates. The theoretical maximum porosity ε_max is obtained if all the dissolved CO2 became gaseous inside the extrudate at ambient conditions and thus created porosity. It can be calculated by the following equation:
ε_max = (w_CO2 · ρ_P) / (w_CO2 · ρ_P + ρ_CO2(atm))    (2)
w_CO2 is the CO2 mass fraction and ρ_CO2(atm) is the CO2 density at ambient conditions.
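As a quick numerical check of equations (1) and (2), a small Python sketch is given below; the ambient CO2 density and the sample apparent density are assumed values, the other figures are taken from the text.

```python
rho_P = 1216.0        # PHB-HV solid density [kg/m3] (measured by helium pycnometry)
rho_CO2_atm = 1.8     # CO2 density at ambient conditions [kg/m3] (assumed value)
w_CO2 = 0.01          # CO2 mass fraction kept at 1 %

eps_max = (w_CO2 * rho_P) / (w_CO2 * rho_P + rho_CO2_atm)   # Eq. (2)
print(f"theoretical maximum porosity ~ {eps_max:.0%}")       # ~87 %, close to the ~90 % quoted

rho_app = 365.0       # example apparent density of an extrudate [kg/m3] (assumed)
eps = 1.0 - rho_app / rho_P                                  # Eq. (1)
print(f"porosity of this sample ~ {eps:.0%}")                # ~70 %
```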
To complete the characterization of the porosity structure, samples were examined by scanning electron microscopy (ESEM, FEG, Philips).
RESULTS
The porosity results are presented in Figure 3. It is noticeable that, for all experiments, the obtained porosity is lower than the theoretical maximum porosity ε_max, which is estimated at about 90 %. The highest porosity, obtained at the lowest T_d and T_e, 130 and 110°C respectively, is about 70 %.
The porosity increases with decreasing temperature T d . The evolution of porosity with the temperature T e depends on the value of T d . At T d equal to 140°C, the porosity is constant, whereas at T d lower than 140°C, the porosity decreases with increasing die temperature. It was previously observed for polystyrene that, at a reasonably low temperature of polymer melt, there exists an optimal die temperature for which large volume expansion ratio are achieved by freezing the extrudate surface [START_REF] Park | Low density microcellular foam processing in extrusion using CO2[END_REF]. This effect is explained because more gas is retained in the foam at lower temperature and used for cell nucleation and growth. However, when the nozzle temperature was further decreased, the volume expansion ratio decreased because of the increased stiffness of the frozen skin layer. Our experiments might be explained in the same way. Thus T e would be still too high to obtain higher porosity. As porosity increases, it seems thus that growth phenomena occur at lower temperature. This evolution is opposite to previous results in which coalescence and growth phenomena occurred when temperature increased and led to larger porosity [START_REF] Sauceau | Effect of supercritical carbon dioxide on polystyrene extrusion[END_REF][START_REF] Park | Low density microcellular foam processing in extrusion using CO2[END_REF]. Indeed, it was believed that the polymer melt should be cooled substantially in order to increased melt strength and thus prevent cell coalescence.
Figure 2: Experimental device

Figure 1: Porosity evolution

Figure 2 presents pictures at two different values of T_d. It could be observed that the pores are large (more than 200 µm) and that they become fewer and larger when T_d decreases.

Figure 2: SEM pictures (a) Td=130°C (b) Td=140°C |
01757793 | en | [
"spi.meca.geme",
"spi.auto"
] | 2024/03/05 22:32:10 | 2018 | https://hal.science/hal-01757793/file/ICRA18_2547_Rasheed_Long_Marquez_Caro_HAL.pdf | Tahir Rasheed
email: tahir.rasheed@ls2n.fr
Philip Long
email: p.long@northeastern.edu
David Marquez-Gamez
email: david.marquez-gamez@irt-jules-verne.fr
Stéphane Caro
email: stephane.caro@ls2n.fr
Available Wrench Set for Planar Mobile Cable-Driven Parallel Robots
Available Wrench Set for Planar Mobile Cable-Driven Parallel Robots
Tahir Rasheed 1, Philip Long 2, David Marquez-Gamez 3 and Stéphane Caro 4, Member, IEEE
Abstract-Cable-Driven Parallel Robots (CDPRs) have several advantages over conventional parallel manipulators, most notably a large workspace. CDPRs whose workspace can be further increased by modification of the geometric architecture are known as Reconfigurable Cable-Driven Parallel Robots (RCDPRs). A novel concept of RCDPRs, known as a Mobile CDPR (MCDPR), which consists of a CDPR carried by multiple mobile bases, is studied in this paper. The system is capable of autonomously navigating to a desired location and then deploying into a standard CDPR. In this paper, we analyze the static equilibrium (SE) of the mobile bases when the system is fully deployed. In contrast to classical CDPRs, we show that the workspace of the MCDPR depends not only on the tension limits, but on the SE constraints as well. We demonstrate how to construct the Available Wrench Set (AWS) for a planar MCDPR with a point-mass end-effector using both the convex hull and hyperplane shifting methods. The obtained results are validated in simulation and on an experimental platform consisting of two mobile bases and a CDPR with four cables.
I. INTRODUCTION
A Cable-Driven Parallel Robot (CDPR) is a type of parallel manipulator whose rigid links are replaced by cables. The platform motion is generated by an appropriate control of the cable lengths. Such robots hold numerous advantages over conventional robots e.g. high accelerations, large payload to weight ratio and large workspace [START_REF] Pott | IPAnema: a family of cable-driven parallel robots for industrial applications[END_REF]. However, one of the biggest challenges in classical CDPRs which have a fixed cable layout, i.e. fixed exit points and cable configuration, is the potential collisions between the cables and the surrounding environment that can significantly reduce the workspace. By appropriately modifying the robot architecture, better performance can be achieved [START_REF] Gagliardini | A reconfiguration strategy for reconfigurable cable-driven parallel robots[END_REF]. CD-PRs whose geometric structure can be changed are known as Reconfigurable Cable-Driven Parallel Robots (RCDPRs). Different strategies, for instance maximizing workspace or increasing platform stiffness, have been proposed to optimize cable layout in recent work on RCDPRs [3]- [START_REF] Rosati | On the design of adaptive cable-driven systems[END_REF]. However, reconfigurability is typically performed manually, a costly and time consuming task.
Recently a novel concept of Mobile Cable-Driven Parallel Robots (MCDPRs) has been introduced in [START_REF] Rasheed | Tension Distribution Algorithm for Planar Mobile Cable-Driven Parallel Robots[END_REF] to achieve an autonomous reconfiguration of RCDPRs. A MCDPR is composed of a classical CDPR with q cables and a n degreeof-freedom (DoF) moving-platform mounted on p mobile bases (MBs). The MCDPR prototype that has been designed and built in the context of Echord++ 'FASTKIT' project is shown in Fig. 1. FASTKIT addresses an industrial need for flexible pick-and-place operations while being easy to install, keeping existing infrastructures and covering large areas. The prototype is composed of eight cables (q = 8), a six degree-of-freedom moving-platform (n = 6) and two MBs (p = 2). The overall objective is to design and implement a system capable of interacting with a high level task planner for logistic operations. Thus the system must be capable of autonomously navigating to the task location, deploying the system such that the task is within the reachable workspace and executing a pick-and-place task. In spite of the numerous advantages of the mobile deployable system, the structural stability must be considered. In [START_REF] Rasheed | Tension Distribution Algorithm for Planar Mobile Cable-Driven Parallel Robots[END_REF], a real time continuous tension distribution scheme that takes into account the dynamic equilibrium of the moving-platform and the static equilibrium (SE) of the MBs has been proposed. In this paper, we focus on the workspace analysis of MCDPRs.
The classical techniques used to analyze the workspace of CDPRs are wrench-closure workspace (WCW) [START_REF] Gouttefarde | Analysis of the wrenchclosure workspace of planar parallel cable-driven mechanisms[END_REF], [START_REF] Lau | Wrench-Closure Workspace Generation for Cable Driven Parallel Manipulators using a Hybrid Analytical-Numerical Approach[END_REF] and wrench-feasible workspace (WFW) [START_REF] Gouttefarde | Interval-analysisbased determination of the wrench-feasible workspace of parallel cable-driven robots[END_REF]. In this paper, WFW is chosen as it is more relevant from a practical viewpoint. WFW is defined as the set of platform poses for which the required set of wrenches can be balanced with wrenches generated by the cables, while maintaining the cable tension within the defined limits [START_REF] Gouttefarde | Interval-analysisbased determination of the wrench-feasible workspace of parallel cable-driven robots[END_REF]. For a given pose, the set of wrenches a mechanism can generate is defined as available wrench set (AWS), denoted as A . For classical CDPRs, AWS depends on the robot geometric architecture and the tension limits. The set of wrenches required to complete a task, referred to as the required wrench set (RWS), denoted as R, will be generated if it is fully included by A : R ⊆ A .
(1)
For MCDPRs, the classical definition of AWS for CDPRs must additionally consider the Static Equilibrium (SE) constraints associated with the MBs. The two main approaches used to represent the AWS for CDPRs are the Convex hull method and the Hyperplane shifting method [START_REF] Grünbaum | Convex Polytopes[END_REF]. Once the AWS is defined, WFW can be traced using the Capacity Margin index [START_REF] Guay | Measuring how well a structure supports varying external wrenches[END_REF], [START_REF] Ruiz | Arachnis: Analysis of robots actuated by cables with handy and neat interface software[END_REF].
This paper deals with the determination of the AWS required to trace the workspace for planar MCDPRs with point-mass end-effector. Figure 2 illustrates a Planar MCDPR with p = 2 MBs, q = 4 number of cables and n = 2 DoF point mass end-effector. In this paper, wheels are assumed to form a simple contact support with the ground and friction is sufficient to prevent the MBs from sliding. This paper is organized as follows. Section II presents the parameterization of a MCDPR. Section III deals with the SE conditions of the MBs using free body diagrams. Section IV is about the nature of AWS for MCDPRs by considering the SE of the platform as well as the SE of the MBs. Section V discusses how to trace the workspace using the capacity margin index. Section VI shows some experimental validations of the concept. Finally, conclusions are drawn and future work is presented in Section VII.
II. PARAMETERIZATION OF A MCDPR
Let us denote the jth Mobile Base (MB) as M_j, j = 1, . . . , p. The ith cable mounted onto M_j is named C_ij, i = 1, . . . , q_j, where q_j denotes the number of cables carried by M_j. The total number of cables of the MCDPR is equal to
q = \sum_{j=1}^{p} q_j .    (2)
Let u_ij be the unit vector of C_ij pointing from the point-mass end-effector to the cable exit point A_ij. Let t_ij be the cable tension vector along u_ij, expressed as
t i j = t i j u i j , (3)
where t i j is the tension in the ith cable mounted on M j . The force f i j applied by the ith cable onto M j is expressed as
f i j = -t i j u i j . (4)
III. STATIC EQUILIBRIUM OF A PLANAR MCDPR
For a planar MCDPR with a point mass end-effector, the SE equation of the latter can be expressed as
f_e = - \sum_{j=1}^{p} \sum_{i=1}^{q_j} t_ij u_ij .    (5)
Equation ( 5) can be expressed in the matrix form as:
Wt + f e = 0, (6)
Fig. 2: Planar MCDPR composed of two MBs (p = 2), four cables (q = 4) with two cables per mobile base (q_1 = q_2 = 2) and a two degree-of-freedom (n = 2) point-mass end-effector

where W is a (2 × q) wrench matrix mapping the cable tension vector t ∈ R^q onto the wrench applied by the cables on the end-effector. f_e = [f_e^x f_e^y]^T denotes the external wrench applied to the end-effector. t and W can be expressed as:
t = [t 1 t 2 . . . t j . . . t p ] T , (7)
W = [W 1 W 2 . . . W j . . . W p ], (8)
where
t j = [t 1 j t 2 j . . . t q j j ] T , (9)
W j = [u 1 j u 2 j . . . u q j j ]. (10)
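To make this mapping concrete, a minimal numerical sketch is given below (Python/NumPy); the exit-point coordinates and cable tensions are arbitrary assumed values, not those of the prototype.

```python
import numpy as np

# Assumed planar geometry: cable exit points A_ij and end-effector position P
P = np.array([2.0, 1.5])
exit_points = {                      # (j, i) -> A_ij
    (1, 1): np.array([0.0, 3.0]), (1, 2): np.array([0.5, 3.0]),
    (2, 1): np.array([3.5, 3.0]), (2, 2): np.array([4.0, 3.0]),
}
# Unit vectors u_ij point from the end-effector towards the exit points
U = {key: (A - P) / np.linalg.norm(A - P) for key, A in exit_points.items()}
W = np.column_stack([U[(1, 1)], U[(1, 2)], U[(2, 1)], U[(2, 2)]])   # 2 x q wrench matrix

t = np.array([10.0, 12.0, 11.0, 9.0])   # assumed cable tensions [N]
f_e = -W @ t                             # external wrench balanced by the cables, Eqs. (5)-(6)
print(W.shape, f_e)
```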
As the MBs should be in equilibrium during the motion of the end-effector, we need to formulate the SE conditions for each mobile base, also referred to as the tipping constraints. Figure 2 illustrates the free body diagram of M_j with q_j = 2. The SE equations of M_j carrying up to q_j cables are expressed as:
w_gj + \sum_{i=1}^{q_j} f_ij + f_clj + f_crj = 0,    (11)

m_Oj = 0.    (12)
w g j denotes the weight vector of M j . m O j denotes the moment of M j about point O.
f_clj = [f_clj^x f_clj^y]^T and f_crj = [f_crj^x f_crj^y]^T denote the contact forces between the ground and the left and right wheel contact points C_lj and C_rj, respectively. Note that the superscripts x and y in the previous vectors denote their x and y components. m_Oj can be expressed as:
m_Oj = g_j^T E^T w_gj + \sum_{i=1}^{q_j} a_ij^T E^T f_ij + c_lj^T E^T f_clj + c_rj^T E^T f_crj ,    (13)
with
E = \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}    (14)
where a i j = [a x i j a y i j ] T denotes the Cartesian coordinate vectors of point A i j , c l j = [c x l j c y l j ] T and c r j = [c x r j c y r j ] T denote the Cartesian coordinate vectors of contact points C l j and C r j , respectively. g j = [g x j g y j ] T is the Cartesian coordinate vector of the center of gravity G j . Let m Cr j be the moment generated at the right contact point C r j at the instant when M j loses contact with the ground at point C l j such that f y c l j = 0, expressed as:
m_Crj = (g_j - c_rj)^T E^T w_gj + \sum_{i=1}^{q_j} (c_rj - a_ij)^T E^T t_ij    (15)
Let Σ_j be the set composed of M_j, its front and rear wheels, the cables attached to it and the point-mass end-effector, as encircled in red in Fig. 2. From the free body diagram of Σ_j, the moment m_Crj can also be expressed as:
m_Crj = -(p - c_rj)^T E^T f + (g_j - c_rj)^T E^T w_gj + \sum_{o=1, o≠j}^{p} \sum_{i=1}^{q_o} (p - c_rj)^T E^T t_io ,    (16)
where o = 1, . . . , p and o ≠ j. p denotes the Cartesian coordinate vector of the point-mass end-effector P. f = [f^x f^y]^T denotes the force applied by the cables onto the point-mass end-effector, namely,
f = -f e . (17)
Similarly, the moment m Cl j generated at the left contact point C l j on Σ j takes the form:
m_Clj = -(p - c_lj)^T E^T f + (g_j - c_lj)^T E^T w_gj + \sum_{o=1, o≠j}^{p} \sum_{i=1}^{q_o} (p - c_lj)^T E^T t_io .    (18)
For M j to be stable, the moments generated by the external forces at point C r j (C l j , resp.) should be counterclockwise (clockwise, resp.), namely,
m Cr j ≥ 0, j = 1, . . . , p (19)
m Cl j ≤ 0, j = 1, . . . , p (20)
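The tipping conditions can be checked numerically as follows (Python/NumPy); all geometric quantities and forces are assumed inputs, and the expressions simply follow Eqs. (14), (16) and (18).

```python
import numpy as np

E = np.array([[0.0, -1.0], [1.0, 0.0]])      # Eq. (14)

def tipping_moments(p, c_r, c_l, g, w_g, f, t_other):
    """Moments m_Crj and m_Clj of Eqs. (16) and (18) for mobile base j.
    p: end-effector position, c_r/c_l: right/left wheel contact points,
    g: centre of gravity, w_g: weight vector of the base, f: total force applied
    by the cables on the end-effector, t_other: tension vectors t_io of the cables
    mounted on the other bases (all values assumed, for illustration only)."""
    m_cr = -(p - c_r) @ E.T @ f + (g - c_r) @ E.T @ w_g \
           + sum((p - c_r) @ E.T @ t for t in t_other)
    m_cl = -(p - c_l) @ E.T @ f + (g - c_l) @ E.T @ w_g \
           + sum((p - c_l) @ E.T @ t for t in t_other)
    return m_cr, m_cl

# Base j is in static equilibrium when m_cr >= 0 and m_cl <= 0 (Eqs. (19)-(20)).
```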
IV. AVAILABLE WRENCH SET FOR MCDPRS
In this section the nature of the AWS for MCDPRs is analyzed. The cable tension t i j associated with the ith cable mounted on M j is bounded between a minimum tension t i j and a maximum tension t i j . It should be noted that the AWS of a classical CDPR depends uniquely on its platform pose and cable tension limits and forms a zonotope [START_REF] Bouchard | On the ability of a cable-driven robot to generate a prescribed set of wrenches[END_REF]. In contrast, the tipping constraints of the MBs must be considered in the definition of the AWS of a MCDPR. The AWS A 1 for a planar CDPR with a point mass end-effector can be expressed as:
A_1 = { f ∈ R^2 | f = \sum_{j=1}^{p} \sum_{i=1}^{q_j} t_ij u_ij , \underline{t}_ij ≤ t_ij ≤ \overline{t}_ij , i = 1, . . . , q_j , j = 1, . . . , p } .    (21)
Fig. 3: MCDPR with q = 3 cables and a n = 2 DoF point-mass end-effector; the black polytopes illustrate the TS and AWS of the CDPR at hand, whereas the green polytopes illustrate the TS and AWS of the MCDPR at hand
The AWS A_2 for a planar MCDPR with a point-mass end-effector is derived from A_1 by adding the tipping constraints defined in (19) and (20). Thus A_2 can be expressed as:

A_2 = { f ∈ R^2 | f = \sum_{j=1}^{p} \sum_{i=1}^{q_j} t_ij u_ij , \underline{t}_ij ≤ t_ij ≤ \overline{t}_ij , m_Crj ≥ 0 , m_Clj ≤ 0 , i = 1, . . . , q_j , j = 1, . . . , p } .    (22)
Figure 3 shows a comparison between the tension space (TS) T_1 and the AWS A_1 of a CDPR with three cables and a point-mass platform, and the TS T_2 and AWS A_2 of a MCDPR with three cables, a point-mass platform and two mobile bases. The tension space between t_11 and t_21 is reduced by the linear tipping constraint on M_1. As M_2 is carrying a single cable C_22, only the maximum limit \overline{t}_22 is modified. The difference between the polytopes A_1 and A_2 is due to the additional constraints associated with the equilibrium of the MCDPR MBs expressed by (19) and (20). By considering only the classical cable tension limit (\underline{t}_ij and \overline{t}_ij) constraints, the shape of the AWS is a zonotope. When the tipping constraints are included, the AWS is no longer a zonotope, but a convex polytope. The two methods used to represent convex polytopes are the V-representation, known as the convex hull approach, and the H-representation, known as the hyperplane shifting method [START_REF] Grünbaum | Convex Polytopes[END_REF]. The V-representation is preferred for visualization while the H-representation is used to find the relation between A and R. The convex-hull approach is used to find the vertices that form the boundary of the polytope, whereas the hyperplane shifting method is a geometric method used to obtain the facets of the polytope.
Fig. 4: Comparison of TS and AWS between the CDPR (in black) and the MCDPR (in green): (a) MCDPR configuration with q_1 = q_2 = 2; (b) TS formed by t_i1; (c) TS formed by t_i2; (d) A_1 is the AWS of the CDPR at hand, A_2 is the AWS of the MCDPR at hand
A. Convex Hull Method
AWS is defined using the set of vertices forming the extreme points of the polytope. For the jth mobile base M j , a q j dimensional TS is formed by the attached cables. The shape of this TS depends on the mapping of the tipping constraints on the q j -dimensional TS formed by the cable tension limits t i j and t i j . Figures 4(b) and 4(c) illustrate the TS associated with each MB of the MCDPR configuration shown in Fig. 4(a). The feasible TS is formed by the cable tension limits as well as the tipping constraints of the MBs. The new vertices of the TS for MCDPRs do not correspond with the minimum/maximum cable tensions as in the classical case [START_REF] Bouchard | On the ability of a cable-driven robot to generate a prescribed set of wrenches[END_REF].
Let v j denote the number of vertices and v k j be the coordinates of kth vertex for the TS associated to the jth mobile base M j , k = 1, . . . , v j . Let V j represent the set of the vertices of the TS associated to M j :
V_j = { v_kj }, k = 1, . . . , v_j .    (23)
Let V j be a (q j × v j ) matrix containing the coordinates of the TS vertices associated with M j , expressed as:
V j = [v 1 j v 2 j . . . v v j j ]. (24)
v is the total number of vertices formed by all the q cables and is obtained by the product of the number of vertices for each MB, namely,
v = \prod_{j=1}^{p} v_j .    (25)
Let V represent the set of all vertices in the TS which is obtained by the Cartesian product between V j , j = 1, . . . , p. Accordingly, V is (q × v)-matrix, which denotes the coordinates of all the vertices in V expressed as: where g = 1, . . . , v. v g is a q-dimensional vector representing the coordinates of the gth vertex of the MCDPR Tension Space. The image of AWS is constructed from V under the mapping of the wrench matrix W. A numerical procedure such as quickhull [START_REF] Barber | The quickhull algorithm for convex hulls[END_REF] is used to compute the convex hull forming the boundary of AWS. Figure 4(d) illustrates the AWS obtained by Convex Hull Method. A 1 is the AWS obtained by considering only the cable tension limits and is a zonotope. A 2 is the AWS obtained by considering both the cable tension limits and the tipping constraints of the two mobile bases.
V = [v 1 v 2 . . . v g . . . v v ], (26)
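To make the construction above concrete, a minimal numerical sketch is given below. The cable directions, tension limits and per-base vertex sets are illustrative placeholders (they are not the values of the prototype), and the tension spaces are taken as plain boxes; with the tipping constraints of (19)-(20), the vertex sets V_j would simply be replaced by the vertices of the clipped tension spaces.

import numpy as np
from itertools import product
from scipy.spatial import ConvexHull

# Hypothetical planar MCDPR: p = 2 mobile bases, q_j = 2 cables each (q = 4 cables),
# point-mass platform, so the wrench matrix W simply stacks the cable unit vectors.
u = np.array([[ 0.6,  0.8],   # cables carried by M1
              [-0.6,  0.8],
              [ 0.8,  0.6],   # cables carried by M2
              [-0.8,  0.6]])
W = u.T                       # 2 x 4 wrench matrix, f = W t

# Vertex sets V_1 and V_2 of eqs. (23)-(24): here plain boxes [t_min, t_max]^2
t_min, t_max = 0.0, 50.0
V1 = list(product([t_min, t_max], repeat=2))
V2 = list(product([t_min, t_max], repeat=2))

# Cartesian product of the per-base vertex sets, eqs. (25)-(26)
V = np.array([np.hstack(vertex) for vertex in product(V1, V2)])   # v x q, one vertex per row

# Map the tension-space vertices into the wrench space and compute their convex hull (quickhull)
F = V @ W.T
aws = ConvexHull(F)
print("AWS boundary vertices:\n", F[aws.vertices])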
B. Hyperplane Shifting Method
Hyperplane Shifting Method (HFM) is a geometric approach, which defines a convex polytope as the intersection of the half-spaces bounded by its hyperplanes [START_REF] Gouttefarde | Characterization of parallel manipulator available wrench set facets[END_REF]. The classical HFM used to obtain the AWS of CDPRs is described in [START_REF] Bouchard | On the ability of a cable-driven robot to generate a prescribed set of wrenches[END_REF], [START_REF] Gouttefarde | Characterization of parallel manipulator available wrench set facets[END_REF]. Nevertheless, this approach is not sufficient to fully characterize the AWS of MCDPRs because it requires a hypercube TS (T 1 ). For instance, for the MCDPR shown in Fig. 3, it can be observed that the TS (T 2 ) is no longer a hypercube due to the additional constraints associated with the SE of the mobile bases.
As a consequence, this section presents an improved version of the HFM described in [START_REF] Rasheed | Tension Distribution Algorithm for Planar Mobile Cable-Driven Parallel Robots[END_REF][START_REF] Pott | IPAnema: a family of cable-driven parallel robots for industrial applications[END_REF] that takes into account the tipping constraints of the MCDPR mobile bases. As a result, Fig. 5 depicts the AWS of the MCDPR configuration shown in Fig. 4(a), obtained by the improved HFM. This AWS is bounded by hyperplanes H + s , H - s , s = 1, ..., q, obtained from the cable tension limits associated to the four cables attached to the point mass end-effector, and by hyperplanes H r M 1 and H l M 2 , corresponding to the tipping constraints of M 1 about point C r1 and the tipping constraint of M 2 about point C l2 , respectively.
1) Determination of $H_{lM_j}$ and $H_{rM_j}$, $j = 1, \ldots, p$: Let $\mathbf{r}_{rj}$ ($\mathbf{r}_{lj}$, resp.) be the unit vector pointing from $C_{rj}$ ($C_{lj}$, resp.) to P, expressed as:

$\mathbf{r}_{rj} = \dfrac{\mathbf{p} - \mathbf{c}_{rj}}{\|\mathbf{p} - \mathbf{c}_{rj}\|_2}, \quad \mathbf{r}_{lj} = \dfrac{\mathbf{p} - \mathbf{c}_{lj}}{\|\mathbf{p} - \mathbf{c}_{lj}\|_2}. \quad (27)$

Upon dividing (19) ((20), resp.) by $\|\mathbf{p} - \mathbf{c}_{rj}\|_2$ ($\|\mathbf{p} - \mathbf{c}_{lj}\|_2$, resp.), the tipping constraints can be expressed in the wrench space as:

$-\mathbf{r}_{rj}^T \mathbf{E}^T \mathbf{f} + \dfrac{(\mathbf{g}_j - \mathbf{c}_{rj})^T}{\|\mathbf{p} - \mathbf{c}_{rj}\|_2} \mathbf{E}^T \mathbf{w}_{g_j} + \sum_{o=1, o \neq j}^{p} \sum_{i=1}^{q_o} \mathbf{r}_{rj}^T \mathbf{E}^T \mathbf{t}_{io} \geq 0, \quad (28)$

$-\mathbf{r}_{lj}^T \mathbf{E}^T \mathbf{f} + \dfrac{(\mathbf{g}_j - \mathbf{c}_{lj})^T}{\|\mathbf{p} - \mathbf{c}_{lj}\|_2} \mathbf{E}^T \mathbf{w}_{g_j} + \sum_{o=1, o \neq j}^{p} \sum_{i=1}^{q_o} \mathbf{r}_{lj}^T \mathbf{E}^T \mathbf{t}_{io} \leq 0. \quad (29)$

Equations (28) and (29) take the form:

$\mathbf{e}_{rj}^T \mathbf{f} \leq d_{rj}, \quad \mathbf{e}_{lj}^T \mathbf{f} \leq d_{lj}. \quad (30)$

Equation (30) corresponds to the hyperplanes [START_REF] Gouttefarde | Characterization of parallel manipulator available wrench set facets[END_REF] for the tipping constraints of M_j in the wrench space. $\mathbf{e}_{rj}$ and $\mathbf{e}_{lj}$ are the unit vectors normal to $H_{rM_j}$ and $H_{lM_j}$, expressed as:

$\mathbf{e}_{rj} = \mathbf{E}\,\mathbf{r}_{rj}, \quad \mathbf{e}_{lj} = -\mathbf{E}\,\mathbf{r}_{lj}. \quad (31)$
$d_{rj}$ ($d_{lj}$, resp.) denotes the shifted distance of $H_{rM_j}$ ($H_{lM_j}$, resp.) from the origin of the wrench space along $\mathbf{e}_{rj}$ ($\mathbf{e}_{lj}$, resp.). The shift $d_{rj}$ depends on the weight of M_j and on the combination of cable tensions $\mathbf{t}_{io}$ for which $\sum_{o=1, o \neq j}^{p} \sum_{i=1}^{q_o} \mathbf{r}_{rj}^T \mathbf{E}^T \mathbf{t}_{io}$ is a maximum, while the shift $d_{lj}$ depends on the weight of M_j and on the combination of cable tensions $\mathbf{t}_{io}$ for which $\sum_{o=1, o \neq j}^{p} \sum_{i=1}^{q_o} \mathbf{r}_{lj}^T \mathbf{E}^T \mathbf{t}_{io}$ is a minimum, namely,

$d_{rj} = \dfrac{(\mathbf{g}_j - \mathbf{c}_{rj})^T}{\|\mathbf{p} - \mathbf{c}_{rj}\|_2} \mathbf{E}^T \mathbf{w}_{g_j} + \sum_{o=1, o \neq j}^{p} \max \sum_{i=1}^{q_o} \mathbf{r}_{rj}^T \mathbf{E}^T v^i_{ko} \mathbf{u}_{io}, \quad k = 1, \ldots, v_o, \quad (32)$

$d_{lj} = -\dfrac{(\mathbf{g}_j - \mathbf{c}_{lj})^T}{\|\mathbf{p} - \mathbf{c}_{lj}\|_2} \mathbf{E}^T \mathbf{w}_{g_j} - \sum_{o=1, o \neq j}^{p} \min \sum_{i=1}^{q_o} \mathbf{r}_{lj}^T \mathbf{E}^T v^i_{ko} \mathbf{u}_{io}, \quad k = 1, \ldots, v_o, \quad (33)$

where $v^i_{ko}$ corresponds to the ith coordinate of $\mathbf{v}_{ko}$. Figure 6 shows the geometric representation of the tipping hyperplanes for the MCDPR under study. From Figs. 5 and 6, it can be observed that the orientation of $H_{rM_j}$ ($H_{lM_j}$, resp.) is directly obtained from $\mathbf{r}_{rj}$ ($\mathbf{r}_{lj}$, resp.).
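As a sanity check of Eqs. (27)-(33), the sketch below evaluates the tipping hyperplane of one mobile base numerically. All geometric quantities (contact point, centre of mass, weight, cable directions and the tension-space vertices of the other base) are made-up placeholder values, not data of the prototype.

import numpy as np

E = np.array([[0.0, -1.0],
              [1.0,  0.0]])        # planar 90-degree rotation used to build the normals

# Placeholder data for mobile base M1
p    = np.array([1.0, 2.0])        # end-effector position P
c_r  = np.array([0.5, 0.0])        # tipping point C_r1
g1   = np.array([0.0, 0.0])        # centre of mass G1
w_g1 = np.array([0.0, -200.0])     # weight vector of M1
u_other = np.array([[-0.8, 0.6],   # unit vectors of the cables carried by the other base
                    [ 0.8, 0.6]])
V_other = np.array([[0.0, 0.0], [0.0, 50.0], [50.0, 0.0], [50.0, 50.0]])  # its TS vertices

r_r = (p - c_r) / np.linalg.norm(p - c_r)                 # eq. (27)
e_r = E @ r_r                                             # eq. (31): hyperplane normal

# eq. (32): gravity term plus the worst-case contribution of the other cables
d_r = (g1 - c_r) @ (E.T @ w_g1) / np.linalg.norm(p - c_r)
d_r += np.max(V_other @ (u_other @ (E @ r_r)))            # r_r^T E^T (v u) summed over the cables

print("tipping constraint: e_r^T f <= d_r, with e_r =", e_r, "and d_r =", float(d_r))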
2) Determination of $H^+_s$ and $H^-_s$, $s = 1, \ldots, q$: For classical CDPRs with given cable tension limits, $\Delta t_{ij} = \overline{t}_{ij} - \underline{t}_{ij}$ is a constant and the AWS is a zonotope formed by the set of vectors $\alpha_{ij} \Delta t_{ij} \mathbf{u}_{ij}$, where $0 \leq \alpha_{ij} \leq 1$ [START_REF] Bouchard | On the ability of a cable-driven robot to generate a prescribed set of wrenches[END_REF], [START_REF] Gouttefarde | Characterization of parallel manipulator available wrench set facets[END_REF]. The shape of the zonotope depends on the directions of the cable unit vectors $\mathbf{u}_{ij}$ as well as on the difference $\Delta t_{ij}$ between the minimum and maximum cable tension limits. It is noteworthy that $\Delta t_{ij}$ is no longer a constant for MCDPRs. The property of a zonotope having parallel facets still holds, as the orientation of the hyperplanes is given by the cable unit vectors $\mathbf{u}_{ij}$. However, the position of the hyperplanes is modified, forming a convex polytope with parallel facets rather than a zonotope.
H + s and H - s are obtained using the classical HFM as described in [START_REF] Bouchard | On the ability of a cable-driven robot to generate a prescribed set of wrenches[END_REF], [START_REF] Gouttefarde | Characterization of parallel manipulator available wrench set facets[END_REF] based on the TS of MCDPRs. For a planar MCDPR with a point mass end-effector, each cable unit vector u s = u i j will form a pair of parallel hyperplanes {H + s , H - s } at the origin. Each pair is associated with a unit vector e s orthogonal to its facets, expressed as:
$\mathbf{e}_s = \mathbf{E}^T \mathbf{u}_s. \quad (34)$
The shift of the initial hyperplanes is determined by the projection of the MCDPR tension space vertices on e s . Let l s be a q-dimensional vector containing the projections of the cable unit vectors in W on e s , expressed as:
l s = W T e s . (35)
The projection of u s will be zero as it is orthogonal to e s . The distances h + s and h - s are given by the maximum and minimum combinations of l s with the coordinates of the TS vertices V, expressed as:
$h^+_s = \max\left(\sum_{s=1}^{q} v^s_g\, l^s\right), \quad g = 1, \ldots, v, \quad (36)$

$h^-_s = \min\left(\sum_{s=1}^{q} v^s_g\, l^s\right), \quad g = 1, \ldots, v, \quad (37)$

where $v^s_g$ and $l^s$ denote the sth coordinates of the vectors $\mathbf{v}_g$ and $\mathbf{l}_s$, respectively. To completely characterize the hyperplanes, a point $\mathbf{p}^+_s$ ($\mathbf{p}^-_s$, resp.) on $H^+_s$ ($H^-_s$, resp.) must be obtained, given as:

$\mathbf{p}^+_s = h^+_s \mathbf{e}_s + \sum_{j=1}^{p} \sum_{i=1}^{q_j} t_{ij} \mathbf{u}_{ij}; \quad \mathbf{p}^-_s = h^-_s \mathbf{e}_s + \sum_{j=1}^{p} \sum_{i=1}^{q_j} t_{ij} \mathbf{u}_{ij}, \quad (38)$
The respective pair of hyperplanes is expressed as:
$H^+_s: \mathbf{e}^T_s \mathbf{f} \leq d^+_s; \quad H^-_s: -\mathbf{e}^T_s \mathbf{f} \leq d^-_s. \quad (40)$
The above procedure is repeated to determine the q pairs of hyperplanes associated to the q cables of the MCDPR.
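The pairs of cable hyperplanes of Eqs. (34)-(40) can be computed with a few lines of code, as sketched below. The wrench matrix W and the tension-space vertex matrix (one vertex per row) are assumed to be available, for instance from the convex-hull sketch above; the way the shifted distances are recovered from the points of Eq. (38) follows the half-space form of Eq. (40) and is an assumption of this sketch.

import numpy as np

def cable_hyperplanes(W, V_rows, t_low):
    """Pairs {H+_s, H-_s} for a planar point-mass (M)CDPR.

    W      : 2 x q wrench matrix (columns are the cable unit vectors u_s)
    V_rows : v x q matrix of tension-space vertices, one vertex per row
    t_low  : q-vector of lower cable tension bounds, used to anchor p_s^+/- as in eq. (38)
    """
    E = np.array([[0.0, -1.0], [1.0, 0.0]])
    planes = []
    for s in range(W.shape[1]):
        u_s = W[:, s]
        e_s = E.T @ u_s                    # eq. (34)
        l_s = W.T @ e_s                    # eq. (35): projections of the cable directions on e_s
        proj = V_rows @ l_s                # projections of all tension-space vertices
        h_plus, h_minus = proj.max(), proj.min()          # eqs. (36)-(37)
        p_plus = h_plus * e_s + W @ t_low                 # eq. (38)
        p_minus = h_minus * e_s + W @ t_low
        d_plus, d_minus = e_s @ p_plus, -(e_s @ p_minus)  # half-space offsets of eq. (40)
        planes.append((e_s, d_plus, d_minus))
    return planes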
V. WORKSPACE ANALYSIS

The Wrench-Feasible Workspace (WFW) is defined as the set of poses that are wrench-feasible [START_REF] Bosscher | Wrenchfeasible workspace generation for cable-driven robots[END_REF]. A well-known index used to compute the wrench-feasible set of poses is the capacity margin [START_REF] Guay | Measuring how well a structure supports varying external wrenches[END_REF], [START_REF] Ruiz | Arachnis: Analysis of robots actuated by cables with handy and neat interface software[END_REF]. It is a measure of the robustness of the equilibrium of the robot, expressed as:

$s = \min_{j} \left( \min_{l}\ s_{j,l} \right), \quad (41)$

where $s_{j,l}$ is the signed distance from the jth vertex of the required wrench set R to the lth facet of the available wrench set A. $s_{j,l}$ is positive when the constraint is satisfied, and negative otherwise. The index remains negative as long as at least one of the vertices of R lies outside of A, and it is positive if all the vertices of R are contained in A.
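With the AWS available in half-space form, the capacity margin of Eq. (41) reduces to a couple of lines; the square AWS and the single required wrench below are toy placeholder values.

import numpy as np

def capacity_margin(vertices_R, facets_A):
    """Capacity margin of eq. (41).

    vertices_R : iterable of wrench vectors, the vertices of the required wrench set R
    facets_A   : iterable of (e, d) pairs describing the AWS facets as e^T f <= d
    The pose is wrench-feasible when the returned value is non-negative.
    """
    return min(min(d - e @ f for (e, d) in facets_A) for f in vertices_R)

facets = [(np.array([ 1.0,  0.0]), 10.0), (np.array([-1.0,  0.0]), 10.0),
          (np.array([ 0.0,  1.0]), 10.0), (np.array([ 0.0, -1.0]), 10.0)]
print(capacity_margin([np.array([2.0, -4.9])], facets))   # positive, hence feasible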
VI. EXPERIMENTS AND RESULTS
The proposed approach was tested on the MCDPR prototype shown in Fig. 7(a), made up of two TurtleBot mobile bases and a 0.5 kg point-mass end-effector. The WFW of the MCDPR under study is illustrated in Fig. 7(b) for R equal to the weight of the end-effector. The green region corresponds to the modified WFW, where both the cable tension limits and the mobile-base tipping constraints are satisfied. In the blue area, at least one of the two mobile bases is tipping. In the red area, the cable tension limits and the mobile-base tipping constraints cannot both be satisfied, i.e., the end-effector cannot be kept in equilibrium while avoiding mobile-base tipping. It can be observed that, for MCDPRs, the ability of the cables to apply wrenches on the platform may be reduced due to the mobile-base tipping constraints. The simulation and the experimental validation of the proposed approach can be seen in video 1 . In the latter, two different trajectories are tested and compared. Based on the proposed workspace analysis, it can be observed that if the end-effector is within the WFW calculated offline, both mobile bases are in equilibrium. On the contrary, when the end-effector is outside of the WFW, at least one mobile base is no longer in equilibrium.
VII. CONCLUSION
In this paper, the Available Wrench Set required to trace the Wrench-Feasible Workspace of a Mobile Cable-Driven Parallel Robot (MCDPR) has been determined. The proposed workspace analysis considers both the cable tension limits and the static equilibrium of the mobile bases. Two different approaches, the convex-hull method and the hyperplane shifting method, are used to illustrate how the additional constraints can be taken into account. The additional constraints modify the shape of the AWS, forming new facets and reducing the capability of the cables to apply wrenches on the platform. Future work will focus on extending this approach to spatial MCDPRs consisting of more than two mobile bases and on taking wheel-slipping constraints into account. Furthermore, the evolution of the MCDPR workspace during deployment will be studied.
Fig. 1: Fastkit prototype, undeployed configuration (left), deployed configuration (right)

Fig. 5: AWS formed by the intersection of all the hyperplanes (red hyperplanes correspond to the tipping constraints, blue hyperplanes correspond to the cable tension limits)

Fig. 6: Geometric representation of e_{r1} and e_{l2} for the MCDPR configuration

Fig. 7: MCDPR prototype made up of two TurtleBot mobile bases and a 0.5 kg point-mass end-effector
This work was supported by École Centrale Nantes and the Echord++ FASTKIT project.
01757796 | en | [ "spi.meca.geme", "spi.auto" ] | 2024/03/05 22:32:10 | 2017 | https://hal.science/hal-01757796/file/ICOME2017_Shah_Gao_Pashkevich_Caro_Courtemanche_HAL.pdf

Divya Shah
Jiuchun Gao
Anatol Pashkevich
Stéphane Caro
Benoît Courtemanche
Computer-Aided Design and Optimization of a Redundant Robotic System for Automated Fiber Placement Process
Keywords: Robotic fibre placement, Computer-Aided Design, Optimal motion planning, Collision detection, Robot programming
This paper proposes a comprehensive methodology for the computer-aided design and optimization of a robotic workcell for the automated fiber placement. The robotic cell, comprising of a 6-axis industrial manipulator and an actuated positioner, is kinematically redundant with respect to the considered task. An efficient optimization technique for managing these kinematic redundancies is proposed. The robot/positioner motion planning and robotic cell layout design in a CAD environment are presented. To confirm validity of the developed methodology, experimental results are presented that deal with automation of thermoplastic fiber placement process.
INTRODUCTION
Recently, the use of fiber-reinforced composite materials has drastically increased, mainly in the aerospace and automotive industries [START_REF] Marsh | Automating aerospace composites production with fibre placement[END_REF]. Compared to their conventional counterparts, composite materials offer a better stiffness-to-weight ratio, strength-to-weight ratio, flexibility of shaping and corrosion resistance [START_REF] Gay | Composite materials: design and applications[END_REF]. To produce components from composite materials, a new technological process, Automated Fiber Placement (AFP), is increasingly implemented. In this process, the workpiece molds or liners are mounted on the shaft of a rotating positioner to improve the accessibility of certain locations. An industrial manipulator equipped with a specialized technological tool is used to heat the fiber tows and place them in-situ using a pressure roller [START_REF] Peters | Handbook of composites[END_REF].
A typical AFP workcell, comprising a 6-axis manipulator and a 1- or 2-axis positioner for rotating the workpiece, has at least 7-8 degrees of freedom (DOF), whereas the fiber placement task requires only 6 DOF. This kinematic redundancy provides the user with some flexibility in programming the robot motions, and at the same time raises the problem of managing the redundancy in an optimal way in order to maximize productivity. At present, a general trend in the literature for redundancy resolution is to convert this continuous path problem into a discrete one [START_REF] Li | A survey on path planning algorithms in robotic fibre placement[END_REF]. Further enhancements were successfully achieved for laser cutting applications in [START_REF] Dolgui | Manipulator motion planning for high-speed robotic laser cutting[END_REF] following a graph-based search. For the fiber placement application, manipulator motion planning was developed in [START_REF] Martinec | Calculation of the robot trajectory for the optimum directional orientation of fibre placement in the manufacture of composite profile frames[END_REF] assuming constant tool velocity. An alternative approach was proposed in [START_REF] Debout | Tool path smoothing of a redundant machine: Application to Automated Fiber Placement[END_REF], focusing on tool path smoothing in Cartesian space. In [START_REF] Gan | Singularity-Free Workspace Aimed Optimal Design of a 2T2R Parallel Mechanism for Automated Fiber Placement[END_REF], the authors propose a singularity-free optimal design of a 2T2R parallel robot with a 2-axis positioner, thus eliminating the kinematic redundancy for the AFP process.
Another challenge in implementation of AFP is to design the optimal layout of the robotic system and to automate the robot programming process. Existing works in this area mainly deal with welding applications [START_REF] Li | Seeder Rack Welding Robot Workstation Design Based on DELMIA[END_REF]. For the fiber placement, some results were presented in [START_REF] Krombholz | ADVANCED AUTOMATED FIBRE PLACEMENT[END_REF].
However, the complete design and optimization procedure for AFP is very complicated and time-consuming. For this reason, this work concentrates on CAD-based design of workstations for AFP technology taking into account kinematic redundancy of the robotic system.
COMPUTER-AIDED DESIGN OF ROBOTIC CELL
The developed methodology for the design and optimization of a robotic system for AFP is presented in Figure-1. The inputs of this process are the robot, positioner, tool and workpiece models. The first step is the robotic cell assembly, wherein the layout of the workcell is determined and the manufacturing task is defined considering the desired placement track. The next step involves the generation of optimal motions for the system, taking into consideration the joint limits and kinematic singularities and checking for the admissibility of the relevant locations of the technological tool. In the final steps, the robotic cell simulation and motion programming are performed. If some problems are detected during the simulations, the design process can be reconfigured and repeated with the necessary modifications in the optimal motion generation or the cell assembly steps. The overall outcomes of the procedure include the cell layout along with robot programs for the optimal trajectories, ready to be implemented. Each of these aspects is discussed in detail in the subsequent sections of this paper.
ROBOTIC CELL ASSEMBLY & TASK DEFINITION
The computer-aided design of the robotic cell includes selection of the workcell components and their relative positioning. To simplify this process, the robotic CAD system includes a library of commercially available robots and auxiliary devices. The positioning of the components is adjusted in a manner that allows the robot to reach the entire surface of the workpiece. In addition, certain CAD systems also include specialized tools allowing the user to create a 3D model of the workpiece and generate the fiber placement tracks. This workpiece model is also an input for robotic cell design process.
3D Modeling of Robotic Cell
The robotic system considered in this work consists of a 6-axis serial anthropomorphic robot, a specialized placement tool, a 1-axis rotating positioner and a 1-axis linear track. Thus, the system has 8 DOF, with a degree of redundancy equal to 2. Depending upon the geometry of the workpiece under consideration, the robot base location on the linear track can be fixed or changed online, thus providing additional flexibility.
Using the standard step/iges files available from the manufacturer's catalogs, each of the components was designed in the CATIA V5 software from Dassault Systems as shown in Figure -2. All the active joints were defined so as to be able to simulate the system using joint coordinates.
Kinematic Modeling of Robot & Positioner
For developing the kinematic model of the system, a frame is attached to each component of the robotic cell as follows:
- Global frame, with the Z-axis in the vertical direction.
- Robot base frame, with its origin at the base of the robot and the same orientation as the global frame.
- Positioner frame, with its origin at the center of the rotating flange and the Z-axis as the rotation axis.
- Workpiece frame, with the Z-axis along the direction of the workpiece liner.
- Tool frame, located at the Tool Center Point (TCP).
- Linear track frame, with its origin at the centre of the moving rail and the same orientation as the global frame.
The geometric model of the robot, $g_r$, is obtained using the standard Denavit-Hartenberg parameters and presented in the following way:

$g_r(\mathbf{q}_r) = \prod_{j=1}^{6} \mathbf{T}_j(q^r_j), \quad (1)$

where $\mathbf{T}_j$ is a 4x4 homogeneous transformation matrix, $\mathbf{q}_r$ are the robot joint coordinates and j is the joint index. Similarly, the geometric model of the positioner, $g_p$, is presented as a 4x4 homogeneous rotation matrix $\mathbf{Rot}_Z$ around the Z-axis:

$g_p(q_p) = \mathbf{Rot}_Z(q_p), \quad (2)$

and that of the linear track, $g_l$, as a 4x4 homogeneous translation matrix $\mathbf{Trans}_X$ along the X-axis:

$g_l(q_l) = \mathbf{Trans}_X(q_l). \quad (3)$
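A minimal sketch of these geometric models in terms of homogeneous matrices is given below. The generic Denavit-Hartenberg routine only illustrates how g_r is assembled; the actual DH parameters of the manipulator are not reproduced here and would have to be filled in.

import numpy as np

def rot_z(q):
    """Positioner model g_p of eq. (2): homogeneous rotation about the Z-axis."""
    c, s = np.cos(q), np.sin(q)
    return np.array([[c, -s, 0.0, 0.0],
                     [s,  c, 0.0, 0.0],
                     [0.0, 0.0, 1.0, 0.0],
                     [0.0, 0.0, 0.0, 1.0]])

def trans_x(q):
    """Linear track model g_l of eq. (3): homogeneous translation along the X-axis."""
    T = np.eye(4)
    T[0, 3] = q
    return T

def dh_transform(a, alpha, d, theta):
    """Standard Denavit-Hartenberg link transform."""
    ca, sa, ct, st = np.cos(alpha), np.sin(alpha), np.cos(theta), np.sin(theta)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.0,      sa,       ca,      d],
                     [0.0,     0.0,      0.0,    1.0]])

def robot_fk(dh_params, q_r):
    """Robot model g_r of eq. (1): product of the six joint transforms."""
    T = np.eye(4)
    for (a, alpha, d), q in zip(dh_params, q_r):
        T = T @ dh_transform(a, alpha, d, q)
    return T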
Manufacturing Task Definition

In the developed system, there are specialized tools to define fiber placement tracks for the desired workpiece. These tracks are then discretized into a set of points, called task locations, expressed in the workpiece frame. A Cartesian coordinate frame is attached to each task location such that the Z-axis is always normal to the workpiece surface and the X-axis follows the travelling direction of the track. The following equation represents the n task locations, indexed by i;
OPTIMAL TRAJECTORY GENERATION
For the considered AFP process, where tool speed variations are allowed, a discrete optimization-based methodology was proposed in [START_REF] Gao | Optimization of the robot and positioner motion in a redundant fiber placement workcell[END_REF] to generate time-optimal trajectories considering one degree of redundancy, where the robot base location on the linear track is fixed. It is summarized below.
Task Graph Generation
Consider the time interval required for the processing of the task locations defined in eqn. (4). Let us also define the functions describing the robot and the positioner motions, respectively. The optimization problem can thus be formulated as minimizing the total travelling time T. This problem is subject to an equality constraint representing the closed-loop form of the system, where all notations are defined in the previous section.
It is clear that this system is nonlinear and redundant. Here, it is reasonable to treat the positioner coordinate as the redundant variable, which allows using the robot and positioner models independently. This redundant variable is sampled with a step of $\Delta q_p$ as follows:

$q^k_p \in [\,q^{\min}_p,\ q^{\max}_p\,], \quad (6)$

where $k = 0, 1, \ldots, m$ and $\Delta q_p = (q^{\max}_p - q^{\min}_p)/m$. Then, by computing the positioner direct kinematics and the robot inverse kinematics as given by eqn. (7), $\boldsymbol{\mu}$ being the robot configuration index vector:

$\mathbf{q}^{ik}_r = g^{-1}_r\big(g_p(q^k_p)\,\mathbf{t}_i,\ \boldsymbol{\mu}\big); \quad (7)$
a set of all possible robot and positioner configurations for every sampling step of the positioner coordinate and for every task location of the given placement track, can be obtained:
$L^{ik}_{TL} = \left\{\, \mathbf{q}^{ik}_r(t_i),\ q^{ik}_p(t_i) \,\right\}. \quad (8)$
These 'location cells', expressed in the joint space, form the 'task graph', as shown in Figure-3. In the case of two degrees of redundancy, similar sets of location cells can be computed for every sampling step of the linear axis coordinate $q_l$, thus generating a two-dimensional task graph.
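The construction of such a task graph can be sketched as follows; the inverse kinematics routine is a placeholder (returning None when a task frame is unreachable), and all names are illustrative rather than part of the original implementation.

import numpy as np

def build_task_graph(task_frames, g_p, robot_ik, q_p_min, q_p_max, m):
    """Location cells of eq. (8): one entry per (task location i, positioner sample k).

    task_frames : list of 4x4 task-location frames expressed in the workpiece frame
    g_p         : positioner direct kinematics, q_p -> 4x4 homogeneous matrix (eq. (2))
    robot_ik    : placeholder inverse kinematics, returns a joint vector or None
    """
    q_p_samples = np.linspace(q_p_min, q_p_max, m + 1)   # sampling of eq. (6)
    graph = []                                           # graph[i][k] = (q_r, q_p) or None
    for T_task in task_frames:
        column = []
        for q_p in q_p_samples:
            T_goal = g_p(q_p) @ T_task                   # task frame after the positioner rotation, cf. eq. (7)
            q_r = robot_ik(T_goal)
            column.append(None if q_r is None else (q_r, q_p))
        graph.append(column)
    return graph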
Admissibility & Collision Check
The next step is to verify the admissibility of these location cells on the task graph, i.e., to check for the practical feasibility of the configuration represented by the cell against the constraints. The cells violating any of these constraints are flagged "inadmissible".
Firstly, the task graph is checked for the actuator position, velocity and acceleration limits and the kinematic singularities of the robot, as given by the following equation:
$q^{\min}_j \leq q_j(t_i) \leq q^{\max}_j, \quad \dot{q}^{\min}_j \leq \dot{q}_j(t_i) \leq \dot{q}^{\max}_j, \quad \ddot{q}^{\min}_j \leq \ddot{q}_j(t_i) \leq \ddot{q}^{\max}_j, \quad (9)$
where j represents all actuator joints (robot and positioner).
The task graph, thus reduced, is then simulated on the CAD model to detect collisions within the workcell. Various interferences among the workcell components are defined in the model and, when collisions occur, the colliding components are highlighted, as shown in Figure-4. A set of numerical values is obtained from the CAD model for each location cell, where zero represents a collision-free candidate, leading to the following collision constraint:
$\mathrm{cols}\big(\mathbf{q}_r(t),\ q_p(t)\big) = 0; \quad \forall\, t \in [0, T]. \quad (10)$
Optimal Trajectory Planning
To find the optimal trajectory, the edges of the task graph are weighted according to the minimum travelling time required between consecutive locations, restricted by the maximum actuator velocities and accelerations. The objective function (robot processing time) can now be presented as a summation of these edge weights, expressed via dist:
$T = \sum_{i=1}^{n-1} \mathrm{dist}\left(L^{i,k_i}_{TL},\ L^{i+1,k_{i+1}}_{TL}\right), \quad (11)$

The optimization algorithm is based on the dynamic programming principle to compute the "shortest" path, with its length denoted as $d_{i,k}$:

$d_{i,k} = \min_{k'} \left\{\, d_{i-1,k'} + \mathrm{dist}\left(L^{i-1,k'}_{TL},\ L^{i,k}_{TL}\right) \right\}, \quad (12)$
These distances are computed sequentially from the second location until the last. The indices k record the optimal solution, expressed in the joint space.
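The recursion (12) over the admissible cells amounts to a standard shortest-path dynamic program, sketched below. The travelling-time function dist is assumed to be given (it encodes the actuator velocity and acceleration limits), and inadmissible cells are marked as None; the code is an illustrative outline rather than the original implementation.

import numpy as np

def shortest_path(graph, dist):
    """Dynamic programming over the task graph, eqs. (11)-(12).

    graph : graph[i][k] = admissible location cell or None
    dist  : dist(cell_a, cell_b) -> minimum travelling time between two cells
    Returns the optimal total time and the sequence of selected indices k_i.
    """
    n, m = len(graph), len(graph[0])
    INF = float("inf")
    d = [[0.0 if graph[0][k] is not None else INF for k in range(m)]]
    parent = []
    for i in range(1, n):
        d_i, p_i = [], []
        for k in range(m):
            best, best_k = INF, -1
            if graph[i][k] is not None:
                for kp in range(m):                       # minimisation over k' in eq. (12)
                    if d[i - 1][kp] == INF:
                        continue
                    cost = d[i - 1][kp] + dist(graph[i - 1][kp], graph[i][k])
                    if cost < best:
                        best, best_k = cost, kp
            d_i.append(best)
            p_i.append(best_k)
        d.append(d_i)
        parent.append(p_i)
    k = int(np.argmin(d[-1]))                             # best terminal cell
    path = [k]
    for p_i in reversed(parent):                          # backtrack the recorded indices
        k = p_i[k]
        path.append(k)
    return min(d[-1]), path[::-1]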
ROBOT CELL SIMULATIONS & PROGRAMMING
Debugging and analysis of the optimal robot motions is done by simulating them in the CAD environment. If the results are not acceptable, the procedure will be easily reiterated using task graphs with finer sampling step or even adjustments in the layout of the cell. The robot program for implementation is then generated. A brief comparison among the standard robot motion commands is shown in Table-1, the choice of which is critical to the performance of the system. This is because the travelling time is recomputed by the robot controller based on the motion commands. Hence, with execution of a set of polynomials and the possibility of adjusting the travelling time between nodes, Spline (SPL) commands are favorable.
EXPERIMENTAL RESULTS
To demonstrate the efficiency of the methods proposed above, the industrial case of fabricating a cylindrical pressure vessel with spherical domes (as shown in Figure-3) using thermoplastic composites was studied in collaboration with the Centre Technique des Industries Mécaniques (CETIM). The laser-assisted robotic fiber placement platform comprises the following:
- 6-axis serial manipulator KUKA KR210 R3100 ultra;
- 1-axis linear track KUKA KL 2000;
- 1-axis rotating positioner, AFPT 550 series;
- specialized placement tool from AFPT.
Starting from the individual models, the cell components were assembled together as shown in Figure-2, such that the rotation axis falls within the largest area of the robot workspace. Here, the length of the workpiece under consideration is much shorter than the horizontal reach of the robot. Thus, the robot base location on the linear axis was fixed. The best position was estimated by computing the solutions for different coordinates on the linear axis and comparing the total travelling times of the process, as shown in Figure-5. This provides the cell layout for the considered case. The desired placement track is represented by the white curve on the workpiece liner in Figure-3. To fabricate the complete pressure vessel, similar placement tracks are repeated with offsets on the rotation axis, so as to cover the entire surface area. This track was discretized into 200 task locations represented in the Cartesian space referenced to the workpiece frame. The positioner coordinate was sampled every 1° and the solutions were generated for each sample to form the task graph. Following the admissibility check and trajectory planning, an optimal trajectory was generated for the process with a total travelling time T = 3.1 s. Figures-6 and -7 represent the positioner and robot joint coordinate and velocity curves respectively, plotted against the travelling time. The trajectory was simulated in the CAD model and the corresponding robot program, ready to be implemented on the industrial platform, was generated using the SPL commands. Compared to its initial value, this method reduces the total travelling time of the process by 75%. This method can be adopted in the fabrication of components with various shapes and sizes and complex placement track profiles. Smooth trajectories can be obtained by finer discretization of the placement track and finer sampling of the external axes. The use of the linear axis online allows for the fabrication of long workpieces.
CONCLUSIONS
This paper contributes towards the complete methodology for computer-aided design and optimization of a redundant robotic system for the automated fiber placement problem. It integrates the computational tools for optimal motion planning with the CAD tools for design, simulation and verification. The theoretical results obtained in this work promise significant reduction in the processing time, thus improving the overall efficiency. Future work will involve the implementation of these results on the industrial platform at CETIM.
Figure-1. An outline of the computer-aided design and optimization methodology for the AFP process.

Figure-2. CAD model of the AFP robotic workcell.
Figure-3. Task-graph representation.

Figure-4. Collision detection in the CAD model.

Figure-5. Optimal robot location selection.

Figure-6. Robot and positioner joint coordinates.

Figure-7. Robot and positioner joint velocities.
Table-1. Comparison of robot motion commands

                 PTP           LIN/CIR            SPL
Interpolation    Joint-space   Cartesian-space    Both
Movement         Start-Stop    Start-Stop         Continuous
Time Setting     No            No                 Yes
01757797 | en | [ "spi.meca.geme", "spi.auto" ] | 2024/03/05 22:32:10 | 2017 | https://hal.science/hal-01757797/file/DETC2017_67441_Nayak_Haiyang_Hao_Caro_HAL.pdf

Abhilash Nayak
email: abhilash.nayak@ls2n.fr
Haiyang Li
email: haiyang.li@umail.ucc.ie
Guangbo Hao
email: g.hao@ucc.ie
Stéphane Caro
email: stephane.caro@ls2n.fr
A Reconfigurable Compliant Four-Bar Mechanism with Multiple Operation Modes
INTRODUCTION
Although mechanisms are often composed of rigid bodies connected by joints, compliant mechanisms include flexible el-ements whose elastic deformation is utilized in order to transmit a force and/or motion. There are different ways to design compliant mechanisms, such as the kinematic based approaches, the building blocks approaches, and the structural optimizationbased approaches [START_REF] Gallego | Synthesis methods in compliant mechanisms: An overview[END_REF][START_REF] Olsen | Utilizing a classification scheme to facilitate rigid-body replacement for compliant mechanism design[END_REF][START_REF] Howell | Compliant mechanisms[END_REF][START_REF] Hao | Conceptual designs of multidegree of freedom compliant parallel manipulators composed of wire-beam based compliant mechanisms[END_REF]. In the kinematic based approach, the joints of a chosen rigid-body mechanism are replaced by appropriate compliant joints followed by pseudo-rigid body modeling [START_REF] Olsen | Utilizing a classification scheme to facilitate rigid-body replacement for compliant mechanism design[END_REF][START_REF] Howell | Compliant mechanisms[END_REF][START_REF] Hao | Conceptual designs of multidegree of freedom compliant parallel manipulators composed of wire-beam based compliant mechanisms[END_REF]. This method is advantageous due to the extensive choice of existing rigid-body mechanisms and their modeling tools. Parallel or closed-loop rigid-body architectures gain an upper hand here as their intrinsic properties favour the characteristics of compliant mechanisms like compactness, symmetry to reduce parasitic motions, low stiffness along the desired degrees of freedom (DOF) and high stiffness in other directions. Moreover, compliant mechanisms usually work around a given position for small range of motions and hence they can be designed by considering existing parallel manipulators in parallel singular configurations. Parallel singularity can be an actuation singularity, constraint singularity or a compound singularity as explained in [START_REF] Amine | Classification of 3T1R parallel manipulators based on their wrench graph[END_REF][START_REF] Maraje | Operation modes comparison of a reconfigurable 3-PRS parallel manipulator based on kinematic performance[END_REF][START_REF] Nurahmi | Dimensionally homogeneous extended jacobian and condition number[END_REF]. Rubbert et al. used an actuation singularity to type-synthesize a compliant medical device [START_REF] Rubbert | Using singularities of parallel manipulators for enhancing the rigid-body replacement design method of compliant mechanisms[END_REF][START_REF] Rubbert | Design of a compensation mechanism for an active cardiac stabilizer based on an assembly of planar compliant mechanisms[END_REF]. Another interesting kind of parallel singularity for a parallel manipulator that does not depend on the choice of ac-
tuation is a constraint singularity [START_REF] Zlatanov | Constraint Singularities as C-Space Singularities[END_REF]. It divides the workspace of a parallel manipulator into different operation modes resulting in a reconfigurable mechanism. Algebraic geometry tools have proved to be efficient in performing global analysis of parallel manipulators and recognizing their operation modes leading to mobility-reconfiguration [START_REF] Husty | Algebraic methods in mechanism analysis and synthesis[END_REF][START_REF] Nurahmi | Reconfiguration analysis of a 4-RUU parallel manipulator[END_REF][START_REF] He | Design and Analysis of a New 7R Single-Loop Mechanism with 4R, 6R and 7R Operation Modes[END_REF]. Though there are abundant reconfigurable rigid-body mechanisms in the literature, the study of reconfigurable compliant mechanisms is limited. Hao studied the mobility and structure reconfiguration of compliant mechanisms [START_REF] Hao | Design and static testing of a compact distributed-compliance gripper based on flexure motion[END_REF] while Hao and Li introduced a position-space-based structure reconfiguration (PSR) approach to the reconfiguration of compliant mechanisms and to minimize parasitic motions [START_REF] Hao | Positionspace-based compliant mechanism reconfiguration approach and its application in the reduction of parasitic motion[END_REF][START_REF] Li | Compliant mechanism reconfiguration based on position space concept for reducing parasitic motion[END_REF]. In this paper, one of the simplest yet ubiquitous parallel mechanisms, a planar equilateral four-bar linkage, is considered at a constraint singularity configuration to synthesize a reconfigurable compliant four-bar mechanism. From our best understanding, this is the first piece of work that considers a constraint singularity to design a reconfigurable compliant mechanism with multiple operation modes, also called motion modes. This paper is organized as follows: Kinematic analysis of a rigid four-bar mechanism is performed to determine the constraint singularities and different operation modes. The rigid-body replacement design approach is followed to further synthesize a reconfigurable compliant four-bar mechanism and the motion type associated to each operation mode is verified through non-linear Finite Element Analysis (FEA).

FIGURE 1: AN EQUILATERAL FOUR BAR LINKAGE.
KINEMATIC ANALYSIS AND OPERATION MODES OF A FOUR BAR LINKAGE
A planar equilateral four-bar linkage with equal link lengths, l is depicted in Fig. 1. Link AD is fixed, AB and CD are the cranks and BC is the coupler. Origin of the fixed frame (Σ 0 ), O 0 coincides with the center of link AD while that of the moving frame (Σ 1 ) O 1 with the center of BC. The coordinate axes are oriented in such a way that the position vectors of the intersection points between the revolute joint axes and the x 0 y 0 plane can be homogeneously written as follows:
$\mathbf{r}^0_A = [1,\ -\tfrac{l}{2},\ 0]^T \qquad \mathbf{r}^0_D = [1,\ \tfrac{l}{2},\ 0]^T \quad (1)$

$\mathbf{r}^1_B = [1,\ -\tfrac{l}{2},\ 0]^T \qquad \mathbf{r}^1_C = [1,\ \tfrac{l}{2},\ 0]^T \quad (2)$

The displacement of the coupler with respect to the fixed frame can be rendered by (a, b, φ), where a and b represent the positional displacement of the coupler (nothing but the coordinates of point $O_1$ in $\Sigma_0$) and φ is the angular displacement about the $z_0$-axis (angle between $x_0$ and $x_1$). Thus, the corresponding set of displacements can be mapped onto a three-dimensional projective space, $P^3$, with homogeneous coordinates $x_i$ (i = 1, 2, 3, 4) [START_REF] Bottema | Theoretical Kinematics[END_REF]. This mapping (also known as the Blaschke mapping in the literature) is defined by the following matrix M:

$\mathbf{M} = \begin{bmatrix} 1 & 0 & 0 \\ \dfrac{2x_1x_3 + 2x_2x_4}{x_3^2 + x_4^2} & \dfrac{-x_3^2 + x_4^2}{x_3^2 + x_4^2} & \dfrac{-2x_3x_4}{x_3^2 + x_4^2} \\ \dfrac{-2x_1x_4 + 2x_2x_3}{x_3^2 + x_4^2} & \dfrac{2x_3x_4}{x_3^2 + x_4^2} & \dfrac{-x_3^2 + x_4^2}{x_3^2 + x_4^2} \end{bmatrix} \quad (3)$

The planar kinematic mapping can also be derived as a special case of Study's kinematic mapping by equating some of the Study parameters to zero [START_REF] Husty | Algebraic methods in mechanism analysis and synthesis[END_REF]. To prevent the rotational part of M from being undefined, the following equation is defined:

$H := x_3^2 + x_4^2 = 1 \quad (4)$

Without loss of generality, $x_i$ can be expressed in terms of (a, b, φ) as follows [START_REF] Bottema | Theoretical Kinematics[END_REF]:

$x_1 : x_2 : x_3 : x_4 = (au - bv) : (av + bu) : 2u : 2v, \quad \text{with } u = \sin(\tfrac{\varphi}{2}),\ v = \cos(\tfrac{\varphi}{2}) \quad (5)$
Constraint Equations
Points B and C are constrained to move along circles of centers A and D, respectively and with radius l each. The position vectors of points B and C are expressed algebraically in frame Σ 0 as follows :
$\mathbf{r}^0_B = \mathbf{M}\,\mathbf{r}^1_B; \qquad \mathbf{r}^0_C = \mathbf{M}\,\mathbf{r}^1_C \quad (6)$

FIGURE 2: CONSTRAINT MANIFOLDS OF THE FOUR BAR LINKAGE IN IMAGE SPACE.
Therefore, the algebraic constraint equations take the form :
$(\mathbf{r}^0_B - \mathbf{r}^0_A)^T(\mathbf{r}^0_B - \mathbf{r}^0_A) = l^2 \;\Longrightarrow\; g_1 := 4(x_1^2 + x_2^2) + 4lx_1x_3 - l^2x_4^2 = 0 \quad (7)$

$(\mathbf{r}^0_C - \mathbf{r}^0_D)^T(\mathbf{r}^0_C - \mathbf{r}^0_D) = l^2 \;\Longrightarrow\; g_2 := 4(x_1^2 + x_2^2) - 4lx_1x_3 - l^2x_4^2 = 0 \quad (8)$
Since g1 ± g2 = 0 gives the same variety, the final simplified constraint equations are :
$H_1 := g_1 - g_2 := 4lx_1x_3 = 0 \quad (9)$

$H_2 := g_1 + g_2 := 4(x_1^2 + x_2^2) - l^2x_4^2 = 0 \quad (10)$
Equation (9) degenerates into the two planes $x_1 = 0$ and $x_3 = 0$ in the image space and Eqn. (10) amounts to a cylinder with a circular cross-section in the image space. Assuming $x_4 \neq 0$, these constraint manifolds can be represented in the affine space $A^3$, as shown in Fig. 2.
Operation Modes
The affine variety of the polynomials $H_1$ and $H_2$ amounts to all the possible displacements attainable by the coupler. This variety is nothing but the intersection of these constraint surfaces in the image space [START_REF] Husty | Algebraic methods in mechanism analysis and synthesis[END_REF]. The intersections can be seen as two lines and a circle in Fig. 2. In fact, these curves can be algebraically represented by decomposing the constraint equations (9) and (10). A primary decomposition of the ideal $\mathcal{I} = \langle H_1, H_2 \rangle$ over the field $K(x_1, x_2, x_3, x_4)$ results in the following sub-ideals:

$\mathcal{I}_1 = \langle x_1,\ 2x_2 - lx_4 \rangle \quad (11)$

$\mathcal{I}_2 = \langle x_1,\ 2x_2 + lx_4 \rangle \quad (12)$

$\mathcal{I}_3 = \langle x_3,\ 4(x_1^2 + x_2^2) - l^2x_4^2 \rangle \quad (13)$
It shows that this four-bar linkage has three operation modes.
The Hilbert dimension of the ideals $\mathcal{I}_i$, including the polynomial H from Eqn. (4), is calculated to be one, indicating that the DOF of the four-bar mechanism is one in each of these three operation modes. $\mathcal{I}_1$ and $\mathcal{I}_2$ correspond to $x_1 = 0$, implying $u/v = b/a$ from Eqn. (5). Furthermore, for $\mathcal{I}_1$, eliminating u from $2x_2 - lx_4 = 0$ gives

$a^2 + b^2 - al = 0 \quad (14)$

which is the equation of a circle of center point B with Cartesian coordinates (l/2, 0) and radius l/2, as shown in Fig. 3.

FIGURE 3: OPERATION MODE 1: $a^2 + b^2 - al = 0$
Similarly, $\mathcal{I}_2$ yields

$a^2 + b^2 + al = 0 \quad (15)$
which is the equation of a circle of center point C with Cartesian coordinates (-l/2, 0) and radius l/2, as shown in Fig. 4. The third ideal $\mathcal{I}_3$ corresponds to $x_3 = 0$ and hence $u = 0$, implying $\varphi = 0$. The second equation of the same ideal results in

$a^2 + b^2 - l^2 = 0 \quad (16)$

FIGURE 4: OPERATION MODE 2: $a^2 + b^2 + al = 0$.
being the equation of a circle of center (0, 0) and radius l as shown in Fig. 5. As a result, I 1 and I 2 represent rotational modes while I 3 represents a translational mode.
FIGURE 5: OPERATION MODE 3: $a^2 + b^2 - l^2 = 0$.
Ultimately, in Fig. 2, the intersection lines L 1 and L 2 of the constraint manifolds portray the rotational motion modes while the circle C portrays the translational motion mode.
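The three operation modes can be double-checked symbolically with a short computer-algebra sketch: substituting the parametrization (5) into the constraint polynomials (9)-(10) and enforcing each sub-ideal recovers the circles (14)-(16). The script below is only a verification aid and is not part of the original derivation.

from sympy import symbols, simplify, factor

a, b, l, u, v = symbols('a b l u v', real=True)

# Parametrization (5) of the image-space coordinates
x1, x2, x3, x4 = a*u - b*v, a*v + b*u, 2*u, 2*v

H1 = 4*l*x1*x3                           # eq. (9)
H2 = 4*(x1**2 + x2**2) - l**2*x4**2      # eq. (10)

# Mode 3 (ideal I3): x3 = 0, i.e. u = 0 -- the coupler keeps its orientation
print(factor(H2.subs({u: 0, v: 1})))     # 4*(a**2 + b**2 - l**2), circle of eq. (16)

# Modes 1 and 2 (ideals I1, I2): x1 = 0 is enforced by u = b, v = a (projective scaling)
print(simplify(H1.subs({u: b, v: a})))   # 0
print(factor(H2.subs({u: b, v: a})))     # 4*(a**2+b**2-a*l)*(a**2+b**2+a*l), circles of eqs. (14)-(15)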
Constraint Singularities
These operation modes are separated by two similar constraint singularities shown in Fig. 6.
They can be algebraically represented by $x_1 = x_3 = 4x_2^2 - l^2x_4^2 = 0$. From Eqn. (5), these singularities occur when $b = 0$, $\varphi = 0$ and $a = \pm l$. These two configurations correspond to the two points $Q_1$ and $Q_2$ in the image space shown in Fig. 2. At a constraint singularity, any mechanism gains one or more degrees of freedom. Therefore, in the case of the four-bar linkage with equal link lengths, the DOF at a constraint singularity is equal to 2. In this configuration, points A, B, C and D are collinear and the corresponding motion type is a translational motion along the
normal to the line $L_{ABCD}$ passing through the four points A, B, C and D, combined with a rotation about an axis directed along $z_0$ and passing through $L_{ABCD}$. Eventually, it is noteworthy that two actuators are required in order to control the end-effector in those constraint singularities in order to manage the operation mode changing.

FIGURE 6: CONSTRAINT SINGULARITIES OF THE FOUR BAR MECHANISM: (a) a = l, b = 0, φ = 0; (b) a = -l, b = 0, φ = 0.
DESIGN AND ANALYSIS OF A COMPLIANT FOUR-BAR MECHANISM
In this section, two compliant four-bar mechanisms, compliant four-bar mechanism-1 and compliant four-bar mechanism-2, are proposed based on the operation modes and constraint singularities of the four-bar rigid-body mechanism shown in Fig. 6b. Moreover, the desired motion characteristics of the compliant four-bar mechanism-2 are verified by nonlinear FEA simulations.
Design of a compliant four-bar mechanism
Based on the constraint singularity configuration of the four-bar rigid-body mechanism represented in Fig. 6, a compliant four-bar mechanism can be designed by kinematically replacing the rigid rotational joints with compliant rotational joints [START_REF] Hao | Positionspace-based compliant mechanism reconfiguration approach and its application in the reduction of parasitic motion[END_REF]. Each of the compliant rotational joints can be any type of compliant rotational joint, such as a cross-spring rotational joint, a notch rotational joint or a cartwheel rotational joint [START_REF] Howell | Compliant mechanisms[END_REF]. As shown in Fig. 7, a compliant four-bar mechanism, termed the compliant four-bar mechanism-1, has been designed by replacing the four rigid rotational joints with three cross-spring rotational joints (RJ-0, RJ-1 and RJ-3) and one leaf-type isosceles-trapezoidal rotational joint that provides a remote rotation centre (RJ-2).
For small motion ranges, the compliant four-bar mechanism-1 has the same operation modes as the fourbar rigid-body mechanism shown in Fig. 6, via controlling the rotations of the Bar-1 and Bar-3. Moreover, both the compliant four-bar mechanism-1 and the four-bar rigid-body mechanism are plane motion mechanisms. Additionally, the three cross-spring rotational joints in the compliant four-bar mechanism-1 can be replaced by other types of rotational joints, which can form different compliant four-bar mechanisms. In this paper, cross-spring rotational joints are employed due to their large motion ranges while small rotation centre shifts. However, the isosceles-trapezoidal rotational joint in the compliant four-bar mechanism-1 performs larger rotation centre shifts compared with the cross-spring rotational joint. Therefore, the compliant four-bar mechanism-1 can be improved by replacing the leaf-type isosceles-trapezoidal rotational joint with a cross-spring rotational joint. Such an improved design can be seen in Fig. 8, which is termed as the compliant four-bar mechanism-2. Note that, in Fig. 8, the RJ-0 and RJ-2, are traditional cross-spring rotational joints, while both the RJ-1 and the RJ-3 are double cross-spring joints introduced in this paper. Each of the rotational joints, RJ-1 and RJ-3, consists of two traditional cross-spring rotational joints in series. We specify that the Bar-0 is fixed to the ground and the Bar-2 is the output motion stage, also named coupler. The main body including rigid bars and compliant joints of the proposed compliant four-bar mechanism-2 can be fabricated monolithically using a CNC milling machine. It can also be 3D printed, and a 3Dprinted prototype is shown in Fig. 8. The bars of the prototype have many small through holes, which can reduce material consumption and improve dynamic performance. Additionally, two cross-shaped parts are added to the actuated bars, which are used to actuate the mechanism by hands. The operation modes of the compliant four-bar mechanism-2 as output stage are analyzed in the following sections.
OPERATION MODES OF THE COMPLIANT FOUR-BAR MECHANISM-2
Like the four-bar rigid-body mechanism shown in Fig. 6b, the output motion stage (Bar-2) of the compliant four-bar mechanism-2 has multiple operation modes under two rotational actuations (controlled by input displacements α and β), as shown in Fig. 8. However, the compliant four-bar mechanism-2 has more operation modes than its rigid counterpart. In order to simplify the analysis, let α and β be non-negative. A coordinate system is defined in Fig. 8, which is located on Bar-2. Based on this assumption, the operation modes of the compliant four-bar mechanism-2 are listed below. These operation modes are also highlighted through the printed prototype in Fig. 9. The primary motions of the output motion stage (Bar-2) are the rotation in the XY plane and the translations along the X- and Y-axes, while the rotations in the XZ and YZ planes and the translational motion along the Z-axis are parasitic motions that are not of interest in this paper. Moreover, the rotation angle in the XY-plane and the Y-axis translational motion can be estimated analytically using Eqs. (17) and (18). However, the X-axis translational motion cannot be accurately estimated in such a simple way, because it is heavily affected by the shift of the rotation centres of the two cross-spring rotational joints [START_REF] Zhao | A novel compliant linear-motion mechanism based on parasitic motion compensation[END_REF]. The X-axis translational motion will be analytically studied in our future work, but is captured here by non-linear FEA.

$\theta_Z = \alpha - \beta \quad (17)$

$D_Y = \tfrac{1}{2}(L_B + L_R)(\sin\alpha + \sin\beta) \quad (18)$

where $\theta_Z$ is the rotation in the XY plane and $D_Y$ is the translational displacement along the Y-axis. $L_B$ and $L_R$ are geometrical dimensions of the reconfigurable mechanism at hand, as defined in Fig. 8.
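A small numerical sketch of Eqs. (17)-(18) is given below; the link dimensions are taken from the FEA example in the next section (L_B = 100 mm, L_R = 50 mm) and the input angles are arbitrary.

import numpy as np

def coupler_motion(alpha, beta, L_B=0.100, L_R=0.050):
    """Primary motions of Bar-2 from eqs. (17)-(18); lengths in metres, angles in radians."""
    theta_z = alpha - beta                                      # eq. (17)
    d_y = 0.5 * (L_B + L_R) * (np.sin(alpha) + np.sin(beta))    # eq. (18)
    return theta_z, d_y

# Equal input rotations cancel the coupler rotation (theta_z = 0), leaving a pure translation
print(coupler_motion(np.deg2rad(2.0), np.deg2rad(2.0)))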
SIMULATIONS OF THE OPERATION MODES
In order to verify the operation modes of the 4R compliant mechanism-2, nonlinear FEA software is employed to simulate the motions of the compliant four-bar mechanism-2. For the FEA simulations, let $L_B$ be 100 mm, $L_R$ and $L_H$ be 50 mm, the beam thickness be 1 mm, the beam width be 23 mm, the Poisson's ratio be 0.33, and the Young's modulus be 6.9 GPa. The commercial software COMSOL MULTIPHYSICS is selected for the nonlinear FEA simulations, using the 10-node tetrahedral element and a finer meshing technology (minimum element size 0.2 mm, curvature factor 0.4, and resolution of narrow regions 0.7). Note that the translational displacements of the Bar-2 along the X and Y axes are measured at the centre point of the top surface of the Bar-2 (termed the interest point), as shown in Fig. 8.
A PROSPECTIVE APPLICATION AS A COMPLIANT GRIPPER
The reconfigurable compliant four-bar mechanism-1 shown in Fig. 7 is used to design a reconfigurable gripper as shown in Fig. 14. It can exhibit four grasping modes based on the actuation of the linear actuator 1 (±α) or 2 (±β), as displayed in Fig. 15. The first three grasping modes are angular, where the jaws of the gripper rotate about an instantaneous centre of rotation which is different for each grasping mode. The gripper displays an angular grasping mode when α ≠ 0, β = 0 as shown in Fig. 15a, when α = 0, β ≠ 0 as shown in Fig. 15b, or when α < 0, β < 0 as shown in the right part of Fig. 15c. The parallel grasping mode, in which the jaws are parallel to one another, is achieved when α > 0, β < 0 as shown in the left part of Fig. 15c. Thus, the reconfigurable compliant gripper at hand unveils an ability to grasp a plethora of shapes unlike other compliant grippers in the literature that exhibit only one of these modes of grasping [START_REF] Hao | Design and static testing of a compact distributed-compliance gripper based on flexure motion[END_REF][START_REF] Hao | Conceptual design and modelling of a self-adaptive compliant parallel gripper for highprecision manipulation[END_REF]. Potential applications include micromanipulation and grasping lightweight and vulnerable materials like glass, resins, porous composites, etc. in difficult and dangerous environments. In addition, it can be used for medical applications to grasp and manipulate living tissues during surgical operations or as a gripper mounted on a parallel manipulator dedicated to fast and accurate pick-and-place operations. Figure 16 shows the prototype of the reconfigurable compliant gripper.
CONCLUSIONS AND FUTURE WORK
A novel idea of designing mobility-reconfigurable compliant mechanisms inspired by the constraint singularities of rigid-body mechanisms was presented in this paper. A rhombus planar rigid four-bar mechanism was analyzed to identify its three operation modes and the two constraint singularities separating those modes. The rigid joints were replaced by compliant joints to obtain two designs of a reconfigurable compliant four-bar mechanism. The second design was found to be more accurate and less parasitic than the first one, which was verified by its nonlinear FEA simulations in different motion modes. Moreover, the compliant four-bar mechanism was shown to have four operation modes based on the particular actuation strategy, unlike its rigid counterpart. A preliminary design of a compliant gripper has been proposed based on the reconfigurable compliant four-bar mechanism introduced and studied in this paper. In the future, we will focus on the analytical kinetostatic modelling of the reconfigurable compliant mechanism at hand while exploring appropriate applications. We also intend to design mobility-reconfigurable compliant mechanisms based on the constraint singularities of spatial rigid-body mechanisms.
FIGURE 7: 4R compliant mechanisms: (a) 4R compliant mechanism-1, and (b) 4R compliant mechanism-2

FIGURE 8:

1. Operation mode I: Rotation in the XY-plane about the Axis-L, when α > 0 and β = 0, as shown in Fig. 9a,
2. Operation mode II: Rotation in the XY-plane about the Axis-R, when α = 0 and β > 0, as shown in Fig. 9b,
3. Operation mode III: Rotation in the XY-plane about other axes except the Axis-L and Axis-R, when α ≠ β > 0, as shown in Fig. 9c, and
4. Operation mode IV: Pure translations in the XY-plane along the X and Y axes.

FIGURE 9: OPERATION MODES OF THE COMPLIANT FOUR-BAR MECHANISM-2
Results of the simulations are plotted in Figs. 10 to 13, and the following conclusions are drawn:
1. The maximum difference between the FEA results and the analytical results in terms of the Y-axis translation of the interest point (the centre of the top surface of the Bar-2) is tiny, less than 0.5%, as shown in Figs. 10a, 11a, 12a and 13a.
2. The FEA results of the rotation in the XY-plane match the analytical results very well. The difference is less than 0.8 × 10^-3 rad (0.5% of the maximum rotation angle), as shown in Figs. 10b, 11b and 12b.
FIGURE 10: FEA RESULTS FOR OPERATION MODE I

FIGURE 11: FEA RESULTS FOR OPERATION MODE II

FIGURE 12: FEA RESULTS FOR OPERATION MODE III

FIGURE 13: FEA RESULTS FOR OPERATION MODE IV
FIGURE 14: A novel reconfigurable compliant gripper
(a) Angular grasping mode 1: α ≠ 0, β = 0; (b) Angular grasping mode 2: α = 0, β ≠ 0; (c) Left: parallel grasping mode (α > 0, β < 0); Right: angular grasping mode 3 (α < 0, β < 0)

FIGURE 15: FOUR GRASPING MODES OF THE COMPLIANT GRIPPER

FIGURE 16: Prototype of the reconfigurable compliant gripper
(a) Translations along the X and Y axes (b) Rotation about the Axis-L (c) parasitic motions (rotations about the X-and Y-axes and translation along the Z-axis)
ACKNOWLEDGMENT
The authors would like to express their gratitude for the Ulysses 2016 grant between Ireland and France. Mr. Tim Powder and Mr. Mike O'Shea at University College Cork are appreciated.
01757798 | en | [ "spi.meca.geme", "spi.auto" ] | 2024/03/05 22:32:10 | 2017 | https://hal.science/hal-01757798/file/CableCon2017_Lessanibahri_Gouttefarde_Caro_Cardou_vf2.pdf

S Lessanibahri
email: saman.lessanibahri@irccyn.ec-nantes.fr
M Gouttefarde
email: marc.gouttefarde@lirmm.fr
S Caro
P Cardou
email: pcardou@gmc.ulaval.ca
Twist Feasibility Analysis of Cable-Driven Parallel Robots
Although several papers addressed the wrench capabilities of cable-driven parallel robots (CDPRs), few have tackled the dual question of their twist capabilities. In this paper, these twist capabilities are evaluated by means of the more specific concept of twist feasibility, which was defined by Gagliardini et al. in a previous work. A CDPR posture is called twist-feasible if all the twists (point-velocity and angular-velocity combinations), within a given set, can be produced at the CDPR mobile platform, within given actuator speed limits. Two problems are solved in this paper: (1) determining the set of required cable winding speeds at the CDPR winches being given a prescribed set of required mobile platform twists; and (2) determining the set of available twists at the CDPR mobile platform from the available cable winding speeds at its winches. The solutions to both problems can be used to determine the twist feasibility of n-degree-of-freedom (DOF) CDPRs driven by m ≥ n cables. An example is presented, where the twist-feasible workspace of a simple CDPR with n = 2 DOF and driven by m = 3 cables is computed to illustrate the proposed method.
Introduction
A cable-driven parallel robot (CDPR) consists of a base frame, a mobile platform, and a set of cables connecting in parallel the mobile platform to the base frame.
The cable lengths or tensions can be adjusted by means of winches and a number of pulleys may be used to route the cables from the winches to the mobile platform. Among other advantages, CDPRs with very large workspaces, e.g. [START_REF] Gouttefarde | A versatile tension distribution algorithm for n-DOF parallel robots driven by n+2 cables[END_REF][START_REF] Lambert | Implementation of an Aerostat Positioning System With Cable Control[END_REF], heavy payloads capabilities [START_REF] Albus | The NIST Robocrane[END_REF], or reconfiguration capabilities, e.g. [START_REF] Gagliardini | Discrete reconfiguration planning for cable-driven parallel robots[END_REF][START_REF] Rosati | On the design of adaptive cable-driven systems[END_REF] can be designed. Moreover, the moving parts of CDPRs being relatively light weight, fast motions of the mobile platform can be obtained, e.g. [START_REF] Kawamura | High-speed manipulation by using parallel wire-driven robots[END_REF].
The cables of a CDPR can only pull and not push on the mobile platform and their tension shall not become larger than some maximum admissible value. Hence, for a given mobile platform pose, the determination of the feasible wrenches at the platform is a fundamental issue, which has been the subject of several previous works, e.g. [START_REF] Bouchard | On the ability of a cable-driven robot to generate a prescribed set of wrenches[END_REF][START_REF] Hassan | Analysis of Bounded Cable Tensions in Cable-Actuated Parallel Manipulators[END_REF]. A relevant issue is then to determine the set of wrench feasible poses, i.e., the so-called Wrench-Feasible Workspace (WFW) [START_REF] Bosscher | Wrench-feasible workspace generation for cable-driven robots[END_REF][START_REF] Riechel | Force-feasible workspace analysis for underconstrained point-mass cable robots[END_REF], since the shape and size of the latter highly depends on the cable tension bounds and on the CDPR geometry [START_REF] Verhoeven | Analysis of the Workspace of Tendon-Based Stewart-Platforms[END_REF]. Another issue which may strongly restrict the usable workspace of a CDPR or, divide it into several disjoint parts, are cable interferences. Therefore, software tools allowing the determination of the interference-free workspace and of the WFW have been proposed, e.g. [START_REF] Ruiz | Arachnis : Analysis of robots actuated by cables with handy and neat interface software[END_REF][START_REF] Perreault | Geometric determination of the interferencefree constant-orientation workspace of parallel cable-driven mechanisms[END_REF],. Besides, recently, a study on acceleration capabilities was proposed in [START_REF] Eden | Available acceleration set for the study of motion capabilities for cable-driven robots[END_REF][START_REF] Gagliardini | Determination of a dynamic feasible workspace for cable-driven parallel robots[END_REF].
As noted in [START_REF] Gagliardini | Dimensioning of cable-driven parallel robot actuators, gearboxes and winches according to the twist feasible workspace[END_REF] and as well known, in addition to wrench feasibility, the design of the winches of a CDPR also requires the consideration of cable and mobile platform velocities since the selection of the winch characteristics (motors, gearboxes, and drums) has to deal with a trade-off between torque and speed. Twist feasibility is then the study of the relationship between the feasible mobile platform twists (linear and angular velocities) and the admissible cable coiling/uncoiling speeds. In the following, the cable coiling/uncoiling speeds are loosely referred to as cable velocities. The main purpose of this paper is to clarify the analysis of twist feasibility and of the related twist-feasible workspace proposed in [START_REF] Gagliardini | Dimensioning of cable-driven parallel robot actuators, gearboxes and winches according to the twist feasible workspace[END_REF]. Contrary to [START_REF] Gagliardini | Dimensioning of cable-driven parallel robot actuators, gearboxes and winches according to the twist feasible workspace[END_REF], the twist feasibility analysis proposed here is based on the usual CDPR differential kinematics where the Jacobian matrix maps the mobile platform twist into the cable velocities. This approach is most important for redundantly actuated CDPRs, whose Jacobian matrix is rectangular.
A number of concepts in this paper are known, notably from manipulability ellipsoids of serial robots, e.g. [START_REF] Yoshikawa | Foundations of Robotics[END_REF], and from studies on the velocity performance of parallel robots, e.g. [START_REF] Krut | Velocity performance indices for parallel mechanisms with actuation redundancy[END_REF]. A review of these works is however out of the scope of the present paper whose contribution boils down to a synthetic twist feasibility analysis of n-degrees-of-freedom (DOF) CDPRs driven by m cables, with m ≥ n. The CDPR can be fully constrained or not, and the cable mass and elasticity are neglected.
The paper is organized as follows. The usual CDPR wrench and Jacobian matrices are defined in Section 2. Section 3 presents the twist feasibility analysis, which consists in solving two problems. The first one is the determination of the set of cable velocities corresponding to a given set of required mobile platform twists (Section 3.1). The second problem is the opposite since it is defined as the calculation of the set of mobile platform twists corresponding to a given set of cable velocities (Section 3.2). The twist and cable velocity sets considered in this paper are convex Fig. 1: Geometric description of a fully constrained CDPR polytopes. In Section 4, a 2-DOF point-mass CDPR driven by 3 cables is considered to illustrate the twist feasibility analysis. Section 5 concludes the paper.
Wrench and Jacobian Matrices
In this section, the well-known wrench matrix and Jacobian matrix of n-DOF mcable CDPRs are defined. The wrench matrix maps the cable tensions into the wrench applied by the cables on the CDPR mobile platform. The Jacobian matrix relates the time derivatives of the cable lengths to the twist of the mobile platform. These two matrices are essentially the same since one is minus the transpose of the other.
Some notations and definitions are first introduced. As illustrated in Fig. 1, let us consider a fixed reference frame, F b , of origin O b and axes x b , y b and z b . The coordinate vectors b a i , i = 1, . . . , m define the positions of the exit points, A i , i = 1, . . . , m, with respect to frame F b . A i is the point where the cable exits the base frame and extends toward the mobile platform. In this paper, the exit points A i are assumed to be fixed, i.e., the motion of the output pulleys is neglected. A frame F p , of origin O p and axes x p , y p and z p , is attached to the mobile platform. The vectors p b i , i = 1, . . . , m are the position vectors of the points B i in F p . The cables are attached to the mobile platform at points B i .
The vector b l i from B i to A i is given by
b l i = b a i -p -R p b i , i = 1, . . . , m (1)
where R is the rotation matrix defining the orientation of the mobile platform, i.e., the orientation of F p in F b , and p is the position vector of F p in F b . The length of the straight line segment
A i B i is l i = || b l i || 2 where || • || 2 is the Euclidean norm.
Neglecting the cable mass, l i corresponds to the length of the cable segment from point A i to point B i . Moreover, neglecting the cable elasticity, l i is the "active" length of the cable that should be unwound from the winch drum. The unit vectors along the cable segment A i B i is given by
b d i = b l i /l i , i = 1, . . . , m (2)
Since the cable mass is neglected in this paper, the force applied by the cable on the platform is equal to τ i b d i , τ i being the cable tension. The static equilibrium of the CDPR platform can then be written [START_REF] Hiller | Design, analysis and realization of tendon-based parallel manipulators[END_REF][START_REF] Roberts | On the Inverse Kinematics, Statics, and Fault Tolerance of Cable-Suspended Robots[END_REF]
Wτ τ τ + w e = 0 ( 3
)
where w e is the external wrench acting on the platform, τ τ τ = [τ 1 , . . . , τ m ] T is the vector of cable tensions, and W is the wrench matrix. The latter is an n × m matrix defined as
W = b d 1 b d 2 . . . b d m R p b 1 × b d 1 R p b 2 × b d 2 . . . R p b m × b d m (4)
The differential kinematics of the CDPR establishes the relationship between the twist t of the mobile platform and the time derivatives of the cable lengths l
Jt = l ( 5
)
where J is the m × n Jacobian matrix and l = l1 , . . . , lm T . The twist t = [ ṗ, ω ω ω] T is composed of the velocity ṗ of the origin of frame F p with respect to F b and of the angular velocity ω ω ω of the mobile platform with respect to F b . Moreover, the well-known kineto-statics duality leads to
J = -W T (6)
In the remainder of this paper, l is loosely referred to as cable velocities. The wrench and Jacobian matrices depend on the geometric parameters a i and b i of the CDPR and on the mobile platform pose, namely on R and p.
Twist Feasibility Analysis
This section contains the contribution of the paper, namely, a twist feasibility analysis which consists in solving the following two problems.
1. For a given pose of the mobile platform of a CDPR and being given a set [t] r of required mobile platform twists, determine the corresponding set of cable velocities l. The set of cable velocities to be determined is called the Required Cable Velocity Set (RCVS) and is denoted l r . The set [t] r is called the Required Twist Set (RTS). 2. For a given pose of the mobile platform of a CDPR and being given a set l a of available (admissible) cable velocities, determine the corresponding set of mobile platform twists t. The former set, l a , is called the Available Cable Velocity Set (ACVS) while the latter is denoted [t] a and called the Available Twist Set (ATS).
In this paper, the discussion is limited to the cases where both the RTS [t] r and the ACVS l a are convex polytopes. Solving the first problem provides the RCVS from which the maximum values of the cable velocities required to produce the given RTS [t] r can be directly deduced. If the winch characteristics are to be determined, the RCVS allows to determine the required speeds of the CDPR winches. If the winch characteristics are already known, the RCVS allows to test whether or not the given RTS is feasible.
Solving the second problem provides the ATS which is the set of twists that can be produced at the mobile platform. It is thus useful either to determine the velocity capabilities of a CDPR or to check whether or not a given RTS is feasible.
Note that the feasibility of a given RTS can be tested either in the cable velocity space, by solving the first problem, or in the space of platform twists, by solving the second problem. Besides, note also that the twist feasibility analysis described above does not account for the dynamics of the CDPR.
Problem 1: Required Cable Velocity Set (RCVS)
The relationship between the mobile platform twist t and the cable velocities l is the differential kinematics in [START_REF] Eden | Available acceleration set for the study of motion capabilities for cable-driven robots[END_REF]. According to this equation, the RCVS l r is defined as the image of the convex polytope [t] r under the linear map J. Consequently, l r is also a convex polytope [START_REF] Ziegler | Lectures on Polytopes[END_REF].
Moreover, if [t] r is a box, the RCVS l r is a particular type of polytope called a zonotope. Such a transformation of a box into a zonotope has previously been studied in CDPR wrench feasibility analysis [START_REF] Bouchard | On the ability of a cable-driven robot to generate a prescribed set of wrenches[END_REF][START_REF] Gallina | 3-dof wire driven planar haptic interface[END_REF][START_REF] Gouttefarde | Characterization of Parallel Manipulator Available Wrench Set Facets[END_REF]. Indeed, a box of admissible cable tensions is mapped by the wrench matrix W into a zonotope in the space of platform wrenches. However, a difference lies in the dimensions of the matrices J and W, J being of dimensions m × n while W is an n × m matrix, where n ≤ m. When n < m, on the one hand, W maps the m-dimensional box of admissible cable tensions into the n-dimensional space of platform wrenches. On the other hand, J maps n-dimensional twists into its range space which is a linear subspace of the m-dimensional space of cable velocities l. Hence, when J is not singular, the n-dimensional box [t] r is mapped into the zonotope l r which lies into the ndimensional range space of J, as illustrated in Fig. 3 . When J is singular and has rank r, r < n, the n-dimensional box [t] r is mapped into a zonotope of dimension r.
When an ACVS l a is given, a pose of the mobile platform of a CDPR is twist feasible if
l r ⊆ l a (7)
Since l a is a convex polytope, ( 7) is verified whenever all the vertices of l r are included in l a . Moreover, it is not difficult to prove that l r is the convex hull of the images under J of the vertices of [t] r . Hence, a simple method to verify if a CDPR pose is twist feasible consists in verifying whether the images of the vertices of [t] r are all included into l a .
Problem 2: Available Twist Set (ATS)
The problem is to determine the ATS [t] a corresponding to a given ACVS l a . In the most general case considered in this paper, l a is a convex polytope. By the Minkowski-Weyl's Theorem, a polytope can be represented as the solution set of a finite set of linear inequalities, the so-called (halfspace) H-representation of the polytope [START_REF] Fukuda | Frequently asked questions in polyhedral computation[END_REF][START_REF] Ziegler | Lectures on Polytopes[END_REF], i.e.
l a = { l | C l ≤ d } (8)
where matrix C and vector d are assumed to be known. According to (5), the ATS is defined as
[t] a = { t | Jt ∈ l a } (9)
which, using (8), implies that
[t] a = { t | CJt ≤ d } (10)
The latter equation provides an H-representation of the ATS [t] a . In practice, when the characteristics of the winches of a CDPR are known, the motor maximum speeds limit the set of possible cable velocities as follows li,min ≤ li ≤ li,max [START_REF] Gouttefarde | Characterization of Parallel Manipulator Available Wrench Set Facets[END_REF] where li,min and li,max are the minimum and maximum cable velocities. Note that, usually, li,min = -li,max , l1,min = l2,min = . . . = lm,min , and l1,max = l2,max = . . . = lm,max . In other words, C and d in (8) are defined as
C = 1 -1
and d = l1,max , . . . , lm,max , -l1,min , . . . , -lm,min
T ( 12
)
where 1 is the m × m identity matrix. Eq. ( 10) can then be written as follows
[t] a = { t | lmin ≤ Jt ≤ lmax } (13)
where lmin = l1,min , . . . , lm,min T and lmax = l1,max , . . . , lm,max T .
When a RTS [t] r is given, a pose of the mobile platform of a CDPR is twist feasible if
[t] r ⊆ [t] a (14)
In this paper, [t] r is assumed to be a convex polytope. Hence, ( 14) is verified whenever all the vertices of [t] r are included in [t] a . With the H-representation of [t] a in (10) (or in ( 13)), testing if a pose is twist feasible amounts to verifying if all the vertices of [t] r satisfy the inequality system in (10) (or in ( 13)). Testing twist feasibility thereby becomes a simple task as soon as the vertices of [t] r are known. Finally, let the twist feasible workspace (TFW) of a CDPR be the set of twist feasible poses of its mobile platform. It is worth noting that the boundaries of the TFW are directly available in closed form from [START_REF] Gallina | 3-dof wire driven planar haptic interface[END_REF] or [START_REF] Hassan | Analysis of Bounded Cable Tensions in Cable-Actuated Parallel Manipulators[END_REF]. If the vertices of the (convex) RTS are denoted t j , j = 1, . . . , k, and the rows of the Jacobian matrix are -w T i , according to ( 13), the TFW is defined by li,min ≤ -w T i t j and -w T i t j ≤ li,max , for all possible combinations of i and j. Since w i contains the only variables in these inequalities that depend on the mobile platform pose, and because the closedform expression of w i as a function of the pose is known, the expressions of the boundaries of the TFW are directly obtained.
Case Study
This section deals with the twist feasibility analysis of the two-DOF point-mass planar CDPR driven by three cables shown in Fig. 2. The robot is 3.5 m long and 2.5 m high. The three exit points of the robot are named A 1 , A 2 are A 3 , respectively. The point-mass is denoted P. b d 1 , b d 2 and b d 3 are the unit vectors, expressed in frame F b , of the vectors pointing from point-mass P to cable exit points A 1 , A 2 are A 3 , respectively. The 3 × 2 Jacobian matrix J of this planar CDPR takes the form:
J = - b d T 1 b d T 2 b d T 3 ( 15
)
Figure 3 is obtained by solving the Problem 1 formulated in Sec. 3. For the robot configuration depicted in Fig. 3a and the given RTS of the point-mass P represented in Fig. 3b, the RCVS for the three cables of the planar CDPR are illustrated in Figs. 3c to 3f. Note that the RTS is defined as:
-1 m.s -1 ≤ ẋP ≤ 1 m.s -1 (16) -1 m.s -1 ≤ ẏP ≤ 1 m.s -1 (17)
where [ ẋP , ẏP ] T is the velocity of P in the fixed reference frame F b . Figure 4 depicts the isocontours of the Maximum Required Cable Velocity (MRCV) for each cable through the Cartesian space and for the RTS shown in Fig. 3b. Those results are obtained by solving Problem 1 for all positions of point P. It is apparent
y b O b x b A 1 A 2 A 3 P d 1 d 2 d 3 F b p a 3 3.5 m 2.5 m
Fig. 2: A two-DOF point-mass planar cable-driven parallel robot driven by three cables that P RTS is satisfied through the Cartesian space as long as the maximum velocity of each cable is higher than √ 2 m.s -1 , namely, l1,max = l2,max = l3,max = √ 2 m.s -1 with li,min = -li,max , i = 1, 2, 3.
For the Available Cable Velocity Set (ACVS) defined by inequalities [START_REF] Gouttefarde | Characterization of Parallel Manipulator Available Wrench Set Facets[END_REF] with
li,max = 1.3 m.s -1 , i = 1, 2, 3 (18)
Fig. 5 is obtained by solving the Problem 2 formulated in Sec. 3. For the two robot configurations illustrated in Fig. 5a and5c, the Available Twist Set (ATS) associated to the foregoing ACVS is determined from Eq. ( 13). It is noteworthy that the ATS in each configuation in delimited by three pairs of lines normal to three cables, respectively. It turns out that the first robot configuration is twist feasible for the RTS defined by Eqs. ( 16) and ( 17) because the latter is included into the ATS as shown Fig. 5b. Conversely, the second robot configuration is not twist feasible as the RTS is partially outside the ATS as shown Fig. 5d.
Finally, Fig. 6 shows the TFW of the planar CDPR for four maximum cable velocity limits and for the RTS shown in Fig. 3b. It is apparent the all robot poses are twist feasible as soon as the cable velocity limits of the three cables are higher than √ 2 m.s -1 .
x [m]
Conclusion
In summary, this paper presents two methods of determining the twist-feasibility of a CDPR. The first method uses a set of required mobile platform twists to compute the corresponding required cable velocities, the latter corresponding to cable winding speeds at the winches. The second method takes the opposite route, i.e., it uses the available cable velocities to compute the corresponding set of available mobile platform twists. The second method can be applied to compute the twist-feasible workspace, i.e., to determine the set of mobile platform poses where a prescribed polyhedral required twist set is contained within the available twist set. This method can thus be used to analyze the CDPR speed capabilities over its workspace, which should prove useful in high-speed CDPR applications.
The proposed method can be seen as a dual to the one used to compute the wrench-feasible workspace of a CDPR, just as the velocity equations may be seen as dual to static equations. From a mathematical standpoint, however, the problem is much simpler in the case of the twist-feasible workspace, as the feasibility conditions can be obtained explicitly. Nevertheless, the authors believe that the present paper complements nicely the previous works on wrench feasibility. Finally, we should point out that the proposed method does not deal with the issue of guaranteeing the magnitudes of the mobile platform point-velocity or angular velocity. In such a case, the required twist set becomes a ball or an ellipsoid, and thus is no longer polyhedral. This ellipsoid could be approximated by a polytope in order to apply the method proposed in this paper. However, since the accuracy of the approximation would come at the expense of the number of conditions to be numerically verified, part of our future work will be dedicated to the problem of determining the twist-feasibility of CDPRs for ellipsoidal required twist sets. Fig. 6: TFW of the planar CDPR for four maximum cable velocity limits and for the RTS shown in Fig. 3b Manufacturing Technologies for Composite, Metallic and Hybrid Structures) and of the RFI AT-LANSTIC 2020 CREATOR project.
Fig. 3 :
3 Fig. 3: Required Twist Set (RTS) of the point-mass P and corresponding Required Cable Velocity Sets for the three cables of the CDPR in a given robot configuration
Fig. 4 :
4 Fig.4: Maximum Required Cable Velocity (MRCV) of each cable through the Cartesian space for the RTS shown in Fig.3b
Fig. 5 :
5 Fig. 5: A feasible twist pose and an infeasible twist pose of the CDPR
1 x 1 x
11 li,max = 1.12 m.s -li,max = 1.32 m.s -li,max = 1.414 m.s -1
Acknowledgements
The financial support of the ANR under grant ANR-15-CE10-0006-01 (Dex-terWide project) is greatly acknowledged. This research work was also part of the CAROCA project managed by IRT Jules Verne (French Institute in Research and Technology in Advanced |
01757800 | en | [
"spi.meca.geme",
"spi.auto"
] | 2024/03/05 22:32:10 | 2017 | https://hal.science/hal-01757800/file/CableCon2017_Rasheed_Long_Marquez_Caro_vf.pdf | Tahir Rasheed
email: tahir.rasheed@ls2n.fr
Philip Long
email: philip.long@northeastern.edu
David Marquez-Gamez
email: david.marquez-gamez@irt-jules-verne.fr
Stéphane Caro
email: stephane.caro@ls2n.fr
Tension Distribution Algorithm for Planar Mobile Cable-Driven Parallel Robots
Keywords: Cable-Driven Parallel Robot, Mobile Robot, Reconfigurability, Tension Distribution Algorithm, Equilibrium
published or not. The documents may come L'archive ouverte pluridisciplinaire
Introduction
A Cable-Driven Parallel Robot (CDPR) is a type of parallel robot whose movingplatform is connected to the base with cables. The lightweight properties of the CDPR makes them suitable for multiple applications such as constructions [START_REF] Albus | The nist spider, a robot crane[END_REF], [START_REF] Pott | Large-scale assembly of solar power plants with parallel cable robots[END_REF], industrial operations [START_REF] Gagliardini | Discrete reconfiguration planning for cable-driven parallel robots[END_REF], rehabilitation [START_REF] Rosati | Performance of cable suspended robots for upper limb rehabilitation[END_REF] and haptic devices [START_REF] Gallina | 3-dof wire driven planar haptic interface[END_REF].
A general CDPR has a fixed cable layout, i.e. fixed exit points and cable configuration. This fixed geometric structure may limit the workspace size of the manipulator due to cable collisions and some extrernal wrenches that cannot be accepted due to the robot configuration. As there can be several configurations for the robot to perform the prescribed task, an optimized cable layout is required for each task considering an appropriate criterion. Cable robots with movable exit and/or anchor points are known as Reconfigurable Cable-Driven Parallel Robots (RCDPRs). By appropriately modifying the geometric architecture, the robot performance can be improved e.g. lower cable tensions, larger workspace and higher stiffness. The recent work on RCDPR [START_REF] Gagliardini | A reconfiguration strategy for reconfigurable cable-driven parallel robots[END_REF][START_REF] Gagliardini | Discrete reconfiguration planning for cable-driven parallel robots[END_REF][START_REF] Nguyen | On the analysis of largedimension reconfigurable suspended cable-driven parallel robots[END_REF][START_REF] Rosati | On the design of adaptive cable-driven systems[END_REF][START_REF] Zhou | Analysis framework for cooperating mobile cable robots[END_REF] proposed different design strategies and algorithms to compute optimized cable layout for the required task, while minimizing appropriate criteria such as the robot energy consumption, the robot workspace size and the robot stiffness. However, for most existing RCDPRs, the reconfigurability is performed either discrete and manually or continuously, but with bulky reconfigurable systems.
This paper deals with the concept of Mobile Cable-Driven Parallel Robots (MCDPRs). The idea for introducing MCDPRs is to overcome the manual and discrete reconfigurability of RCDPRs such that an autonomous reconfiguration can be achieved. A MCDPR is composed of a classical CDPR with m cables and a n degreeof-freedom (DoF) moving-platform mounted on p mobile bases. Mobile bases are four-wheeled planar robots with two-DoF translational motions and one-DoF rotational motion. A concept idea of a MCDPR is illustrated in Fig. 1 with m = 8, n = 6 and p = 4. The goal of such system is to provide a low cost and versatile robotic solution for logistics using a combination of mobile bases and CDPR. This system addresses an industrial need for fast pick and place operations while being easy to install, keeping existing infrastructures and covering large areas. The exit points for the cable robot is associated with the position of its respective mobile bases. Each mobile base can navigate in the environment thus allowing the system to alter the geometry of the CDPR. Contrary to classical CDPR, equilibrium for both the moving-platform and the mobile bases should be considered while analyzing the behaviour of the MCDPR.
A Planar Mobile Cable-Driven Parallel Robot with four cables (m = 4), a point mass (n = 2) and two mobile bases (p = 2), shown in Fig. 2, is considered throughout this paper as an illustrative example. This paper is organized as follows. Section 2 presents the static equilibrium conditions for mobile bases using the free body diagram method. Section 3 introduces a modified real time Tension Distribution Algorithm (TDA), which takes into account the dynamic equilibrium of the moving-platform and the static equilibrium of the mobile bases. Section 4 presents the comparison between the existing and modified TDA on the equilibrium of the
Static Equilibrium of Mobile Bases
This section aims at analyzing the static equilibrium of the mobile bases of MCD-PRs. As both the mobile bases should be in equilibrium during the motion of the end-effector, we need to compute the reaction forces generated between the ground and the wheels of the mobile bases. Figure 2 illustrates the free body diagram for the jth mobile base. u i j denotes the unit vector of the ith cable attached to the jth mobile base, i, j = 1, 2. u i j is defined from the point mass P of the MCDPR to the exit point A i j . Using classical equilibrium conditions for the jth mobile base p j , we can write:
∑ f = 0 ⇒ m j g + f 1 j + f 2 j + f r1 j + f r2 j = 0 (1)
All the vectors in Eq. ( 1) are associated with the superscript x and y for respective horizontal and vertical axes. Gravity vector is denoted as g = [0g] T where g = 9.8 m.s -2 , f 1 j = [ f x 1 j f y 1 j ] T and f 2 j = [ f x 2 j f y 2 j ] T are the reaction forces due to cable tensions onto the mobile base p j , C 1 j and C 2 j are the front and rear wheels con- tact points having ground reaction forces f r1 j = [ f x r1 j f y r1 j ] T and f r2 j = [ f x r2 j f y r2 j ] T , respectively. In this paper, wheels are assumed to be simple support points and the friction between those points and the ground is supposed to be high enough to prevent the mobile bases from sliding. The moment at a point O about z-axis for the mobile base to be in equilibrium is expressed as:
M z O = 0 ⇒ g T j E T m j g + a T 1 j E T f 1 j + a T 2 j E T f 2 j + c T 1 j E T f r1 j + c T 2 j E T f r2 j = 0 (2)
with
E = 0 -1 1 0 (3) a 1 j = [a x 1 j a y 1 j ] T and a 2 j = [a x 2 j a y 2 j ] T denote the Cartesian coordinate vectors of the exit points A 1 j and A 2 j , c 1 j = [c x 1 j c y 1 j ] T and c 2 j = [c x 2 j c y 2
j ] T denote the Cartesian coordinate vectors of the contact points C 1 j and C 2 j . g j = [g x j g y j ] T is the Cartesian coordinate vector for the center of gravity G j of the mobile base p j . The previous mentioned vector are all expressed in the base frame F B . Solving simultaneously Eqs. ( 1) and ( 2), the vertical components of the ground reaction forces take the form:
f y r1 j = m j g(c x 2 j -g x j ) + f y 1 j (a x 1 j -c x 2 j ) + f y 2 j (a x 2 j -c x 2 j ) -f x 1 j a y 1 j -f x 2 j a y 2 j c x 2 j -c x 1 j (4)
f y r2 j = m j g -f y 1 j -f y 2 j -f y r1 j (5)
Equations ( 4) and ( 5) illustrate the effect of increasing the external forces (cable tensions) onto the mobile base. Indeed, the external forces exerted onto the mobile base may push the latter towards frontal tipping. It is apparent that the higher the cable tensions, the higher the vertical ground reaction force f y r 1 j and the lower the ground reaction force f y r 2 j . There exists a combination of cable tensions such that f y r 2 j = 0. At this instant, the rear wheel of the jth mobile base will lose contact with the ground at point C 2 j , while generating a moment M C1 j about z-axis at point C 1 j :
M z C1 j = (g j -c 1 j ) T E T m j g + (a 1 j -c 1 j ) T E T f 1 j + (a 2 j -c 1 j ) T E T f 2 j (6)
Similarly for the rear tipping f y r 1 j = 0, the jth mobile base will lose the contact with the ground at C 1 j and will generate a moment M c2 j about z-axis at point C 2 j :
M z C2 j = (g j -c 2 j ) T E T m j g + (a 1 j -c 2 j ) T E T f 1 j + (a 2 j -c 2 j ) T E T f 2 j (7)
As a consequence, for the first mobile base p 1 to be always stable, the moments generated by the external forces should be counter clockwise at point C 11 while it should be clockwise at point C 21 . Therefore, the stability conditions for mobile base p 1 can be expressed as:
M z C11 ≥ 0 (8) M z C21 ≤ 0 (9)
Similarly, the stability constraint conditions for the second mobile base p 2 are expressed as:
M z C12 ≤ 0 (10) M z C22 ≥ 0 ( 11
)
where M z C12 and M z C22 are the moments of the mobile base p 2 about z-axis at the contact points C 12 and C 22 , respectively.
Real-time Tension Distribution Algorithm
In this section an existing Tension Distribution Algorithm (TDA) defined for classical CDPRs is adopted to Mobile Cable-driven Parallel Robots (MCDPRs). The existing algorithm, known as barycenter/centroid algorithm is presented in [START_REF] Lamaury | A tension distribution method with improved computational efficiency[END_REF][START_REF] Mikelsons | A real-time capable force calculation algorithm for redundant tendon-based parallel manipulators[END_REF]. Due to its geometric nature, the algorithm is efficient and appropriate for real time applications [START_REF] Gouttefarde | A versatile tension distribution algorithm for-dof parallel robots driven by cables[END_REF]. First, the classical Feasible Cable Tension Domain (FCTD) is defined for CDPRs based on the cable tension limits. Then, the stability (static equilibrium) conditions for the mobile bases are considered in order to define a modified FCTD for MCDPRs. Finally, a new TDA aiming at obtaining the centroid/barycenter of the modified FCTD is presented.
FCTD based on cable tension limits
The dynamic equilibrium equation of a point mass platform is expressed as:
Wt p + w e = 0 =⇒ t p = -W + w e ( 12
)
where W = [u 11 u 21 u 12 u 22 ] is n × m wrench matrix mapping the cable tension space defined in R m onto the available wrench space defined in R (m-n) . w e denotes the external wrench exerted onto the moving-platform. W + is the Moore Penrose pseudo inverse of the wrench matrix W. t p = [t p11 t p21 t p12 t p22 ] T is a particular solution (Minimum Norm Solution) of Eq. ( 12). Having redundancy r = mn = 2, a homogeneous solution t n can be added to the particular solution t p such that:
t = t p + t n =⇒ t = -W + w e + Nλ λ λ ( 13
)
where N is the m × (mn) null space of the wrench matrix W and λ λ λ = [λ 1 λ 2 ] T is a (mn) dimensional arbitrary vector that moves the particular solution into the feasible range of cable tensions. Note that the cable tension t i j associated with the ith cable mounted onto the jth mobile base should be bounded between a minimum tension t and a maximum tension t depending on the motor capacity and the transmission system at hand. According to [START_REF] Gouttefarde | A versatile tension distribution algorithm for-dof parallel robots driven by cables[END_REF][START_REF] Lamaury | A tension distribution method with improved computational efficiency[END_REF], there exists a 2-D affine space Σ defined by the solution of Eq. ( 12) and another m-dimensional hypercube Ω defined by the feasible cable tensions:
Σ = {t | Wt = w e } (14
)
Ω = {t | t ≤ t ≤ t} (15)
The intersection between these two spaces amounts to a 2-D convex polygon also known as feasible polygon. Such a polygon exists if and only if the tension distribution admits a solution at least that satisfies the cable tension limits as well as the equilibrium of the moving-platform defined by Eq. [START_REF] Rosati | On the design of adaptive cable-driven systems[END_REF]. Therefore, the feasible polygon is defined in the λ λ λ -space by the following linear inequalities:
t -t p ≤ Nλ λ λ ≤ t -t p (16)
The terms of the m × (mn) null space matrix N are defined as follows:
N = n 11 n 21 n 12 n 22 (17)
where each component n i j of the null space N in Eq. ( 17) is a (1 × 2) row vector.
FCTD based on the stability of the mobile bases
This section aims at defining the FCTD while considering the cable tension limits and the stability conditions of the mobile bases. In order to consider the stability of the mobile bases, Eqs. (8 -11) must be expressed into the λ λ λ -space. The stability constraint at point C 11 from Eq. ( 8) can be expressed as:
0 ≤ (g 1 -c 11 ) T E T m 1 g + (a 11 -c 11 ) T E T f 11 + (a 21 -c 11 ) T E T f 21 (18)
f i j is the force applied by the ith cable attached onto the jth mobile base. As f i j is opposite to u i j (see Fig. 2), from Eq. ( 13) f i j can be expressed as:
f i j = -[t pi j + n i j λ λ λ ] u i j (19)
Substituting Eq. (19) in Eq. (18) yields: Term [n i j λ λ λ ]u i j is the mapping of homogeneous solution t ni j for the ith cable carried by the jth mobile base into the Cartesian space. M C11 represents the lower bound for the constraint (8) in the λ λ λ -space:
(c 11 -g 1 ) T E T m 1 g ≤ (c
M C11 = (c 11 -g 1 ) T E T m 1 g + (a 11 -c 11 ) T E T t p11 + (a 21 -c 11 ) T E T t p21 (22)
Simplifying Eq. ( 21) yields:
M C11 ≤ (c 11 -a 11 ) T E T u 11 (c 11 -a 21 ) T E T u 21 n 11 n 21 λ 1 λ 2 (23)
Equation ( 23) can be written as:
M C11 ≤ n C11 λ λ λ (24)
where n C11 is a 1 × 2 row vector. Similarly the stability constraint at point C 21 from Eq. ( 9) can be expressed as:
n C21 λ λ λ ≤ M C21 (25)
where:
M C21 = (c 21 -g 1 ) T E T m 1 g + (a 11 -c 21 ) T E T t p 11 + (a 21 -c 21 ) T E T t p 21 ( 26
)
n C21 = (c 21 -a 11 ) T E T u 11 (c 21 -a 21 ) T E T u 21 n 11 n 21 (27)
Equations ( 24) and ( 25) define the stability constraints of the mobile base p 1 in the λ λ λ -space for the static equilibrium about frontal and rear wheels. Similarly, the above procedure can be repeated to compute the stability constraints in the λ λ λ -space for mobile base p 2 . Constraint Eqs. ( 10) and ( 11) for point C 12 and C 22 can be expressed in the λ λ λ -space as:
n C12 λ λ λ ≤ M C12 (28) M C22 ≤ n C22 λ λ λ (29)
Considering the stability constraints related to each contact point (Eqs. ( 24), ( 25), ( 28) and ( 29)) with the cable tension limit constraints (Eq. ( 16)), the complete system of constraints to calculate the feasible tensions for MCDPR can be expressed as:
t -t p M ≤ N N c λ 1 λ 2 ≤ t -t p M (30)
where:
N c = n C11 n C21 n C12 n C22 , M = M C11 -∞ -∞ M C22 , M = ∞ M C21 M C12 ∞ , (31)
The terms -∞ and ∞ are added for the sake of algorithm [START_REF] Gouttefarde | A versatile tension distribution algorithm for-dof parallel robots driven by cables[END_REF] as the latter requires bounds from both ends. The upper part of Eq. (30) defines the tension limit constraints while the lower part represents the stability constraints for both mobile bases.
Tracing FCTD into the λ λ λ -space
The inequality constraints from Eq. ( 30) are used to compute the feasible tension distribution among the cables using the algorithm in [START_REF] Gouttefarde | A versatile tension distribution algorithm for-dof parallel robots driven by cables[END_REF] for tracing the feasible polygon P I . Each constraint defines a line in the λ λ λ -space where the coefficients of λ λ λ define the slope of the corresponding lines. The intersections between these lines form a feasible polygon. The algorithm aims to find the feasible combination for λ 1 and λ 2 (if it exists), that satisfies all the inequality constraints. The algorithm can start with the intersection point v i j between any two lines L i and L j where
λ 1 λ 2 L 1, m ax L 1, m in L 4,min 4,max L 2, m in L L 3,m in L 3,m ax L 2, m ax v 2 v 3 v8 v 7 v 6 v v 4 v 5 f = v 1 init v = P I1
L 1, m ax L 1, m in L 4,min 4,max L2 ,m in L 2, m ax L 3,m in L 3,m ax L LC 2 1 ,m a x L C 1 2 ,m a x L C 1 1 ,m in L C 2 2 ,m in λ 1 λ 2 v = init v 1 v 2 v 5 v 4 v 3 v 6 v 7 v 8 v f = P I 2
Fig. 4: Feasible Polygon considering both tension limit and stability constraints each intersection point v corresponds to a specific value for λ λ λ . After reaching the intersection point v i j , the algorithm leaves the current line L j and follows the next line L i in order to find the next intersection point v ki between lines L k and L i .
The feasible polygon P I is associated with the feasible index set I, which contains the row indices in Eq. (30). At each intersection point, the feasible index set is unchanged or modified by adding the corresponding row index of Eq. (30). It means that for each intersection point, the number of rows from Eq. (30) satisfied at current intersection point should be greater than or equal to the number of rows satisfied at previous visited points. Accordingly, the algorithm makes sure to converge toward the solution. The algorithm keeps track of the intersection points and updates the first vertex v f of the feasible polygon, which depends on the update of feasible index set I. If the feasible index set is updated at intersection point v, the first vertex of the polygon is updated as v f = v. Let's consider that the algorithm has reached a point v ki by first following line L j , then following L i intersecting with line L k . The feasible index set I ki at v ki should be such that I i j ⊆ I ki . If index k is not available in I i j , then I ki = I i j ∪ k as the row k is now satisfied. At each update of the feasible index set I, a new feasible polygon is achieved and the first vertex v f of the polygon is replaced by the current intersection point. This procedure is repeated until a feasible polygon (if it exists) is found, which is determined by visiting v f more than once. After computing the feasible polygon, its centroid, namely the solution furthest away from all the constraints is calculated. The λ λ λ coordinates of the centroid is used to calculate the feasible tension distribution using Eq. [START_REF] Sardain | Forces acting on a biped robot. center of pressure-zero moment point[END_REF].
For the given end-effector position in static equilibrium (see Fig. 2), the feasible polygon P I1 based only on the tension limits is illustrated in Fig. 3 while the feasible polygon P I2 based on the cable tension limits and the stability of the mobile bases is illustrated in Fig. 4. It can be observed that P I2 is smaller than P I1 and, as a consequence, their centroids are different.
Case Study
The stability of the mobile bases is defined by the position of their Zero Moment Point (ZMP). This index is commonly used to determine the dynamic stability of the humanoid and wheeled robots [START_REF] Lafaye | Linear model predictive control of the locomotion of pepper, a humanoid robot with omnidirectional wheels[END_REF][START_REF] Sardain | Forces acting on a biped robot. center of pressure-zero moment point[END_REF][START_REF] Vukobratović | Zero-moment pointthirty five years of its life[END_REF]. It is the point where the moment of contact forces is reduced to the pivoting moment of friction forces about an axis normal to the ground. Here the ZMP amounts to the point where the sum of the moments due to frontal and rear ground reaction forces is null. Once the feasible cable tensions are computed using the constraints of the modified TDA, the ZMP d j of the mobile base p j is expressed by the equation:
M z d j = M z O -f y r j d j (32)
where f y r j is the sum of all the vertical ground reaction forces computed using Eqs. ( 4) and ( 5), M d j is the moment generated at ZMP for the jth mobile base such that M z d j = 0. M O is the moment due to external forces, i.e., weight and cable tensions, except the ground reaction forces at O given by the Eq. ( 2). As a result from Eq. (32), ZMP d j will take the form:
d j = M z O f y r j = g T j E T m j g + a T 1 j E T f 1 j + a T 2 j E T f 2 j f y r j (33)
For the mobile base p j to be in static equilibrium, ZMP d j must lie within the contact points of the wheels, namely, Modified Algorithm for MCDPRs is validated through simulation on a rectangular test trajectory (green path in Fig. 2) where each corner of the rectangle is a zero velocity point. A 8 kg point mass is used. Total trajectory time is 10 s having 3 s for 1-2 and 3-4 paths while 2 s for 2-3 and 4-1 paths. The size of each mobile base is 0.75 m × 0.64 m × 0.7 m. The distance between the two mobile bases is 5 m with exit points A 2 j located at the height of 3 m. The evolution of ZMP for mobile base p 1 is illustrated in Fig. 5a. ZMP must lie between 0 and 0.75, which corresponds to the normalized distance between the two contact points of the wheels, for the first mobile base to be stable. By considering only cable tension limit constraints in the TDA, the first mobile base will tip over the front wheels along the path 3-4 as ZMP goes out of the limit (blue in Fig. 5a). While considering both cable tension limits and stability constraints, the MCDPR will complete the required trajectory with the ZMP satisfying Eqs. (34) and (35). Figure 5b depicts positive cable tensions computed using modified FCTD for MCDPRs.
A video showing the evolution of the feasible polygon as a function of time considering only tension limit constraints and both tension limits and stability constraints can be downloaded at 1 . This video also shows the location the mobile base ZMP as well as some tipping configurations of the mobile cable-driven parallel robot under study.
Conclusion
This paper has introduced a new concept of Mobile Cable-Driven Parallel Robots (MCDPR). The idea is to autonomously navigate and reconfigure the geometric architecture of CDPR without any human interaction. A new real time Tension Distribution algorithm is introduced for MCDPRs that takes into account the stability of the mobile bases during the computation of feasible cable tensions. The proposed algorithm ensures the stability of the mobile bases while guaranteeing a feasible ca-ble tension distribution. Future work will deal with the extension of the algorithm to a 6-DoF MCDPR by taking into account frontal as well as sagittal tipping of the mobile bases and experimental validation thanks to a MCDPR prototype under construction in the framework of the European ECHORD++ "FASTKIT" project.
Fig. 1 :
1 Fig. 1: Concept idea for Mobile Cable-Driven Parallel Robot (MCDPR) with eight cables (m = 8), a six degree-of-freedom moving-platform (n = 6) and four mobile bases (p = 4)
2 PFig. 2 :
22 Fig. 2: Point mass Mobile Cable-Driven Parallel Robot with p = 2, n = 2 and m = 4
11 -a 11 ) T E T [t p11 +n 11 λ λ λ ]u 11 +(c 11 -a 21 ) T E T [t p21 +n 21 λ λ λ ]u 21 (20) M C11 ≤ (c 11 -a 11 ) T E T [n 11 λ λ λ ]u 11 + (c 11 -a 21 ) T E T [n 21 λ λ λ ]u 21 (21)
Fig. 3 :
3 Fig. 3: Feasible Polygon considering only tension limit constraints
Fig. 5 :
5 Fig. 5: (a) Evolution of ZMP for mobile base p 1 (b) Cable tension profile
https://www.youtube.com/watch?v=XmwCoH6eejw
Acknowledgements This research work is part of the European Project ECHORD++ "FASTKIT" dealing with the development of collaborative and mobile cable-driven parallel robots for logistics. |
01757809 | en | [
"sde.es",
"shs.envir",
"shs.socio"
] | 2024/03/05 22:32:10 | 2009 | https://hal.science/hal-01757809/file/Bouleau%20et%20al%20Ecological%20Indicators%202009.pdf | Gabrielle Bouleau
Christine Argillier
Yves Souchon
Carole Barthelemy
Marc Babut
Carole Barthélémy
How ecological indicators construction reveals social changes -the case of lakes and rivers in France
come
How ecological indicators construction reveals social changes -the case of lakes and rivers in France
Introduction
For forty years, ecological concerns have spread beyond scientific spheres. Public interest and investment for ecosystems have been growing, notably for wetlands, rivers, and lakes. As a consequence, many agencies are developing so-called 'ecological restoration' projects the efficacy of which is not always obvious [START_REF] Kondolf | Two Decades of River Restoration in California: What Can We Learn?[END_REF]. Ecological indicators (EI) are meant to support decisions in order to set restoration priorities and/or to assess whether the proposed management will improve ecological conditions or not, or to appraise completed projects. Despite increasing developments in the field of EIs during the last thirty years [START_REF] Barbour | Measuring the attainment of biological integrity in the USA: A critical element of ecological integrity[END_REF][START_REF] Kallis | Evolution of EU water policy: A critical assessment and a hopeful perspective[END_REF][START_REF] Moog | Assessing the ecological integrity of rivers: walking the line among ecological, political and administrative interests[END_REF][START_REF] Wasson | Typologie des eaux courantes pour la directive cadre européenne sur l' eau : l' approche par hydro-écorégion. Mise en place de systèmes d' information à références spatiales[END_REF], many scientists complain that such indicators are hardly used to support decisions, management plans, and programs evaluations ( [START_REF] Dale | Challenges in the development and use of ecological indicators[END_REF][START_REF] Lenz | From data to decisions: Steps to an application-oriented landscape research[END_REF]. They usually attribute the gap separating the creation and the use of EI to social factors [START_REF] Turnhout | Ecological indicators: Between the two fires of science and policy[END_REF]. Therefore, research addressing the social perspective of EI has been developed recently. Yet such work focuses on short-term analyses. It has mainly addressed what social drivers are responsible for using or not using EI, once EI have been designed by scientists.
The recent literature on this topic falls in two categories: the market or the political arena as driving forces. In the first category, a market is presumed to exist for EI in which environmental scientists, the providers, must fulfil the expectations of decision makers, the buyers. In this perspective, authors recommend that EI be reliable, cheap, easy to use [START_REF] Cairns | A history of biological monitoring using benthic macroinvertebrates[END_REF][START_REF] Lenat | Using Benthic Macroinvertebrates Community Structure for Rapid, Cost-effective, Water Quality Monitoring: Rapid Bioassessment[END_REF]. They should provide users with the information they need in a form they can understand [START_REF] Shields | The role of values and objectives in communicating indicators of sustainability[END_REF][START_REF] Mcnie | Reconciling the supply of scientific information with user demands: an analysis of the problem and review of the literature[END_REF] according to their tasks, responsibilities, and values. Different problems in which scientists, politicians, and experts have different roles, may therefore require different indicators [START_REF] Turnhout | Ecological indicators: Between the two fires of science and policy[END_REF]. Even so, [START_REF] Ribaudo | Environmental indices and the politics of the Conservation Reserve Program[END_REF] noticed there is a need for generalised indicators which meet multiple objectives. In this market-like perspective, social demands challenge EI that were designed under purely scientific considerations. These authors suggest that defining indicators with stakeholders' participation improves their chance of success. In the second category, EI are not presumed to compete in a market but rather in a political arena. Each indicator is believed to promote a particular political point of view. Indeed, the ecological status of an ecosystem depends on the boundaries chosen for the system and on the characteristics chosen to be restored. Scholars adopting this perspective insist on the plurality of ecological objectives [START_REF] Higgs | Expanding the scope of restoration ecology[END_REF][START_REF] Jackson | Ecological restoration: a definition and comments[END_REF] and consider that defining restoration goals and objectives is a value-based activity [START_REF] Lackey | Values, policy, and ecosystem health[END_REF]. In this second perspective, economical and technical characteristics of indicators are hardly addressed. Emphasis is put on the expression of choices made within any indicator. Both categories may overlap since the frontier between science, market and policy is a fuzzy area [START_REF] Davis | The Science and Values of Restoration Ecology[END_REF][START_REF] Hobbs | Restoration ecology: the challenge of social values and expectations[END_REF][START_REF] Turnhout | Ecological indicators: Between the two fires of science and policy[END_REF]. In both perspectives, authors have addressed a small feedback loop between science and policy in which social factors act as selection forces for useful or compelling EI, which in turn are meant to supply data for adaptive management and policy changes if required (Fig. i).
To date, very little attention has been paid to the manner values or expectations of stakeholders influence the development of EI prior to selection. Determining the role the social context plays in EI development requires a historical perspective to study a larger feedback loop (Fig. ii). This paper develops a method to study long-term interactions between EI development and social factors. Social factors are understood here to designate all dynamics within human society including structural constraints and human agency. We elaborate an analytical framework to account for these interactions. Then we apply this approach to five case studies in France on lakes and rivers (saprobic index, test on minnows, biotic index, fish index and rapid diagnosis). Last we conclude on the interest of this new approach which provides new elements for explaining the gap between production and use of EI.
An analytical framework to study the long-term co-evolution of EI and society
Managing the environment and the human population is a recent concern of states. Foucault argues that western governments and scientists became interested in indicators only in the last 150 years, as governmental legitimacy shifted from expansionism to optimizing the well-being of domestic population. As political definitions of well-being evolved, they reshaped the scientific agenda [START_REF] Foucault | Naissance de la biopolitique[END_REF]. Scientific facts are not simply given, but selected by actors to make sense at one point [START_REF] Fleck | Genesis and Development of a Scientific Fact[END_REF][START_REF] Latour | Science in action[END_REF]). Access to nature influences data collecting [START_REF] Forsyth | Critical Political Ecology: The Politics of Environmental Science[END_REF]. Power structures influence the way experts define crises [START_REF] Trottier | Water crises: political construction or physical reality[END_REF]. In turn, experts' discourses spur new social demand. This longterm dynamic changes relationships between technology and society [START_REF] Callon | Some elements of a sociology of translation: domestication of the scallops and the fishermen of Saint Brieuc Bay. In Power, Action and Belief: A New Sociology of Knowledge? Sociological Review Monograph[END_REF][START_REF] Latour | Science in action[END_REF][START_REF] Star | Institutional Ecology, ' translations' and Boundary Objects: Amateurs and Professionals in Berkeley' s Museum of Vertebrate Zoology, 1907-39[END_REF]Sabatier and Jenkins-Smith 1993). This applies for EI as well. Exploring their history sheds light on the manner EI evolve, are selected and kept and how in turn, they influence knowledge and social representations (Fig. ii). This research identifies such long-term influences. For this purpose, we propose an analytical framework to sketch a new approach for social studies of EI, which is (1) interdisciplinary, (2) inductive, and (3) historical.
(1) A social study of EI requires a multidisciplinary dialogue between social and natural sciences. Social representations of nature are simultaneously influenced by the scientific state of the art, with which biologists are more familiar, and by the cultural and political context of the period, which social scientists know better. Both competencies are needed. They should not be separate; they should provide different assumptions for the same research questions. Once this is achieved this multidisciplinary approach has yielded an interdisciplinary framework [START_REF] Trottier | Managing Water Resources Past and Present[END_REF].
(2) An inductive and qualitative approach allows emerging concepts to be incorporated during the research [START_REF] Bryman | Qualitative Research Methodology -A review[END_REF][START_REF] Altheide | Ethnographic Content Analysis[END_REF]). In doing so, we identify causal relationships that may be significant even though not being always necessary nor sufficient. We advocate focusing on a small set of largely used indicators rather than studying many different ones. We look for factors that were significant at least in one case study.
(3) Without questioning the validity of scientific methods used, we explain how social factors historically influenced the choice of species and the finality of research. For this purpose, we define constitutive elements of the historical trajectory of an EI: the historical background of places where it was developed, the way data were collected and treated, what ecological knowledge was available, what belief concerning nature and society was spread, and what were the ecological management goals. We elaborate on the variation-selection-retention model of socio-environmental co-evolution developed by [START_REF] Kallis | Socio-environmental co-evolution: some ideas for an analytical approach[END_REF]: why different social groups promoted different representations of nature (variation), how a mainstream representation emerged (selection) and how it was maintained despite possible criticisms (retention). The variation step received very little attention in previous academic work. A second layer of evolution was also missing in previous short-term analyses. Scholars often considered society as a homogeneous entity one could represent in one box (Fig. This analytic framework enables to address cumulative effects, historical opportunities, technological gridlocks and path-dependence of EI that are otherwise ignored.
Case study background and method
The recent evolution of freshwater quality and its relationship with EI development are well documented. Experts generally agree that in western countries, during the 1960s, increasingly visible environmental catastrophes spurred public support for environmental legislation. This resulted in pieces of law as the Clean Water Act in the USA [START_REF] Barbour | Measuring the attainment of biological integrity in the USA: A critical element of ecological integrity[END_REF] and the first Council Directives in the European Community addressing water quality [START_REF] Aubin | European Water Policy. A path towards an integrated resource management[END_REF]. The adopted legislation has proven to be generally efficient in addressing point-source chemical pollution but is too narrowly focused to deal with habitat degradation that became more obvious once industrial and urban discharges into streams were reduced [START_REF] Karr | Defining and assessing ecological integrity beyond water quality[END_REF]. EI development appeared as a secondary social response to this initial improvement in chemical quality of freshwater. Yet the knowledge and the data used to design EI were developed much sooner in the last century. Therefore a study stretching back to the late nineteenth century was necessary to understand what data was available and what social representations framed (or were framed by) the emergence of EI in France.
Identifying consistent periods during which water management was uniform is difficult. Transitions emerge slowly from several, sometimes conflicting, influences. Yet, several-decade phases offer convenient time intervals to identify social changes affecting water management. Three main periods are presented within the scope of our study. In the first phase , the quality of surface water became a national issue for fishermen and hygienists as they developed specific sectors for drinking water and fishing. In the second phase economists and urban planners began to worry about the availability of water resources and they developed a sector to treat pollution. In the third phase (since 1980) a social demand for ecosystems management grew at the European level and promoted water body restoration.
Our research project was undertaken by two sociologists and three freshwater biologists. Sociologists undertook in depth interviews with actors having played different roles in different places (managers, scientists, activists, policy makers …). They conducted thirty-two 2-3 hour semi-structured interviews with people involved in the design or the use of five indicators in France -i.e. saprobic index [START_REF] Kolkwitz | Okologie des pflanzlichen saprobien[END_REF], test on minnows (Phoxinus phoxinus, L.1758), biotic index [START_REF] Woodiwiss | The biological system of stream classification used by the Trent River Board[END_REF][START_REF] Tufféry | Méthode de détermination de la qualité biologique des eaux courantes -exploitation codifiée des inventaires de la faune de fond[END_REF], fish index [START_REF] Oberdorff | Development and validation of a fishbased index for the assessment of 'river health' in France[END_REF][START_REF] Pont | Assessing the biotic integrity of rivers at the continental scale: a European approach[END_REF], and rapid diagnosis for lake [START_REF] Barbe | Diagnose rapide des plans d' eau[END_REF]). These indicators are the most currently used by water agencies and wildlife services. Sociologists attend some field work to get familiar with methods. Freshwater biologists compiled bibliographic and ethnographic work on published and non-published ecological research. Together we analysed the evolution of available knowledge and social justifications in each document. The snowballing method was used to identify actors involved in this process. Being affiliated to Cemagref and/or graduated from the National School of ingénieurs du GREF sometimes helped contacting officials and state engineers involved in the story. Most interviewees were contacted several times. We asked them to review their interviews and to send related personal archives. Historical studies were used for most ancient periods. We asked the actors to tell how they have been involved in the process and how they might explain choices done before they had been involved. We triangulated the results from the interviews using scientific, legal, and technical materials published at each stage to support or confront testimonies. We completed this material with bibliographical review of social works dealing with how water-related social and scientific practices and ideas have evolved during time. We compared different trajectories in order to assess whether common characteristics could be identified.
Results: the development of EI in France for lakes and rivers
In this section, we focus on three historical phases, 1900-1959, 1960-1980, and since 1980 to study five EI, namely the saprobic index, the test on minnows, the biotic index, the fish index and the rapid diagnosis for lake. All EI we studied were first set up by outsiders challenging mainstream water management. Following this phase of construction where adaptation prevailed, legal changes in law induced a shift by resting the burden of proof on other stakeholders and leading to a phase of accumulation.
1900-1959: Focus on fish and germs
At the end of the 19th century, two social groups alerted the public to the deterioration of surface water quality in France. The first group stemmed from a handful of physicians who experienced difficulty demonstrating that water could be pathogenic. Fishing clubs made up the second group, as they complained about fish depletion. Unlike epidemiologists, they did not join the hygienist movement. They advocated restocking, reducing poaching, and punishing industrial discharges. These approaches led to two different tools: the saprobic index (developed in Germany at the beginning of the 20th century) and a poorly reliable ecotoxicological test on minnows, perfected by French legal experts during the same period. While the hygiene act of 1902 compelled the systematic testing of water quality and secured funds for nationwide surveys, French law did not acknowledge accidental pollution as a misdemeanour before 1959, and fishermen kept struggling to obtain recognition of its impact on fish populations. This difference had a major effect on the fate of the variable promoted by each group.
Sanitary actors experienced a hard time in the beginning. Freshwaters were long considered very healthy in France. The miasma theory predominated, which held that odours, rather than water, were malignant. Moreover, until the mid-nineteenth century, decayed matter was considered beneficial by most people, including a large part of physicians. Despite the initial reluctance of the medical world, some outsiders started listing organisms living in contaminated waters. They succeeded in getting other stakeholders (urban planners, engineers, politicians) interested in their hygienist ideas. These sanitary reformers made rivers suspicious as they established more relations between contaminated places and water [START_REF] Goubert | La conquête de l' eau. L' avènement de la santé à l' âge industriel[END_REF]. It took time before hygienism spread as a political doctrine, with the scientific support of the German physician Robert Koch (1843-1910) and the French chemist Louis Pasteur (1822-1895) pointing out the existence of germs. The French hygiene act of 1902 resulted from this evolution and paved the way for systematic chemical and biological assessments of diversions for water supply. The first EI was stabilized in this context. German naturalists [START_REF] Kolkwitz | Okologie des pflanzlichen saprobien[END_REF] developed the "saprobic" index. It consisted of an array of organisms ranked according to their response to organic matter. The presence and abundance of saprobic organisms, those that best resisted faecal contamination, meant risks, and the corresponding waters were not to be used for drinking purposes. The saprobic index became a common "passage point" [START_REF] Star | Institutional Ecology, ' translations' and Boundary Objects: Amateurs and Professionals in Berkeley' s Museum of Vertebrate Zoology, 1907-39[END_REF] for many technologies addressing human well-being in cities. In 1903, sanitary authorities published the first nationwide inventory addressing water quality, which was updated periodically [START_REF] Goubert | La conquête de l' eau. L' avènement de la santé à l' âge industriel[END_REF]. Ironically, the sanitary perspective did not lead to any improvement of surface water quality. Since technical solutions such as long aqueducts existed to convey spring water or groundwater, urban rivers became stale. Large sewers were developed, and more and more effluents were discharged into rivers [START_REF] Barles | The Nitrogen Question: Urbanization, Industrialization, and River Quality in Paris, 1830-1939[END_REF]. The development of the saprobic index is a success story. Outside the mainstream medical world, a few physicians lobbied to change waste water management and ended up with standardization of the EI once the law institutionalized hygienist ideas.
Fishermen were not as successful. Fishing clubs had long complained about industrial impacts. But fish depletion in freshwaters was always attributed to poaching [START_REF] Barthélémy | Des rapports sociaux à la frontière des savoirs. Les pratiques populaires de pêche amateur au défi de la gestion environnementale du Rhône[END_REF], and French regulation targeted fishermen first. If ever acknowledged, pollution was considered reversible and likely to be compensated by restocking. In cases of fish mortality, claimants remained isolated outsiders. They had to provide evidence in order to prosecute effluent dischargers. Fishing authorities would sample suspicious waters and perform tests on minnows to assess the presence of toxic substances. Samples were frequently taken after the toxic discharge had stopped. Judges consistently considered such cases to be peculiar cases of poaching [START_REF] Corbin | L' avénement des loisirs (1850-1960[END_REF]. In the late 1880s, the state promoted fishing as an accessible supply of protein for the poor. To prevent poaching and to maintain fish populations, regardless of water quality, fishing clubs and governmental authorities agreed to raise a tax on fishermen to increase control and develop hatcheries. From the beginning of the twentieth century, fishermen asked in vain for specific legislation against pollution. The ecotoxicological test on minnows failed. Its story started with isolated actors calling for a different water management. But fishermen hardly convinced larger coalitions of actors that pollution was a problem, and the law did not change. No public fund was ever secured to apply the test at a large scale. Since the burden of proving pollution rested on fishing clubs, the data they accumulated remained scattered and dependent on their own activism. The legal acknowledgement of an ecological dysfunction is a crucial step for EI development. It changes the need for monitoring.
This is what happened in 1959, when pollution became punishable in France.
1960-1980: resource management
Discontent fishermen suddenly found an opportunity to be listened to in 1959. As the colonial empire challenged French political influence overseas, President de Gaulle was trying to gain legitimacy by paying attention to current domestic claims during emergency powers. Through the edict of 1959, fishermen obtained that accidental pollution in rivers should be punished. Before the edict of 1959, only fishermen were trying to find evidence of pollution. After this edict, polluters too suddenly became concerned. This piece of law was the impetus for a major change in French water management, although it was not discussed by parliament and was adopted during a troublesome period [START_REF] Bouleau | La gestion française des rivières et ses indicateurs à l' épreuve de la directive cadre[END_REF]. In response, industrials and municipal councils, threatened by the new edict, asked for public funds to support treatment plants. The French post-war planned administration relied on the Commissariat Général au Plan for making prospects and public policies [START_REF] Colson | Revisiting a futures studies project--' Reflections on 1985[END_REF]. In this arena, many engineers were in favour of an economic and spatial approach to natural resource planning [START_REF] Rabinow | French Modern: Norms and Forms of the Social Environment[END_REF]. Influenced by [START_REF] Pigou | The Economics of Welfare[END_REF] and Kneese's ideas (1962), they promoted a "polluter-pays principle" at the scale of the watershed. It was enacted by the 1964 water law which established basin agencies and basin committees [START_REF] Nicolazo | Les agences de l' eau[END_REF]. To apply this principle, basin committees agreed on several conventional chemical parameters in order to make pollution commensurable from upstream to downstream, in rivers and lakes: suspended matter, dissolved salts, and a combined parameter including biological and chemical oxygen demand. Basin agencies collected levies on water users and delivered grants to fund water supply and waste water treatment facilities. Needs were huge given the post-war demographic expansion, the repatriation after the end of colonisation, and industrial development fuelled by the reconstruction economy. The focus on chemistry helped build consensus on investment priorities, but it left habitat degradation aside. The biotic index was an attempt to fill the gap for rivers. Because this story is quite recent, we were able to interview the actors who promoted this new EI [START_REF] Bouleau | La gestion française des rivières et ses indicateurs à l' épreuve de la directive cadre[END_REF].
The origin of the French biotic index dates back to the creation of a single state corps of engineers in 1965, the ingénieurs du génie rural, des eaux et des forêts (IGREF), gathering "developers" of the corps of rural planners (ingénieurs du génie rural) and more "conservationist" engineers of the corps for water and forestry (ingénieurs des eaux et des forêts). Developers promoted agricultural modernization. They allocated state grants to farmers and rural districts in order to drain wetlands, canalize streams, raise levees against frequent floods and build dams storing winter flows for irrigation. IGREF conservationists were critical of such projects, which they called "anti-tank ditches", and they felt marginalized. Basin agencies did not pay attention to upstream developments, focusing on downstream urbanized and industrialized rivers, more affected by pollution. As they shared the same concern for upstream rivers, conservationists built alliances with fishing clubs, getting them organized at the state level, supporting their claims against polluting effluents and sharing material and staff. They sought an EI that would perform better than the test on minnows. As sampling was often performed after the toxic discharge had stopped, they were looking for biological sentinels proving that pollution had occurred. They hired freshwater biologists "who were anglers, interested in this stuff, because this was the issue". One IGREF said to a candidate: "your one and only job is to find some stuff we could present and standardize". Tuffery and Verneaux (1967) adapted the biotic index developed by the Trent River Authority [START_REF] Woodiwiss | The biological system of stream classification used by the Trent River Board[END_REF] to French rivers. [START_REF] Verneaux | Cours d' eau de Franche-Comté (Massif du Jura). Essai de biotypologie[END_REF] identified 10 river types on the Doubs basin based on a factorial correspondence analysis of the presence or abundance of macrobenthic invertebrates. In the foreword of his PhD dissertation, Verneaux recounted: "This deals with a long story which started with my father twenty-five years ago on the banks of the Doubs River and is probably going to stop in a few years as the river dies". He established reference conditions correlated to a combination of abiotic variables (slope, distance to source, width, and summer temperature). Then he extended his survey, adapted the biotic index to all French rivers and got it standardized (Indice Biologique Global Normalisé, IBGN). Conservationists enlisted the biotic index in the state's regular monitoring (arrêté interministériel of September 2, 1969). Following the 1976 act on nature protection, the IBGN became a common tool for environmental impact assessment. Because of its initial focus on accidental pollution, the IBGN remained poorly suited to pointing out potential disturbances of trout habitat (channel simplification, increasing temperature) caused by hydraulic works. The IBGN gave little consideration to environmental variability, which would have required much more data collecting and processing. Albeit imperfect, such an option enabled the rapid development of professional training for ecological monitoring. It secured funds for monitoring activities on a regular basis, and resulted in a thirty-year database that is available today.
Analysing documents concerning the biotic index, we found the same pattern of EI development as the one induced from the saprobic index case. Outsiders promoted another focus for river management, made up a new EI, enrolled allies, and obtained a legal obligation of monitoring. Our interviews revealed other aspects of the social trajectory of the biotic index. Experts focused on what they valued through personal ties and collective strategies or practices. In the case of the biotic index, their preference for upstream reaches and their targeting of accidental pollution were embedded in the EI they developed.
1980 to now: ecosystem management
Whereas fast post-war industrial development notably improved French workers' conditions, it did not thoroughly fulfil middle-class expectations. Graduate employees from the baby-boom generation did not adhere completely to the stalwart praise for industrial achievements. They complained of environmental degradation. Better equipped with computers to quantitatively address ecosystem relationships, ecological scientists supported this claim [START_REF] Fabiani | Sciences des écosystèmes et protection de la nature. In Protection de la nature[END_REF][START_REF] Aspe | Construction sociale des normes et mode de pensée environnemental[END_REF]. Data collection remained the weak spot. Developers had much more data to promote their projects than ecologists had to sustain environmental preservation. The creation of the Ministry of Environment (1971) and the 1976 act requiring environmental impact assessments were critical steps securing funding for research. But the French state would not have extended its regular monitoring without binding European legislation. It spurred the development of two other EI, the rapid diagnosis for lakes and the fish index.
Although lake dystrophy had long been ignored by the French state, freshwater biologists of the state field station in Thonon-les-Bains initiated local eutrophication monitoring in the 1960s, after algae proliferation was observed in fishermen's gillnets in nearby Lake Geneva, the largest natural lake in France and Switzerland. They chose chlorophyll a, a chemical surrogate of the abundance of phytoplankton, to demonstrate a causal relationship between nutrients and dystrophy. Similarly, Swiss researchers started monitoring worm communities in the late 1970s [START_REF] Lang | Eutrophication of Lake Geneva indicated by the oligochaete communities of the profundal[END_REF]. This was the time when North American researchers supported by the "Soap and Detergent Association" published the article "We hung phosphates without a fair trial" (Legge and Dingeldein 1970, cited by [START_REF] Barroin | Phosphore, azote, carbone... du facteur limitant au facteur de maîtrise[END_REF]). Transatlantic controversies and French chemical lobbies hushed up eutrophication alerts in France. Experts remained isolated. Managers of the Rhône Basin Agency in charge of financing restoration plans for alpine lakes finally agreed to support the development of a "rapid diagnosis" for lakes. Mouchel proposed to build on the trophic index developed in the United States and Canada [START_REF] Dillon | The phosphorus-chlorophyll relathionship in lakes[END_REF][START_REF] Carlson | A trophic state index for lakes[END_REF][START_REF] Vollenweider | Synthesis report[END_REF][START_REF] Canfield | Prediction of chlorophyll a concentrations in Florida lakes: the importance of phosphorus and nitrogen[END_REF]. He correlated nutrient concentrations (total phosphorus, orthophosphates, total nitrogen, mineral nitrogen), transparency (Secchi depth) and chlorophyll a, and expanded results from the Alps to set up a national typology. Then, including Oligochaeta describing mineralization and assimilation of nutrients [START_REF] Lafont | Contribution à la gestion des eaux continentales : utilisation des oligochètes comme descripteurs de l' état biologique et du degré de pollution des eaux et des sédiments[END_REF][START_REF] Lafont | Un indice biologique lacustre basé sur l' examen des oligochètes[END_REF] and a mollusc index responding to dissolved oxygen and organic matter content within sediment [START_REF] Mouthon | Un indice biologique lacustre basé sur l' examen des peuplements de mollusques[END_REF], the "rapid diagnosis" was tested on about thirty lakes spread across different regions [START_REF] Mouchel | Essai de définition d' un protocole de diagnose rapide de la qualité des eaux des lacs. Utilisation d' indices trophiques[END_REF]1987;[START_REF] Barbe | Diagnose rapide des plans d' eau[END_REF]. Nevertheless, this work did not lead to systematic lake monitoring in France. Resistance from lobbies defending chemical industries was strong [START_REF] Barroin | Phosphore, azote, carbone... du facteur limitant au facteur de maîtrise[END_REF]. Political engagement for the ecological preservation of lakes was low. Such EI in the making were mainly used in the Alps.
In rivers, the 1976 act on nature protection encouraged further research on impacts beyond IBGN to address morphological changes on streams. The Research Centre for Agriculture, Forestry and Rural Environment (CERAFER, which later became Cemagref) devised new methods to determine minimum instream flows downstream hydroelectric impoundments [START_REF] Souchon | Peut-on rendre plus objective la détermination des débits réservés par une approche scientifique ?[END_REF] based on American existing works [START_REF] Tennant | Instream flow regimes for fish, wildlife, recreation and related environmental resources[END_REF][START_REF] Milhous | The PHABSIM system for instream flow studies[END_REF][START_REF] Bovee | A Guide to Stream Habitat Analysis Using Instream Flow Incremental Methodology[END_REF]. It resulted in the adoption of the 1984 act on fishing which required minimum instream flows. Anticipating the revival of hydropower development after the oil crises, the Ministry of Environment and the National Center for Scientific Research (CNRS) decided to support environmental research on the Rhône in 1978 (PIREN Rhône). Between 1979 and 1985, scientists of the PIREN assessed the environmental impact of hydropower projects in the upper-Rhône valley. They were politically supported by ecologist activists and urban recreational users of the valley who were calling into question the power facilities. Sampling floodplain fauna and flora, they related such inventories to physical and chemical analyses. They showed evidence of seasonal longitudinal, lateral and vertical transfers between the river, the floodplain and the groundwater [START_REF] Roux | Cartographie polythématique appliquée à la gestion écologique des eaux. Etude d' un hydrosytème fluvial : le Haut-Rhône français[END_REF][START_REF] Castella | Macroinvertebrates as 'describers' of morphological and hydrological types of aquatic ecosystems abandoned by the Rhône River[END_REF][START_REF] Amoros | A method for applied ecological studies of fluvial hydrosystems[END_REF][START_REF] Richardot-Coulet | Classification and succession of former channels of the French upper rhone alluvial plain using mollusca[END_REF]. They elaborated a general deterministic understanding of the "hydrosystem" which advocated for the preservation of natural hydrological patterns in rivers. Scientific evidence and political support led to the 1992 water act. It required permits and public hearings for all developments significantly affecting water. Yet, it did not result in a national change in monitoring activities. State authorities had already much invested in collecting biochemical, IBGN and fish population data. If ecological concerns were high in the Rhône valley, they were not priorities in the rest of the country. The list of currently monitored indicators appeared to be locked-in at that level. This reduced the scope of possible data accumulation. Instead, the adaptation phase extended.
Challenged by the European on-going research and inspired by Karr's work on biological integrity (1981;[START_REF] Karr | Biological Integrity: A Long-Neglected Aspect of Water Resource Management[END_REF], French freshwater biologists developed more theoretical work. They recomputed accumulated ecological data of the Rhône to test theories of functional ecology (Statzner, Resh et al. 1994;Statzner, Resh et al. 1994;[START_REF] Beisel | Stream community structure in relation to spatial variation: the influence of mesohabitat characteristics[END_REF][START_REF] Usseglio Polatera | Biological and ecological traits of benthic freshwater macroinvertebrates: relationships and definition of groups with similar traits[END_REF]. The functional approach spread out, notably at European level where it makes very different ecosystems comparable. The European political arena happens to be much more sensitive to ecologist concerns than the French one. In 2000, European Parliament and council enacted the Water Framework Directive (WFD) which has set an ecosystem-based regulation. It holds undisturbed conditions of freshwaters as references defined in different hydro-ecoregions and requires economic justification for any development resulting in a significant gap between the current status of a water body and its reference. Designed as a masterpiece of binding legislation for European waters, the WFD created favourable conditions to break down the previous locked-in fits of the French freshwater monitoring.
Freshwater biologists re-processed the thirty-year database of faunistic inventories used for the IBGN to identify eco-regional references for rivers [START_REF] Khalanski | Quelles variables biologiques pour quels objectifs de gestion ? In Etat de santé des écosystèmes aquatiques -les variables biologiques comme indicateurs[END_REF][START_REF] Wasson | Typologie des eaux courantes pour la directive cadre européenne sur l' eau : l' approche par hydro-écorégion. Mise en place de systèmes d' information à références spatiales[END_REF]. Others compiled records on fish populations to calibrate a "fish index" [START_REF] Oberdorff | Development and validation of a fishbased index for the assessment of 'river health' in France[END_REF]. Further research is currently underway to better understand how already monitored biological assemblages respond to contamination [START_REF] Archaimbault | Biological And Ecological Responses Of Macrobenthic Assemblages To Sediment Contamination In French Streams[END_REF]. More research is required to calibrate indicators for compartments that were previously little monitored, such as diatoms and macrophytes. French lake specialists were spurred to update the "rapid diagnosis" method according to WFD standards, i.e. better considering environmental and morphological heterogeneity and including missing biological information (abundance of taxa, for example). Previous ecological research on Alpine lakes was reprocessed accordingly to propose new WFD-compliant indices (phytoplankton and macrobenthos indices). Nevertheless, this integrative approach was mainly calibrated for the quality assessment of natural lakes of the Alpine area and remained less reliable for other lakes. Moreover, because fish experts and botanists had hardly explored lakes in France before the WFD was enacted, fish and macrophytes are still to be studied from a bioindication perspective. Relationships between fish assemblages and environmental factors have been studied (Argillier, Pronier et al. 2002;[START_REF] Irz | Comparison between the fish communities of lakes, reservoirs and rivers: can natural systems help define the ecological potential of reservoirs?[END_REF]), but the corresponding EI are still under development at the national level.
These two cases ('rapid diagnosis' and fish index) show how EI initially designed for a specific political arena can be reshaped to fit another one. Regional EI promoters may get institutional support above the state level (European Community) to impose a legal framework favourable to the implementation of EI at a lower level. This requires that EI promoters, who initially worked at the regional scale, reprocess their indicators to meet international specifications. When no previous research has been conducted at the national level, data must be collected in the first place. To reduce this endeavour, experts promote their methods at the European level. Yet a few adaptations to regional features remain enshrined in EI and impair their universality.
Conclusion: a cumulative and adaptive model of social trajectories of EI
This study is meant to be inductive and allows us to derive a theoretical framework from observations. Bioindication tools we studied were never optimised for the present market or political arena but rather recycled. Their development was (a) adaptive, i.e. innovative to respond to new questions; (b) constrained by law; and (c) cumulative, i.e. built on what already existed.
(a) We observed innovative ecological monitoring stemming from regional outsiders who wanted to address a new water quality problem raised by a social minority. Interaction between regional scientists and the social minority framed the issue (double arrow in Fig. iii). Actors reluctant to admit the problem asked for evidence (Fig. iii shows no direct influence from innovation to implementation). The burden of the proof rested on EI developers, who adapted previously existing data and knowledge to their specific concern. It consisted of changing the format of data and including new information, very often without immediate recognition. From adapted and newly gathered data, they induced ecological causal relationships. They refined protocols in order to address the problem more specifically. At this stage, ecological data were not framed as indicators, i.e. decision support tools; they were only variables. But variables could be mapped in space and monitored in time. This phase is called "adaptation" in Fig. iii.
(b) New EI promoters experienced difficulty in convincing existing institutions and did not get much funding for implementing their monitoring. Some failed to convince other actors and went on adapting their tools at the regional level. Others were able to raise larger public attention or to meet the concerns of other stakeholders who joined in "advocacy coalitions" [START_REF] Sabatier | The Advocacy Coalition Framework. An Assessment[END_REF]. Negotiations required quantification and standardization. Variables that were previously only used by activists became "passage points" [START_REF] Star | Institutional Ecology, ' translations' and Boundary Objects: Amateurs and Professionals in Berkeley' s Museum of Vertebrate Zoology, 1907-39[END_REF] for many different actors and got standardized. People translated their interests using such variables, and spread their use. Successful EI promoters found a suitable political arena for changing the law despite institutional resistance to change and got their EI institutionalized. We illustrate this phase with the box "institutions" in Fig. iii.
(c) The evolutions we studied were also cumulative processes. Once enacted, a new law imposed that a successful EI had to be regularly monitored. It transferred the burden of the proof onto project initiators whose plans might harm freshwaters, or made the state responsible for collecting data. It spread the use of the new EI outside the region where it was developed. Given the cost of developing an EI, recycling prevailed. Systematic monitoring got established and data became available for further research. We call this phase "accumulation" in Fig. iii. Yet regional fits of the EI were often kept in this process. Moreover, the inertia of the routine worked as a locked-in fit preventing other EI from being adopted. Adaptive management is limited (dotted line in Fig. iii). Therefore new environmental problems could hardly be seen through mainstream monitoring and required another cycle of adaptation (a).
This analytical model applies to the development of all five EI we studied (i.e. saprobic index, test on minnows, biotic index, 'rapid diagnosis', and fish index). It reveals the evolution of French water management in two ways. First, the emergence of an EI corresponds to the emergence of an ecological problem mainly ignored by authorities. Secondly, the groups of biota used for the construction of an EI reveal a converging interest of different stakeholders in the same type of information at a given historical moment.
We inherit EI that fit the social context of their emergence. New phases of adaptation may be required to adjust them to our current concerns. This enlightens weaknesses of the models illustrated in Fig. i and ii. This analytical framework helps us address the initial question: "why do EI seem to be hardly used to support decisions, management plans, and program evaluations?" In the adaptation phase, outsiders raise the alarm without bending the trend of the current management. They collect new data and reprocess available information in order to convince others. This phase can be long and very frustrating. Biologists get little support and often complain about indifferent decision-makers. Institutional recognition, on the contrary, is a sudden shift which challenges the existing water management. Authorities seek new solutions. EI are used to assess pilot restoration techniques. Soon best practices emerge and get standardized. Managers make plans and programs based on such standards. They use EI to define targets, but EI do not influence everyday decisions. Bad records may raise the attention of the public and the decision makers. But if the problem is settled enough, social mobilization declines. In such cases EI are monitored by precaution rather than for decision support. Hence two kinds of EI exist, the ones addressing an issue that is not yet acknowledged and the ones monitoring a problem already tackled. The period in between is too short to get noticed. One should not base the righteousness of EI on their influence on everyday decisions. On the contrary, the virtue of environmental indicators is to put into question paradigms of management. And this may not happen every day. On a regular basis, EI are collected for the sake of accumulation of data and knowledge. They will constitute a valuable resource for the next outsiders tracking regional biases of existing EI and revealing unnoticed environmental problems.
Figures
Fig. 1. To date, authors have studied a small loop of feedback between social and political factors and EI implementation. They have not addressed the social and political influences on EI development.
Fig. 2. We propose to address a larger loop of social interactions in which EI development is also included. This approach enables taking into account the influences of data availability, changes in social and scientific representations, and opportunity and resistance to change.
Fig. 3. To account for the social evolution, we propose to split the social component into two elements, one representing what is established (institutions, law) and one representing more evolving social features (coalitions, representations). The adaptation phase accounts for innovations mainly due to outsiders who seek opportunities for institutionalization; implementation is more an accumulation phase secured by regular funding than one of selection or adaptation.
Acknowledgements
This work was financially supported by Cemagref call for research "MAITRISES". The authors are grateful to the scientists, managers and stakeholders interviewed for the purpose of this study. The paper benefited from valuable discussions with and wise advice of Tim Duane, Adina Merenlender, Stefania Barca, and Matt Kondolf from University of California at Berkeley. The authors would like to thank Delaine Sampaio, Christelle Gramaglia and Julie Trottier and two anonymous reviewers for their helpful comments and suggestions. |
01737915 | en | [
"spi.nano"
] | 2024/03/05 22:32:10 | 2018 | https://hal.science/hal-01737915/file/CorentinPigot_JJAP2017_3ndrev.pdf | Corentin Pigot
email: corentin.pigot@st.com
Fabien Gilibert
Marina Reyboz
Marc Bocquet
Paola Zuliani
Jean-Michel Portal
Phase-Change Memory: A Continuous Multilevel Compact Model of Subthreshold Conduction and Threshold Switching
Introduction
The rapidly growing market of the internet-of-things requires the use of embedded non-volatile memory (e-NVM) presenting ultra-small area, ultra-fast access time and ultra-low consumption. The mainstream solution, the NOR flash technology, needs high potentials that are hardly compatible with the high-k metal gates of advanced CMOS processes (like FinFET or FD-SOI) and thus requires costly advanced process modifications. Other back-end resistive memories are therefore investigated, and among them, phase-change memory (PCM) is one of the most mature technologies and has reached a pre-production level. 1,2) The memory principle relies on the phase transition of an active element between two phases (amorphous or crystalline) presenting resistance levels separated by two or three orders of magnitude. 3) The state of the cell is determined by reading the resistance in the low-field region. The PCM has unique multilevel capabilities, because the resistance can vary continuously between the full RESET and the full SET state. [START_REF] Papandreou | IEEE Int. Conf. Electron. Circuits, Syst[END_REF] For some states, including the RESET state, the I-V characteristics of the cell exhibit a threshold switching above which the amorphous phase suddenly becomes conductive. The nature of this threshold switching has long been debated and classically rests on two main hypotheses. The mechanism has been reported to be mainly electronic, [START_REF] Adler | [END_REF][6][7][8][9] but recent studies brought evidence in favor of thermal activation in some nanoscale cells. 10,11) Although it has nothing to do with the phase transition, this behavior is central to the PCM's functioning and crucial for circuit designers to determine the sense margin. Indeed, the read operation has to be done below the threshold, otherwise a phase transition could happen. However, fast reading means setting the reading voltage as high as possible in order to maximize the current difference between two states and to speed up the read process. Moreover, the lower the SET programming current and the lower the SET resistance achieved, the better the programming window, [START_REF] Kluge | IMW[END_REF] which is interesting in terms of multilevel programming. To fully exploit this unique feature, designers need a trustworthy model to verify the validity of their design by simulation. Mainly, designers want to validate that no switching occurs when supplying the array during the read phase, and that peripherals are well designed and provide proper biasing during the programming phase, with continuous simulation of the state of the cell including the threshold switching process.
In this context, a compact model needs to be fast, robust and accurate. The threshold switching is a difficult part of PCM modeling due to its intrinsic non-linearity and abrupt transition with respect to transient simulation. Thus, it may generate convergence problems in the electrical simulators used in CAD tools. Many compact models of PCM have appeared through the years, using various modeling strategies. SPICE macro-models have been developed, [START_REF] El-Hassan | [END_REF][14][15] and other, more physical models based on a crystalline fraction have been implemented in Verilog-A, [16][17][18] but most of them devote themselves to the phase transition and attach too little importance to the DC behavior. Among those, some use a negative resistance area, 19) some use a Fermi-like smoothing function, 20,21) and others use switches. 22) In this work, the threshold switching has been modeled for the first time using exclusively the self-heating mechanism of the cell. This original approach has been validated through I-V measurements for a large set of intermediate states. The simplicity and the continuity of the approach over all regimes (below and above the threshold voltage) are highly interesting in terms of the simulation time and convergence ease required in compact modeling. This paper expands the abstract presented at the 2017 International Conference on Solid State Devices and Materials, 23) further justifying the validity of the proposed compact model and exhibiting new simulation results. First, the measurement setup is presented, followed by the modeling method. The correlation between experimental and modeling results is then detailed, and the good convergence is validated with additional simulations. Finally, the coherency of this modeling approach is discussed and compliance with a new cell metric is shown.
Experimental Setup and Modeling Method
Experimentation
Measurements have been performed on a test structure manufactured on a 90nm CMOS node with embedded PCM option. This test structure is composed of a PCM stack serially connected to a MOS transistor, the latter being used to limit the current flowing through the cell.
A TEM cross-section along with a 3-D equivalent schematic of the memory cell is shown in Fig. 1. The 50nm-thick phase-change material (GST225) layer has been inserted between the Top Electrode (TE) and a heater with a wall structure shape. 3) The size of the amorphous dome that can be seen in Fig. 1 reflects the state of the cell, so the goal of the measurements is to let this thickness ua vary in order to highlight threshold switching for all the states where it happens. Fig. 2 represents the chronogram of the measurement protocol for each programmed state. A reset pulse of 2V is applied during 200ns before any programming pulse. The word line (WL) bias is then tuned in order to modulate the bit line (BL) current during an 800ns pulse of 2V on the top electrode, resulting in a wide range of intermediate states (see Fig. 3). Tuning the WL voltage from 1V to 2V, a resistance continuum between 125kΩ and 1.3MΩ can be achieved. The current-voltage characteristics are then obtained by reading the bit line current while applying a 1V/ms ramp (0 to 2V) on the top electrode. During this read phase, the WL voltage is set to 1.2V to limit the current and thus the PCM stress. In order to avoid any drift effect, 24) and to ensure similar measurement conditions whatever the resistance level, a fixed delay has been introduced between every SET pulse and read ramping.
Compact Model
It is widely known that the amorphous part of the subthreshold transport is a hopping conduction of Poole-Frenkel type. [25][26][START_REF] Shih | ICSSICT[END_REF][START_REF] Ielmini | [END_REF] In this work however, for compact modeling purposes, a limited density of traps is assumed and only a simplified form 29) is considered, given by

$$I_{PF} = A \cdot F \cdot \exp\left(-\frac{\Phi - \beta\sqrt{F}}{kT}\right) \quad \text{with} \quad F = \frac{V}{u_a} \quad \text{and} \quad \Phi = E_a^0 - \frac{aT^2}{b+T} \qquad (1)$$
where k is the Boltzmann constant, β a constant of the material linked to its permittivity, and A a fitting parameter. T is a global temperature inside the active area and F the electric field across the amorphous phase. It is calculated through this simplified equation under the assumption of a negligible voltage drop inside the crystalline GST, allowing access to the amorphous thickness ua, directly linked to the state of the memory (Fig. 1). V is the PCM voltage, and Φ is the activation energy of a single coulombic potential well. It follows Varshni's empirical law for its temperature dependence, 11,30) with $E_a^0$ the barrier height at 0K, and a and b material-related fitting parameters.
The threshold switching is modeled as a thermal runaway in the Poole-Frenkel current triggered by the self-heating of the cell. Any elevation of the temperature in the material being due to the Joule effect, the temperature is calculated, under the assumption of a short time constant, as

$$T = T_{amb} + R_{th} \cdot P_J \quad \text{where} \quad P_J = V_{PCM} \cdot I_{PCM} \qquad (2)$$

with Tamb the ambient temperature and Rth an effective thermal resistance, taking among other things the geometry of the cell into account. $P_J$ is the electrical power dissipated inside the PCM.
As it depends on the current flowing through the cell, the calculation of the temperature implies a positive feedback responsible of the switching inside the amorphous phase. Once it has switched, a series resistance of 6kΩ corresponding to the heater resistance limits the current.
Extending the field approximation as long as some amorphous phase exists in the active area (neglecting the voltage drop outside this area), the same Poole-Frenkel current is applied to all the intermediate states as well. The parameter ua carries the state, as it varies from 0nm to the maximum thickness ua,max extracted from the full RESET state. The crystalline resistance is said to be of semiconducting type, so it can be expressed as 31)

$$R_{cry} = R_C^0 \cdot \exp\left(-E_a^c\left(\frac{1}{kT_{amb}} - \frac{1}{kT}\right)\right) \qquad (3)$$

where $E_a^c$ is an activation energy and $R_C^0 = R_{cry}$ when $T = T_{amb}$; they are both treated as fitting parameters.
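To make the interplay of Eqs. (1)-(3) concrete, the following minimal numerical sketch (an illustration added here, not part of the original paper) solves the self-consistent current-temperature system for a quasi-static voltage ramp. Parameter values are taken from Table I where available; the Varshni parameter b is truncated in this extraction, so a placeholder value is used, and the Poole-Frenkel prefactor units are handled loosely, so the sketch aims at reproducing the qualitative thermal runaway rather than exact current levels.

```python
import numpy as np
from scipy.optimize import fsolve

# Model-card values from Table I (B_VARSHNI is a placeholder: value truncated in the text).
K_B = 8.617e-5            # Boltzmann constant [eV/K]
R_TH = 2.0e6              # 2.0 K/uW expressed in [K/W]
A_PF = 1.45e-4            # Poole-Frenkel prefactor (fitting parameter)
BETA = 24e-6              # [eV.V^-0.5.m^0.5]
EA0 = 0.3                 # barrier height at 0 K [eV]
A_VARSHNI, B_VARSHNI = 1.2e-3, 500.0    # [eV/K], [K] (B_VARSHNI assumed)
R_HEATER, R_C0, EA_C = 6e3, 10e3, 0.1   # [Ohm], [Ohm], [eV]
T_AMB = 300.0             # ambient temperature [K]

def i_pf(v_a, temp, u_a):
    """Poole-Frenkel current of the amorphous region, Eq. (1)."""
    field = max(v_a, 0.0) / u_a                                  # [V/m]
    phi = EA0 - A_VARSHNI * temp**2 / (B_VARSHNI + temp)         # Varshni law
    return A_PF * field * np.exp(-(phi - BETA * np.sqrt(field)) / (K_B * temp))

def r_cry(temp):
    """Crystalline GST resistance, Eq. (3)."""
    return R_C0 * np.exp(-EA_C * (1.0 / (K_B * T_AMB) - 1.0 / (K_B * temp)))

def solve_point(v_cell, u_a, guess):
    """Self-consistent (V_a, T) for one applied cell voltage."""
    def residuals(x):
        v_a, temp = x
        temp = max(temp, 1.0)                                    # keep the solver physical
        i = i_pf(v_a, temp, u_a)
        return [v_a + (r_cry(temp) + R_HEATER) * i - v_cell,     # KVL over the stack
                temp - T_AMB - R_TH * v_cell * i]                # Eq. (2): T = Tamb + Rth * P
    return fsolve(residuals, guess)

def iv_curve(u_a=48e-9, v_max=2.0, n_pts=400):
    """Quasi-static voltage ramp; warm-starting each point from the previous one
    mimics the measurement and reproduces the abrupt threshold switching."""
    voltages = np.linspace(1e-3, v_max, n_pts)
    currents, guess = [], np.array([1e-3, T_AMB])
    for v in voltages:
        v_a, temp = solve_point(v, u_a, guess)
        guess = np.array([v_a, max(temp, T_AMB)])
        currents.append(i_pf(v_a, max(temp, T_AMB), u_a))
    return voltages, np.array(currents)
```

Running iv_curve for several u_a values gives the kind of state-dependent curves discussed around Fig. 4; for the thinnest domes the positive feedback is too weak and no runaway appears.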
Results and Discussion
Subthreshold conduction and threshold switching modeling
The comparison of the I-V characteristics between model and measurements for a full range of resistance values is presented in Fig. 4. It shows a very good agreement between data and simulations over two decades of current. The measured resistance is extracted at a constant voltage of 0.36V during the slow ramping procedure. The current at high applied voltage is fitted by the modeling of the serially connected MOS transistor.
The model card parameters are summarized in Table I. Rth is in good agreement with the commonly accepted value for a high thermal efficiency nanoscale PCM cell. 20) A value of relative dielectric constant εr = 10 11) implies, according to Poole-Frenkel's theory, 25) β = 24 µeV.V^-0.5.m^0.5. The a and b parameters have been chosen to fit the self-heating inside the GST, but they were kept close to those found in the literature. 11) Similarly, the couple of parameters (RC0, Eac) has been chosen to fit the self-heating of the material in the crystalline phase at high current density, so it is not surprising that it is found to be higher than previous values. 32)
Fig. 5. Resistance of the cell as a function of the amorphous thickness.
The only model parameter that varies from one state to another is the amorphous thickness ua. Reversing Eq. (1), amorphous thicknesses have been calculated as a function of the measured resistance for each state. The resistance level as a function of the amorphous thickness is given in Fig. 5, and one can verify that there is an excellent correlation between the simulated and the measurement-based calculated ua. It means that ua can indeed be used as the state parameter for further model-measurement correlation. This allows the computation of the threshold field (cf. Eq. (1)), which is plotted in Fig. 6, along with the threshold power (see Eq. (2)), as a function of the resistance of the cell. The threshold is defined as the value of voltage and current at which the current increase over one 10mV voltage step exceeds a given value of 1µA. Based on this definition, states that are less resistive than 0.45MΩ (i.e. that have an amorphous dome smaller than 28.8nm) do not present the threshold switching.
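As an illustration only (not from the paper), the threshold criterion described above can be extracted from a simulated or measured I-V trace such as the one returned by iv_curve in the previous sketch; the resampling step is an assumption made so that "one voltage step of 10mV" is well defined on arbitrary data.

```python
import numpy as np

def extract_threshold(voltages, currents, dv=10e-3, di=1e-6):
    """First point where the current increase over a 10 mV step exceeds 1 uA.
    Returns (V_th, I_th, P_th) or None when no switching is detected."""
    v, i = np.asarray(voltages), np.asarray(currents)
    grid = np.arange(v.min(), v.max(), dv)        # uniform 10 mV voltage grid
    i_grid = np.interp(grid, v, i)
    steps = np.diff(i_grid)
    idx = int(np.argmax(steps > di))
    if steps[idx] <= di:                          # criterion never met: no threshold
        return None
    v_th, i_th = grid[idx + 1], i_grid[idx + 1]
    return v_th, i_th, v_th * i_th                # threshold voltage, current, power
```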
Robustness and coherency of the model
The method used for the measurements and simulations presented in Fig. 4 was to apply voltage steps on the top electrode and read the current; it is not possible to see a snapback this way.
On the contrary, the current-driven simulations for different sizes of amorphous dome shown in Fig. 7 exhibit the snapback behavior without any convergence trouble, which illustrates the robustness of the model. The 19.2nm and 48nm simulated curves correspond respectively to the minimum and maximum size of amorphous dome measured. Those values are coherent with the thickness of the deposited GST layer of about 50nm. The snapback appears for amorphous domes larger than 28.8nm, this limit corresponding to the minimum dome size for which the threshold is observed.
Conclusion
This work presents a compact modeling of the threshold switching in phase-change memory based solely on self-heating in the Poole-Frenkel's conduction. This new approach presents the advantage of modeling the current characteristic in a fully continuous way, even the non-linearity of the threshold switching, which eases the convergence and speed-up the simulation time. It has been shown that the model presents a good correlation with measurements
Fig. 1. TEM cross-section (a) and 2D equivalent schematic (b) of the test structure.
Fig. 2. Chronogram of the measurement protocol for each programmed state.
Fig. 3. Cell resistance as a function of the programming gate voltage, displaying the continuously distributed states achieved.
Fig. 4. Cell current versus applied voltage (a) in logarithmic scale and (b) in linear scale for several intermediate states, with model (lines) and measurements (symbols).
Fig. 6. Threshold field (top) and power (bottom) versus the resistance of the cell.
Fig. 7. Current-driven simulation for cell states corresponding to the range measured at 273K. The snapback is observable for states that have an amorphous dome larger than 28.8nm.
Fig. 8. I-V characteristics simulated for ambient temperatures ranging from 0°C to 85°C. The inset plots the threshold power as a function of the ambient temperature, showing a constant trend.
Fig. 9. New metric M as a function of the amorphous thickness of the state.
Table I. Model card parameters.
Parameter : Value
Rth : 2.0 K.µW^-1
A : 1.45x10^-4 Ω^-1
β : 24 µeV.V^-0.5.m^0.5
E_a^0 : 0.3 eV
Rheater : 6 kΩ
R_C^0 : 10 kΩ
E_a^c : 0.1 eV
a : 1.2 meV.K^-1
b |
01620505 | en | [
"info.info-ni",
"spi.signal"
] | 2024/03/05 22:32:10 | 2018 | https://hal.science/hal-01620505v3/file/SSP2018.pdf | Luc Le Magoarou
Stéphane Paquelet
PARAMETRIC CHANNEL ESTIMATION FOR MASSIVE MIMO
Keywords: Cramér-Rao bound, Channel estimation, MIMO
Channel state information is crucial to achieving the capacity of multiantenna (MIMO) wireless communication systems. It requires estimating the channel matrix. This estimation task is studied, considering a sparse physical channel model, as well as a general measurement model taking into account hybrid architectures. The contribution is twofold. First, the Cramér-Rao bound in this context is derived. Second, interpretation of the Fisher Information Matrix structure allows to assess the role of system parameters, as well as to propose asymptotically optimal and computationally efficient estimation algorithms.
INTRODUCTION
Multiple-Input Multiple-Output (MIMO) wireless communication systems allow for a dramatic increase in channel capacity, by adding the spatial dimension to the classical time and frequency ones [START_REF] Telatar | Capacity of multi-antenna gaussian channels[END_REF][START_REF] Tse | Fundamentals of wireless communication[END_REF]. This is done by sampling space with several antenna elements, forming antenna arrays both at the transmitter (with nt antennas) and receiver (with nr antennas). Capacity gains over single antenna systems are at most proportional to min(nr,nt).
Millimeter wavelengths have recently appeared as a viable solution for the fifth generation (5G) wireless communication systems [START_REF] Theodore S Rappaport | Millimeter wave mobile communications for 5g cellular: It will work![END_REF][START_REF] Swindlehurst | Millimeter-wave massive mimo: the next wireless revolution?[END_REF]. Indeed, smaller wavelengths allow to densify half-wavelength separated antennas, resulting in higher angular resolution and capacity for a given array size. This observation has given rise to the massive MIMO field, i.e. the study of systems with up to hundreds or even thousands of antennas.
Massive MIMO systems are very promising in terms of capacity. However, they pose several challenges to the research community [START_REF] Rusek | Scaling up mimo: Opportunities and challenges with very large arrays[END_REF][START_REF] Erik G Larsson | Massive mimo for next generation wireless systems[END_REF], in particular for channel estimation. Indeed, maximal capacity gains are obtained in the case of perfect knowledge of the channel state by both the transmitter and the receiver. The estimation task amounts to determine a complex gain between each transmit/receive antenna pair, the narrowband (single carrier) MIMO channel as a whole being usually represented as a complex matrix H ∈ C nr ×nt of such complex gains. Without a parametric model, the number of real parameters to estimate is thus 2nrnt, which is very large for massive MIMO systems. Contributions and organization. In this work, massive MIMO channel estimation is studied, and its performance limits are sought, as well as their dependency on key system parameters. In order to answer this question, the framework of parametric estimation [START_REF] Kay | Fundamentals of Statistical Signal Processing: Estimation Theory[END_REF] is used. A physical channel model is first presented, with the general considered observation model, and the objective is precisely stated. The Cramér-Rao bound for is then derived, which bounds the variance of any unbiased estimator. Then, the interpretation of the bound allows to precisely assess the role of system design on estimation performance, as well as to propose new computationally efficient channel estimation algorithms showing asymptotic performance equivalent to classical ones based on sparse recovery.
PROBLEM FORMULATION
Notations. Matrices and vectors are denoted by bold upper-case and lower-case letters: A and a (except 3D "spatial" vectors that are denoted -→ a ); the ith column of a matrix A by: ai; its entry at the ith line and jth column by: aij or Aij. A matrix transpose, conjugate and transconjugate is denoted by: A T , A * and A H respectively. The image, rank and trace of a linear transformation represented by A are denoted: im(A), rank(A) and Tr(A) respectively. For matrices A and B, A ≥ B means that A-B is positive semidefinite. The linear span of a set of vectors A is denoted: span(A). The Kronecker product, standard vectorization and diagonalization operators are denoted by vec(•), diag(•), and ⊗ respectively. The identity matrix, the m×n matrix of zeros and ones are denoted by Id, 0m×n and 1m×n respectively. CN (µ,Σ) denotes the standard complex gaussian distribution with mean µ and covariance Σ. E(.) denotes expectation and cov(.) the covariance of its argument.
Parametric physical channel model
Consider a narrowband block fading channel between a transmitter and a receiver with respectively nt and nr antennas. It is represented by the matrix H ∈ C nr ×nt , in which hij corresponds to the channel between the jth transmit and ith receive antennas.
Classically, for MIMO systems with few antennas, i.e. when the quantity nrnt is small (up to a few dozens), estimators such as the Least Squares (LS) or the Linear Minimum Mean Squared Error (LMMSE) are used [START_REF] Biguesh | Training-based mimo channel estimation: a study of estimator tradeoffs and optimal training signals[END_REF].
However, for massive MIMO systems, the quantity 2nr nt is large (typically several hundreds), and resorting to classical estimators may become computationally intractable. In that case, a parametric model may be used. Establishing it consists in defining a set of np parameters θ (θ1,...,θn p ) T that describe the channel as H ≈ f (θ) for a given function f , where the approximation is inherent to the model structure and neglected in the sequel (considering H = f (θ)). Channel estimation then amounts to estimate the parameters θ instead of the channel matrix H directly. The parametrization is particularly useful if np ≪ 2nrnt, without harming accuracy of the channel description. Inspired by the physics of wave propagation under the plane waves assumption, it has been proposed to express the channel matrix as a sum of rank-1 matrices, each corresponding to a single physical path between transmitter and receiver [START_REF] Akbar | Deconstructing multiantenna fading channels[END_REF]. Adopting this kind of modeling and generalizing it to take into account any three-dimensional antenna array geometry, channel matrices take the form
$$H = \sum_{p=1}^{P} c_p\, e_r(\vec{u}_{r,p})\, e_t(\vec{u}_{t,p})^H, \qquad (1)$$
where P is the total number of considered paths (no more than a few dozens), $c_p \triangleq \rho_p e^{j\phi_p}$ is the complex gain of the pth path, $\vec{u}_{t,p}$ is the unit vector corresponding to its Direction of Departure (DoD) and $\vec{u}_{r,p}$ the unit vector corresponding to its Direction of Arrival (DoA). Any unit vector $\vec{u}$ is described in spherical coordinates by an azimuth angle $\eta$ and an elevation angle $\psi$. The complex response and steering vectors $e_r(\vec{u}) \in \mathbb{C}^{n_r}$ and $e_t(\vec{u}) \in \mathbb{C}^{n_t}$ are defined as $(e_x(\vec{u}))_i = \frac{1}{\sqrt{n_x}}\, e^{-j\frac{2\pi}{\lambda}\, \vec{a}_{x,i}\cdot\vec{u}}$ for $x \in \{r,t\}$. The set $\{\vec{a}_{x,1},\dots,\vec{a}_{x,n_x}\}$ gathers the positions of the antennas with respect to the centroid of the considered array (transmit if x = t, receive if x = r). In order to lighten notations, the matrix $A_x \triangleq \frac{2\pi}{\lambda}(\vec{a}_{x,1},\dots,\vec{a}_{x,n_x}) \in \mathbb{R}^{3\times n_x}$ is introduced. It simplifies the steering/response vector expression to $e_x(\vec{u}) = \frac{1}{\sqrt{n_x}}\, e^{-jA_x^T\vec{u}}$, where the exponential function is applied component-wise. In order to further lighten notations, the pth atomic channel is defined as $H_p \triangleq c_p e_r(\vec{u}_{r,p}) e_t(\vec{u}_{t,p})^H$, and its vectorized version $h_p \triangleq \mathrm{vec}(H_p) \in \mathbb{C}^{n_r n_t}$. Therefore, defining the vectorized channel $h \triangleq \mathrm{vec}(H)$ yields $h = \sum_{p=1}^{P} h_p$. Note that the channel description used here is very general, as it handles any three-dimensional antenna array geometry, not only Uniform Linear Arrays (ULA) or Uniform Planar Arrays (UPA) as is sometimes proposed.
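The following short numerical sketch (an illustration added here, not part of the original paper) builds H from eq. (1) for arbitrary 3-D arrays; the spherical-coordinate convention in unit_vector and the example array sizes are assumptions.

```python
import numpy as np

def unit_vector(azimuth, elevation):
    """Unit direction vector from azimuth/elevation angles [rad] (assumed convention)."""
    return np.array([np.cos(elevation) * np.cos(azimuth),
                     np.cos(elevation) * np.sin(azimuth),
                     np.sin(elevation)])

def steering_vector(positions, wavelength, u):
    """e_x(u) for an arbitrary 3-D array; positions is (n_x, 3) [m], relative to the centroid."""
    phase = (2.0 * np.pi / wavelength) * positions @ u
    return np.exp(-1j * phase) / np.sqrt(positions.shape[0])

def physical_channel(rx_pos, tx_pos, wavelength, paths):
    """H = sum_p c_p e_r(u_rp) e_t(u_tp)^H, eq. (1); each path is
    (rho, phi, eta_r, psi_r, eta_t, psi_t), i.e. the parameters theta^(p)."""
    H = np.zeros((rx_pos.shape[0], tx_pos.shape[0]), dtype=complex)
    for rho, phi, eta_r, psi_r, eta_t, psi_t in paths:
        e_r = steering_vector(rx_pos, wavelength, unit_vector(eta_r, psi_r))
        e_t = steering_vector(tx_pos, wavelength, unit_vector(eta_t, psi_t))
        H += rho * np.exp(1j * phi) * np.outer(e_r, e_t.conj())
    return H

# Example: half-wavelength ULAs (64 Tx, 16 Rx antennas) and two paths.
lam = 5e-3
tx_pos = np.c_[0.5 * lam * (np.arange(64) - 31.5), np.zeros(64), np.zeros(64)]
rx_pos = np.c_[0.5 * lam * (np.arange(16) - 7.5), np.zeros(16), np.zeros(16)]
H = physical_channel(rx_pos, tx_pos, lam,
                     [(1.0, 0.3, 0.5, 0.0, -0.2, 0.1), (0.3, 1.1, -0.8, 0.1, 0.6, -0.1)])
```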
In short, the physical channel model can be seen as a parametric model with $\theta = \{\theta^{(p)} \triangleq (\rho_p, \phi_p, \eta_{r,p}, \psi_{r,p}, \eta_{t,p}, \psi_{t,p}),\ p = 1,\dots,P\}$. There are thus 6P real parameters in this model (the complex gain, DoD and DoA of every path are described with two parameters each). Of course, the model is most useful for estimation in the case where $6P \ll 2 n_r n_t$, since the number of parameters is thus greatly reduced.
Note that most classical massive MIMO channel estimation methods assume a similar physical model, but discretize a priori the DoDs and DoAs, so that the problem fits the framework of sparse recovery [START_REF] Mallat | Matching pursuits with timefrequency dictionaries[END_REF][START_REF] Tropp | Signal recovery from random measurements via orthogonal matching pursuit[END_REF][START_REF] Bajwa | Compressed channel sensing: A new approach to estimating sparse multipath channels[END_REF]. The approach used here is different, in the sense that no discretization is assumed for the analysis.
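To make the contrast with such sparse-recovery approaches concrete, here is a hedged sketch (reusing the functions above; the grids are arbitrary choices) of the dictionary of discretized atomic channels on which greedy methods such as Orthogonal Matching Pursuit typically operate.

```python
def channel_dictionary(rx_pos, tx_pos, wavelength, tx_grid, rx_grid):
    """Columns are vectorized rank-one atomic channels vec(e_r e_t^H) = conj(e_t) kron e_r
    over discretized DoD/DoA grids; tx_grid / rx_grid are lists of (azimuth, elevation)."""
    atoms = []
    for eta_t, psi_t in tx_grid:
        e_t = steering_vector(tx_pos, wavelength, unit_vector(eta_t, psi_t))
        for eta_r, psi_r in rx_grid:
            e_r = steering_vector(rx_pos, wavelength, unit_vector(eta_r, psi_r))
            atoms.append(np.kron(e_t.conj(), e_r))
    return np.column_stack(atoms)      # shape (n_r * n_t, grid size)
```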
Observation model
In order to carry out channel estimation, ns known pilot symbols are sent through the channel by each transmit antenna. The corresponding training matrix is denoted X ∈ C nt×ns . The signal at the receive antennas is thus expressed as HX + N, where N is a noise matrix with vec(N) ∼ CN (0,σ 2 Id). Due to the high cost and power consumption of millimeter wave Radio Frequency (RF) chains, it has been proposed to have less RF chains than antennas in both the transmitter and receiver [START_REF] Ayach | Spatially sparse precoding in millimeter wave mimo systems[END_REF][START_REF] Alkhateeb | Channel estimation and hybrid precoding for millimeter wave cellular systems[END_REF][START_REF] Heath | An overview of signal processing techniques for millimeter wave mimo systems[END_REF][START_REF] Akbar | Millimeter-Wave MIMO Transceivers: Theory, Design and Implementation[END_REF]. Such systems are often referred to as hybrid architectures. Mathematically speaking, this translates into specific constraints on the training matrix X (which has to "sense" the channel through analog precoders vi ∈ C nt , i = 1,...,nRF, nRF being the number of RF chains on the transmit side), as well as observing the signal at the receiver through analog combiners. Let us denote wj ∈ C nr , j = 1,...,nc the used analog combiners, the observed data is thus expressed in all generality as
$$\mathbf{Y} = \mathbf{W}^H\mathbf{H}\mathbf{X} + \mathbf{W}^H\mathbf{N}, \qquad (2)$$
where $\mathbf{W} \triangleq (\mathbf{w}_1,\dots,\mathbf{w}_{n_c})$ and the training matrix is constrained to be of the form $\mathbf{X} = \mathbf{V}\mathbf{Z}$, where $\mathbf{Z} \in \mathbb{C}^{n_{RF}\times n_s}$ is the digital training matrix.
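A minimal numerical sketch of the observation model of eq. (2), assuming arbitrary random precoders, combiners and digital training matrix (these choices are placeholders, not the optimized matrices discussed later):

```python
import numpy as np

rng = np.random.default_rng(0)
nt, nr, n_rf, nc, ns = 8, 4, 4, 2, 16
sigma = 0.1

# Arbitrary channel matrix standing in for H.
H = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)

# Hybrid training: X = V Z, with n_rf analog precoders and a digital training matrix Z.
V = rng.standard_normal((nt, n_rf)) + 1j * rng.standard_normal((nt, n_rf))
Z = rng.standard_normal((n_rf, ns)) + 1j * rng.standard_normal((n_rf, ns))
X = V @ Z

# Analog combiners at the receiver.
W = rng.standard_normal((nr, nc)) + 1j * rng.standard_normal((nr, nc))

N = sigma * (rng.standard_normal((nr, ns)) + 1j * rng.standard_normal((nr, ns))) / np.sqrt(2)

# Observed data, eq. (2): Y = W^H H X + W^H N (nc x ns instead of nr x ns).
Y = W.conj().T @ H @ X + W.conj().T @ N
print(Y.shape)  # (2, 16)
```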
Objective: bounding the variance of unbiased estimators
In order to assess the fundamental performance limits of channel estimation, the considered performance measure is the relative Mean Squared Error (rMSE). Denoting indifferently $\mathbf{H}(\theta) \triangleq f(\theta)$ or $\mathbf{H}$ the true channel ($\mathbf{h}(\theta)$ or $\mathbf{h}$ in vectorized form) and $\mathbf{H}(\hat{\theta}) \triangleq f(\hat{\theta})$ or $\hat{\mathbf{H}}$ its estimate ($\mathbf{h}(\hat{\theta})$ or $\hat{\mathbf{h}}$ in vectorized form) in order to lighten notations, the rMSE is expressed
$$\mathrm{rMSE} = \mathbb{E}\left[\|\mathbf{H}-\hat{\mathbf{H}}\|_F^2\right].\|\mathbf{H}\|_F^{-2} = \Big(\underbrace{\mathrm{Tr}\big(\mathrm{cov}(\hat{\mathbf{h}})\big)}_{\text{Variance}} + \underbrace{\|\mathbb{E}(\hat{\mathbf{H}})-\mathbf{H}\|_F^2}_{\text{Bias}}\Big).\|\mathbf{H}\|_F^{-2}, \qquad (3)$$
where the bias/variance decomposition can be done independently of the considered model [START_REF] Kay | Fundamentals of Statistical Signal Processing: Estimation Theory[END_REF]. The goal here is to lower-bound the variance term, considering the physical model introduced in the previous subsection. The bias term is not studied in details here, but its role is evoked in section 3.3.
CRAM ÉR-RAO LOWER BOUND
In this section, the variance term of eq. (3) is bounded using the Cramér-Rao Bound (CRB) [START_REF] Calyampudi | Information and the accuracy attainable in the estimation of statistical parameters[END_REF][18], which is valid for any unbiased estimator $\hat{\theta}$ of the true parameter $\theta$. The complex CRB [START_REF] Van Den Bos | A cramér-rao lower bound for complex parameters[END_REF] states
$$\mathrm{cov}\big(g(\hat{\theta})\big) \geq \frac{\partial g(\theta)}{\partial\theta}\,\mathbf{I}(\theta)^{-1}\,\frac{\partial g(\theta)}{\partial\theta}^H, \quad \text{with } \mathbf{I}(\theta) \triangleq \mathbb{E}\left[\frac{\partial \log L}{\partial\theta}\frac{\partial \log L}{\partial\theta}^H\right] \text{ the Fisher Information Matrix (FIM),}$$
where L denotes the model likelihood, and g is any complex differentiable vector function. In particular, regarding the variance term of eq. ( 3),
$$\mathrm{Tr}\big(\mathrm{cov}(\mathbf{h}(\hat{\theta}))\big) \geq \mathrm{Tr}\left(\frac{\partial \mathbf{h}(\theta)}{\partial\theta}\,\mathbf{I}(\theta)^{-1}\,\frac{\partial \mathbf{h}(\theta)}{\partial\theta}^H\right), \qquad (4)$$
with $\frac{\partial \mathbf{h}(\theta)}{\partial\theta} \triangleq \left(\frac{\partial \mathbf{h}(\theta)}{\partial\theta_1},\dots,\frac{\partial \mathbf{h}(\theta)}{\partial\theta_{n_p}}\right)$. A model-independent expression for the FIM is provided in section 3.1, and particularized in section 3.2 to the model of section 2.1. Finally, the bound is derived from eq. (4) in section 3.3.
General derivation
First, notice that vectorizing eq. ( 2), the observation matrix Y follows a complex gaussian distribution,
$$\mathrm{vec}(\mathbf{Y}) \sim \mathcal{CN}\Big(\underbrace{(\mathbf{X}^T\otimes\mathbf{W}^H)\mathbf{h}(\theta)}_{\boldsymbol{\mu}(\theta)},\ \underbrace{\sigma^2(\mathbf{Id}_{n_s}\otimes\mathbf{W}^H\mathbf{W})}_{\boldsymbol{\Sigma}}\Big).$$
In that particular case, the Slepian-Bangs formula [START_REF] Slepian | Estimation of signal parameters in the presence of noise[END_REF][START_REF] Bangs | Array Processing With Generalized Beamformers[END_REF] yields:
$$\mathbf{I}(\theta) = 2\,\mathrm{Re}\left\{\frac{\partial\boldsymbol{\mu}(\theta)}{\partial\theta}^H\boldsymbol{\Sigma}^{-1}\frac{\partial\boldsymbol{\mu}(\theta)}{\partial\theta}\right\} = \frac{2\alpha^2}{\sigma^2}\,\mathrm{Re}\left\{\frac{\partial\mathbf{h}(\theta)}{\partial\theta}^H\mathbf{P}\frac{\partial\mathbf{h}(\theta)}{\partial\theta}\right\}, \qquad (5)$$
with $\mathbf{P} \triangleq \frac{\sigma^2}{\alpha^2}(\mathbf{X}^*\otimes\mathbf{W})\boldsymbol{\Sigma}^{-1}(\mathbf{X}^T\otimes\mathbf{W}^H)$, where $\alpha^2 \triangleq \frac{1}{n_s}\mathrm{Tr}(\mathbf{X}^H\mathbf{X})$ is the average transmit power per time step. Note that the expression can be simplified to $\mathbf{P} = \frac{1}{\alpha^2}(\mathbf{X}^*\mathbf{X}^T)\otimes\big(\mathbf{W}(\mathbf{W}^H\mathbf{W})^{-1}\mathbf{W}^H\big)$ using elementary properties of the Kronecker product. The matrix $\mathbf{W}(\mathbf{W}^H\mathbf{W})^{-1}\mathbf{W}^H$ is a projection matrix onto the range of $\mathbf{W}$. In order to ease further interpretation, assume that $\mathbf{X}^H\mathbf{X} = \alpha^2\mathbf{Id}_{n_s}$. This assumption means that the transmit power is constant during training time ($\|\mathbf{x}_i\|_2^2 = \alpha^2,\ \forall i$) and that pilots sent at different time instants are mutually orthogonal ($\mathbf{x}_i^H\mathbf{x}_j = 0,\ \forall i\neq j$). This way, $\frac{1}{\alpha^2}\mathbf{X}^*\mathbf{X}^T$ is a projection matrix onto the range of $\mathbf{X}^*$, and $\mathbf{P}$ can itself be interpreted as a projection, being the Kronecker product of two projection matrices [22, p.112] (it is an orthogonal projection since $\mathbf{P}^H = \mathbf{P}$).
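This projection structure can be verified numerically. The sketch below assumes mutually orthogonal pilots so that X^H X = α² Id_{n_s}, and checks that P is idempotent and Hermitian; sizes and matrices are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
nt, nr, nc, ns = 6, 4, 3, 6

# Orthogonal pilots: X^H X = alpha^2 Id (X is a scaled matrix with orthonormal columns).
alpha = 2.0
Q, _ = np.linalg.qr(rng.standard_normal((nt, ns)) + 1j * rng.standard_normal((nt, ns)))
X = alpha * Q
W = rng.standard_normal((nr, nc)) + 1j * rng.standard_normal((nr, nc))

PW = W @ np.linalg.inv(W.conj().T @ W) @ W.conj().T  # projection onto range(W)
PX = (X.conj() @ X.T) / alpha**2                     # projection onto range(X*)
P = np.kron(PX, PW)                                  # P = (1/alpha^2)(X* X^T) (x) (W (W^H W)^-1 W^H)

# P is an orthogonal projection: P^2 = P and P^H = P.
print(np.allclose(P @ P, P), np.allclose(P.conj().T, P))
```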
Fisher information matrix for a sparse channel model
Consider now the parametric channel model of section 2.1, where $\mathbf{h} = \sum_{p=1}^{P}\mathbf{h}_p$, with $\mathbf{h}_p = c_p\mathbf{e}_t(\overrightarrow{u_{t,p}})^*\otimes\mathbf{e}_r(\overrightarrow{u_{r,p}})$.
Intra-path couplings. The derivatives of $\mathbf{h}$ with respect to the parameters of the $p$th path $\theta^{(p)}$ can be determined using matrix differentiation rules [START_REF] Brandt Petersen | The matrix cookbook[END_REF]:
• Regarding the complex gain $c_p = \rho_pe^{j\phi_p}$, the model yields the expressions $\frac{\partial\mathbf{h}(\theta)}{\partial\rho_p} = \frac{1}{\rho_p}\mathbf{h}_p$ and $\frac{\partial\mathbf{h}(\theta)}{\partial\phi_p} = j\mathbf{h}_p$.
• Regarding the DoA, $\frac{\partial\mathbf{h}(\theta)}{\partial\eta_{r,p}} = \big(\mathbf{Id}_{n_t}\otimes\mathrm{diag}(-j\mathbf{A}_r^T\overrightarrow{v_{\eta_{r,p}}})\big)\mathbf{h}_p$ and $\frac{\partial\mathbf{h}(\theta)}{\partial\psi_{r,p}} = \big(\mathbf{Id}_{n_t}\otimes\mathrm{diag}(-j\mathbf{A}_r^T\overrightarrow{v_{\psi_{r,p}}})\big)\mathbf{h}_p$, where $\overrightarrow{v_{\eta_{r,p}}}$ and $\overrightarrow{v_{\psi_{r,p}}}$ are the unit vectors in the azimuth and elevation directions at $\overrightarrow{u_{r,p}}$, respectively.
• Regarding the DoD, $\frac{\partial\mathbf{h}(\theta)}{\partial\eta_{t,p}} = \big(\mathrm{diag}(j\mathbf{A}_t^T\overrightarrow{v_{\eta_{t,p}}})\otimes\mathbf{Id}_{n_r}\big)\mathbf{h}_p$ and $\frac{\partial\mathbf{h}(\theta)}{\partial\psi_{t,p}} = \big(\mathrm{diag}(j\mathbf{A}_t^T\overrightarrow{v_{\psi_{t,p}}})\otimes\mathbf{Id}_{n_r}\big)\mathbf{h}_p$, where $\overrightarrow{v_{\eta_{t,p}}}$ and $\overrightarrow{v_{\psi_{t,p}}}$ are the unit vectors in the azimuth and elevation directions at $\overrightarrow{u_{t,p}}$, respectively.
Denoting $\frac{\partial\mathbf{h}}{\partial\theta^{(p)}} \triangleq \left(\frac{\partial\mathbf{h}(\theta)}{\partial\rho_p},\frac{\partial\mathbf{h}(\theta)}{\partial\phi_p},\frac{\partial\mathbf{h}(\theta)}{\partial\eta_{r,p}},\frac{\partial\mathbf{h}(\theta)}{\partial\psi_{r,p}},\frac{\partial\mathbf{h}(\theta)}{\partial\eta_{t,p}},\frac{\partial\mathbf{h}(\theta)}{\partial\psi_{t,p}}\right)$, the part of the FIM corresponding to couplings between the parameters $\theta^{(p)}$ (intra-path couplings) is expressed as
$$\mathbf{I}^{(p,p)} \triangleq \frac{2\alpha^2}{\sigma^2}\,\mathrm{Re}\left\{\frac{\partial\mathbf{h}}{\partial\theta^{(p)}}^H\mathbf{P}\frac{\partial\mathbf{h}}{\partial\theta^{(p)}}\right\}. \qquad (6)$$
Let us now particularize this expression. First of all, in order to ease the interpretations carried out in section 4, consider the case of optimal observation conditions (when the range of $\mathbf{P}$ contains the range of $\frac{\partial\mathbf{h}(\theta)}{\partial\theta}$). This indeed allows to interpret separately the role of the observation matrices and of the antenna array geometries. Second, consider for example the entry corresponding to the coupling between the departure azimuth angle $\eta_{t,p}$ and the arrival azimuth angle $\eta_{r,p}$ of the $p$th path. Under the optimal observation assumption, this entry is proportional to the quantities $\mathbf{A}_r\mathbf{1}_{n_r}$ and $\mathbf{A}_t\mathbf{1}_{n_t}$, and thus vanishes, since $\mathbf{A}_r\mathbf{1}_{n_r} = \mathbf{0}$ and $\mathbf{A}_t\mathbf{1}_{n_t} = \mathbf{0}$ by construction (because the antenna positions are taken with respect to the array centroid). This means that the parameters $\eta_{r,p}$ and $\eta_{t,p}$ are statistically uncoupled, i.e. orthogonal parameters [START_REF] David | Parameter orthogonality and approximate conditional inference[END_REF]. Computing all couplings for $\theta^{(p)}$ yields
$$\mathbf{I}^{(p,p)} = \frac{2\rho_p^2\alpha^2}{\sigma^2}\begin{pmatrix}\frac{1}{\rho_p^2} & 0 & \mathbf{0}_{1\times 2} & \mathbf{0}_{1\times 2}\\ 0 & 1 & \mathbf{0}_{1\times 2} & \mathbf{0}_{1\times 2}\\ \mathbf{0}_{2\times 1} & \mathbf{0}_{2\times 1} & \mathbf{B}_r & \mathbf{0}_{2\times 2}\\ \mathbf{0}_{2\times 1} & \mathbf{0}_{2\times 1} & \mathbf{0}_{2\times 2} & \mathbf{B}_t\end{pmatrix}, \qquad (7)$$
where
$$\mathbf{B}_x = \frac{1}{n_x}\begin{pmatrix}\|\mathbf{A}_x^T\overrightarrow{v_{\eta_{x,p}}}\|_2^2 & \overrightarrow{v_{\eta_{x,p}}}^T\mathbf{A}_x\mathbf{A}_x^T\overrightarrow{v_{\psi_{x,p}}}\\ \overrightarrow{v_{\psi_{x,p}}}^T\mathbf{A}_x\mathbf{A}_x^T\overrightarrow{v_{\eta_{x,p}}} & \|\mathbf{A}_x^T\overrightarrow{v_{\psi_{x,p}}}\|_2^2\end{pmatrix}, \qquad (8)$$
with x ∈ {r, t}. These expressions are thoroughly interpreted in section 4.
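As an illustration, B_x can be evaluated for a given array geometry and direction. The following sketch assumes a half-wavelength square UPA and an arbitrary example direction; the local azimuth/elevation unit vectors are computed from the standard spherical parametrization.

```python
import numpy as np

def upa_positions(n_side, lam=1.0):
    """Scaled antenna positions A_x (3 x n) of a square UPA in the (x, y) plane,
    centered on its centroid, with half-wavelength spacing."""
    g = (np.arange(n_side) - (n_side - 1) / 2) * lam / 2
    xx, yy = np.meshgrid(g, g)
    pos = np.vstack([xx.ravel(), yy.ravel(), np.zeros(n_side**2)])
    return (2 * np.pi / lam) * pos

def B_matrix(A, azimuth, elevation):
    """B_x of eq. (8) for a given direction (azimuth eta, elevation psi)."""
    n = A.shape[1]
    # Unit vectors in the azimuth and elevation directions at u (local spherical frame).
    v_eta = np.array([-np.sin(azimuth), np.cos(azimuth), 0.0])
    v_psi = np.array([-np.sin(elevation) * np.cos(azimuth),
                      -np.sin(elevation) * np.sin(azimuth),
                      np.cos(elevation)])
    a_eta, a_psi = A.T @ v_eta, A.T @ v_psi
    return (1 / n) * np.array([[a_eta @ a_eta, a_eta @ a_psi],
                               [a_psi @ a_eta, a_psi @ a_psi]])

A = upa_positions(4)
print(B_matrix(A, azimuth=0.4, elevation=0.2))
```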
Global FIM. Taking into account couplings between all paths, the global FIM is easily deduced from the previous calculations and block structured,
$$\mathbf{I}(\theta) = \begin{pmatrix}\mathbf{I}^{(1,1)} & \mathbf{I}^{(1,2)} & \dots & \mathbf{I}^{(1,P)}\\ \mathbf{I}^{(2,1)} & \mathbf{I}^{(2,2)} & & \\ \vdots & & \ddots & \vdots\\ \mathbf{I}^{(P,1)} & & \dots & \mathbf{I}^{(P,P)}\end{pmatrix},$$
where $\mathbf{I}^{(p,q)} \in \mathbb{R}^{6\times 6}$ contains the couplings between parameters of the $p$th and $q$th paths and is expressed $\mathbf{I}^{(p,q)} \triangleq \frac{2\alpha^2}{\sigma^2}\,\mathrm{Re}\left\{\frac{\partial\mathbf{h}}{\partial\theta^{(p)}}^H\mathbf{P}\frac{\partial\mathbf{h}}{\partial\theta^{(q)}}\right\}$. The off-diagonal blocks $\mathbf{I}^{(p,q)}$ of $\mathbf{I}(\theta)$, corresponding to couplings between parameters of distinct paths, or inter-path couplings, can be expressed explicitly (as in eq. (7) for intra-path couplings). However, the obtained expressions are less prone to interesting interpretations, and inter-path couplings have been observed to be negligible in most cases. They are thus not displayed in the present paper, for brevity reasons. Note that a similar FIM computation was recently carried out in the particular case of linear arrays [START_REF] Garcia | Optimal robust precoders for tracking the aod and aoa of a mm-wave path[END_REF]. However, the form of the FIM (in particular parameter orthogonality) was not exploited in [START_REF] Garcia | Optimal robust precoders for tracking the aod and aoa of a mm-wave path[END_REF], as is done here in sections 4 and 5.
Bound on the variance
The variance of channel estimators remains to be bounded, using eq. (4). From eq. (5), the FIM can be expressed more conveniently only with real matrices as
$$\mathbf{I}(\theta) = \frac{2\alpha^2}{\sigma^2}\bar{\mathbf{D}}^T\bar{\mathbf{P}}\bar{\mathbf{D}}, \quad \text{with } \bar{\mathbf{D}} \triangleq \begin{pmatrix}\mathrm{Re}\{\frac{\partial\mathbf{h}(\theta)}{\partial\theta}\}\\ \mathrm{Im}\{\frac{\partial\mathbf{h}(\theta)}{\partial\theta}\}\end{pmatrix}, \quad \bar{\mathbf{P}} \triangleq \begin{pmatrix}\mathrm{Re}\{\mathbf{P}\} & -\mathrm{Im}\{\mathbf{P}\}\\ \mathrm{Im}\{\mathbf{P}\} & \mathrm{Re}\{\mathbf{P}\}\end{pmatrix},$$
where $\bar{\mathbf{P}}$ is also a projection matrix. Finally, injecting eq. (5) into eq. (4) and assuming the FIM is invertible gives for the relative variance
$$\mathrm{Tr}\big(\mathrm{cov}(\mathbf{h}(\hat{\theta}))\big).\|\mathbf{h}\|_2^{-2} \geq \frac{\sigma^2}{2\alpha^2}\mathrm{Tr}\big(\bar{\mathbf{D}}(\bar{\mathbf{D}}^T\bar{\mathbf{P}}\bar{\mathbf{D}})^{-1}\bar{\mathbf{D}}^T\big).\|\mathbf{h}\|_2^{-2} \geq \frac{\sigma^2}{2\alpha^2}\mathrm{Tr}\big(\bar{\mathbf{D}}(\bar{\mathbf{D}}^T\bar{\mathbf{D}})^{-1}\bar{\mathbf{D}}^T\big).\|\mathbf{h}\|_2^{-2} = \frac{\sigma^2}{2\alpha^2\|\mathbf{h}\|_2^2}\,n_p = \frac{3P}{\mathrm{SNR}}, \qquad (9)$$
where the second inequality comes from the fact that $\bar{\mathbf{P}}$, being an orthogonal projection matrix, satisfies $\bar{\mathbf{P}} \leq \mathbf{Id} \Rightarrow \bar{\mathbf{D}}^T\bar{\mathbf{P}}\bar{\mathbf{D}} \leq \bar{\mathbf{D}}^T\bar{\mathbf{D}} \Rightarrow (\bar{\mathbf{D}}^T\bar{\mathbf{P}}\bar{\mathbf{D}})^{-1} \geq (\bar{\mathbf{D}}^T\bar{\mathbf{D}})^{-1} \Rightarrow \bar{\mathbf{D}}(\bar{\mathbf{D}}^T\bar{\mathbf{P}}\bar{\mathbf{D}})^{-1}\bar{\mathbf{D}}^T \geq \bar{\mathbf{D}}(\bar{\mathbf{D}}^T\bar{\mathbf{D}})^{-1}\bar{\mathbf{D}}^T$ (using elementary properties of the ordering of semidefinite positive matrices, in particular [26, Theorem 4.3]). The first equality comes from the fact that $\mathrm{Tr}\big(\bar{\mathbf{D}}(\bar{\mathbf{D}}^T\bar{\mathbf{D}})^{-1}\bar{\mathbf{D}}^T\big) = \mathrm{Tr}(\mathbf{Id}_{n_p}) = n_p$. Finally, the second equality is justified by $n_p = 6P$ considering the sparse channel model, and by taking $\mathrm{SNR} \triangleq \frac{\alpha^2\|\mathbf{h}\|_2^2}{\sigma^2}$ (this is actually an optimal SNR, only attained with perfect precoding and combining).
Optimal bound. The first inequality in eq. (9) becomes an equality if an efficient estimator is used [START_REF] Kay | Fundamentals of Statistical Signal Processing: Estimation Theory[END_REF]. Moreover, the second inequality is an equality if the condition $\mathrm{im}\big(\frac{\partial\mathbf{h}(\theta)}{\partial\theta}\big) \subset \mathrm{im}(\mathbf{P})$ is fulfilled (this corresponds to optimal observations, further discussed in section 4). Remarkably, under optimal observations, the lower bound on the relative variance is directly proportional to the considered number of paths $P$ and inversely proportional to the SNR, and does not depend on the specific model structure, since the influence of the derivative matrix $\bar{\mathbf{D}}$ cancels out in the derivation.
Sparse recovery CRB. It is interesting to notice that the bound obtained here is similar to the CRB for sparse recovery [START_REF] Ben | The cramér-rao bound for estimating a sparse parameter vector[END_REF] (corresponding to an intrinsically discrete model), that is proportional to the sparsity of the estimated vector, analogous here to the number of paths.
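The key algebraic step behind the last two equalities, Tr(D̄(D̄ᵀD̄)⁻¹D̄ᵀ) = n_p, together with the resulting 3P/SNR value, can be checked with a few lines of code (the sizes and SNR below are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(2)
n_obs, n_p = 50, 18              # e.g. n_p = 6P with P = 3 paths
D = rng.standard_normal((n_obs, n_p))

# Trace of the projection onto the column space of D equals the number of parameters.
proj = D @ np.linalg.inv(D.T @ D) @ D.T
print(np.trace(proj))            # ~= 18 = n_p

# Resulting lower bound on the relative variance under optimal observations:
P_paths, snr = 3, 10**(10 / 10)  # SNR of 10 dB
print(3 * P_paths / snr)         # 3P / SNR
```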
INTERPRETATIONS
The main results of sections 3.2 and 3.3 are interpreted in this section, ultimately guiding the design of efficient estimation algorithms. Parameterization choice. The particular expression of the FIM allows to assess precisely the chosen parameterization. First of all, I(θ) has to be invertible and well-conditioned, for the model to be theoretically and practically identifiable [START_REF] Thomas | Identification in parametric models[END_REF][START_REF] Kravaris | Advances and selected recent developments in state and parameter estimation[END_REF], respectively. As a counterexample, imagine two paths indexed by p and q share the same DoD and DoA, then proportional columns appear in ∂h(θ) ∂θ , which implies non-invertibility of the FIM. However, it is possible to summarize the effect of these two paths with a single virtual path of complex gain cp +cq without any accuracy loss in channel description, yielding an invertible FIM. Similarly, two paths with very close DoD and DoA yield an ill-conditioned FIM (since the corresponding steering vectors are close to colinear), but can be merged into a single virtual path with a limited accuracy loss, improving the conditioning. Interestingly, in most channel models, paths are assumed to be grouped into clusters, in which all DoDs and DoAs are close to a principal direction [START_REF] Adel | A statistical model for indoor multipath propagation[END_REF][START_REF] Jensen | Modeling the indoor mimo wireless channel[END_REF][START_REF] Michael | A review of antennas and propagation for mimo wireless communications[END_REF]. Considering the MSE, merging close paths indeed decreases the variance term (lowering the total number of parameters), without increasing significantly the bias term (because their effects on the channel matrix are very correlated). These considerations suggest dissociating the number of paths considered in the model P from the number of physical paths, denoted P φ , taking P < P φ by merging paths. This is one motivation behind the famous virtual channel representation [START_REF] Akbar | Deconstructing multiantenna fading channels[END_REF], where the resolution at which paths are merged is fixed and given by the number of antennas. The theoretical framework of this paper suggests to set P (and thus the merging resolution) so as to minimize the MSE. A theoretical study of the bias term of the MSE (which should decrease when P increases) could thus allow to calibrate models, choosing an optimal number of paths P * for estimation. Such a quest for P * is carried out empirically in section 5.
Optimal observations. The matrices $\mathbf{X}$ and $\mathbf{W}$ (pilot symbols and analog combiners) determine the quality of channel observation. Indeed, it was shown in section 3.3 that the lowest CRB is obtained when $\mathrm{im}\big(\frac{\partial\mathbf{h}(\theta)}{\partial\theta}\big) \subset \mathrm{im}(\mathbf{P})$, with
$$\mathbf{P} = \frac{1}{\alpha^2}(\mathbf{X}^*\mathbf{X}^T)\otimes\big(\mathbf{W}(\mathbf{W}^H\mathbf{W})^{-1}\mathbf{W}^H\big).$$
In the case of a sparse channel model, using the expressions for $\frac{\partial\mathbf{h}(\theta)}{\partial\theta}$ derived above, this is equivalent to two distinct conditions: for every path $p$, the range of $\mathbf{X}^*$ must contain the transmit steering vector and its derivatives, and the range of $\mathbf{W}$ must contain the receive response vector and its derivatives, the derivatives being of the form $\mathrm{diag}(-j\mathbf{A}_x^T\overrightarrow{v_{\xi_{x,p}}})\mathbf{e}_x(\overrightarrow{u_{x,p}})$ with $x \in \{r,t\}$ and $\xi \in \{\eta,\psi\}$. These conditions are fairly intuitive: to estimate accurately parameters corresponding to a given DoD (respectively DoA), the sent pilot sequence (respectively analog combiners) should span the corresponding steering vector and its derivatives (to "sense" small changes). To accurately estimate all the channel parameters, this should be met for each atomic channel.
Array geometry. Under optimal observation conditions, performance limits on DoD/DoA estimation are given by eq. (8). The lower the diagonal entries of $\mathbf{B}_x^{-1}$, the better the bound. This implies the bound is better if the diagonal entries of $\mathbf{B}_x$ are large and the off-diagonal entries are small (in absolute value). Since the unit vectors $\overrightarrow{v_{\eta_{x,p}}}$ and $\overrightarrow{v_{\psi_{x,p}}}$ are by definition orthogonal, having $\mathbf{A}_x\mathbf{A}_x^T = \beta^2\mathbf{Id}$ with maximal $\beta^2$ is optimal, and yields uniform performance limits for any DoD/DoA. Moreover, in this situation, $\beta^2$ is proportional to $\frac{1}{n_x}\sum_{i=1}^{n_x}\|\overrightarrow{a_{x,i}}\|_2^2$, the mean squared norm of antenna positions with respect to the array centroid. Having a larger antenna array is thus beneficial (as expected): the further the antennas are from the array centroid, the larger $\beta^2$ is.
Orthogonality of DoA and DoD. Section 3.2 shows that the matrix corresponding to intra-path couplings (eq. (7)) is block diagonal, meaning that for a given path, the parameters corresponding to gain, phase, DoD and DoA are mutually orthogonal. Maximum Likelihood (ML) estimators of orthogonal parameters are asymptotically independent [START_REF] David | Parameter orthogonality and approximate conditional inference[END_REF] (when the number of observations, or equivalently the SNR, goes to infinity). Classically, channel estimation in massive MIMO systems is done using greedy sparse recovery algorithms [START_REF] Mallat | Matching pursuits with timefrequency dictionaries[END_REF][START_REF] Tropp | Signal recovery from random measurements via orthogonal matching pursuit[END_REF][START_REF] Bajwa | Compressed channel sensing: A new approach to estimating sparse multipath channels[END_REF]. Such algorithms can be cast into ML estimation with discretized directions, in which the DoD and DoA (coefficient support) are estimated jointly first (which is costly), and then the gain and phase are deduced (coefficient value), iteratively for each path. Orthogonality between the DoD and DoA parameters is thus not exploited by classical channel estimation methods. We propose here to exploit it via a sequential decoupled DoD/DoA estimation, that can be inserted in any sparse recovery algorithm in place of the support estimation step, without loss of optimality in the ML sense. In the proposed method, one direction (DoD or DoA) is estimated first using an ML criterion considering the other direction as a nuisance parameter, and the other one is deduced using the joint ML criterion. Such a strategy is presented in Algorithm 1 (Sequential direction estimation, DoA first), of which only the key steps are reproduced here: the $n$ test transmit directions $\vec{v}_1,\dots,\vec{v}_n$ are gathered in the matrix
$$\mathbf{K}_t \triangleq \left(\frac{\mathbf{X}^H\mathbf{e}_t(\vec{v}_1)}{\|\mathbf{X}^H\mathbf{e}_t(\vec{v}_1)\|_2}\,\Big|\,\dots\,\Big|\,\frac{\mathbf{X}^H\mathbf{e}_t(\vec{v}_n)}{\|\mathbf{X}^H\mathbf{e}_t(\vec{v}_n)\|_2}\right),$$
the DoA estimate $\hat{\vec{u}}_r \leftarrow \vec{u}_{\hat{i}}$ is obtained first among $m$ test receive directions (line 3 of the algorithm), and then, at line 6, the index $\hat{j}$ of the maximal entry of $\mathbf{e}_r(\vec{u}_{\hat{i}})^H\mathbf{Y}\mathbf{K}_t$ is found and $\hat{\vec{u}}_t \leftarrow \vec{v}_{\hat{j}}$ is set ($O(n)$ complexity). It can be verified that lines 3 and 6 of the algorithm actually correspond to ML estimation of the DoA and joint ML estimation, respectively. The overall complexity of the sequential direction estimation is thus $O(m+n)$, compared to $O(mn)$ for the joint estimation with the same test directions. Note that a similar approach, in which DoAs for all paths are estimated at once first, was recently proposed [START_REF] Noureddine | A two-step compressed sensing based channel estimation solution for millimeter wave mimo systems[END_REF] (without theoretical justification).
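The following Python sketch illustrates the idea of the sequential estimation for a single path; it is a simplified reimplementation (identity-like pilots and combiners, dictionaries of random test directions), not the exact Algorithm 1, and the dictionary construction is a placeholder.

```python
import numpy as np

def sequential_direction_estimation(Y, Er, Et):
    """Estimate a (DoA, DoD) index pair from the observation Y.

    Er: n_r x m matrix whose columns are receive responses e_r(u_i) for m test DoAs.
    Et: n_t x n matrix whose columns are transmit steering vectors e_t(v_j) for n test DoDs.
    Assumes Y ~ H (identity pilots and combiners) and one dominant path.
    Complexity O(m + n) instead of O(mn) for the joint search."""
    # Step 1: DoA first, treating the DoD as a nuisance parameter:
    # pick the test DoA maximizing the energy of the combined observation.
    i_hat = np.argmax(np.linalg.norm(Er.conj().T @ Y, axis=1))
    # Step 2: DoD deduced from the joint criterion with the DoA fixed.
    j_hat = np.argmax(np.abs(Er[:, i_hat].conj() @ Y @ Et))
    return i_hat, j_hat

# Toy usage with random dictionaries and a rank-1 "channel".
rng = np.random.default_rng(3)
nr, nt, m, n = 16, 64, 100, 100
Er = np.exp(1j * rng.uniform(0, 2 * np.pi, (nr, m))) / np.sqrt(nr)
Et = np.exp(1j * rng.uniform(0, 2 * np.pi, (nt, n))) / np.sqrt(nt)
i_true, j_true = 7, 42
Y = np.outer(Er[:, i_true], Et[:, j_true].conj())
print(sequential_direction_estimation(Y, Er, Et))  # (7, 42) expected
```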
PRELIMINARY EXPERIMENT
Let us compare the proposed sequential direction estimation to the classical joint estimation. This experiment must be seen as an example illustrating the potential of the approach, and not as an extensive experimental validation. Experimental settings. Consider synthetic channels generated using the NYUSIM channel simulator [START_REF] Mathew | 3-d millimeterwave statistical channel model for 5g wireless system design[END_REF] (setting f = 28 GHz, the distance between transmitter and receiver to d = 30 m) to obtain the DoDs, DoAs, gains and phases of each path. The channel matrix is then obtained from eq. ( 1), considering square Uniform Planar Arrays (UPAs) with half-wavelength separated antennas, with nt = 64 and nr = 16. Optimal observations are considered, taking both W and X as the identity. Moreover, the noise variance σ 2 is set so as to get an SNR of 10 dB. Finally, the two aforementioned direction estimation strategies are inserted in the Matching Pursuit (MP) algorithm [START_REF] Mallat | Matching pursuits with timefrequency dictionaries[END_REF], discretizing the directions taking m = n = 2, 500, and varying the total number P of estimated paths. Results. Table 1 shows the obtained relative MSE and estimation times (Python implementation on a laptop with an Intel(R) Core(TM) i7-3740QM CPU @ 2.70 GHz). First of all, for P = 5, 10, 20, the estimation error decreases and the estimation time increases with P , exhibiting a trade-off between accuracy and time. However, increasing P beyond a certain point seems useless, since the error re-increases, as shown by the MSE for P = 40, echoing the trade-off evoked in section 3.3, and indicating that P * is certainly between 20 and 40 for both methods in this setting. Finally, for any value of P , while the relative errors of the sequential and joint estimation methods are very similar, the estimation time is much lower (between ten and twenty times) for sequential estimation. This observation validates experimentally the theoretical claims made in the previous section.
CONCLUSIONS AND PERSPECTIVES
In this paper, the performance limits of massive MIMO channel estimation were studied. To this end, training-based estimation with a physical channel model and a hybrid architecture was considered. The Fisher Information Matrix and the Cramér-Rao bound were derived, yielding several results. The CRB ended up being proportional to the number of parameters in the model and independent from the precise model structure. The FIM allowed to draw several conclusions regarding the observation matrices and the array geometries. Moreover, it suggested computationally efficient algorithms which are asymptotically as accurate as classical ones.
This paper is obviously only a first step toward a deep theoretical understanding of massive MIMO channel estimation. Apart from more extensive experimental evaluations and optimized algorithms, a theoretical study of the bias term of the MSE would be needed to calibrate models, and the interpretations of section 4 could be leveraged to guide system design.
Acknowledgments. The authors wish to thank Matthieu Crussière for the fruitful discussions that greatly helped improving this work.
This work has been performed in the framework of the Horizon 2020 project ONE5G (ICT-760809) receiving funds from the European Union. The authors would like to acknowledge the contributions of their colleagues in the project, although the views expressed in this contribution are those of the authors and do not necessarily represent the project. |
01716111 | en | [
"info.info-ni",
"info.info-pf",
"info.info-ds",
"info.info-et"
] | 2024/03/05 22:32:10 | 2018 | https://hal.science/hal-01716111v2/file/Stability%20and%20Security%20of%20the%20Tangle.pdf | Quentin Bramas
email: bramas@unistra.fr
The Stability and the Security of the Tangle
In this paper we study the stability and the security of the distributed data structure at the base of the IOTA protocol, called the Tangle. The contribution of this paper is twofold. First, we present a simple model to analyze the Tangle and give the first discrete time formal analyses of the average number of unconfirmed transactions and the average confirmation time of a transaction.
Then, we define the notion of assiduous honest majority that captures the fact that the honest nodes have more hashing power than the adversarial nodes and that all this hashing power is constantly used to create transactions. This notion is important because we prove that it is a necessary assumption to protect the Tangle against double-spending attacks, and this is true for any tip selection algorithm (which is a fundamental building block of the protocol) that verifies some reasonable assumptions. In particular, the same is true with the Markov Chain Monte Carlo tip selection algorithm currently used in the IOTA protocol.
Our work shows that either all the honest nodes must constantly use all their hashing power to validate the main chain (similarly to the bitcoin protocol) or some kind of authority must be provided to avoid this kind of attack (like in the current version of the IOTA where a coordinator is used).
The work presented here constitute a theoretical analysis and cannot be used to attack the current IOTA implementation. The goal of this paper is to present a formalization of the protocol and, as a starting point, to prove that some assumptions are necessary in order to defend the system again double-spending attacks. We hope that it will be used to improve the current protocol with a more formal approach.
Introduction
Since the day Satoshi Nakamoto presented the Bitcoin protocol in 2008 [START_REF] Nakamoto | Bitcoin: A peer-to-peer electronic cash system[END_REF], the interest in Blockchain technology has grown continuously. More generally, the interest concerns Distributed Ledger Technology, which refers to a distributed data storage protocol. Usually it involves a number of nodes (or processes, or agents) in a network that are known to each other or not. Those nodes may not trust each-other so the protocol should ensure that they reach a consensus on the order of the operations they perform, in addition to other mechanism like data replication for instance.
The consensus problem has been studied for a long time [START_REF] Pease | Reaching agreement in the presence of faults[END_REF][START_REF] Fischer | Impossibility of distributed consensus with one faulty process[END_REF] providing a number of fundamental results. But the solvability of the problem was given in term of proportion of faulty agents over honest agents and in a trustless network, where anyone can participate, an adversary can simulate an arbitrary number of nodes in the network. To avoid that, proof systems like Proof of Work (PoW) or Proof of Stake (PoS) are used to link the importance of an entity with some external properties (processing power in PoW) or internal properties (the number of owned tokens 1 in PoS) instead of simply the number of nodes it controls. The solvability of the consensus is now possible only if the importance of the adversary (given in terms of hashing power or in stake) is smaller than the honest one (the proportion is reduced to 1/3 if the network is asynchronous).
In Bitcoin and in the other blockchain technologies, transactions are stored in a chain of blocks, and the PoW or PoS is used to elect one node that is responsible for writing data in the next block. The "random" selection and the incentive a node may have to execute honestly the protocol make the whole system secure, as it was shown by several formal analysis [START_REF] Nakamoto | Bitcoin: A peer-to-peer electronic cash system[END_REF][START_REF] Garay | The bitcoin backbone protocol: Analysis and applications[END_REF]. Usually, there are properties that hold with high probability i.e., with a probability that tends to one quickly as the time increases. For instance, the order between two transactions do not change with probability that tends to 1 exponentially fast over the time in the Bitcoin protocol, if the nodes executing honestly (or rationally) the protocol have more than a third of the hashing total power.
In this paper we study another distributed ledger protocol called the Tangle, presented by Serguei Popov [START_REF] Popov | The tangle. white paper[END_REF] that is used in the IOTA cryptocurrency to store transactions. The Tangle is a Directed Acyclic Graph (DAG) where a vertex, representing a transaction, has two parents, representing the transactions it confirms.
According to the protocol a PoW has to be done when adding a transaction to the Tangle. This PoW prevents an adversary from spamming the network. However, it is not clear in the definition of the Tangle how this PoW impacts its security.
When a new transaction is appended to the Tangle, it references two previous unconfirmed transactions, called tips. The algorithm selecting the two tips is called a Tip Selection Algorithm (TSA). It is a fundamental parts of the protocol as it is used by the participants to decide, among two conflicting transactions, which one is the valid. It is the most important part in order for the participants to reach a consensus. The TSA currently used in the IOTA implementation uses the PoW contained in each transaction to select the two tips.
Related Work Very few academic papers exist on this protocol, and there is no previous work that formally analyzes its security. The white paper behind the Tangle [START_REF] Popov | The tangle. white paper[END_REF] presents a quick analysis of the average number of transactions in the continuous time setting. This analysis is done after assuming that the evolution converge toward a stationary distribution. The same paper presents a TSA using Monte Carlo Markov Chain (MCMC) random walks in the DAG from old transactions toward new ones, to select two unconfirmed transactions. The random walk is weighted to favor transactions that are confirmed by more transactions. There is no analysis on how the assigned weight, based on the PoW of each transaction, affects the security of the protocol. This MCMC TSA is currently used by the IOTA cryptocurrency.
It is shown in [START_REF] Popov | Equilibria in the tangle[END_REF] that using the default TSA correspond to a Nash equilibrium. Participants are incite to use the MCMC TSA, because using another TSA (e.g. a lazy one that confirms only already confirmed transactions) may increase the chances of seeing their transactions unconfirmed.
Finally, the Tangle has also been analyzed by simulation [START_REF] Kuśmierz | The first glance at the simulation of the tangle: discrete model[END_REF] using a discrete time model, where transactions are issued every round following a Poisson distribution of parameter λ. Like in the continuous time model, the average number of unconfirmed transactions (called tips) seems to grow linearly with the value of λ, but a little bit slower (≈ 1.26λ compared to 2λ in the continuous time setting).
Contributions. The contribution of our paper is twofold. First, we formally analyze the number of tips in the discrete time setting, depending on the value of λ, by seeing it as a Markov chain where at each round there is a given probability to obtain a given number of tips. Unlike previous work, we here prove the convergence of the system toward a stationary distribution. This allows us to prove the previous results found by simulations [START_REF] Kuśmierz | The first glance at the simulation of the tangle: discrete model[END_REF], namely that the average number of tips is stationary and converges toward a fixed value.
Second, we prove that if the TSA depends only on the PoW, then the weight of the honest transactions should exceed the hashing power of the adversary to prevent a double-spending attack. This means that honest nodes should constantly use their hashing power and issue new transactions, otherwise an adversary can attack the protocol even with a small fraction of the total hashing power. Our results is interesting because it is true for any tip selection algorithm i.e., the protocol cannot be more secure by simply using a more complex TSA.
The remaining of the paper is organized as follow. Section 2 presents our model and the Tangle. In Section 3 we analyze the average confirmation time and the average number of unconfirmed transactions. In Section 4 we prove our main theorem by presenting a simple double-spending attack.
Model
The Network
We consider a set N of processes, called nodes, that are fully connected. Each node can send a message to all the other nodes (the network topology is a complete graph). We will discuss later the case where the topology is sparse.
We assume nodes are activated synchronously. The time is discrete and at each time instant, called round, a node reads the messages sent by the other nodes in the previous round, executes the protocol and, if needed, broadcast a message to all the other nodes. When a node broadcast a message, all the other nodes receive it in the next round. The size of messages are in O(B), where B is an upper bound on the size of the messages when all the nodes follow the protocol honestly i.e., the size of the messages is sufficiently high not to impact the protocol when it is executed honestly. Those assumptions are deliberately strong as they make an attack more difficult to perform.
The DAG
In this paper, we consider a particular kind of distributed ledger called the Tangle, which is a Directed Acyclic Graph (DAG). Each node u stores at a given round r a local DAG G^u_r (or simply G_r or G if the node or the round are clear from the context), where each vertex, called a site, represents a transaction. Each site has two parents (possibly the same) in the DAG. We say a site directly confirms its two parents. All sites that are confirmed by the parents of a site are also said to be confirmed (or indirectly confirmed) by it, i.e., there is a path from a site to all the sites it confirms in the DAG (see Figure 1). A site that is not yet confirmed is called a tip. There is a unique site called the genesis that does not have parents and is confirmed by all the other sites. For simplicity we model a DAG simply by a set of sites G = (s_i)_{i∈I}. Two sites may be conflicting. This definition is application-dependent, so we assume that there exists a function areConflicting(a, b) that answers whether two sites are conflicting or not.

Figure 1: In each site, the first number is its score and the second is its cumulative weight. The two tips (with dashed border) are not confirmed yet and have cumulative weight of 1.

If the Tangle is used to store the balance of a given currency (like the IOTA cryptocurrency), then a site represents a transaction moving funds from a sender address to a receiver address, and two sites are conflicting if they try to move the same funds to two different receivers, i.e., if executing both transactions results in a negative balance for the sender. The details of this example are outside the scope of this paper, but we may use this terminology in the remainder of the paper. In this case, signing a transaction means performing the PoW to create the site that will be included in the Tangle, and broadcasting a transaction means sending it to the other nodes so that they can include it in their local Tangle. At each round, each node may sign one or more transactions. For each transaction, the node selects two parents. The signed transaction becomes a site in the DAG. Then, the node broadcasts the site to all the other nodes.
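The objects introduced above (sites, parents, tips, confirmation) can be captured by a very small data structure. The following Python sketch is only an illustration of the definitions, with unit weights and without conflicts or networking; it is not the IOTA implementation.

```python
class Tangle:
    """Minimal DAG of sites; each site has two parents (the genesis has none)."""

    def __init__(self):
        self.parents = {0: ()}        # site id -> pair of parent ids; 0 is the genesis
        self.children = {0: set()}    # site id -> set of direct children

    def add_site(self, p1, p2):
        """Append a new site directly confirming sites p1 and p2, return its id."""
        s = len(self.parents)
        self.parents[s] = (p1, p2)
        self.children[s] = set()
        self.children[p1].add(s)
        self.children[p2].add(s)
        return s

    def tips(self):
        """Sites that are not confirmed by any other site."""
        return [s for s, c in self.children.items() if not c]

    def confirmed_by(self, s):
        """All sites confirmed (directly or indirectly) by site s."""
        seen, stack = set(), list(self.parents[s])
        while stack:
            v = stack.pop()
            if v not in seen:
                seen.add(v)
                stack.extend(self.parents[v])
        return seen

# Small usage example reproducing a few attachments.
g = Tangle()
a = g.add_site(0, 0)
b = g.add_site(0, 0)
c = g.add_site(a, b)
print(g.tips(), g.confirmed_by(c))   # [3] (i.e. c) and {0, 1, 2}
```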
DAG extension
Definition 1. Let G be a DAG and A a set of sites. If each site of A has its parents in A or in the tips of G, then we say that A is an extension of G and G ∪ A denotes the DAG composed by the union of sites from G and A. We also say that A extends G.
One can observe that if A extends G, then the tips of G forms a cut of G ∪ A. Definition 2. Let A be a set of sites extending a DAG G. We say A completely extends G (or A is a complete extension of G) if all the tips of G ∪ A are in A. In other word, the sites of A confirm all the tips of G.
The local DAG of a node may contain conflicting sites. If so, the DAG is said to be forked (or conflicting). A conflict-free sub-DAG is a sub-DAG that contains no conflicting sites.
Weight and Hashing Power When a transaction is signed and become a site in the DAG, a small proof of work (PoW) is done. The difficulty of this PoW is called the weight of the site. Initially, this PoW has been added to the protocol to prevent a node from spamming a huge number of transactions. In order to issue a site of weight w a processing power (or hashing power ) proportional to w needs to be consumed.
With the PoW, spamming requires a large amount of processing power, which increases its cost and reduces its utility. It was shown [START_REF] Popov | The tangle. white paper[END_REF] that site should have bounded weight and for simplicity, one can assume that the weight of each site is 1.
Then, this notion is also used to compute the cumulative weight of a site, which is the amount of work that has been done to deploy this site and all sites that confirm it. Similarly, the score of a site is the sum of all weight of sites confirmed by it i.e., the amount of work that has been done to generate the sub-DAG confirmed by it, see Figure 1 for an illustration.
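With unit weights, these two quantities can be computed directly on the small Tangle sketch given above; the following illustrative functions assume that class and simply count confirming and confirmed sites (plus the site's own weight).

```python
def score(tangle, s):
    """Own weight (1) plus the weight of every site confirmed by s."""
    return 1 + len(tangle.confirmed_by(s))

def cumulative_weight(tangle, s):
    """Own weight (1) plus the weight of every site that confirms s."""
    return 1 + sum(1 for v in tangle.parents if s in tangle.confirmed_by(v))

# Usage with the Tangle class sketched above:
t = Tangle()
a = t.add_site(0, 0)
b = t.add_site(0, 0)
c = t.add_site(a, b)
print(score(t, c))              # 4: c confirms a, b and the genesis
print(cumulative_weight(t, 0))  # 4: the genesis is confirmed by a, b and c
```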
Tip Selection Algorithm
When signing a transaction s, a node u has to select two parents, i.e., two previous sites in its own version of the DAG. According to the protocol, this is done by executing an algorithm called the tip selection algorithm (TSA). The protocol says that the choice of the parents must be made among the sites that have not been confirmed yet, i.e., among tips. Also, the two selected parents must not confirm, either directly or indirectly, conflicting sites.
We denote by T the TSA, which can depend on the implementation of the protocol. For simplicity, we assume all the nodes use the same algorithm T . As pointed by previous work [START_REF] Popov | The tangle. white paper[END_REF], the TSA is a fundamental factor of the security and the stability of the Tangle.
For our analysis, we assume T depends only on the topology of the DAG, on the weight of each site in the DAG and on a random source. It is said to be statefull if it also depends on previous output of the TSA by this node, otherwise we say it is stateless.
The output of T depends on the current version of the DAG and on a random source (that is assumed distinct for two different nodes). The random source is used to prevent different nodes that has the same view from selecting the same parents when adding a site to the DAG at the same time. However, this is not deterministic and it is possible that two distinct nodes issue two sites with the same parents.
Local Main DAG The local DAG of a node u may contain conflicting sites. For consistency, a node u can keep track of a conflict-free sub-DAG main u (G) that it considers to be its local main DAG. If there are two conflicting sites a and ā in the DAG G, the local main DAG contains at most one of them.
The main DAG of a node is used as a reference for its own view of the world, for instance to calculate the balance associated with each address. Of course this view may change over the time. When new transactions are issued, a node can change its main DAG, updating its view accordingly (exactly like in the bitcoin protocol, when a fork is resolved due to new blocks being mined). When changing its main DAG, a local node may discard a sub-DAG in favor of another sub-DAG. In this case, several sites may be discarded. This is something we want to avoid or at least ensure that the probability for a site to be discarded tends quickly to zero with time.
The tip selection algorithm decides what are the tips to confirm when adding a new site. Implicitly, this means that the TSA decides what sub-DAG is the main DAG. In more detail, the main DAG of a node at round r is the sub-DAG confirmed by the two sites output by the TSA. Thus, a node can run the TSA just to know what is its main DAG and even if no site has to be included to the DAG.
One can observe that, to reach consensus, the TSA should ensure that the main DAG of all the nodes contain a common prefix of increasing size that represents the transactions everyone agree on.
Adversary Model
Among the nodes, some are honest i.e., they follow the protocol, and some are byzantine and behave arbitrarily. For simplicity, we can assume that only one node is byzantine and we call this node the adversary. The adversary is connected to the network and receive all the transactions like any other honest node. He can behave according to the protocol but he can also create (and sign) transactions without broadcasting them, called hidden transaction (or hidden sites). To make the results stronger, we can assume that the adversary cannot sign a message using another node's identity. Here, even if we use the term "signing", sites may not have identity attached to them so it is actually not relevant to decide whether or not adversary can steal the identity of honest nodes.
When an honest node issues a new site, the two sites output by T must be two tips, at least in the local DAG of the node. Thus, one parent p 1 cannot confirm indirectly the other p 2 , because in this case the node is aware of p 2 having a child and is not a tip. Also, a node cannot select the same site as parent for two different site, thus the number of honest children cannot exceed the number of nodes in the network. This implies the following property.
Property 1. In a DAG constructed by n honest nodes using a TSA, a site cannot have one parent that confirms the other one. Moreover, the number of children of each site is bounded by n.
The first property should be preserved by an adversary as it is easy for the honest nodes to check and discard a site that does not verify it. However the adversary can issue multiple sites that directly confirm the same site and the honest nodes have no way to know which sites are honest.
Assiduous Honest Majority Assumption
The cumulative weight and the score can be used by a node to select its main DAG. However, even if it is true that a heavy sub-DAG is harder to generate than a light one, there is no relation yet in the protocol between the weight of sites and the hashing power capacity of honest nodes.
We define the assiduous honest majority assumption as the fact that the hashing power of honest nodes is constantly used to generate sites and that it is strictly greater than the hashing power of the adversary. In fact, without this assumption, it is not relevant to look at the hashing power of the honest nodes if they do not constantly use it to generates new sites.
Thus, under this assumption, the cumulative weight of the honest DAG grows according to the hashing power of the honest nodes, and the probability that an adversary generates more sites than the honest nodes in a given period of time tends to 0 as the duration of the period tends to infinity. Conversely, without this assumption, an adversary may be able to generates more sites than the honest nodes, even with less available hashing power.
Average Number of Tips and Confirmation Time
In this section we study the average number of tips depending on the rate of arrival of new sites. In this section, like in previous analysis [START_REF] Popov | The tangle. white paper[END_REF], we assume that tip selection algorithm is the simple uniform random tip selection that select two tips uniformly at random.
We denote by N (t) the number of tips at time t and λ(t) the number of sites issued at time t. Like previously, we assume λ(t) follows a Poisson distribution of parameter λ. Each new site confirms two tips and we denote by C(t) the number of sites confirmed at time t, among those confirmed sites, C tips (t) represents the number of tips i.e., the number of sites that are confirmed for the first time at time t. Due to the latency, if h > 1, a site can be selected again by the TSA. We have:
N(t) = N(t-1) + λ(t) - C_tips(t)
We say we are in state N, N ≥ 1, if there are N tips at time t. Then, the number of tips at each round is a Markov chain (N(t))_{t≥0} with an infinite number of states [1, ∞). To find the probability of transition between two states (given in Lemma 2), we first calculate the probability of transition when the number of new sites is known.

Lemma 1. If there are N tips and k new sites are issued, the probability that there are N' tips afterward is
$$P^k_{N\to N'} = \frac{N!}{N^{2k}(N'-k)!}\begin{Bmatrix}2k\\N-N'+k\end{Bmatrix}$$

Proof. If k new sites are issued, then there are up to 2k sites that are confirmed. This can be seen as a "balls into bins" problem [START_REF] Mitzenmacher | Probability and computing: Randomized algorithms and probabilistic analysis[END_REF] with 2k balls thrown into N bins, and the goal is to see how many bins are not empty, i.e., how many unique sites are confirmed. First, there are N^{2k} possible outcomes for this experiment, so the probability of a particular configuration is 1/N^{2k}. The number of ways we can obtain exactly C = N - N' + k non-empty bins, or confirmed transactions (so that there are exactly N' tips afterward), is the number of ways we can partition a set of 2k elements into C parts times the number of ways we can select C bins to receive those C parts. The first number is called the Stirling number of the second kind and is denoted by $\begin{Bmatrix}2k\\N-N'+k\end{Bmatrix}$. The second number is N!/(N'-k)!.
Then, the probability of transition is a direct consequence of the previous lemma.

Lemma 2. The probability of transition from N to N' is
$$P_{N\to N'} = \sum_{k=|N'-N|}^{N'} P(\Lambda = k)\,P^k_{N\to N'} = \sum_{k=|N'-N|}^{N'} \frac{N!\,\lambda^k e^{-\lambda}}{N^{2k}(N'-k)!\,k!}\begin{Bmatrix}2k\\N-N'+k\end{Bmatrix}$$
Proof. We just have to observe that the probability of transition from N to N' is null if the number of new sites is smaller than N - N' (because each new site can decrease the number of tips by at most one), smaller than N' - N (because each site can increase the number of tips by at most one), or greater than N' (because each new site is a tip). Lemma 3. The Markov chain (N(t))_{t≥0} has a positive stationary distribution π.
Proof. First, it is clear that (N (t)) t≥0 is aperiodic and irreducible because for any state N > 0, resp. N > 1, there is a non-null probability to move to state N + 1, resp. to state N -1. Since it is irreducible, we only have to find one state that is positive recurrent (i.e., that the expectation of the hitting time is finite) to prove that there is a unique positive stationary state.
For that we can observe that the probability to transition from state N to N' > N tends to 0 when N tends to infinity. Indeed, for a fixed k, we even have that the probability to decrease the number of tips by k tends to 1:
$$P^k_{N\to N-k} = \frac{N!}{N^{2k}(N-2k)!} = \Big(1-\frac{1}{N}\Big)\Big(1-\frac{2}{N}\Big)\dots\Big(1-\frac{2k-1}{N}\Big) \qquad (1)$$
$$\lim_{N\to\infty} P^k_{N\to N-k} = 1 \qquad (2)$$
So that for any ε > 0 there exists a k_ε such that P(Λ ≥ k_ε) < ε/2 and, from (2), an N_ε such that ∀k < k_ε, 1 - P^k_{N→N-k} < ε/(2k_ε), so we obtain:
$$A_{N_\varepsilon} = \sum_{N' > N \geq N_\varepsilon} P_{N\to N'} = P\big(N(i+1) > N(i) \mid N(i) \geq N_\varepsilon\big) \qquad (3)$$
$$< P(\Lambda \geq k_\varepsilon) + \sum_{k < k_\varepsilon} \big(1 - P^k_{N\to N-k}\big) \qquad (4)$$
$$< \varepsilon \qquad (5)$$
So that the probability A N to "move away" from states [1, N ] tends to 0 when N tends to infinity.
In fact, it is sufficient to observe that there is a number N 1/2 such that the probability to "move away" from states [1, N 1/2 ] is strictly smaller than 1/2. Indeed, this is a sufficient condition to have a state in [1, N 1/2 ] that is positive recurrent (one can see this by looking at a simple random walk in one dimension with a mirror at 0 and a probability p < 1/2 to move away from 0 by one and (1 -p) to move closer to 0 by 1). From the irreducibility of (N (t)) t≥0 , all the states are positive recurrent and the Markov chain admit a unique stationary distribution π.
The stationary distribution π verifies the formula $\pi_{N'} = \sum_{i\geq 1}\pi_i P_{i\to N'}$, which we can use to approximate it with arbitrary precision.
When the stationary distribution is known, the average number of tips can be calculated as $N_{avg} = \sum_{i>0} i\,\pi_i$, and with it the average confirmation time Conf of a tip is simply given by the fact that, at each round, a proportion λ/N_avg of tips is confirmed on average. So Conf = N_avg/λ rounds are expected before a given tip is confirmed. The value of Conf depending on λ is shown in Figure 3.
With this, we show that Conf converges toward a constant when λ tends to infinity. In fact, for a large λ, the average confirmation time is approximately 1.26, equivalently, the average number of tips N avg is 1.26λ. For smaller values of λ, intuitively the time before first confirmation diverges to infinity and N avg converges to 1.
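The transition probabilities of Lemma 2 and the resulting stationary quantities can be approximated numerically by truncating the state space. The sketch below uses arbitrary values of λ and of the truncation level, and a plain power iteration; it is only meant to illustrate how N_avg and Conf can be computed.

```python
import numpy as np
from math import comb, factorial, exp

def stirling2(n, k):
    """Stirling number of the second kind."""
    return sum((-1)**j * comb(k, j) * (k - j)**n for j in range(k + 1)) // factorial(k)

def transition_matrix(lam, n_max):
    """Truncated transition matrix P[N, N'] of the tip-count Markov chain (Lemma 2)."""
    P = np.zeros((n_max + 1, n_max + 1))
    for N in range(1, n_max + 1):
        for Np in range(1, n_max + 1):
            for k in range(abs(Np - N), Np + 1):
                pois = lam**k * exp(-lam) / factorial(k)
                P[N, Np] += pois * factorial(N) * stirling2(2 * k, N - Np + k) \
                            / (N**(2 * k) * factorial(Np - k))
    return P

lam, n_max = 5.0, 40
P = transition_matrix(lam, n_max)
pi = np.ones(n_max + 1) / n_max
for _ in range(500):                 # power iteration towards the stationary distribution
    pi = pi @ P
    pi /= pi.sum()
N_avg = sum(i * pi[i] for i in range(1, n_max + 1))
# Average number of tips and confirmation time (approach ~1.26*lambda and ~1.26 for large lambda).
print(N_avg, N_avg / lam)
```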
A Necessary Condition for the Security of the Tangle
A simple attack in any distributed ledger technology is the double spending attack. The adversary signs and broadcast a transaction to transfer some funds to a seller to buy a good or a service, and when the seller gives the good (he consider that the transaction is finalized), the adversary broadcast a transaction that conflicts the first one and broadcast other new transactions in order to discard the first transaction. When the first transaction is discarded, the seller is not paid anymore and the funds can be reused by the adversary.
The original motivation behind our attack of the Tangle is as follow: after the initial transaction to the seller, the adversary generates the same sites as the honest nodes, forming a sub-DAG with the same topology as the honest sub-DAG (but including the conflicting transaction). Having no way to tell the difference between the honest sub-DAG and the adversarial sub-DAG, the latter will be selected by the honest nodes at some point. This approach may not work with latency in the network, because the sub-DAG of the adversary is always shorter than the honest sub-DAG, which is potentially detected by the honest nodes. To counter this, the adversary can generate a sub-DAG that has not exactly the same topology, but that has the best possible topology for the tip selection algorithm. The adversary can then use all its available hashing power to generate this conflicting sub-DAG that will at some point be selected by the honest nodes. For this attack we use the fact that a TSA selects two tips that are likely to be selected by the same algorithm thereafter. For simplicity we captured this with a stronger property: the existence of a maximal deterministic TSA.
Definition 3 (maximal deterministic tip selection algorithm).
A given TSA T has a maximal deterministic TSA T det if T det is a deterministic TSA and for any DAG G, there exists N G ∈ N such that for all n ∈ N the following property holds:
Let A_det be the extension of G obtained with N_G + n executions of T_det. Let A be an arbitrary extension of G generated with T of size at most n, conflicting with A_det, and let G' = G ∪ A ∪ A_det. We have:
P(T(G') ∈ A_det) ≥ 1/2
Intuitively this means that executing the maximal deterministic TSA generates an extension that is more likely to be selected by the honest nodes, provided that it contains more sites than the other extensions. When the assiduous honest majority assumption is not verified, the adversary can use this maximal deterministic TSA to his advantage.
Theorem 1. Without the assiduous honest majority assumption, and if the TSA has a maximal deterministic tip selection, the adversary can discard one of its transaction that has an arbitrary cumulative weight.
Proof. Without the assiduous honest majority assumption, we assume that the adversary can generate strictly more sites than the honest nodes. Let W be an arbitrary weight. One can see W as the necessary cumulative weight a given site should have in order to be considered final. Let G_0 be the common local main DAG of all nodes at a given round r_0. At this round our adversary can generate two conflicting sites confirming the same pair of parents. One site a is sent to the honest nodes and the other ā is kept hidden.
The adversary can use T det the maximal deterministic TSA of T to generate continuously (using all its hashing power) sites extending G ∪ {ā}. While doing so, the honest nodes extend G ∪ {a} using the standard TSA T . After r W rounds, it can broadcast all the generated sites to the honest nodes. The adversary can choose r W so that (i) the probability that it has generated N G more sites than the honest nodes is sufficiently high, and (ii) transaction a has the target cumulative weight W .
After receiving the adversarial extension, by definition 3, honest nodes will extend the adversarial sub-DAG with probability greater than 1/2. In expectation, half of the honest nodes start to consider the adversarial sub-DAG as their main DAG, thus the honest nodes will naturally converge until they all chose the adversarial sub-DAG as their main DAG, which discard the transaction a.
If the bandwidth of each channel is limited, then the adversary can start broadcasting the sites of its conflicting sub-DAG at round r W , at a rate two times greater than the honest nodes. This avoids congestion, and at round r W + r W /2 all the adversarial sub-DAG is successfully received by the honest nodes. Due to this additional latency, the number of sites in the honest sub-DAG might still be greater than the number of sites in the adversarial sub-DAG, so the adversary continues to generate and to broadcast sites extending its conflicting sub-DAG and at round at most 2r W , the adversarial extension of G received by the honest nodes has a higher number of sites than the honest extension.
So the same property is true while avoiding the congestion problem. Now that we have our main theorem, we show that any TSA defined in previous work (especially in the Tangle white paper [START_REF] Popov | The tangle. white paper[END_REF]) has a corresponding maximal deterministic TSA. To do so we can see that, to increase the probability for the adversarial sub-DAG to be selected, the extension of a DAG G obtained by the maximal deterministic TSA should either increase the weight or the number of direct children of the tips of G, as shown in Figure 4. We now prove that the three TSAs presented in the Tangle white paper [START_REF] Popov | The tangle. white paper[END_REF], (i) the random tip selection, (ii) the MCMC algorithm and (iii) the Logarithmic MCMC algorithm, all have a maximal deterministic TSA, which implies that the assiduous honest majority assumption is necessary when using them (recall that we do not study the sufficiency of this assumption).
Figure 4: The rectangle site conflicts with all sites in A_det, so that when executing the TSA on G ∪ A ∪ A_det, tips either from A or from A_det are selected. The strategy to construct A_det can be either to increase the number of children of G_tips or to increase their weight; both ways are presented here.
The Uniform Random Tip Selection Algorithm
The uniform random tip selection algorithm is the simplest to implement and the easiest to attack. Since it chooses the two tips uniformly at random, an attacker just has to generate more tips than the honest nodes in order to increase the probability to have one of its tips selected. Lemma 4. The Random TSA has a maximal deterministic TSA.
Proof. For a given DAG G the maximal deterministic T det always chooses as parents one of the l tips of G. So that, after n + l newly added sites A det , the tips of G ∪ A det are exactly A det and no other extension of G of size n can produce more than n + l tips so that the probability that the random TSA select a tip from A det is at least 1/2. Corollary 1. Without the assiduous honest majority assumption, the Tangle with the Random TSA is susceptible to double-spending attack.
The MCMC Algorithm
The MCMC algorithm is more complex than the random TSA. It starts by initially putting a fixed number of walker on the local DAG. Each walker performs a random walk towards the tips of the DAG with a probabilistic transition function that depends on the cumulative weight of the site it is located to and its children. In more details, a walker at a site v has a probability p v,u to move to a child u with
$$p_{v,u} = \frac{\exp(-\alpha(w(v)-w(u)))}{\sum_{c\in C_v}\exp(-\alpha(w(v)-w(c)))} \qquad (6)$$
where the set C_v is the set of children of v, and α > 0 is a parameter of the algorithm.
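The walk itself is straightforward to sketch from eq. (6). The snippet below takes cumulative weights and the children map as plain dictionaries and only illustrates the transition rule on a toy DAG; it is not the production IOTA algorithm.

```python
import math
import random

def mcmc_step(weights, children, v, alpha=0.5, rng=random):
    """One step of the weighted random walk of eq. (6).

    weights[s]  : cumulative weight of site s
    children[s] : list of children of site s
    Returns the chosen child of v (or v itself if v is a tip)."""
    kids = children[v]
    if not kids:
        return v
    scores = [math.exp(-alpha * (weights[v] - weights[u])) for u in kids]
    r = rng.random() * sum(scores)
    acc = 0.0
    for u, s in zip(kids, scores):
        acc += s
        if r <= acc:
            return u
    return kids[-1]

def mcmc_walk(weights, children, start, alpha=0.5, rng=random):
    """Walk from `start` until a tip is reached; returns the tip."""
    v = start
    while children[v]:
        v = mcmc_step(weights, children, v, alpha, rng)
    return v

# Tiny example: genesis 0 with a heavy branch (1 -> 3) and a light tip (2).
weights = {0: 4, 1: 2, 2: 1, 3: 1}
children = {0: [1, 2], 1: [3], 2: [], 3: []}
print(mcmc_walk(weights, children, 0, alpha=1.0))  # usually 3 (the heavier branch is favoured)
```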
The question to answer in order to find the maximal deterministic TSA of MCMC algorithm is: what is the best way to extend a site v to maximize the probability that the MCMC walker chooses our sites instead of another site. The following Lemma shows that the number of children is an important factor. This number depends on the value α.
Lemma 5. Suppose a MCMC walker is at a site v. There exists a constant C α such that if v has C α children of weight n, then, when extending v with an arbitrary set of sites H of size n, the probability that the walker move to H is at most 1/2.
Proof. When extending v with n sites, one can choose the number h of direct children, and then how the other sites extend those children. There are several ways to extend those children, which changes their weights w_1, w_2, ..., w_h. The probability p_H for a MCMC walker to move to H is calculated in the following way (with W ≜ w(v)):
$$S_H = \sum_{i=1}^{h}\exp(-\alpha(W-w_i)), \quad \bar{S}_H = C_\alpha\exp(-\alpha(W-n)), \quad S = S_H + \bar{S}_H, \quad p_H = S_H/S$$
The greater the weights, the greater the probability p_H. Adding more children might reduce their weights (since H contains only n sites). For a given number of children h, there are several ways to extend those children, but we can arrange them so that each weight is at least n - h + 1, by forming a chain of length n - h and by connecting the children to the chain with a perfect binary tree. The height l_i of a child i gives it more weight, so that we have w_i = n - h + l_i. A property of a perfect binary tree is that $\sum_{i=1}^{h} 2^{-l_i} = 1$. We will show there is a constant C_α such that for any h and any l_1, ..., l_h, with $\sum_{i=1}^{h} 2^{-l_i} = 1$, we have
$$\bar{S}_H \geq S_H \;\Longleftrightarrow\; C_\alpha\exp(-\alpha(W-n)) \geq \sum_{i=1}^{h}\exp(-\alpha(W-w_i)) \;\Longleftrightarrow\; C_\alpha \geq \sum_{i=1}^{h}\exp(-\alpha(h-l_i)) \qquad (7)$$
Surprisingly, one can observe that our inequality does not depend on n, so that the same is true whatever the arrangement of the sites extending a site v chosen to increase the probability for the walker to select our sites. Let $f_h : (l_1,\dots,l_h) \mapsto e^{-\alpha h}\sum_{i=1}^{h}\exp(\alpha l_i)$. So the goal is to find an upper bound for the function f_h that depends only on α.
The function f_h is convex (as a sum of convex functions), so the maximum is on the boundary of the domain, which is either (l_1, ..., l_h) = (1, 2, ..., h) or (l_1, ..., l_h) with all the l_i equal to log(h) (up to rounding).
For simplicity, let assume that h is a power of two so that the second case is just ∀i, l i = log(h).
In the first case we have
$$f_h(1,\dots,h) = e^{-\alpha h}\,\frac{e^{\alpha(h+1)} - e^{\alpha}}{e^{\alpha}-1} = \frac{e^{\alpha} - e^{-\alpha(h-1)}}{e^{\alpha}-1}$$
which is bounded when h tends to infinity (it converges to $e^{\alpha}/(e^{\alpha}-1)$), so it admits an upper bound $C^1_\alpha$.
In the second case, we have
$$f_h(\log(h),\dots,\log(h)) = e^{-\alpha h}\,h\,e^{\alpha\log(h)}$$
which tends to 0 when h tends to infinity, so it admits a maximum $C^2_\alpha$. By choosing $C_\alpha = \max(C^1_\alpha, C^2_\alpha)$ we have the inequality (7) for any value of h.
Lemma 6. The MCMC tip selection has a maximal deterministic TSA.
Proof. Let G be a conflict-free DAG with tips G tips .
Let T be the number of tips times the number C α defined in Lemma 5. The T first executions of T det select a site from G tips (as both parents) until all site from G tips as exactly C α children.
The next executions of T_det select two arbitrary tips (different whenever possible). After T executions, only one tip remains and the newly added sites form a chain.
Let N_G = 2T. N_G is a constant that depends only on α and on G. After N_G + n added sites, each site in G_tips has C_α children with weight at least n. Thus, by Lemma 5, an MCMC walker located at a site v ∈ G_tips moves to our extension with probability at least 1/2. Since this is true for all sites in G_tips and G_tips is a cut, all MCMC walkers will end up in A_det with probability at least 1/2. One can argue that this is not optimal and that we could have improved the construction of the extension to reduce the number of sites, but we are mostly interested here in the existence of such a construction. Indeed, in practice, the probability for a walker to move to our extension would be higher, as the honest sub-DAG A is not arbitrary but generated with the TSA. Our analysis shows that even in the worst configuration, the adversary can still generate an extension with a good probability of being selected.
Corollary 2. Without the assiduous honest majority assumption, the Tangle with the MCMC TSA is susceptible to a double-spending attack.
The Logarithmic MCMC Algorithm
In the Tangle white paper, it is suggested that the MCMC probabilistic transition function can be defined with the function h ↦ h^{-α} = exp(-α ln(h)). In more detail, a walker at a site v has a probability p_{v,u} to move to a child u with
p_{v,u} = (w(v) - w(u))^{-α} / Σ_{c ∈ C_v} (w(v) - w(c))^{-α}    (8)
where the set C_v is the set of children of v, and α > 0 is a parameter of the algorithm. The IOTA implementation currently uses this function with α = 3. With this transition function, the number of children matters more than their weight.
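As a quick numerical illustration of this last remark (the numbers are ours): with α = 3, a single child that is only one unit of weight behind v pulls the walker with un-normalised mass 1^{-3} = 1, while eight children that each lag two units behind already pull with the same total mass, 8 × 2^{-3} = 1. Adding children increases the mass in (8) linearly, whereas making a child heavier only helps through the strongly damped factor (w(v) - w(u))^{-3}.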
Lemma 7. The logarithmic MCMC tip selection has a maximal deterministic TSA.
Proof. Let G be a conflict-free DAG with tips G tips .
Let T be the number of tips.
T det always selects two sites from G tips in a round-robin manner. After kT executions (k ∈ N), each site from G tips has 2k children.
Let n be the number of sites generated with T det and A an arbitrary extension of G. Let v ∈ G tips and C v be the number of children of v that are in A and C det = 2n/T be the number of children of v generated by T det .
With w(v) the weight of v, we have that w(v) ≤ 2n and C_det ≤ w(v) - n ≤ n. Let p be the probability that a walker located at v chooses a site generated by T_det. We have
p ≥ C_det (w(v) - 1)^{-α} / (C_v (w(v) - n)^{-α}) ≥ C_det (2n)^{-α} / (C_v (C_det)^{-α}) = C_det / (C_v T^α) = 2n / (T^{1+α} C_v)
With T a constant and C_v bounded, this lower bound tends to infinity as n tends to infinity. This holds for each site of G_tips, so after a given number of generated sites N_G, the probability that an LMCMC walker located at any site of G_tips moves to a site generated by T_det is greater than 1/2.
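To give an order of magnitude (the values are picked arbitrarily for illustration): with T = 5 tips, α = 3 and C_v ≤ 10, the bound reads 2n / (5^4 × 10) = n / 3125, which exceeds 1/2 as soon as n ≥ 1563. In such a configuration N_G can thus be taken of the order of a few thousand sites.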
Corollary 3. Without the assiduous honest majority assumption, the Tangle with the Logarithmic MCMC TSA is susceptible to a double-spending attack.
Discussion
Sparse Network
The case of sparse networks intuitively gives more power to the adversary, as it is more difficult for the honest nodes to coordinate. Assume that the communication graph is arbitrary: it can be a geometric graph (e.g., a grid), a small-world graph with a near-constant diameter, or anything else. In order for the protocol to work properly, we assume that for each link between two nodes there is enough bandwidth to send all the sites generated by the honest nodes, and that the usage of each link is a constant fraction of its available capacity. For simplicity we can assume that no conflict arises from the honest nodes, so that, without an adversary, the local DAG of each node is its main DAG. Due to multi-hop communications, the local DAGs of the honest nodes may differ, but only with respect to the sites generated during the last D rounds, where D is the diameter of the communication graph. Take an arbitrary node u in this graph. We connect our adversary to node u, so that every site received by u at round r is received by our adversary at round r + 1.
In this arbitrary network, the attack is exactly the same, except for the number of rounds r_W that the adversary waits before revealing the conflicting sub-DAG. Indeed, r_W should be larger to take into account the propagation time and to ensure that the first adversarial site a has a cumulative weight greater than W in all the honest nodes (typically we should wait D more rounds compared to the previous case). As in the previous case, the adversary can broadcast its conflicting sub-DAG while ensuring not to induce congestion, for instance at a rate two times greater than the honest nodes'. The topology of the network does not change the fact that after at most 2r_W + D rounds, all the honest nodes have a greater probability to choose the adversarial sub-DAG for their next transactions.
Trusted Nodes
The use of trusted nodes is what currently makes IOTA safe against this kind of attack. Indeed, a coordinator node regularly adds a site to the DAG, confirming an entire conflict-free sub-DAG. Trusted sites act like milestones, and any site confirmed by a trusted site is considered irreversible. However, if the trusted node is compromised or is offline for too long, the other nodes are left on their own.
The current implementation of IOTA uses a trusted node called the coordinator and plans to either remove it, or replace it by a set of distributed trusted nodes.
One can observe that the crypto-currency Byteball [START_REF] Churyumov | Byteball: a decentralized system for transfer of value[END_REF] uses a special kind of trusted nodes called witnesses. They are also used to resolve conflicts in the DAG.
An important question could be: is the use of trusted nodes necessary to secure a distributed ledger based on a DAG?
Avoiding Forks
In the current protocol, conflicting sites cannot be confirmed by the same site. Part of the community has already mentioned that this can cause problems if the latency is high (i.e., if the diameter of the communication graph is large). Indeed, by sending two conflicting sites to two different honest nodes, half of the network can start confirming one transaction, and the other half the other transaction. When a node finally receives the two conflicting transactions (assuming that convergence is instantaneous), a single site will be assumed correct and become part of the main DAG of all the honest nodes. However, all the sites confirming the other conflicting site are discarded, resulting in wasted hashing power and an increased confirmation time for those sites, as a reattachment must occur. The cost for the adversary is small and constant, and the wasted hashing power depends on the maximum latency between any two nodes.
One way to avoid this is to include conflicting sites in the DAG (like in the Byteball protocol [START_REF] Churyumov | Byteball: a decentralized system for transfer of value[END_REF] for instance), by issuing a site that confirms the two conflicting sites and contains the information of which site is considered valid and which site is considered invalid. This special site, called a decider site, would be the only site allowed to confirm directly or indirectly two conflicting sites. This has the advantage that all the sites confirming the invalid one remain valid and do not need to be reattached. However, the same thing can happen if the adversary sends two conflicting decider sites to two ends of the network. But again, a decider site could be used to resolve any kind of conflict, including this one. Indeed, this may seem like a circular problem that potentially never ends, but every time a decider is issued, a conflict is resolved, and the same conflict could have happened even without the decider site. So having decider sites should not change the stability of the Tangle, and only helps avoid reattaching sites.
Conclusion
We presented a model to analyze the Tangle and we used it to study the average confirmation time and the average number of unconfirmed transactions over time.
Then, we defined the notion of assiduous honest majority that captures the fact that the honest nodes have more hashing power than the adversarial nodes and that all this hashing power is constantly used to create transactions. We proved that for any tip selection algorithm that has a maximal deterministic tip selection (which is the case for all currently known TSA), the assiduous honest majority assumption is necessary to prevent a double-spending attack on the Tangle.
Our analysis shows that honest nodes cannot stay at rest, and should be continuously signing transactions (even empty ones) to increase the weight of their local main sub-DAG. If they do not, their available hashing power cannot be used to measure the security of the protocol, as we see for the Bitcoin protocol. Indeed, having a huge number of honest nodes with a very large amount of hashing power cannot prevent an adversary from attacking the Tangle if the honest nodes are not using this hashing power. This conclusion may seem intuitive, but the fact that it is true for all tip selection algorithms (that have a deterministic maximal TSA) is something new that had not been proved before.
Figure 1: An example of a Tangle where each site has a weight of 1. In each site, the first number is its score and the second is its cumulative weight. The two tips (with dashed border) are not confirmed yet and have a cumulative weight of 1.
Lemma 1. If the number of tips is N and k new sites are issued, then the probability P_{N_k → N'} of having N' tips in the next round is given by a formula (not reproduced here) in which {a over b} denotes the Stirling number of the second kind S(a, b).
Figure 2: Stationary distribution of the number of tips, for different values of λ. For each value of λ, one can see that the number of tips is really well centered around the average.
Figure 3: Expected number of rounds before the first confirmation, depending on the arrival rate of transactions. We see that it tends to 1.26 with λ. Recall that Conf = N_avg/λ where N_avg refers to the average number of tips in the stationary state.
Figure 4: A and A_det are two possible extensions of G. The rectangle site conflicts with all sites in A_det so that when executing the TSA on G ∪ A ∪ A_det, tips either from A or from A_det are selected. The strategy to construct A_det can be either to increase the number of children of G_tips or to increase their weight; both ways are presented here. |
01757867 | en | [
"info.info-mc",
"info.info-os",
"info.info-es"
] | 2024/03/05 22:32:10 | 2016 | https://hal.science/tel-01757867/file/50376-2016-Serman.pdf | Dr Michael Hauspie Cristal
Résumé en Français
This thesis deals with the design of a secure software hypervisor, intended for certification. The highest certification levels require the use of formal methods, which make it possible to demonstrate the validity of a product with respect to a specification using mathematical logic. Since proven hardware does not exist, the hypervision mechanisms are implemented here in software. This helps reduce the trusted computing base, and therefore the amount of modelling and proof to be produced. Moreover, it makes it possible to virtualize systems on platforms that do not provide virtualization instructions.
By relying on the existence of a proven processor and memory management unit, only privileged code is able to bypass the access rights configured by the hypervisor. It is therefore not necessary to hypervise unprivileged code. Microkernels, usually chosen for their small size, thus have a second advantage once hypervised: they reduce the overhead of certified hypervision to a minimum.
The document is organised around a state of the art of the different virtualisation systems...
• Those features are implemented in microcode, whereas the core instruction set is wired logic. Because of its nature, microcode is more error prone: it is more complex and involves machine state, which makes it harder to verify than Boolean functions.
• From a formal proof point of view, we would like to be able to specify the minimum requirements for a processor to actually satisfy our needs.
• We want to address some processors (ARMv7 or micro-controllers) where those instructions are not available.
It seems important to unify all those specifications and requirements of the hardware to improve the reusability of this work in each layer: the hardware specification, code production (in the compiler's backend) and the hypervisor design itself.
The first part is a state of the art of virtualization on both x86 and ARM; certified systems are also covered. Then, the context of the thesis is presented. Afterwards, two parts of this thesis are presented: first a hypervisor for embedded systems that does not rely on hardware virtualization features, and second an in-depth analysis of the ARM instruction set. Finally, future work is presented in the conclusion of the document.
CHAPTER 1 Introduction
If you know the system well enough, you can do things that aren't supposed to be possible.
Linus Torvalds
Context
My thesis was funded by Prove&Run SAS. This company was created in 2009 by Dominique Bolignano. It aims to increase security using formally proven solutions. Prove&Run has developed a toolchain to edit and prove computer programs written in Smart, their modeling language. Smart is a strongly typed, functional language. After writing Smart source code, Prove&Run's toolchain will generate C code, which can be compiled using traditional tools such as GCC. This language was successfully used to write a specialized and secure operating system for embedded devices. It is now used to create a Trusted Execution Environment (TEE), and a hypervisor.
The academic side of my thesis was done in Lille, in the 2XS team. 2XS is a member of CRIStAL, the research center in computer science, signal processing and automation of the University of Lille. 2XS stands for eXtra Small, eXtra Safe. This team was created after a fork from POPS, a larger team which was involved in sensor networks and system design: 2XS has been involved in system design, while FUN has kept the WSN part. In 2XS, several axes are studied, but they often involve two parties, and thus co-design. First, between hardware and system: because of embedded systems constraints, developers have to know exactly on which target their code is supposed to run. A second aspect of that part is energy consumption; one of the short-term goals is to be able to produce an energy consumption map of individual blocks of code, and to make this information actually usable by a developer. Second, between OS developers and proof developers: a new topic is arising in 2XS, producing formal proofs on systems. A subset of the team is currently writing a mesovisor, which provides isolation between partitions. This mesovisor can be used to make a hypervisor, to run several systems on one single machine. After a proof of concept in C and a Haskell model, it was decided to write proofs directly in Coq, and to extract Coq into C language, in order to make it actually run.
For three years, I have been at 2XS four days a week, and one day a week at Prove&Run. During the fourth year, I was at 2XS two days a week for six months.
Claim
This thesis presents a novel design to implement a hypervisor for ARM processors, suitable for the highest levels of certification. Such levels require both a model of the product and an implementation. Moreover, a formal proof of correctness between the model and the implementation is also required. Because no proofs on the hardware are available, we claim that the hardware mechanisms should be limited to the minimum. In particular, virtualization extensions are black boxes, which cannot be verified.
This thesis presents a proof of concept of a minimal hypervisor that limits the hardware requirements to the minimum. Only the ARM core instruction set is used, plus the CP15 co-processor for the MMU. We believe that this simple design can ease the evaluation of such a product. With a proper guest, this hypervisor can achieve less than 20% overhead.
Document structure
Here is the outline of the document. First comes the state of the art. This chapter covers the theoretical background of virtualization and existing solutions. Then comes a presentation of the Common Criteria, an internationally recognized consortium which evaluates the security of IT products. In the second part comes the problem statement. A tour will be made of the landscape of certified products and their issues. Afterwards I will insist on the importance of keeping a connection between each layer in the stack: hardware, compiler and software. The third part is an in-depth description of the hypervisor. This section also presents the results of several benchmarks, and comments on those results. Finally, a conclusion will summarize the document, and perspectives will be given, mainly on how to optimize performance. I will finish with personal feedback on those four years.
CHAPTER 2 State of the art
The problem with engineers is that they tend to cheat in order to get results. The problem with mathematicians is that they tend to work on toy problems in order to get results. The problem with program verifiers is that they tend to cheat at toy problems in order to get results. science jokes, ver 6.7 mar 1, 1995 http://lithops.as.arizona.edu/~jill/humor.text
Lexicon
Because of trending topics on cloud computing, virtualization, hypervisors and so on, we will provide some definitions here. We begin with real life comparisons to give an intuition on two key concepts: virtualization and abstraction. Afterwards, we will illustrate these two concepts with computing-related examples.
Virtualization or abstraction?
In optics, a virtual image refers to an image of an actual object which appears to be located at the point of apparent divergence. In practice, the rays never converge, which is why a virtual image cannot be projected on screen (see Fig 2.1). A virtual object is a representation of an actual object that could exist, but does not.
Figure 2.1: A is a source sending light through a mirror. All the rays seem to come from A', the virtual image of A. Thus, it seems that A is located behind the mirror, when in fact it is in front of it.
In real life, talking about television is an abstraction. Everybody (almost) has an object called a television, which can represent slightly different objects. It is a screen with speakers that displays animated pictures and sounds. When talking about a television, we refer to it by its function, not by what it is. For instance, no information is given on its size, resolution, display technology, etc.
Definition 1 (Virtualization). Virtualizing an object consists of creating a virtual version of that object. A virtual object does not physically exist as such, but is made by software to appear to do so.
Definition 2 (Abstraction). Abstraction comes from the latin verb "abstrahere" which means "draw away". An abstraction considers something as a general quality or characteristic, apart from concrete realities, specific objects, or actual instances.
Example of virtual systems in computing
Computer science makes heavy use of abstractions and virtualization. The boundary between those concepts is sometimes fuzzy, but it often corresponds to links between different layers. The following sections illustrate those concepts applied to computer hardware, and more specifically to memory and to the microprocessor.
Virtual memory
A common usage for virtualization in computing is virtual memory. Tanenbaum [START_REF] Tanenbaum | Operating Systems Design and Implementation[END_REF] provides an overview of virtual memory. The basic idea behind virtual memory is that the combined size of a program, its data and its stack may exceed the amount of physical memory available. The operating system keeps the parts of the program that are currently in use in main memory, and the rest on disk. This mechanism is called swapping. Virtual memory also allows multiprogramming, that is, multiple processes running at once. Of course, if only one CPU is available, only one process will run at a time. The scheduler implements time sharing on the CPU, to make users feel like several processes run concurrently. Another issue tackled by virtual memory is relocation. When compiling a program, no information is provided to the linker regarding usable addresses. Additionally, programs should run on any system without memory restriction (size or available address space). To achieve that, virtual addresses are used. Processes manipulate virtual addresses, which are not the actual addresses where data is physically stored. The latter are called physical addresses. The operating system ensures that the memory management system is configured so that virtual addresses are mapped onto the physical addresses associated with the running process. Each process is free to use any virtual address, as long as there is no conflict in physical addresses. The physical address conflict or swapping (in case of memory exhaustion) is handled by the operating system.
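The translation at the heart of this mechanism can be sketched as follows. This is a deliberately simplified, single-level table; real MMUs use multi-level page tables walked in hardware, and all names below are ours.
Code
#include <stdint.h>
#include <stdbool.h>

#define PAGE_SHIFT   12                 /* 4 KiB pages                    */
#define PAGE_SIZE    (1u << PAGE_SHIFT)
#define PTE_PRESENT  0x1u               /* page is in physical memory     */

/* One entry per virtual page: physical frame number plus flags.          */
typedef struct {
    uint32_t frame;                     /* physical frame number          */
    uint32_t flags;                     /* PTE_PRESENT, protection, ...   */
} pte_t;

/* Translate a virtual address; returns false when the page is swapped
 * out (the page fault the OS would then resolve by reading the disk).    */
static bool translate(const pte_t *table, uint32_t vaddr, uint32_t *paddr)
{
    const pte_t *pte = &table[vaddr >> PAGE_SHIFT];

    if (!(pte->flags & PTE_PRESENT))
        return false;                              /* page fault          */

    *paddr = (pte->frame << PAGE_SHIFT)            /* frame base          */
           | (vaddr & (PAGE_SIZE - 1));            /* offset in the page  */
    return true;
}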
Virtual CPU
Another example is the virtual CPU. There are several CPU emulators: Bochs [START_REF]Bochs the open source ia-32 emulation project[END_REF] or Qemu [START_REF]QEMU open source processor emulator[END_REF], Valgrind [START_REF]Valgrind's homepage[END_REF], EM86 [START_REF]Em86's homepage[END_REF]. I will focus only on Qemu and Bochs, which are the most widespread emulators. According to Bochs' description, "Bochs is a highly portable open source IA-32 (x86) PC emulator written in C++, that runs on most popular platforms. It includes emulation of the Intel x86 CPU, common I/O devices, and a custom BIOS." Basically, the code mainly consists of a large decoding loop which models the fetch-decode-execute actions of the CPU [START_REF] Lawton | Bochs: A portable pc emulator for unix/x[END_REF]. "Qemu is a FAST! processor emulator using a portable dynamic translator." [START_REF]Qemu internals[END_REF] It uses dynamic translation to native code for reasonable speed. When it first encounters a piece of code, Qemu converts it to the host instruction set. Usually dynamic translators are very complicated and highly CPU dependent. Qemu uses some tricks which make it relatively easily portable and simple while achieving good performance [START_REF] Bellard | Qemu, a fast and portable dynamic translator[END_REF].
Despite the obvious design differences (Bochs being purely interpretive and Qemu relying on dynamic translation), they both are an exact replica of the emulated CPU. That is, emulated code cannot tell the difference between native on-CPU execution and an emulated one.
The processes case
Another example is processes [START_REF] Knott | A proposal for certain process management and intercommunication primitives[END_REF]. Processes are fundamental for multiprogramming systems (also known as multitasking) [START_REF] Bovet | Understanding The Linux Kernel[END_REF]. They are the objects manipulated by the scheduler to implement context switching. A process is an instance of a computer program being executed. It contains an image of the associated program, and a state of the underlying hardware:
• virtual interrupts (signals);
• virtual memory;
• virtual CPU (general purpose and status registers);
• hardware access through software interrupts.
Accessing the hardware through software interrupts characterizes it in terms of features, not as a specific piece of hardware. For instance, when performing a write syscall it makes no difference whether the data will be written on a mechanical hard drive or on a USB flash drive. This turns the process into an abstraction of the hardware rather than a virtualized representation. [START_REF]Intel R 64 and IA-32[END_REF].
Remark. Virtual addresses can also be seen as an abstraction. On x86, PAE paging translates 32-bit linear addresses to 52-bit physical addresses.
Example of abstractions in computing
We have seen that the limit between abstraction and virtualization can be thin. In this section, we will illustrate the use of abstractions in well known areas of computing.
TCP/IP
We use TCP over IP on a daily basis for many different purposes, like browsing the internet, sending emails, synchronizing calendars, etc. TCP is a reliable transport protocol, built on top of IP, the network layer in the OSI stack (ISO/IEC 7498-1). It provides an end-to-end service to applications running on end hosts. If you send a message over TCP, it will eventually be delivered without being altered. IP, on the other hand, provides an unreliable datagram service and must be implemented by all systems addressable on the Internet. TCP deals with problems such as packet loss, duplication, and reordering that are not detected nor corrected by the IP layer. It provides a reliable flow of data between two hosts. It is concerned with things such as dividing the data passed to it from the application into appropriately sized chunks for the network layer below, acknowledging received packets, and setting timeouts to ensure that the other end acknowledges packets that are sent. Because this reliable flow of data is provided by the transport layer, the application layer can ignore all these details [START_REF] Stevens | TCP/IP Illustrated[END_REF].
TCP is what computer scientists like to call an abstraction: a simplification of something much more complicated that is going on under the covers [START_REF] Spolsky | Joel on Software: And on Diverse and Occasionally Related Matters That Will Prove of Interest to Software Developers, Designers, and Managers, and to Those Who, Whether by Good Fortune or Ill Luck, Work with Them in Some Capacity[END_REF]. It lets the user communicate with remote or local nodes without handling complex problems such as packet fragmentation, retransmission, reordering, etc. Hence, TCP provides an abstraction of a reliable communication channel.
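The sketch below illustrates that abstraction from the application's point of view (the request string and the minimal error handling are ours): the program only connects, writes and reads a byte stream; segmentation, retransmission and reordering all happen below this interface, in the kernel's TCP implementation.
Code
#include <netdb.h>
#include <unistd.h>
#include <sys/socket.h>

/* Send a request and read a reply over TCP: a reliable, in-order
 * byte stream is the only thing the application ever sees.        */
static int fetch(const char *host, const char *port)
{
    struct addrinfo hints = { .ai_socktype = SOCK_STREAM };
    struct addrinfo *res;
    char buf[256];
    int ret = -1;

    if (getaddrinfo(host, port, &hints, &res) != 0)
        return -1;

    int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (fd >= 0 && connect(fd, res->ai_addr, res->ai_addrlen) == 0) {
        const char req[] = "HEAD / HTTP/1.0\r\n\r\n";
        write(fd, req, sizeof req - 1);   /* delivered without loss */
        read(fd, buf, sizeof buf);        /* arrives in order       */
        ret = 0;
    }
    if (fd >= 0)
        close(fd);
    freeaddrinfo(res);
    return ret;
}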
Operating systems
Operating systems can be described by two unrelated functions, namely extending the machine and managing resources [START_REF] Tanenbaum | Operating Systems Design and Implementation[END_REF]. The latter is not relevant for us now, since it only ensures (fair if possible) sharing of resources among different processes. The former, on the other hand, is precisely an abstraction as defined by Def. 2. Fig. 2.2 illustrates a command run in an operating system. It looks very simple and straightforward. In fact, the operating system executes many operations to perform this simple task, as shown in Fig. 2.3.
Code
$ cat /etc/passwd
root:x:0:0:root:/root:/bin/bash
...
In this shell session, the user asks the system to display the content of the file /etc/passwd. The filename itself is an abstraction which exposes complex file layouts under a convenient, human-readable file hierarchy. The user can browse this hierarchy using a path from the root item to the file, separating each node from another with a delimiter. In fact, most filesystems address files using inode numbers. When typing this command, several operations happen: the current shell will fork to create a new child process, then exec to replace the current (child) shell process by cat, with the parameter /etc/passwd. Cat will then open(2) /etc/passwd, read(2) its content, write(2) it back on its standard output, and finally close(2) the file. That abstraction is provided by processes, as claimed in 2.1.2.3 (p 22). The operating system's abstraction is hidden behind the read implementation. For this example, we assume that /etc/passwd is actually stored on disk (and not on an NFS share for instance). As shown in Figure 2.3, there are quite a few functions called by sys_read. The kernel provides an abstraction of the underlying filesystem (so that read can work on every filesystem). The common part is handled by the VFS layer. Afterwards, the VFS deals with memory pages and constructs bios which are gathered into requests. Later on, those requests are submitted to the disk. Disk specificities (cache size, buffer size, mechanical or flash-based) are handled by lower level code.
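The same session can be written out explicitly in C (a minimal sketch of the syscall sequence described above; error handling is reduced to the bare minimum):
Code
#include <fcntl.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void)
{
    /* What the shell does: fork a child, then replace it with cat. */
    pid_t pid = fork();
    if (pid == 0) {
        execlp("cat", "cat", "/etc/passwd", (char *)NULL);
        _exit(127);                         /* exec failed           */
    }
    waitpid(pid, NULL, 0);

    /* What cat itself boils down to: open, read, write, close.     */
    char buf[4096];
    int fd = open("/etc/passwd", O_RDONLY);
    if (fd >= 0) {
        ssize_t n;
        while ((n = read(fd, buf, sizeof buf)) > 0)
            write(STDOUT_FILENO, buf, (size_t)n);
        close(fd);
    }
    return 0;
}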
Java Virtual Machine
Java is a popular programming language which uses an extra layer in form of JVM (Java Virtual Machine), which makes it platform independent. The developer writes .java code compiled to .class files, which will be executed on top of the JVM. The JVM performs the translation from byte-code to the host machine's language on the fly [START_REF] Smith | The architecture of virtual machines[END_REF]. Because it runs on different platforms (from PDA to servers), Sun Microsystems has promoted Java with the famous slogan: "Write Once, Run Anywhere" [START_REF] Olausson | Java-past, current and future trends[END_REF]. Figure 2.4 depicts the differences between traditional platform dependent executable, and High-Level Language VM (HLL) à la java [START_REF] Smith | Virtual Machines: Versatile Platforms for Systems and Processes[END_REF]. According to [START_REF] Venners | Inside the Java Virtual Machine[END_REF], the Java Virtual Machine is called "virtual" because it is an abstract computer defined by a specification [START_REF] Lindholm | The Java Virtual Machine Specification, Java SE 8 Edition[END_REF]. The Java Virtual Machine is an abstract computing machine. Like a real computing machine, it has an instruction set and manipulates various memory areas at run time. In particular, it defines data types (which may not exist on the underlying hardware) and instructions involving higher level objects, such as invokespecial . Thus, besides its name, JVM is rather an abstraction.
Conclusion
Mathematical functions can be surjective (every element of the codomain is mapped to by at least one element of the domain), injective (every element of the codomain is mapped to by at most one element of the domain) or bijective (every element of the codomain is mapped to by exactly one element of the domain). With regard to these definitions, the abstraction function would be surjective: the abstraction is a subset of the actual hardware. On the contrary, the virtualization function would be injective: the virtualized domain is a superset of the hardware.
System virtualization mechanisms
Because cloud solutions are so widely spread, we use virtualization every day: either using software as a service (SaaS) such as Gmail, Dropbox or Netflix, or using platform as a service (PaaS) like Amazon EC2 or Microsoft Azure. Cloud computing is just a marketing word which refers to on-demand delivery of IT resources and applications via the Internet with pay-as-you-go pricing. However, virtualization is not a new concept in computing. In 1973, Robert P. Goldberg and Gerald J. Popek published the first articles on virtualization, which are still used to classify virtualization solutions.
In this chapter, we present an overview of virtualization and its fundamental concepts. Both Intel and ARM will be described to give a concrete application of the theoretical concepts. Afterwards, we present the Common Criteria as a target for a security evaluation.
Prologue
General definitions
Nowadays, virtualization is a key concept in computer science, and lets us build large and complex systems. In particular, a Virtual Machine Monitor uses virtualization to present the illusion of several virtual machines (VMs), each running a separate operating system instance. A hypervisor (or Virtual Machine Monitor) is a piece of software or hardware that creates and runs virtual machines. Hypervisors are traditionally classified into two categories after Goldberg's thesis [START_REF] Goldberg | Architecture of virtual machines[END_REF]: Type 1: Type 1 hypervisors run directly on the host's hardware. To provide security (with regard to Confidentiality / Integrity / Availability), they must control all accesses to the hardware. This tedious task is usually performed by a commodity kernel. e.g.: Xen [START_REF] Barham | Xen and the art of virtualization[END_REF], Hyper-V [START_REF] Velte | Microsoft Virtualization with Hyper-V[END_REF], ESX Server [START_REF] Waldspurger | Memory resource management in vmware esx server[END_REF].
Type 2: Type 2 are the most widely used hypervisors. They rely on an existing operating system (Host OS) and only need to implement virtualization mechanism. In particular, all the hardware initialization, sharing and arbitration are already performed by the Host OS. e.g.: KVM [START_REF] Kivity | kvm: the linux virtual machine monitor[END_REF], Qemu [START_REF] Bellard | Qemu, a fast and portable dynamic translator[END_REF], Bhyve [START_REF]bhyve -the bsd hypervisor[END_REF], VirtualBox [START_REF] Virtualbox | The virtualbox architecture[END_REF].
In their now famous paper "Formal Requirements for Virtualizable Third Generation Architectures" [START_REF] Popek | Formal requirements for virtualizable third generation architectures[END_REF], Popek and Goldberg presented requirements and definitions for efficient virtualization. First of all, they define a machine model consisting of a processor, a memory, an instruction set, and general purpose registers. The state of such a machine can be represented by the following:
S = ⟨E, M, P, R⟩
where:
• E is an executable storage: a conventional word-addressed memory of fixed size.
• M is the CPU mode (Supervisor or User).
• P is the program counter, which acts as an index into E.
• R is a set of registers.
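This state maps naturally onto a data structure; the sketch below is our own rendering of the model (the sizes are arbitrary):
Code
#include <stdint.h>

#define MEM_WORDS 65536                  /* size of E, fixed for the model */
#define NUM_REGS  16

enum cpu_mode { MODE_USER, MODE_SUPERVISOR };      /* M                    */

/* State S = <E, M, P, R> of Popek and Goldberg's machine model.           */
struct machine_state {
    uint32_t      E[MEM_WORDS];          /* executable storage             */
    enum cpu_mode M;                     /* current CPU mode               */
    uint32_t      P;                     /* program counter, index into E  */
    uint32_t      R[NUM_REGS];           /* general purpose registers      */
};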
The ISA is classified into 3 groups:
Privileged instructions: they can only run in supervisor mode. In user mode, they trap.
Control sensitive instructions: they attempt to change the configuration of resources in the system.
Behavior sensitive instructions: their results depend on the configuration of resources.
A trap can be seen as an uncatchable error thrown by the hardware. This restores control to the hypervisor, which can analyse the origin of the trap. Traps can occur when a privileged instruction is executed in user mode, or when an invalid memory access is performed.
The hypervisor should exhibit three essential characteristics. First, the VMM provides an environment for programs which is essentially identical to the original machine; secondly, programs that run in this environment show at worst only minor decreases in speed; and last, the VMM is in complete control of system resources.
Property 1 (Equivalence/Fidelity). Any program K executing performs in a manner indistinguishable from the case when the VMM did not exist and K had whatever freedom of access to privileged instructions that the programmer had intended. This means that the virtualization layer should be invisible to the guest. The guest's behaviour should be the same as if it had been run on the original machine directly, with the possible exception of differences caused by the availability of system resources and differences caused by timing dependencies.
Property 2 (Efficiency).
A statistically dominant subset of the virtual processor's instructions has to be executed directly by the real processor, with no software intervention by the VMM.
In particular, all innocuous instructions are executed by the hardware directly, and do not trap.
Property 3 (Resource control).
It must be impossible for that arbitrary program to affect the system resources. The VMM is said to have complete control of these resources if (1) it is not possible for a program running under it in the created environment to access any resource not explicitly allocated to it, and (2) it is possible under certain circumstances for the VMM to regain control of resources already allocated.
With these concepts defined, Popek and Goldberg stated a theorem that provides a sufficient condition to guarantee the virtualizability of an instruction set.
Theorem 1. A virtual machine monitor may be constructed if the set of sensitive instructions for that computer is a subset of the set of privileged instructions (sic).
This theorem states that each instruction that could affect the correct functioning of the VMM (sensitive instructions) always traps and passes control to the VMM. It is the basis of the trap-and-emulate virtualization principle, which lets every innocuous instruction run natively, and whenever a sensitive instruction arrives, the VMM traps, and emulates that instruction instead of executing it on the hardware.
x86 virtualizability
According to [START_REF] Robin | Analysis of the intel pentium's ability to support a secure virtual machine monitor[END_REF], there are seventeen instructions in x86 that are sensitive but not privileged.
• SGDT, SIDT, SLDT (a GP is raised in 64bits if CR4.UMIP = 1 and CPL > 0)
• SMSW leaks PE / MP / EM / TS / ET / NE flags in bits 0-5 from CR0
• PUSHF and POPF (which reverse each other's operation): PUSHF pushes the lower 16 bits of EFLAGS onto the stack and decrements the stack pointer by 2. EFLAGS contains flags that control the operating mode and state of the processor, such as TF (trap, for debugging), IF (interrupt enable), DF (direction of string instructions), and so on. Note that if the instruction is executed without enough privilege, no exception is generated, but EFLAGS is not changed either.
• LAR, LSL, VERR, VERW: LAR loads access rights from a segment descriptor into a GPR. The LSL instruction loads the unscrambled segment limit from the segment descriptor into a GPR. VERR and VERW verify whether or not a code or data segment is readable or writable from the current privilege level.
• POP / PUSH: a process that thinks it is running in CPL 0 pushes the CS register to the stack. It then examines the contents of the CS register on the stack to check its CPL. Upon finding that its CPL is not 0, the process may halt.
• CALL, JMP, INT n, RET/IRET/IRETD: task switches and far calls to different privilege levels cause problems because they involve the CPL, DPL and RPL. A task uses a different stack for every privilege level. Therefore, when a far call is made to another privilege level, the processor switches to a stack corresponding to the new privilege level of the called procedure. Since a VM normally operates at user level (CPL 3), these checks will not work correctly when a VMOS tries to access call gates or task gates at CPL 0.
• STR: this instruction prevents virtualization because it allows a task to examine its requested privilege level (RPL). This is a problem because a VM does not execute at the highest CPL or RPL (RPL = 0), but at RPL = 3. However, most operating systems assume that they are operating at the highest privilege level and that they can access any segment descriptor. Therefore, if a VM, running at a CPL and RPL of 3, uses STR to store the contents of the task register and then examines the information, it will find that it is not running at the expected privilege level.
• MOV: CS and SS both contain the CPL in bits 0 and 1; thus a task could store CS or SS in a GPR and examine the content of that register to find that it is not operating at the expected privilege level.
Remark
In 64 bits:
• a GP is raised in 64 bits if CR4.UMIP = 1 and CPL > 0 when performing SGDT, SIDT, SLDT;
• *GDT entries are in virtual memory. Thus, the MMU may be configured to raise faults on illegal accesses.
Consequently x86 is not virtualizable in the sense of Goldberg and Popek.
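The MOV and PUSH CS problems above are easy to reproduce. The snippet below (GCC inline assembly, our own illustration) lets a deprivileged guest read its actual privilege level with an instruction that never traps:
Code
/* The two low bits of CS hold the current privilege level and can be
 * read with an unprivileged MOV, so a guest can detect that it is not
 * running in ring 0.                                                   */
static inline unsigned int current_cpl(void)
{
    unsigned short cs;
    __asm__ volatile ("mov %%cs, %0" : "=r"(cs));
    return cs & 0x3;   /* 0 on bare-metal ring 0, 3 for a deprivileged guest */
}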
ARM virtualizability
In this section, we do not consider the ARM virtualization extensions brought by ARMv8. This topic will be discussed in section 2.2.3.2 (p 38).
The ARM instruction set is not virtualizable. Indeed, several instructions are control or configuration sensitive, but not privileged. For instance, the following instruction (Fig. 2.6) will read the CPSR. This instruction is behaviour sensitive, but will not trap in user mode. In [START_REF] Penneman | Formal virtualization requirements for the arm architecture[END_REF][START_REF] Suzuki | Implementing a simple trap and emulate vmm for the arm architecture[END_REF][START_REF] Dall | Kvm for arm[END_REF], the authors reference 60 instructions which break Popek and Goldberg's requirements.
Code
mrs r0, cpsr
• instructions accessing coprocessor registers (reading or writing);
• instructions modifying the processor mode;
• instructions modifying the processor state (CPSR);
• instructions modifying the program counter.
Sometimes, those instructions are both control-sensitive and configuration-sensitive. This shows that the ARM instruction set is not virtualizable.
Software approach
We have seen that both the ARM and x86 instruction sets are not virtualizable out of the box. To address this issue, developers have created software which is able to analyse programs and detect problematic instructions. Several approaches are possible, each of them having advantages and disadvantages. In this section, we describe classical solutions to software virtualization.
Software interpretation
A software interpreter is a program that reads instructions of the source architecture one at a time, performing each operation in turn on a software-maintained version of that architecture's state. Figure 2.7 depicts the pseudo code of a minimal CPU emulator. Basically, a program is stored in Memory. A program counter PC is used to track the progression. For each instruction (OpCode), the emulator will perform an operation to emulate the expected behaviour. This snippet also maintains a countdown before interrupt events. Such events can arise between instructions.
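A sketch in the spirit of that figure (the toy instruction set and structure names below are ours): an endless fetch-decode-execute loop with an interrupt countdown.
Code
#include <stdint.h>

#define MEM_SIZE 4096

/* Toy machine: byte-addressed memory, 8 registers, a program counter
 * and a countdown before the next (virtual) interrupt.                */
struct cpu {
    uint8_t  mem[MEM_SIZE];
    uint32_t reg[8];
    uint32_t pc;
    int      cycles_to_irq;
};

enum { OP_NOP = 0, OP_INC = 1, OP_JMP = 2, OP_HALT = 3 };

static void run(struct cpu *c)
{
    for (;;) {
        if (--c->cycles_to_irq <= 0) {
            /* an interrupt handler would be invoked here */
            c->cycles_to_irq = 1000;
        }
        uint8_t op = c->mem[c->pc++ % MEM_SIZE];          /* fetch   */
        switch (op) {                                     /* decode  */
        case OP_NOP:
            break;
        case OP_INC:                                      /* execute */
            c->reg[c->mem[c->pc++ % MEM_SIZE] & 7]++;
            break;
        case OP_JMP:
            c->pc = c->mem[c->pc % MEM_SIZE];
            break;
        case OP_HALT:
        default:
            return;                                       /* stop    */
        }
    }
}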
First generation interpreters would simply interpret each source instruction as needed. These tend to exhibit poor performance due to the interpretation overhead.
The second generation interpreters dynamically translate source instructions into target instructions one at a time, caching the translations for later use.
The third generation of interpreters improved upon the performance of the second generation by dynamically translating entire blocks of source instructions at a time [START_REF] Aycock | A brief history of just-in-time[END_REF].
Static recompilation
A binary recompiler is a software system that takes executable binaries as input, analyzes their structure, applies transformations and optimizations, and outputs new optimized executable binaries. Translated code faithfully reproduces the calling standard, implicit state, instruction side effects, branching flow, and other artifacts of the old machine. Translators can be classified as follows [START_REF] Sites | Binary translation[END_REF]: Bounded translation systems: all the instructions of the old program must exist at translation time and must be found and translated to new instructions. Open-ended translation systems: parts of the code may be discovered, created or modified at execution time.
Sometimes binary translation is not just simply replacing op-codes and adjusting the order of the operands. Some factors must be considered when doing binary translation [START_REF] Juan Rubio | Binary-to-binary translation literature survey[END_REF]:
Distinguish code and data
In an assembly program, one can insert arbitrary data. In order to disassemble effectively, we have to be able to detect whether byte chunks represent code or data. Figure 2.8 depicts a real-life sample which fools the decompiler. The latter makes no difference between code and data, which leads it to decode andeq and undefined instructions instead of plain data. [START_REF] Kelley | Statically recompiling nes games into native executables with llvm and go[END_REF] proposes an algorithm to tackle this issue (a sketch of this pass is given below):
1. taint each byte as a data chunk;
2. replace the data taint with a code taint for each known reference (the interrupt table for instance);
3. calculate the address referenced by each previously untainted entry;
4. mark that entry as data;
5. based on the instruction, recursively mark other locations as instructions:
BL (absolute): mark the function-return jump target and the next address as instructions (mind the SP alteration).
B (absolute): mark the jump target as an instruction.
Indirect branch: do nothing.
Otherwise (not modifying control flow): mark the next address as an instruction.
Branching: The number of instructions in the target might differ from the one in the source binary. This means that the location of the routines in the translated code could be at different addresses than in the original code. Because of this, it will likely be necessary to adjust the target address of some branch instructions. This is fairly easy for statically determined branches, but it can become tedious for dynamically determined branches. In the latter case, the target of the branch cannot generally be determined at translation time.
Pipelining: Pipelining creates data dependencies. Code produced for the target machine must not violate these constraints even though the order or the number of instructions may be changed. This also causes issues when branch destinations are PC relative, or for any instruction breaking the sequential execution of the code.
Self-modifying code: Self-modifying code is usually specific to the machine for which the program was targeted. This makes it difficult to write a binary translator which can handle it. Finding self-modifying code sections is also difficult.
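A sketch of the tainting pass of the algorithm above, restricted to A32 images and to the branches that can be resolved statically (the word-indexed representation and helper names are ours; the sign extension relies on an arithmetic right shift, as on GCC):
Code
#include <stdint.h>
#include <string.h>

#define MAX_WORDS 0x4000
enum taint { T_DATA, T_CODE };

static enum taint taint[MAX_WORDS];
static uint32_t   worklist[MAX_WORDS];
static unsigned   top;

static void mark_code(uint32_t w, uint32_t n)
{
    if (w < n && taint[w] == T_DATA) {
        taint[w] = T_CODE;
        worklist[top++] = w;
    }
}

/* Steps 1-5: everything starts as data, known entry points become code,
 * and the code taint is propagated along statically resolvable flow.    */
static void classify(const uint32_t *insn, uint32_t n,
                     const uint32_t *entries, unsigned n_entries)
{
    memset(taint, 0, sizeof taint);                /* step 1: all data   */
    top = 0;
    for (unsigned e = 0; e < n_entries; e++)
        mark_code(entries[e], n);                  /* step 2             */

    while (top > 0) {                              /* steps 3-5          */
        uint32_t i = worklist[--top];
        uint32_t w = insn[i];

        if ((w & 0x0e000000u) == 0x0a000000u) {    /* B or BL            */
            int32_t simm = (int32_t)(w << 8) >> 8; /* sign-extend imm24  */
            mark_code(i + 2 + (uint32_t)simm, n);  /* branch target      */
            if (w & 0x01000000u)                   /* BL: return point   */
                mark_code(i + 1, n);
            if ((w >> 28) != 0xeu)                 /* conditional: falls through */
                mark_code(i + 1, n);
        } else if ((w & 0x0ff000f0u) == 0x01200010u) {
            /* BX: indirect branch, nothing resolvable statically        */
        } else {
            mark_code(i + 1, n);                   /* next instruction   */
        }
    }
}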
Dynamic recompilation
The alternative to static recompilation is dynamic recompilation. Dynamic recompilation is theoretically slower than static recompilation for two main reasons:
1. translation must be performed at runtime. Translation must be as fast as possible.
2. Because of the previous reason, only a limited time can be used for optimization.
Dynamic recompilation has advantages over the static approach. In particular, it can emulate all the code in a given machine. It works like an interpreter emulating the code, and only decodes instructions that are actually executed. This helps to handle indirect jumps but also self-modifying code. Fig. 2.9 presents an implementation for dynamic recompilation:
• for efficiency, a cache is built indexed by addresses;
• for each instruction, the cache is consulted: if it misses, the cache is filled with a program supposed to emulate the current instruction;
• this program is executed.
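The loop described in the list above can be sketched as follows (translate_block is a hypothetical back-end that emits host code for one guest block; everything else is plumbing and the names are ours):
Code
#include <stdint.h>
#include <stddef.h>

#define CACHE_SLOTS 1024

/* A translated block returns the guest pc to continue from.            */
typedef uint32_t (*translated_fn)(void);

struct tcache_entry {
    uint32_t      guest_pc;   /* guest address this code was built for  */
    translated_fn code;       /* NULL while the slot is empty           */
};

static struct tcache_entry tcache[CACHE_SLOTS];

/* Hypothetical back-end: decodes guest code at 'pc', emits equivalent
 * host code and returns a pointer to it.  Writing it is the hard part
 * of a real dynamic translator and is out of scope here.               */
extern translated_fn translate_block(uint32_t pc);

static void run(uint32_t pc)
{
    for (;;) {
        struct tcache_entry *e = &tcache[pc % CACHE_SLOTS];

        if (e->code == NULL || e->guest_pc != pc) {    /* cache miss    */
            e->guest_pc = pc;
            e->code     = translate_block(pc);         /* fill the slot */
        }
        pc = e->code();                                /* run host code */
    }
}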
Summary
This section gave an overview of the software technique available to perform software virtualization. We have described interpreters, static recompilers and dynamic recompilers.
The following summarizes the advantages and disadvantages of each approach:
• interpreters are platform independent and can faithfully reproduce the behavior of self-modifying programs, or programs branching to data, or using relative branches (such as jmp r4 ).
• Static recompilation provides good performances but may not be feasible.
• Dynamic recompilation is a trade-off between interpreters and static translators.
The next section will describe hardware facilities to write efficient hypervisors.
Hardware approach
To ease the writing of efficient hypervisors, hardware designers have extended their architectures to make them "virtualization aware". In this section, we consider x86_64 and ARMv7 with virtualization extensions [START_REF]Virtualization extensions -arm[END_REF].
x86 architecture
Intel and AMD implement the same features, but they are named differently. Basically, the required features are the following: Provide a "hypervisor" privilege level: Before hardware virtualization support, guest operating systems were launched deprivileged, which means having a privilege level greater than 0, so that the most privileged instructions were not executable by the guest. As we have described, some instructions were not properly handled by the hardware. Although they are mutually incompatible, both Intel VT-x (codenamed "Vanderpool") and AMD-V (codenamed "Pacifica") create a new "Ring -1" so that a guest operating system can run Ring 0 operations natively without affecting other guests or the host OS.
ARM processors may include TrustZone, a hardware facility which provides a dedicated execution context for a secure operating system (secure OS) next to a normal operating system. The instruction Secure Monitor Call (SMC) bridges the secure and normal modes. TrustZone enables protection of memory and peripherals. Applications running in the secure world can access the non-secure world, whereas the opposite is impossible. It offers a secure and easy-to-implement trusted computing solution for device manufacturers.
ARM processors may also include virtualization features. These features provide a new processor mode, and several features to improve performances [START_REF] Varanasi | Hardware-supported virtualization on arm[END_REF][START_REF] Dall | Kvm/arm: the design and implementation of the linux arm hypervisor[END_REF]: CPU virtualization: a new processor mode (HYP mode) was introduced, dedicated for a VMM. Hence, this mode is more privileged than the kernel mode. To reduce the virtualization overhead, the ARM architecture allows traps to be configured in order to trap directly into a VM's kernel mode instead of going through Hyp mode.
Memory virtualization: ARM provides hardware support for memory virtualization. A guest now manages Intermediate Physical Addresses (IPAs, also known as guest physical addresses) which need to be translated into physical addresses (PAs, or host physical addresses) by the hypervisor. TLB tags are also implemented: the TLB is now associated with a VMID so that TLB operations can be performed per VM instead of globally. With this new facility, shadow page tables are no longer required. Interrupt virtualization: the ARM architecture defines a GIC (Generic Interrupt Controller), which routes interrupts from devices to CPUs. CPUs use the GIC in return to get the source of an interrupt. The GIC is separated in two parts: the distributor and the CPU interfaces. Interrupts can be configured to trap in Hyp or kernel mode. The VMM can generate virtual interrupts to a guest, which will be handled transparently, as if the interrupt came from a genuine device. A trade-off must be found between trapping all the interrupts to the kernel (high speed, not applicable in a virtualized environment) or to the VMM (expensive solution, but the VMM retains control).
Discussion
Both the ARM and x86 architectures have evolved to provide virtualization extensions to make it easier to write efficient virtual machine monitors. Despite some implementation specificities (such as guest context saving being done by the hypervisor on ARM, whereas it is done in hardware on x86), the features are equivalent:
• a new processor mode was created. On ARM, this is a dedicated CPU execution mode, whereas on x86 it is a root/non-root mode.
• A new layer of memory virtualization was added, which lets the VMM operate an additional layer of translation. This makes shadow page-tables useless.
• Interrupts are virtualized. The VMM can inject interrupts which are handled by the guest the same way as hardware interrupts. Without this feature, the VMM must guess the interrupt entry-points, and branch at that address in the guest context.
Both architectures still require IOMMU to isolate devices (which operate on physical memory), and protect against DMA. This feature is called SMMU (System MMU) on the ARM architecture.
Currently, almost all the hypervisors rely on those hardware mechanisms. In particular, kvm and Xen do use these features on ARM and x86 architectures.
Hybrid approaches
Hybrid approaches can be considered when virtualization extensions are not available, or to improve performance (avoiding traps to the VMM and back). KVM [START_REF] Dall | Kvm for arm[END_REF] used to patch the Linux kernel (automatically) to replace sensitive instructions and encode their operands. The SWI instruction was used to trap back to the hypervisor. But because SWI only has 24 bits for its operand (which is not enough to encode all the parameters), a trick was used: coprocessors zero through seven are not defined by the ARM architecture, but trap regardless of their operands. That way, 24 additional bits could be used to encode additional parameters. Nowadays, KVM uses hardware virtualization features.
Paravirtualization
"Para-" is of greek origin that means "beside", "with" or "alongside". Paravirtualization (also known as OS assisted virtualization) refers to a cooperation between the guest OS and the hypervisor to improve performance and efficiency [START_REF]Understanding Full Virtualization, Paravirtualization, and Hardware Assist[END_REF]. Cooperation does not mean that the guest has to be trusted, but only that it is aware not to run on baremetal but on top of an hypervisor. This technique consists of modifying the guest source to remove non-virtualizable instructions and replace them with "hypercalls" which will delegate the privileged tasks to the hypervisor. Paravirtualization does not require any changes to the ABI (Application Binary Interface). Hence, no modification is required for guest applications.
The Xen project [START_REF]The xen project: the powerful open source industry standard for virtualization[END_REF] is an example of a hypervisor which relies on paravirtualization to virtualize processor and memory using a modified kernel, and a custom protocol to virtualize I/O. The latter uses a ring buffer located in the shared memory to communicate between virtual machines. It was successfully ported on both x86 [START_REF] Barham | Xen and the art of virtualization[END_REF] and on ARM [START_REF] Hwang | Xen on arm: System virtualization using xen hypervisor for arm-based secure mobile phones[END_REF]. Basically, Xen must expose 3 interfaces: Memory management: the guest OS will use hypercalls to manage virtual memory and TLB caches. CPU: the guest OS will run in a lower privilege level than Xen, register exception handlers to Xen, implement an event system in place of interrupts, and manage a "real" and "virtual" time. Device I/O: guest OS will use asynchronous I/O rings to transfer data toward disk or network devices.
Paravirtualization reduces the virtualization overhead because no traps back and forth to the hypervisor are required since those calls are made explicit. Its major drawback is that it cannot be used with unmodified guests. Nevertheless, paravirtualization can also be used for subsystems, such as device drivers like VirtIO [START_REF] Russell | Virtio: Towards a de-facto standard for virtual i/o devices[END_REF]. VirtIO is an alternative to
Xen paravirtualized drivers. There are currently three drivers built on top of an efficient zero-copy ring buffer: a block driver, a network driver and a pci driver. Porting an operating system on top of a paravirtualization engine:
Improves security: Because guests are deprivileged, they cannot perform critical tasks directly. Even if an unprivileged guest gets corrupted, the risk will be limited [START_REF] Chisnall | The Definitive Guide to the Xen Hypervisor[END_REF]; Eases the writing of operating systems: Exposing a higher level of abstraction of the machine reduces the need for low-level languages (formerly required for low-level operations). There exist several unikernels targeting Xen [START_REF] Rosenblum | The reincarnation of virtual machines[END_REF]: MirageOS, HalVM, ErlangOnXen, Osv, GUK ... Still requires modifications of guests: Today the total number of lines required to port Linux on Xen is about three thousand lines of code. This is estimated to be less than two percent of the x86 code base. Removes the need for hardware extensions: In the early days of the porting of KVM for ARM, no hardware virtualization was supported. Lightweight paravirtualization was introduced: it is a script-based method to automatically modify the source code of the guest operating system kernel to issue calls to KVM instead of issuing sensitive instructions [START_REF] Dall | Kvm for arm[END_REF]. It is architecture specific, but operating system independent.
As mentioned in 2.2.3.2 (p 38), this has been abandoned in favour of hardware mechanisms.
Trap and emulate
Trap-and-emulate was introduced by VMWare in [START_REF] Adams | A comparison of software and hardware techniques for x86 virtualization[END_REF] to work around x86 virtualization issues. Suzuki and Oikawa [START_REF] Suzuki | Implementing a simple trap and emulate vmm for the arm architecture[END_REF] have presented their implementation on an ARMv6 CPU. Trap-and-emulate is somewhat related to dynamic recompilation (see 2.2.2.3 (p 33)), for which the source instruction set is the same as the targeted one. Hence, only sensitive instructions must be replaced by traps to the hypervisor, which emulates the effect of those specific instructions.
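A sketch of the emulation side (our own simplification, handling only the mrs r0, cpsr example of Fig. 2.6): the trap handler decodes the faulting instruction and applies its effect to the virtual CPU state instead of letting it run on the hardware.
Code
#include <stdint.h>

/* Saved guest state at trap time (a subset, for illustration).       */
struct guest_ctx {
    uint32_t r[16];          /* r0-r12, sp, lr, pc                    */
    uint32_t cpsr;           /* guest-visible CPSR                    */
};

/* Called by the hypervisor's trap handler once a sensitive (or
 * rewritten) instruction has trapped.                                 */
static void emulate(struct guest_ctx *g, uint32_t insn)
{
    if ((insn & 0x0fff0fffu) == 0x010f0000u) {     /* MRS Rd, CPSR    */
        uint32_t rd = (insn >> 12) & 0xfu;
        g->r[rd] = g->cpsr;      /* return the *virtual* CPSR         */
    } else {
        /* other sensitive instructions: MSR, CPS, coprocessor ops... */
    }
    g->r[15] += 4;               /* skip the emulated instruction     */
}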
Other implementations
There are some other approaches that have not been covered so far and that I think are worth presenting.
Apple hypervisor framework:
The Hypervisor framework [START_REF] Inc | Hypervisor framework reference[END_REF] provides C APIs for interacting with virtualization technologies in user-space, without the need for writing kernel extensions. Hardware-facilitated virtual machines (VMs) and virtual processors (vCPUs) can be created and controlled by an entitled sandboxed user space process, the hypervisor client. The Hypervisor framework abstracts virtual machines
as tasks and virtual processors as threads. The framework requires x86 virtualization extensions.
BSD Jail: a superset of the chroot(2) system call. With chroot(2), a process' visibility of the file-system namespace is limited to a single subtree. The jail facility provides a stronger partitioning solution: processes in a jail are given full access to the files that they may manipulate, the processes they may influence, and the network services they can make use of, while any access to files, processes or network services outside their partition is impossible [START_REF] Kamp | Jails: Confining the omnipotent root[END_REF]. (A minimal sketch of the underlying chroot(2) mechanism is given after this list.)
Linux containers: mainly built on top of Linux cgroups [START_REF] Heo | Cgroups documentation in linux kernel[END_REF], a mechanism to organize processes hierarchically and distribute system resources along the hierarchy in a controlled and configurable manner. It features isolation and resource control (CPU, memory, disk I/O, network, etc.) for a collection of processes, associated to namespaces. Like jails, processes confined in a namespace are bound to their allocated resources, and cannot request or access others.
Windows Subsystem for Linux (WSL): a collection of components that enables native Linux ELF64 binaries to run on Windows. It contains both user-mode and kernel-mode components [START_REF]Windows subsystem for linux overview[END_REF][START_REF]Windows subsystem for linux, the underlying technology enabling the windows subsystem for linux[END_REF]. It is composed of: 1. a user-mode session manager that handles the Linux instance life cycle; 2. pico provider drivers that emulate a Linux kernel by translating Linux syscalls; 3. pico processes that host the unmodified user-mode Linux binaries (e.g. /bin/bash).
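As a minimal illustration of the chroot(2) mechanism that jails extend, the following C program confines a command to a subtree of the file system before executing it. It is only a sketch: real jails and containers also restrict processes, networking and resource usage.

```c
#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* Usage: ./confine <new_root> <program> [args...]
 * Must run as root, since chroot(2) is a privileged call.                */
int main(int argc, char **argv)
{
    if (argc < 3) {
        fprintf(stderr, "usage: %s <new_root> <program> [args...]\n", argv[0]);
        return 1;
    }
    if (chroot(argv[1]) != 0) {        /* limit file-system visibility     */
        perror("chroot");
        return 1;
    }
    if (chdir("/") != 0) {             /* do not keep a cwd outside root   */
        perror("chdir");
        return 1;
    }
    execvp(argv[2], &argv[2]);         /* run the confined program         */
    perror("execvp");                  /* only reached on failure          */
    return 1;
}
```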
Summary
As previously discussed, guests have limited access to resources. There are several ways for the VMM to intercept unwanted operations.
It can be transparent: the VMM interprets the guest OS's intent and provides its own mechanism to meet that intent. In that case, the interception can be made by binary analysis, or assisted by hardware facilities, which makes the trap more efficient.
The guest may cooperate with the VMM, by calling the hypervisor instead of performing privileged operations. That requires source-code modifications, but benefits performance. This principle is called paravirtualization. It is used for a full kernel (think Xen) or for dedicated drivers (think VirtIO).
Certified systems and Common Criteria
Landscape
Assessing information systems' security is a hard task. The increasing size and complexity of applications and services make it harder to stay secure and up to date. To ease the evaluation of security, security standards have been written by government agencies (for instance TCSEC (Trusted Computer System Evaluation Criteria) for the United States, BS7799 for Great Britain) or by private councils: PCI-DSS, for instance, is maintained by a council founded in 2006 by American Express, Discover, JCB International, MasterCard and Visa Inc., and is focused on security related to payment card processing. Some of these documents have been merged into ISO standards. In particular, the ISO/IEC 27000 series focuses on information security standards. It defines best practices and recommendations on information security management and risk analysis. The information security management system preserves confidentiality, integrity and availability. To achieve this, risk management processes are applied, which gives confidence to interested parties that risks are adequately managed. These standards tend to define a shared terminology and methodologies among countries. This has led to the creation of ISO/IEC 15408, used as the basis for the evaluation of security properties of IT products. ISO/IEC 15408 is also known as the Common Criteria. In [START_REF] Beckers | A Structured Comparison of Security Standards[END_REF], Beckers et al. propose a method to evaluate security standards, applied to the ISO 27001, Common Criteria and IT-Grundschutz standards. In the rest of the document, we focus on the Common Criteria, because of their wide acceptance world-wide.
Common Criteria
General presentation
The Common Criteria (CC) is a global standard against which security products are evaluated. The Common Criteria, an internationally approved set of security standards, provides a clear and reliable evaluation of the security capabilities of Information Technology products. By providing an independent assessment of a product's ability to meet security standards, the Common Criteria gives customers more confidence in the security of Information Technology products and leads to more informed decisions. CC product certifications are mutually recognized by 25 nations, thus an evaluation that is conducted in one country is recognized by the other countries as well [START_REF]The Common Criteria Recognition Agreement Members[END_REF]. The authorizing nations are: Australia, Canada, France, Germany, India, Italy, Japan, Malaysia, Netherlands, New Zealand, Norway, Republic of Korea, Spain, Sweden, Turkey, United Kingdom and United States; the consuming nations are: Austria, Czech Republic, Denmark, Finland, Greece, Hungary, Israel and Pakistan. In France, the contact is the National Agency for Information Systems Security (ANSSI, Agence Nationale de la Sécurité des Systèmes d'Information).
Despite the ISO standardization, the foundation documents are freely available on the Common Criteria portal [START_REF]Common critieria: New cc portal[END_REF]. There are three main documents:
Part 1, introduction and general model: [START_REF]Common Criteria for Information Technology Security Evaluation Part 1: Introduction and general model[END_REF] is the introduction to the CC. It defines the general concepts and principles of IT security evaluation and presents a general model of evaluation.
Part 2, security functional components: [START_REF]Common Criteria for Information Technology Security Evaluation Part 2: Security functional components[END_REF] establishes a set of functional components that serve as standard templates upon which functional requirements for TOEs (Targets Of Evaluation) should be based. CC Part 2 catalogues the set of functional components and organises them into families and classes.
Part 3, security assurance components: [55] establishes a set of assurance components that serve as standard templates upon which to base assurance requirements for TOEs. CC Part 3 catalogues the set of assurance components and organises them into families and classes. CC Part 3 also defines evaluation criteria for PPs (Protection Profiles) and STs (Security Targets) and presents seven pre-defined assurance packages which are called the Evaluation Assurance Levels (EALs).
These documents should be read by the interested parties, which are:
consumers:
Consumers can use the results of evaluations to help decide whether a TOE fulfils their security needs. These security needs are typically identified as a result of both risk analysis and policy direction. Consumers can also use the evaluation results to compare different TOEs. The CC gives consumers an implementation-independent structure, the Protection Profile, in which to express their security requirements in an unambiguous manner.
Developers and product vendors:
The CC is intended to support developers both in preparing for and assisting in the evaluation of their TOE, and in identifying the security requirements to be satisfied. Those security requirements are contained in an implementation-dependent construct named the Security Target (ST). The Security Target can be based on one or more Protection Profiles (PPs) to establish that it conforms to the security requirements associated with those Protection Profiles. The CC can be used to determine the evidence to be provided in order to support the evaluation against those requirements, as well as the content and presentation of that evidence.
Evaluators and certifiers:
The CC contains criteria to be used by evaluators when forming judgements about the conformance of TOEs to their security requirements. The CC describes the set of general actions the evaluator is to carry out. The CC might specify procedures to be followed, but it is not mandatory.
To submit a product for certification, the vendor must first specify a Security Target (ST). This includes an overview of the product, security threats, detailed information on the implementation of all security features included, and a claim of conformity against a Protection Profile (PP) at a specified Evaluation Assurance Level (EAL). The number and strictness of the assurance requirements to be fulfilled depend on the Evaluation Assurance Level (EAL) [START_REF] Vetterling | Secure systems development based on the common criteria: The palme project[END_REF]. Afterward, the vendor must submit the ST to an accredited testing laboratory for evaluation. A successful evaluation ends with an official certification of the product against a specific Protection Profile at the specified Evaluation Assurance Level.
Definitions
This section provides definitions, taken from the CC references (Part1 and Part3).
Definition 3.1: Target of evaluation (TOE)
The target of evaluation is the subject of an evaluation. It is a set of software, firmware and/or hardware, possibly accompanied by user and administrator guidance documentation. Examples of TOEs include: a software application, an operating system, a software application in combination with an operating system, a cryptographic co-processor of a smart card integrated circuit, a LAN including all terminals, servers, network equipment and software, etc. A TOE can occur in several representations: a list of files, a compiled copy, a ready-to-be-shipped product, or an installed and operational version. Since there might exist several configurations of a TOE (in the case of an operating system: the type of users, the number of users, the options enabled or disabled), it is often the case that the guidance part of the TOE strongly constrains the possible configurations: the guidance of the TOE may differ from the general guidance of the product. TOE evaluation is concerned primarily with ensuring that a defined set of Security Functional Requirements (SFRs) is enforced over the TOE resources.
Definition 3.2: Security Target
The security target is a set of implementation-dependent security requirements for a category of products. It is the document on which the evaluation is based, always associated with a specific TOE. It describes the assets and their associated threats. Afterward, countermeasures are described (in the form of Security Objectives) and a demonstration that these countermeasures are sufficient to counter the threats is provided. The countermeasures are divided into two groups:
• Security objectives for the TOE which describe the countermeasure(s) for which correctness will be determined in the evaluation; • Security objectives for the Operational Environment which describe the countermeasures for which correctness will not be determined in the evaluation.
To sum up, the security target (i) demonstrates that the SFRs (Security Functional Requirements) meet the security objectives for the TOE, (ii) states which security objectives are associated with the TOE and which with the environment, and (iii) shows that the SFRs and the security objectives for the operational environment counter the threats.
Definition 3.3: Security Functional Requirements
The Security Functional Requirements are a translation of the security objectives for the TOE. They are usually at a more detailed level of abstraction, but they have to be a complete translation (the security objectives must be completely addressed) and be independent of any specific technical solution (implementation). The SFRs define the rules by which the TOE governs access to and use of its resources, and thus information and services controlled by the TOE. There are eleven classes of SFR: security audit, communication, cryptographic support, user data protection, identification & authentication, security management, privacy, protection of the TOE security functions, resource utilization, TOE access, trusted path channels. Description of those classes is provided in Part 2. It can define multiple Security Function Policies (SFPs) to represent the rules that the TOE must enforce.
Definition 3.4: Security Function Policy
A Security Function Policy is a set of rules describing specific security behaviour enforced by the TSF and expressible as a set of SFRs.
Definition 3.5: TOE Security Functionality (TSF)
All hardware, software and firmware that is necessary for the Security Functionality of the TOE.
Definition 3.6: Security Objective
Statement of an intent to counter identified threats and/or satisfy identified organisation security policies and/or assumptions.
Definition 3.7: Security Assurance Requirements
Security assurance requirements establish a set of assurance components as a standard to express the TOE assurance requirements. Consumers, developers and evaluators use the assurance requirements as guidance and for reference when determining assurance levels and requirements, assurance techniques and evaluation criteria. SARs are described in Part 3, which contains nine assurance categories from which assurance requirements for a TOE can be chosen: configuration management, delivery and operation, development, guidance documents, life cycle support, tests, vulnerability assessment, protection profile evaluation and security target evaluation.
To express security needs and facilitate writing Security Targets (ST), the CC provides two special constructs: packages and Protection Profiles (PP).
Definition 3.8: Protection Profile
A protection profile defines an implementation-independent set of security requirements and objectives for a category of TOEs, which meet similar consumer needs for IT security. Protection Profiles may be used for many different Security Targets in different evaluations.
For example, there is a protection profile for operating systems, whose goal is to describe the security functionality of operating systems in terms of CC, and to define functional and assurance requirements for such products.
If a protection profile is available for the product to certify, then a large part of the job is done already. Otherwise, Security Functional Requirements have to be defined to qualify the scope of the evaluation.
Evaluation Assurance Levels (EAL)
The Common Criteria provides seven predefined assurance packages known as Evaluation Assurance Levels (EAL). These EALs provide an increasing scale that balances the level of assurance obtained with the cost and feasibility of acquiring that degree of assurance. The following description is based on the Part 3 document.
EAL1, functionally tested: EAL1 is applicable where some confidence in correct operation is required, but the threats to security are not viewed as serious. EAL1 requires only a limited security target. It is sufficient to simply state the SFRs that the TOE must meet, rather than deriving them from threats, OSPs and assumptions through security objectives. It is intended that an EAL1 evaluation could be successfully conducted without assistance from the developer of the TOE, and for minimal outlay.
EAL2, structurally tested: EAL2 requires the co-operation of the developer in terms of the delivery of design information and test results, but should not demand more effort on the part of the developer than is consistent with good commercial practice. As such it should not require a substantially increased investment of cost or time. EAL2 is therefore applicable in those circumstances where developers or users require a low to moderate level of independently assured security in the absence of ready availability of the complete development record. This EAL represents a meaningful increase in assurance from EAL1 by requiring developer testing, a vulnerability analysis (in addition to the search of the public domain), and independent testing based upon more detailed TOE specifications.
EAL3, methodically tested and checked: EAL3 permits a conscientious developer to gain maximum assurance from positive security engineering at the design stage without substantial alteration of existing sound development practices. EAL3 is applicable in those circumstances where developers or users require a moderate level of independently assured security, and require a thorough investigation of the TOE and its development without substantial re-engineering. This EAL represents a meaningful increase in assurance from EAL2 by requiring more complete testing coverage of the security functionality and mechanisms and/or procedures that provide some confidence that the TOE will not be tampered with during development.
EAL4, methodically designed, tested and reviewed: EAL4 permits a developer to gain maximum assurance from positive security engineering based on good commercial development practices which, though rigorous, do not require substantial specialist knowledge, skills, and other resources. EAL4 is the highest level at which it is likely to be economically feasible to retrofit to an existing product line. EAL4 is therefore applicable in those circumstances where developers or users require a moderate to high level of independently assured security in conventional commodity TOEs and are prepared to incur additional security-specific engineering costs.
This EAL represents a meaningful increase in assurance from EAL3 by requiring more design description, the implementation representation for the entire TSF, and improved mechanisms and/or procedures that provide confidence that the TOE will not be tampered with during development.
EAL5, semiformally designed and tested: EAL5 permits a developer to gain maximum assurance from security engineering based upon rigorous commercial development practices supported by moderate application of specialist security engineering techniques. Such a TOE will probably be designed and developed with the intent of achieving EAL5 assurance. It is likely that the additional costs attributable to the EAL5 requirements, relative to rigorous development without the application of specialised techniques, will not be large. EAL5 is therefore applicable in those circumstances where developers or users require a high level of independently assured security in a planned development and require a rigorous development approach without incurring unreasonable costs attributable to specialist security engineering techniques. This EAL represents a meaningful increase in assurance from EAL4 by requiring semiformal design descriptions, a more structured (and hence analysable) architecture, and improved mechanisms and/or procedures that provide confidence that the TOE will not be tampered with during development.
EAL6, semiformally verified design and tested: EAL6 permits developers to gain high assurance from application of security engineering techniques to a rigorous development environment in order to produce a premium TOE for protecting high value assets against significant risks. EAL6 is therefore applicable to the development of security TOEs for application in high risk situations where the value of the protected assets justifies the additional costs. This EAL represents a meaningful increase in assurance from EAL5 by requiring more comprehensive analysis, a structured representation of the implementation, more architectural structure (e.g. layering), more comprehensive independent vulnerability analysis, and improved configuration management and development environment controls.
EAL7, formally verified design and tested: EAL7 is applicable to the development of security TOEs for application in extremely high risk situations and/or where the high value of the assets justifies the higher costs. Practical application of EAL7 is currently limited to TOEs with tightly focused security functionality that is amenable to extensive formal analysis. This EAL represents a meaningful increase in assurance from EAL6 by requiring more comprehensive analysis using formal representations and formal correspondence, and comprehensive testing.
Table 2.11 depicts the required levels of the assurance families for each EAL. The highest levels of certification require formal methods to establish the correctness of the model.
Example
In this section, we compare the effort required for the different EAL levels for a given vulnerability assessment class. The purpose of the vulnerability assessment activity is to determine the exploitability of flaws or weaknesses in the TOE in the operational environment. It is based upon analysis of the evaluation evidence and a search of publicly available material by the evaluator, and is supported by evaluator penetration testing.
More specifically, we consider the vulnerability analysis activity (AVA_VAN). Vulnerability analysis is an assessment to determine whether potential vulnerabilities, identified during the evaluation of the development and anticipated operation of the TOE, could allow attackers to violate the SFRs.
Leveling is based on an increasing rigour of vulnerability analysis by the evaluator and increased levels of attack potential required by an attacker to identify and exploit the potential vulnerabilities. There exist the following sub-activities (described in the CEM [57], the evaluation methodology used by the evaluator):
AVA_VAN.1 Vulnerability survey:
The objective is to determine whether the TOE, in its operational environment, has easily identifiable exploitable vulnerabilities.
AVA_VAN.2 Vulnerability analysis:
The objective is to determine whether the TOE, in its operational environment, has vulnerabilities exploitable easily by attackers possessing basic attack capability.
AVA_VAN.3 Focused vulnerability analysis:
The objective is to determine whether the TOE, in its operational environment, has vulnerabilities exploitable by attackers possessing enhanced basic attack potential.
AVA_VAN.4 Methodical vulnerability analysis:
The objective is to determine whether the TOE, in its operational environment, has vulnerabilities exploitable by attackers possessing moderate attack potential.
AVA_VAN.5 Advanced methodical vulnerability analysis:
For this sub-activity, no general guidance is provided.
The dependence graph of the required levels for AVA_VAN for EAL 1, 3, 5 and 7 is illustrated in Figures 2.12, 2.13, 2.14 and 2.15, respectively.
Discussion
While the ISO/IEC 27k series targets organisations, the Common Criteria provides certificates for products. These certificates provide confidence to a buyer that the manufacturer has taken care of the security of the product. The confidence can be evaluated with the EAL. Above level 4, the evaluation is performed in a white-box setting, meaning that the evaluator must have access to the code, and developers may be involved. According to TÜViT3 (a laboratory accredited to perform CC evaluations), an audit for EAL 1 takes two months, EAL 4 from five to nine months, and more than nine months for EAL 6 and above.
When reading the Common Methodology for Information Technology Security Evaluation (targeted at the evaluator), few hints are given for the vulnerability assessment; instead, it remains generic. For AVA_VAN.4 (which should be methodical): "This method requires the evaluator to specify the structure and form the analysis will take". No methods are provided, so every method would be acceptable. This leads to a (potentially) poor vulnerability analysis. In contrast, MITRE (a not-for-profit organization which manages several American research centers in security) provides CAPEC, the Common Attack Pattern Enumeration and Classification. Even if CAPEC is not a panacea, the Common Criteria should have its own attack pattern collection.
In France, in 2008, the ANSSI (National Agency for Information Systems Security) developed a first-level security certification for information technology products called CSPN 4. It is presented as an alternative to a Common Criteria evaluation when cost or evaluation duration is an issue and the targeted level of security is lower.
There are currently 56 products evaluated. I think it is a good thing to provide an easier evaluation, so that more actors could try and get one, which should provide more confidence in these solutions.
CHAPTER 3 Problem statement
It is not about whether we trust the hardware or not (like if it is malicious or not); it is whether we know how to include it in the formal model or not.
Joanna Rutkowska
Hypervisors and security
Introduction
Virtualization was born in the 1960s with pioneering companies such as General Electric, Bell Labs and IBM. At that time, the only available computers were mainframes. They could only do one job at a time, so tasks had to be run in batches. The first publicly available virtualization software was CP/CMS (Control Program, Console Monitor System): a single-user operating system designed to be interactive. CP was in charge of creating virtual machines, with which users could interact.
Afterward, personal computers became affordable for the masses, and virtualization was neglected until 1998, when a company called VMWare was created. In 1999, they began selling "VMWare Workstation". VMWare Workstation lets users run an operating system inside an existing one (at the beginning, only Windows was supported as a host). Virtualization was born in the desktop market. Later on, in 2001, they released the first version of ESX, which targets the server market. In 2003, the first releases of Xen were launched, and in 2006 Amazon launched AWS, a cloud computing service. The 2000s are definitely the decade of virtualization in the wild: in the server market, and with the rise of cloud computing. Nowadays, everybody uses virtualization on a daily basis. The next step for virtualization is embedded systems. The forthcoming Internet of Things (IoT) and its need for security will certainly lead to a secured, trustworthy hypervisor which would provide security for secrets, and isolation between partitions. This approach of enforcing security from the bottom layers is easier than verifying the security of each flashed software. For example, in 2014, 12 million devices were remotely exploitable because they used an insecure version of RomPager, an embedded webserver used in network gateways [START_REF]Too many cooks -exploiting the internet-of-tr-069-things[END_REF]. Because manufacturers ship firmware to vendors, who customize it, and because of time-to-market pressure, a lot of off-the-shelf devices are still vulnerable. Moreover, most people don't patch their internet gateway anyway.
How do virtualization instructions enable security?
Qubes OS is a security-oriented operating system, which takes an approach called security by compartmentalization. Several security domains are created (by default: work, personal and untrusted), and applications are bound to one security domain. Each security domain runs on top of a virtual machine, so that the isolation between each context is enforced by virtualization technologies. In the project's architecture document [START_REF] Rutkowska | Qubes os architecture[END_REF], Joanna Rutkowska (Qubes OS project leader) defends Qubes' design.
"Qubes' OS architecture" "Virtualization allows to create isolated containers, the Virtual Machines (VM). VMs can be much better isolated between each other than standard processes in monolithic kernels of popular OSes like Windows or Linux. This is because the interface between the VM and the hypervisor can be much simpler than in case of a traditional OS, and also the hypervisor itself can be much simpler (e.g. hypervisors, unlike typical OS kernels, do not provide many services like filesystem, networking, etc). Additionally modern computers have hardware support for virtualization (e.g. Intel VT-x and VT-d technology), which allows for further simplification of the hypervisor code, as well as for creating more robust system configurations, e.g. using so called driver domainsspecial hardware-isolated containers for hosting e.g. networking code that is normally prone to compromise." Indeed, virtualization provides benefits in isolation capabilities [START_REF] Pelzl | Virtualization technologies for cars[END_REF]: In automotive systems, each application (like cruise control, brake control, etc...) is mapped onto a different Electronic Control Unit (ECU) [START_REF] Broy | Cross-layer analysis, testing and verification of automotive control software[END_REF]. Newest vehicules consist of up to 100 ECUs. Today's processing cores scale horizontally (increasing number of cores) rather than vertically (increasing speed). Migrating and merging several functions into a shared hardware resource decreases unnecessary redundancy, wiring, maintenance effort and reduces costs. As opposed to traditional systems which isolate real-time components onto separated processors, hypervisors can be used to mix critical systems, that is provide strong isolation (both spatial and temporal) between components executing on a single processor. Virtualization can also be suitable in domains having an existing critical (sometimes certified) code-base. Virtualization can be used to increase flexibility, interoperability and backward-compatibility. Legacy software wouldn't need to be modified, and the virtualization layer would perform critical operations to make the new hardware like an already-supported one.
Hardware designers also add security mechanisms to their processors. On x86, Intel has added SGX instructions [START_REF] Inc | Intel sgx homepage[END_REF][START_REF] Anati | Innovative technology for cpu based attestation and sealing[END_REF][START_REF] Mckeen | Innovative instructions and software model for isolated execution[END_REF] which enhance security features, allowing developers to create secure enclaves, dedicated areas where code and data are protected against disclosure or modification. ARM provides the TrustZone extension [START_REF] Ltd | Trustzone arm homepage[END_REF], which provides security features as a service, like encryption, rights management or secure storage, but this requires drivers in the client operating system. While TrustZone can be seen as a CPU which has two halves (secure and non-secure), SGX only has one CPU which can have many secure enclaves. Virtualization can be used to provide security (in terms of isolation of guests, for instance) by interposition, meaning that no cooperation from the guest is required to enforce those policies. Such a solution is not achievable with hardware mechanisms alone; hence a software approach (similar to the one defended in this thesis) should be used.
But hardware is not bug-free. In the past, Intel [START_REF] Pratt | Anatomy of the pentium bug[END_REF] released a bogus CPU which was missing five entries in a lookup table used by its division algorithm. Sometimes, these bugs can be exploited to breach the system's security [START_REF] Duflot | Cpu bugs, cpu backdoors and consequences on security[END_REF]. Moreover, most of the time, hardware is not designed with security in mind. The intrinsic design of processors makes some classes of attack possible, such as side-channel attacks [START_REF] Canteaut | Understanding cache attacks[END_REF]. In 2005, Bernstein [START_REF] Bernstein | Cache-timing attacks on aes[END_REF] and Osvik et al. [START_REF] Osvik | Cache Attacks and Countermeasures: The Case of AES[END_REF] demonstrated that processors' caches could leak memory accesses, which can be abused to recover private data, such as encryption keys (AES keys in those papers, but also parts of RSA keys in [START_REF] Yarom | Cachebleed: A timing attack on openssl constant time RSA[END_REF]). Despite the isolation claimed by hypervisors, similar attacks are achievable across virtual machines running on top of the same host [START_REF] Sunar | Cache attacks enable bulk key recovery on the cloud[END_REF]. There are other classes of side-channel attacks, for example timing attacks, branch prediction abuse, power analysis, acoustic channels, or electromagnetic attacks. This class of attacks is tedious, but achievable; we do not claim to prevent this kind of attacks with our solution. In their paper, Osvik et al. note that access control mechanisms used to provide isolated partitioning (such as user space/kernel space, memory isolation, filesystem permissions) rely on a model of the underlying machine, which is often idealized and does not reflect many intricacies of the actual implementation.
How do formal methods provide additional security?
They do not. On the Invisible Things Lab's blog (a security research company involved in Qubes OS), Joanna Rutkowska (Qubes OS project lead) claims that formally verified microkernels (such as SeL4) do not provide end-to-end security guarantees, and would even be vulnerable to remote attacks, such as the one presented by Duflot et al. [START_REF] Duflot | What if you can't trust your network card?[END_REF]. She also criticizes the fact that the compiler and a large part of the hardware remain absent from the specification. Finally, such microkernels would not be ready for production use, because they do not support important features such as SMP or IOMMU. Gerwin Klein (L4.verified project leader) agrees on most points, but still points out that there are two separate targets for SeL4: one verified for a restricted ARM platform targeting embedded devices (providing trusted boot, and a fixed configuration), and an x86 port, where further work is being done to implement the aforementioned features. Anyhow, Klein says "We also don't claim to have proven security. In fact, if you look at the slides of any of my presentations I usually make a point that we do *not* prove security, we just prove that the kernel does what is specified. If that is secure for your purposes is another story." [START_REF] Gu | A state-of-the-art survey on real-time issues in embedded systems virtualization[END_REF] presents various hypervisors designed for real-time embedded systems. Safety is more addressed than security, but if we consider availability as a security criterion, it makes sense to mention it. In particular, ARINC 653 is a standard which addresses software architecture for spatial and temporal partitioning in safety-critical integrated modular avionics. In this context, hypervisors such as Xtratum [START_REF] Masmano | Xtratum: a hypervisor for safety critical embedded systems[END_REF] are used to enforce isolation. The same holds for automotive systems [START_REF] Reinhardt | An embedded hypervisor for safety-relevant automotive e/e-systems[END_REF], where isolation is required between entertainment and safety-critical systems, which tend to use the same hardware for cost effectiveness.
Existing hypervisors with security focus
In computers, virtualization is sometimes used to enforce security. For instance, QubesOS, a "reasonably secure operating system", is based on the Xen hypervisor, which claims to have a security-sensitive approach [START_REF] Chisnall | The Definitive Guide to the Xen Hypervisor[END_REF]. In practice, vulnerabilities in Xen do exist1, but they are publicly documented.
NOVA [START_REF] Steinberg | Nova: A microhypervisor-based secure virtualization architecture[END_REF] is a microhypervisor: a microkernel that reduces the trusted computing base of virtual machines by one order of magnitude. Indeed, NOVA consists of the microhypervisor (9 KLOC), a user-level environment (7 KLOC) and the VMM (20 KLOC), in comparison with Xen, which requires the VMM, a full Linux kernel for dom0, and some parts of QEMU; this layer of code is estimated at more than 400 KLOC. Our approach is comparable to NOVA, except for the fact that no modern virtualization features are used. Hence, there should be no performance gain in using the latest processors available.
Finally, SeL4 [START_REF] Klein | seL4: Formal verification of an operating system kernel[END_REF] is a micro-kernel based on the work achieved by L4 [START_REF] Härtig | The performance of micro-kernel-based systems[END_REF]. According to their brochure2, (SeL4) "is the only operating system that has undergone formal verification, proving bug-free implementation, and enforcement of spatial isolation (data confidentiality and integrity)." Impressive work has been done on SeL4, but several points need to be mentioned:
• two versions are available (x86 and ARM), but only the latter is verified; • a modification of the guest is required to make it run on top of SeL4 (paravirtualization); • the hardware is trusted;
• the system initialization is assumed correct;
• the correctness of compiler, assembly code, hardware and boot code are assumed.
Our implementation is far less advanced, and not even proven. But the approach is different: we do not require any modification of the guest code, and do not assume hardware correctness: we only use certifiable hardware features (roughly the ALU and the MMU).
Software security and certified products
A market requirement
According to Apple Inc. 3, security-conscious customers, such as the U.S. Federal Government, are increasingly requiring Common Criteria certification as a determining factor in purchasing decisions. Since the requirements for certification are clearly established, vendors can target very specific security needs while providing broad product offerings.
Certified products
Certifying a product is a long and expensive task. The higher the EAL is, the more expensive. The following table presents the number of certified products by categories.
Fort Fox Hardware Data Diode: this target is a hardware-only device that allows data to travel in one direction only. The intention is to let information be transferred optically from a low security classified network (Low Security Level) to a higher security classified network (High Security Level), without compromising the confidentiality of the information on the High Security Level. Once manufactured, there is no way to alter the function of the TOE.
Virtual Machine of Multos M3: this target is a smart card conceived to let several applications be loaded and executed in a secure way. The security features provided are:
• handling memory with regard to the life-cycle (opening, loading, creation, selection, deselection, exit, erasing);
• interpreting primitives and instructions called by loaded applications;
• handling interactions between loaded applications.
Only the Application Memory Manager (MM), which handles memory belonging to applications and provides services for loading, executing and erasing applications, and the Application Abstract Machine (AM), which interprets the instructions of the applications handled by the MM, are considered.
Virtual Machine of ID Motion V1 (two of them): this virtual machine has the same security functionalities as the previous one.
Memory Management Unit for SAMSUNG micro-controllers: this target concerns three micro-controllers (S3FT9KF, S3FT9KT and S3FT9KS) which use the same MMU. The security services provided are the following:
• handling memory accesses (read/write/execute) on the memory areas of the micro-controllers;
• triggering alarms as interrupts whenever unauthorized accesses arise.
Secure Optical Switch: this target is a hardware-based, tamper-evident fibre-optic switching device that connects a single common port to any one of four selectable ports while maintaining isolation between the selectable ports within the body of the switch.
There are two variants of the SOS: the local operation option has a selector dial incorporated onto the unit, while the remote operation option has connectors to allow remote selection and optical feedback of the selected switch position. The security services provided are the following:
• the common port can be connected to only one selectable port at any one time;
• the selectable ports can never be connected to each other via the TOE;
• indication is provided by the TOE unequivocally confirming to which selectable port the common port is connected; • tamper evidence of the physical case of the TOE.
Certification's impact on design
As described in 2.3, certification leverages efforts in producing high-level models, task descriptions, documentation and extensive tests. The development becomes a rigorous and systematic process, which tends to reduce human errors and bad practices. At some point, the certification enforces traceability for each task. This implies ticketing, source control, code reviews and so on. The highest levels (above 5) also require formal methods to demonstrate the equivalence between a high-level model and the actual implementation. This can become complicated with large software: if we consider the Linux code base, there is very little (up-to-date) documentation available. As long as the userspace is not impacted, kernel developers don't mind refactoring APIs. This makes it hard for newcomers to get to know the code, and even harder for an auditor, who is not an expert in low-level programming.
How to gain confidence in software?
Several points of view should be considered. Informed consumers will probably read reviews in the specialized press or on websites. Known vulnerability databases (CVE) can also be consulted. Company managers may rely on certifications or warranties. Developers will likely read architectural documents, documentation and code (if available). Even so, such documents do not provide any guarantee about the software itself. In contrast, tests or formal verification can provide additional confidence: it is easier to read the specification or the test cases than the whole source code.
Testing software
In this section, we take SQLite [START_REF]Sqlite home page[END_REF] as a case study. "SQLite is a software library that implements a self-contained, serverless, zero-configuration, transactional SQL database engine. SQLite is the most widely deployed database engine in the world". SQLite is a heavily tested piece of software [START_REF]How sqlite is tested[END_REF]: 100% branch test coverage, millions of test cases, out-of-memory tests, fuzzing, boundary value tests, regression tests and Valgrind analysis. All releases of SQLite are tested with TH3, an aviation-grade test harness. These tests are written in C, and provide 100% branch test coverage and 100% modified condition/decision coverage (MC/DC) on the SQLite library. These coverage criteria come from the avionics software guidance documents DO-178B and DO-178C, to ensure adequate testing of critical software (critical in the sense of being able to provide safe flight and landing). The following points are enforced:
• every entry point and exit in the program has been invoked at least once;
• every condition in the program has taken all possible outcomes at least once;
• every condition of a decision is shown to affect the outcome independently (a small example follows this list).
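To make the difference between simple branch coverage and MC/DC concrete, consider the deliberately simple (made-up) decision below; the comments give one minimal set of test vectors showing that each condition independently affects the outcome.

```c
#include <stdbool.h>

/* Decision with two conditions: pressure_ok && sensor_ok.
 *
 * Branch coverage only requires the decision to evaluate to true once
 * and to false once, e.g. (1,1) and (0,0).
 *
 * MC/DC additionally requires showing that each condition, on its own,
 * can change the outcome.  One minimal vector set:
 *   (pressure_ok=1, sensor_ok=1) -> true    baseline
 *   (pressure_ok=0, sensor_ok=1) -> false   only the first condition changed
 *   (pressure_ok=1, sensor_ok=0) -> false   only the second condition changed
 */
bool can_open_valve(bool pressure_ok, bool sensor_ok)
{
    if (pressure_ok && sensor_ok)
        return true;
    return false;
}
```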
But all those tests come at the price of an extensive codebase: SQLite itself consists of approximately 120k lines of code, while the project as a whole has about 91,600k lines of code, roughly 765 times more. Despite the impressive results achieved by the tests, John Regehr and people from TrustInSoft managed to find issues in the codebase using a nearly automated tool: tis-interpreter [START_REF]Trust-in-soft: tis interpreter[END_REF]. Tis-interpreter is a software analysis tool based on Frama-C, designed to find subtle bugs in C programs. It works by interpreting C programs statement by statement, verifying whether each statement invokes any undefined behaviour. With that tool, several bugs were found [83] (and have since been corrected). Those bugs are mainly undefined behaviours with respect to the C standard [START_REF] Iso | The ANSI C standard (C99)[END_REF] which happen to work "most of the time" (which is why the tests still pass).
Dangling pointers: using the value of a pointer after the lifetime of the pointed-to object has ended.
Uses of uninitialized storage: for instance allocating a variable on the stack without initialization, and accessing it. There is no guarantee on the initial value of such a variable.
Out-of-bounds pointers: some SQLite structs use 1-based array indexing. When allocating memory, the pointer tracking that memory region is decremented to make 1-based accesses transparent for the developer, whereas C arrays are 0-based.
Illegal arguments: mostly on memset and memcpy, which must not be given NULL pointer arguments.
Comparison of pointers to unrelated objects: the relational operators (>, >=, <=, <) can raise undefined behaviour when applied to pointers to unrelated objects (see §6.5.8 of the C standard). SQLite used some corner-case comparisons which led to undefined behaviour. (A small C fragment illustrating some of these bug classes is given below.)
This real-life example gives some more credit to Edsger W. Dijkstra's famous quote: "testing only shows the presence of bugs, not their absence". Despite the extensive work done by the SQLite developers, testing software is not sufficient to reach a high level of confidence. Formal methods are a candidate to achieve this task. Using them used to be expensive, because they were not industrialized. Companies such as Prove&Run4 or TrustInSoft5 were created to ease the adoption of formal methods in the industry.
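To make some of the bug classes listed above concrete, here is a deliberately broken C fragment (the function name is invented) exhibiting three of them; an interpreter like tis-interpreter flags each of these statements as undefined behaviour even when the code appears to "work" in a given build.

```c
#include <stdlib.h>
#include <string.h>

void undefined_behaviour_zoo(void)
{
    /* 1. Dangling pointer: the value of 'p' is used after the object it
     *    pointed to has been freed.                                       */
    char *p = malloc(16);
    free(p);
    char *q = p + 1;                  /* UB: arithmetic on a freed pointer */
    (void)q;

    /* 2. Uninitialized storage: 'x' has an indeterminate value.           */
    int x;
    int y = x + 1;                    /* UB: reading uninitialized 'x'     */
    (void)y;

    /* 3. Illegal arguments: memcpy requires valid pointers even when the
     *    length is zero, so a NULL source is undefined behaviour.         */
    char dst[4];
    memcpy(dst, NULL, 0);             /* UB: NULL argument to memcpy       */
}
```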
Using formal methods
Formal methods consist of applying rigorous mathematical logic to models representing an implementation. Afterward, tools (automated or interactive) are used to produce a computer-verifiable proof of correctness of theorems stated on these models. Such methods have been successfully used to produce operating systems, such as SeL4 [START_REF] Klein | seL4: Formal verification of an operating system kernel[END_REF] or ProvenCore by Prove&Run [START_REF] Lescuyer | Provencore: Towards a verified isolation micro-kernel[END_REF]. They have also been used to verify properties on hardware after the Pentium bug. But formal methods are not limited to computing; for instance, Lille's subway has been verified using the B-method.
Model Checking
Model checking belongs to the family of property checking. Property checking is used for verification instead of equivalence checking. An important class of model checking methods has been developed for checking models of hardware and software designs where the specification is given by a temporal logic formula [START_REF] Browne | Automatic verification of sequential circuits using temporal logic[END_REF][START_REF] Bochmann | Hardware specification with temporal logic: An example[END_REF]. Pioneering work in the model checking of temporal logic formulae was done by E. M. Clarke and E. A. Emerson and by J. P. Queille and J. Sifakis.
The main principle is to describe a model (an automaton) and a possible execution (a path in the graph). Then the model checker will try to validate the user's properties on the execution. It may respond "true", that is to say "the property is verified"; "false", which means "the property is wrong, here is an example why"; or "I don't know, because the execution is out of bounds". The latter may happen when the model is large, because the model checker is constrained by finite memory and a given execution time. Several approaches exist [START_REF] Mcmillan | Interpolation and sat-based model checking[END_REF], but generally SAT solving is used; model checking is then an NP-hard problem. The number of states of a model can be enormous. For example, consider a system composed of n processes, each of them having m states. Then, the asynchronous composition of these processes may have m^n states. Similarly, in an n-bit counter, the number of states of the counter is exponential in the number of bits, i.e. 2^n. In model checking this problem is referred to as the state explosion problem [START_REF] Clarke | Model checking and the state explosion problem[END_REF].
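The following small C program sketches explicit-state model checking on a toy model: a counter modulo N with an increment and a reset transition, checked against the property "the counter never reaches a forbidden value". It is only an illustration of the principle; real model checkers rely on symbolic encodings (BDDs, SAT) precisely to fight the state explosion described above.

```c
#include <stdbool.h>
#include <stdio.h>

#define N          8      /* state space: counter values 0 .. N-1          */
#define FORBIDDEN  5      /* property to check: counter never equals 5     */

/* Transition relation of the toy model: from state s we may either
 * increment the counter (wrapping around) or reset it to zero.            */
static int next_states(int s, int out[2])
{
    out[0] = (s + 1) % N;     /* increment */
    out[1] = 0;               /* reset     */
    return 2;
}

int main(void)
{
    bool visited[N] = { false };
    int  stack[N];
    int  top = 0;

    stack[top++] = 0;         /* initial state */
    visited[0] = true;

    while (top > 0) {         /* explicit-state depth-first exploration     */
        int s = stack[--top];
        if (s == FORBIDDEN) { /* property violated: a real model checker    */
            printf("counterexample: state %d is reachable\n", s);
            return 1;         /* would also print the path leading here     */
        }
        int succ[2];
        int n = next_states(s, succ);
        for (int i = 0; i < n; i++) {
            if (!visited[succ[i]]) {
                visited[succ[i]] = true;
                stack[top++] = succ[i];
            }
        }
    }
    printf("property holds on every reachable state\n");
    return 0;
}
```

Here the forbidden state happens to be reachable, so the exploration answers "false" and exhibits a counterexample, which corresponds to the second kind of answer described above.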
However, several works tend to demonstrate that model checking is achievable in real-life applications. Schwarz et al. performed software model checking for security properties at large scale: a full Linux distribution (839 packages, and 60 million lines of code) [START_REF] Schwarz | Model checking an entire linux distribution for security violations[END_REF].
NASA researchers K. Havelund and T. Pressburger used model checking for Java programs using PathFinder. In [START_REF] Havelund | Model checking java programs using java pathfinder[END_REF], they state that even though today's model checkers cannot handle real-sized programs, and consequently cannot handle real-sized Java programs, there are aspects that make the effort worthwhile:
• providing an abstraction workbench makes it possible to cut down the state space;
• model checking can be applied for unit testing, and thus focused on a few classes only.
Abstract interpretation
Abstract interpretation was formalized by Patrick and Radhia Cousot in the late 1970s [START_REF] Cousot | Abstract interpretation: a unified lattice model for static analysis of programs by construction or approximation of fixpoints[END_REF]. Abstract interpretation is about creating an abstract model of a concrete one, and stating that every property verified in the abstract model will also be verified in the concrete model. An example used by Cousot is the interpretation of an arithmetic expression -1515 * 17 (concrete model) applied to the abstract model {(+), (-), (±)} where the semantics of arithmetic operators is defined by the rule of signs. The abstract execution of -1515 * 17 ⇒ -(+) * (+) ⇒ (-) * (+) ⇒ (-) proves that -1515 * 17 is a negative number. One can also represent abstract interpretation in an n-dimensional space, where each dimension is bound to a variable, and several "forbidden zones" are to be avoided. Abstract interpretation will approximate the values of each variable using (more or less precise) heuristics, to show that there is no intersection between "forbidden zones" and possible valuations of the analyzed expression. The use of heuristics is compulsory, because general program verification is undecidable.
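The rule of signs can itself be written as a tiny abstract interpreter. The following C sketch abstracts integers into the domain {NEG, ZERO, POS, TOP}, where TOP stands for "unknown sign" (written (±) above), and mirrors the -1515 * 17 example; it also shows the precision loss inherent to the abstraction.

```c
#include <stdio.h>

/* Abstract domain of signs; TOP means "could be anything".              */
typedef enum { NEG, ZERO, POS, TOP } sign_t;

/* Abstraction function: map a concrete integer to its sign.             */
static sign_t alpha(long n)
{
    if (n < 0)  return NEG;
    if (n == 0) return ZERO;
    return POS;
}

/* Abstract multiplication: the classic rule of signs.                   */
static sign_t mul_abs(sign_t a, sign_t b)
{
    if (a == ZERO || b == ZERO) return ZERO;
    if (a == TOP  || b == TOP)  return TOP;
    return (a == b) ? POS : NEG;      /* same signs -> POS, else NEG      */
}

/* Abstract addition: less precise, e.g. POS + NEG is unknown.           */
static sign_t add_abs(sign_t a, sign_t b)
{
    if (a == ZERO) return b;
    if (b == ZERO) return a;
    if (a == b)    return a;          /* POS+POS=POS, NEG+NEG=NEG         */
    return TOP;                       /* mixed signs: no information      */
}

int main(void)
{
    /* -1515 * 17: abstractly (-) * (+) = (-), so the result is negative,
     * proven without ever computing the concrete product.                */
    sign_t s = mul_abs(alpha(-1515), alpha(17));
    printf("sign of -1515 * 17 is %s\n", s == NEG ? "negative" : "other");

    /* Precision loss: (-1515) + 17 is abstracted to TOP (unknown sign).  */
    sign_t t = add_abs(alpha(-1515), alpha(17));
    printf("sign of -1515 + 17 is %s\n", t == TOP ? "unknown" : "known");
    return 0;
}
```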
Abstract interpretation is used in the industry; the following analyzers have been successfully applied in industrial settings (the first two are distributed by AbsInt, absint.com):
Certification's impact on design
Astrée: Astrée (http://www.astree.ens.fr/) is a static program analyzer aiming at proving the absence of Run Time Errors (RTE) in programs written in the C programming language. Astrée was successfully used in industry to verify electric flight control software [START_REF] Souyris | Experimental assessment of astrée on safety-critical avionics software[END_REF][START_REF] Delmas | Astrée: From research to industry[END_REF] or space vessel maneuvers [START_REF] Bouissou | Space software validation using abstract interpretation[END_REF].
aiT: aiT WCET Analyzers statically compute tight bounds for the worst-case execution time (WCET) of tasks in real-time systems [START_REF] Ferdinand | aiT: Worst-Case Execution Time Prediction by Static Program Analysis[END_REF].
Frama-C: Frama-C is open-source software which allows one to verify that source code complies with a provided formal specification. For instance, the list of global variables that a function is supposed to read from or write to is a formal specification. Frama-C can compute this information automatically from the source code of the function, allowing you to verify that the code satisfies this part of the design document. Tis-interpreter was built on top of Frama-C.
Polyspace: Polyspace Code Prover proves the absence of overflow, divide-by-zero, out-of-bounds array access, and certain other run-time errors in C and C++ source code. It uses formal-methods-based abstract interpretation to prove code correctness.
Formal proof
A formal proof is to a proof what source code is to an algorithm. It translates the proof into an actual proof object, which is usable by a computer. For instance, Coq is a formal proof management system. It provides a formal language to write mathematical definitions, executable algorithms and theorems together. But Coq is not dedicated to software proving. It was used for the formalization of mathematics (e.g. the full formalization of the four color theorem, or constructive mathematics at Nijmegen).
Formal proofs can use a Hoare-like axiomatic method [START_REF] Hoare | An axiomatic basis for computer programming[END_REF]. Hoare logic is built around triples P {Q} R, where P is called a precondition, Q a program, and R a postcondition. Hoare logic states that "if the assertion P is true before initiation of a program Q, then the assertion R will be true on its completion".
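As an illustration of such a triple on actual C code, the function below carries an ACSL-style contract of the kind accepted by tools like Frama-C: the precondition P and the postcondition R annotate the program Q. This is only a sketch of the notation, not a claim about any particular verification run.

```c
/* P (precondition) and R (postcondition), written as an ACSL contract.   */
/*@ requires 0 <= n <= 1000000;
  @ ensures  \result == n + 1;
  @ assigns  \nothing;
  @*/
int successor(int n)                   /* Q: the program of the triple     */
{
    return n + 1;
}
```

Read as a Hoare triple: if P holds before the call (n is a small non-negative integer, so the addition cannot overflow), then R holds on completion, namely the result equals n + 1 and no memory location has been modified.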
To help reasoning, one uses hypotheses (or axioms), which are assumed to be true, and tries to prove theorems (or lemmas). The output is either a success, in which case a certificate is given, or a failure, which does not mean that the reasoning is false, but that a proof was not found. The easiest way to prove something wrong is to find a counter-example (as model-checking approaches do).
There exist two kinds of tools [START_REF] Geuvers | Proof assistants: History, ideas and future[END_REF]:
(Interactive) proof assistants: proof assistants are computer systems that allow a user to do mathematics on a computer, focusing not so much on the computing (numerical or symbolic) aspects of mathematics as on the aspects of proving and defining. A user can set up a mathematical theory, define properties and do logical reasoning with them. In many proof assistants, users can also define functions and perform actual computation with them.
(Automated) theorem provers: theorem provers are systems consisting of a set of well-chosen decision procedures that allow formulas of a specific restricted format to be proved automatically. Automated theorem provers are powerful, but have limited expressivity, so there is no way to set up a generic mathematical theory in such a system.
Specification
Formal proof can also be applied to software verification. For instance, Why3 is a platform for deductive program verification: it provides a rich language for specification and programming, called WhyML, and relies on external theorem provers, such as Coq, to prove correctness.
Issues with end-to-end proof
Prove the code's correctness
First of all, one should prove that the implementation is correct with regard to its specification. That means that the semantics expressed by the high-level model is successfully translated into source code. To gain confidence one can choose between several approaches, illustrated in Fig 3.1:
• writing a model, writing an implementation, and proving the functional equivalence between the two;
• writing a model and extracting the code from that model. Notice that this introduces a trust dependency on the extractor.
Prove the compiler's correctness
The verification of the compiler guarantees that the safety properties proved on the source code hold for the executable compiled code as well.
In a future where formal methods are routinely applied to source programs, the compiler could appear as a weak link in the chain that goes from specifications to executables. The safety-critical software industry is aware of these issues and uses a variety of techniques to alleviate them, such as conducting manual code reviews of the generated assembly code after having turned all compiler optimizations off. These techniques do not fully address the issues, and are costly in terms of development time and program performance. An obviously better approach is to apply formal methods to the compiler itself in order to gain assurance that it preserves the semantics of the source programs. CompCert [START_REF] Leroy | Formal verification of a realistic compiler[END_REF] is such a compiler. It is shipped with a proof of semantic preservation across the different intermediate languages.
Prove the underlying hardware
Once the implementation and the compilation phases are trusted, we can try to execute the software on actual hardware. In order to maintain the chain of trust, the hardware should also be proven. Such hardware does not exist off the shelf. Worse, the state of the art shows that most hardware is not reliable:
Hardware design issues Some bugs have been shown in the wild: the Intel Pentium microprocessor occasionally made errors in floating-point division due to five missing entries in a lookup table [START_REF] Pratt | Anatomy of the pentium bug[END_REF]. More recently, Matt Dillon, a DragonFly BSD developer, found out that some AMD processors could incorrectly update the stack pointer [START_REF] Dillon | archive from kernel@crater.dragonflybsd.org: Buildworld loop segfault update -i believe it is hardware[END_REF][START_REF] Dillon | archive from kernel@crater.dragonflybsd[END_REF].
Abuse genuine hardware features
• DMA stands for Direct Memory Access. It is a way to access physical memory from hardware without CPU consent (and thus, without memory translation). It is available over FireWire, Thunderbolt, ExpressCard, PC Card and any PCI/PCI-Express interface. This lets an attacker read memory at arbitrary locations, which could let him grab the screen content, scan for possible key material, or extract information from the memory layout. DMA can also be used to write data to arbitrary locations, which could change the screen content, change the UID/GID of a given process, inject code into a process, etc. • CPU caches are small banks of memory that store the contents of recently accessed memory locations. They are much faster than main memory, and have an important impact on the system's speed. Modern CPUs have several levels of cache hierarchy. CPU caches have been abused to extract cryptographic secrets [START_REF] Yarom | Cachebleed: A timing attack on openssl constant time rsa[END_REF] • Intel Active Management Technology (AMT) is a hardware technology for out-of-band management. It is built on top of a secondary service processor located on the motherboard; its memory is separated from the host, and it has a dedicated link to the network card. The AMT is active even if the computer is put into sleep mode. This mechanism was abused by Alexander Tereshkin and Rafal Wojtczuk from Invisible Things Lab. Their work was presented at BlackHat 2009 [103].
Non-tamper-resistant implementations Tamper resistance is a concern for both safety and security. The environment can be hostile, such as space, where hardware has to be hardened against wide temperature ranges and ionizing radiation. Because this thesis is focused on security rather than safety, this part excludes safety concerns.
As described before, most hardware has not been built with security in mind. The SmartCard industry has tried to improve the tamper resistance of its hardware. A smartcard is a tamper-resistant computer which can securely store and process information. We use them on a daily basis: in our SIM cards, credit cards, pay TV.
Even so, successful attacks have been conducted using passive measurements of physical quantities. In [START_REF] Kocher | Timing attacks on implementations of diffie-hellman, RSA, DSS, and other systems[END_REF], Kocher demonstrated how timing attacks could lead to cryptographic leakage in common cryptographic algorithms. In [START_REF] Kocher | Differential power analysis[END_REF], similar attacks are achieved by analysing power consumption. Finally, in [START_REF] Quisquater | ElectroMagnetic analysis (EMA): Measures and counter-measures for smart cards[END_REF], Quisquater and Samyde noticed that electromagnetic radiation is directly connected to the current consumption of the processor. Hence, it can be used to perform the same attacks in a non-intrusive way.
Those attacks on very specific systems illustrate that there is no silver bullet, but protections can be used. They also remind us that we should be suspicious of hardware security features. Most of the time, these are just black boxes with very little information available. This is why this thesis takes the side of avoiding reliance on hardware virtualization features.
A new architecture for embedded hypervisors
What is to be formalized?
To provide an end-to-end chain of trust on the VMM, the VMM itself must be specified, implemented and proven correct.
VMM software
The VMM must be fully specified. Considering the snippet given in Fig 2.7, where each instruction from the source ISA is emulated:
• The source instruction set must be fully described. This description should include an operational semantics, corner cases, accepted values for each argument, raised errors, side effects, etc. • The analyser should be proven correct. That means that each instruction from the source ISA will be successfully decoded and interpreted with regard to the destination ISA. Otherwise, an error should be raised. I suggest a bottom-up approach, in which lower-level specifications are reused by the upper layers. In practice, the underlying CPU specification should be reused by both the compiler and the virtual machine monitor's code. In addition, the latter must also integrate the formalisation done on the devices. Since the instruction set may not be virtualizable (with regard to Popek & Goldberg's definition), we should distinguish two cases.
Popek and Goldberg compliant ISA
In that case, all the instructions from the source instruction set will be executed directly on the processor. The whole security contract for those instructions relies on the hardware proof. The hardware will raise signals to the VMM, and correct behaviour requires a correctness proof of the handling of those signals.
Non Popek and Goldberg compliant ISA
When the instruction set is not virtualizable, a large part of the instructions will still be executed directly on the processor. For the same reason as in the previous section, the hardware proof is sufficient to guarantee their correct execution. But the non-virtualizable instructions will have to be properly detected by the VMM. Thus, the scanning part of the VMM must also be proven.
Factorizing the interpreter by executing code directly on the CPU
The second axis to reduce the work to be done on both the formalization and the implementation of the interpreter is to rely on the underlying hardware, formally described and modeled. Hence, non-sensitive instructions do not need to be emulated anymore, but can be run directly on the processor. The native execution of the source code leads us to modify our basic CPU interpreter (Fig 2.7) into an optimized version (Fig 3.2). In the newer version, only the sensitive instructions are emulated. Hence, the second property (efficiency) formulated by Popek & Goldberg is satisfied: "All the innocuous instructions are executed by the hardware directly, with no intervention at all on the part of the VMM".
Definition 4.1: Basic block
According to the IDA Pro book [START_REF] Eagle | The IDA Pro book[END_REF]:
A basic block is a maximal sequence of instructions that executes, without branching, from beginning to end. Each basic block therefore has a single entry point (the first instruction in the block) and a single exit point (the last instruction in the block). The first instruction in a basic block is often the target of a branching instruction, while the last instruction in a basic block is often a branch instruction.
The branch instructions include function calls, jumps, but also conditional statements. In the latter case, the conditional jump generates two possible destinations: the yes edge (if the condition is satisfied) and the no edge otherwise.
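A small illustration in C (the actual block boundaries are of course those of the compiled code, not of the source, but the shape is the same):

    int clamp_positive(int x)
    {
        if (x < 0)     /* block 1: single entry, ends with a conditional branch (yes/no edges) */
            x = 0;     /* block 2: target of the "yes" edge                                    */
        return x;      /* block 3: reached through both edges; single entry, single exit       */
    }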
Definition 4.2: Hyperblock
By analogy with the basic blocks, I introduce the concept of hyperblock as follows:
A hyperblock is a maximal sequence of innocuous instructions. Hence, no intervention from the VMM is required. It is safe to let these instructions execute directly on the hardware.
In an emulator (without JIT), the largest hyperblock is one instruction long. With this optimisation, a hyperblock remains no larger than a basic block.
Discussion
In this chapter, we have described the security features provided by hypervisors, and how hardware can support such features. We have shown that hardware cannot be trusted because of its design and potential bugs. We have mentioned that the highest levels of certification require formal verification, which has limited expressivity. Moreover, formal verification can only prove that a model is valid with respect to a specification. Hence, formal verification by itself does not prove security features. Finally, we have illustrated that correct source code can only achieve correct behaviour on verified hardware if the compiler is also proven to maintain the semantics from the source code down to the binary.
Based on these elements, we have proposed a design which tries to reuse the largest possible models from one layer to the next among source code, compiler and target of execution. This keeps the TCB (Trusted Computing Base) as restricted as possible. Without such factorization, proving a VMM would require a proof of the underlying CPU and a proof of the targeted CPU: the first one would ensure a correct execution of instructions, whereas the latter would ensure that the emulated semantics is correct. In our approach, only the proof of the underlying hardware is required, because most instructions will map directly onto the underlying instructions and run on the physical CPU. Considering the hardware available off the shelf (only an MMU has been certified at EAL7), it was decided to restrict the instruction set required to run the hypervisor; no virtualization extension should be used. Such instructions are actually implemented as micro-instructions, that is, firmware programs whose proofs are not available (if they even exist) and hence are not included in the models of the processor.
Finally, we have introduced the concept of hyperblocks. Larger hyperblocks avoid context switches to the hypervisor, which reduces the overhead and increases the system's performance. The main goal of the hypervisor is to provide an isolated context for guest virtual machines. An isolated context contains memory and state (general purpose registers, configuration registers). To provide isolation, we need integrity and confidentiality. Integrity states that the data and state of that context can only be modified by a running instruction associated with this guest; such an instruction can also be run by the VMM, executing it on behalf of the guest. Confidentiality is the dual of integrity: while integrity addresses the write permission, confidentiality addresses the read permission.
CHAPTER 4 Contribution
The hypervisor acts as a bootloader: it is in charge of starting the board, setting up interrupts and memory protection, and running the guest kernel. Only one guest is supported at the moment, because the main aspect of this thesis is to evaluate the feasibility and overhead of such an approach.
Its second role is to schedule guest(s). Because some instructions do not raise faults in user-mode, the guest will run at the same level of privilege as the hypervisor. Thus, our main concern is to keep control on the execution of the guest code, even when it performs privileged operations. It is assumed that the guest OS will protect itself from userland code. As a consequence, only the code running in privileged mode is analyzed.
Basically, the hypervisor is divided into two parts: an instruction scanner, and an arbitration engine. The former is in charge of analysing raw data, decoding instructions and inferring behaviours. Using this information, the latter implements callbacks to decide whether the instruction should run unmodified. The hypervisor's implementation is detailed in chapter 5 (p 93). This chapter provides a high-level description of the hypervisor's design, and evaluates its performance.
Design
The hypervisor was conceived with platform independence in mind, but currently only the ARM architecture has been implemented. Two platforms were targeted: Versatile Express and Raspberry Pi (the latter is more advanced though). Tests have been performed on Qemu and on a real Raspberry Pi. The hypervisor and the guest both run in privileged mode. The hypervisor can be seen more as a filtering mechanism than an interposition one.
Our main concern is to keep control over the execution of the guest code, even when it performs privileged operations. Thus, no privileged guest code is executed if it has not previously been analysed by the hypervisor. This ensures that the guest cannot escape the hypervisor analysis. To achieve this, the hypervisor will scan the guest code to identify the next sensitive instruction. Sensitive instructions are instructions that read or write processor configuration registers (e.g. CPSR/SPSR) or coprocessors (e.g. MRS/MCR), that trap (e.g. SWI, SVC, SMC, HVC, WFE, WFI), or that access hardware devices. There are two ways to access the hardware: using coprocessors, or accessing memory-mapped registers. The first case is handled by the code analysis (presented in the following section), and the latter by the memory isolation primitives (such as the MMU).
When a sensitive instruction is detected, it is saved in the guest context, and replaced by a trap to the arbitration engine of the hypervisor. Then the guest context is restored, and the execution is restarted. The guest code runs natively on the CPU, and eventually the trap will fire and control will be given back to the hypervisor, in the arbitration engine. Given the exact context of the guest, it will decide whether to let the instruction execute, or to skip it. Some emulation can be performed to update the guest context. This mechanism is illustrated in Figure 4.1.
[Figure 4.1 (fragment): next_trap <- find_next_sensitive(...); ...; perform_arbitration(); unpatch(next_trap); done]
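A minimal C sketch of this loop, reconstructed from the description above and from the surviving fragment of Figure 4.1 (find_next_sensitive, perform_arbitration and unpatch appear in that fragment; patch, run_guest_natively, the guest_ctx type and all signatures are illustrative assumptions, not the actual prototype code):

    #include <stdint.h>

    struct guest_ctx;                                          /* guest registers, CPSR, saved instruction, ... */
    extern uint32_t *find_next_sensitive(uint32_t *pc);        /* scan forward for a sensitive instruction       */
    extern void patch(uint32_t *insn);                         /* save the word, overwrite it with a trap        */
    extern uint32_t *run_guest_natively(struct guest_ctx *g);  /* run innocuous code on the CPU, return trap PC  */
    extern void perform_arbitration(struct guest_ctx *g);      /* execute, skip or emulate the trapped instruction */
    extern void unpatch(uint32_t *insn);                       /* restore the original word                      */

    static void hypervise(struct guest_ctx *g, uint32_t *pc)
    {
        for (;;) {
            uint32_t *next_trap = find_next_sensitive(pc);  /* 1. locate the next sensitive instruction     */
            patch(next_trap);                               /* 2. replace it by a trap to the hypervisor    */
            pc = run_guest_natively(g);                     /* 3. the hyperblock runs directly on the CPU   */
            perform_arbitration(g);                         /* 4. decide the fate of the trapped instruction */
            unpatch(next_trap);                             /* 5. restore it and continue from the new PC   */
        }
    }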
Analysing guest code
This analysis is performed statically by matching instructions against patterns and sometimes dynamically, crafting an instruction which has the same side effects, such as testing condition flags.
Static analysis
This part of the analysis can be compared to the static analysis presented before. In our case, the concrete model would be the guest program (i.e a sequence of instructions), and the abstract model would be whether an instruction in the sequence is sensitive or not.
The hypervisor will match each instruction against a matching table in the same manner a disassembler would do. Because of the complex encoding of the ARM instruction set, a simple "switch-case" could not be done. The whole ARM, Thumb and Thumb32 instruction sets were encoded in Python tables. Those tables are used to generate C code. Some caveats had to be handled:
Mixing instructions and data The ARM instruction set allows mixing code and data inside a code section. This can lead to false positive detections, as illustrated in Fig 4.2.
Switching to Thumb, and back Most ARM processors have two running states: ARM and Thumb mode. Thumb is a different encoding scheme for instructions, providing a better code density. Switching from ARM to Thumb can be done in four ways:
Branching to an odd address: Since all ARM instructions align themselves on either a 32-bit or a 16-bit boundary, the LSB of the address is not used in the branch directly. However, if the LSB is 1 when branching from the ARM state, the processor switches to the Thumb state before it begins executing from the new address; if it is 0 when branching from the Thumb state, the processor switches back to the ARM state. This is done with explicit branches, but also with several operations on PC; Using an "exchange" instruction: Some instructions have the side effect of switching mode from ARM to Thumb and back. Such instructions are usually suffixed by X, for instance the BLX instruction. On exception: The same vector table is used for both Thumb-state and ARM-state exceptions. An initial step that switches to the ARM state is added to the exception handling procedure. This means that when an exception occurs, the processor automatically begins executing in the ARM state. Setting the SPSR.T bit: This is used to restore the Thumb state after an interrupt handling. An RFE or SUBS PC, LR, #imm8 will restore the CPSR from the SPSR. So if the latter was set in Thumb, the processor will switch to Thumb.
As a consequence, the hypervisor has to detect all instructions which break the control flow explicitly (e.g. B 0x12345678) or implicitly (e.g. SUBS PC, LR, #imm8). Some instructions cannot be matched statically, for instance register to register operations, or conditional execution. To address this issue, dynamic analysis is performed.
Dynamic analysis
[Figure 4.2 caption: This example shows that disassembling each instruction one after another can lead to false results. In this case, the first operand's value is a numeric value, but it can also be decoded as CPS. Replacing it by a trap to the hypervisor would change the program behaviour.]
Some cases cannot easily be handled by the static analysis. For instance, register-to-register operations would have to be interpreted and emulated in order to track the actual values of those registers. This is unsuitable because:
• It would require a lot of code (basically writing an emulator for the instruction set);
• This code would have to be proven correct;
• This would break the efficiency properties of Popek&Goldberg, stating that most of the instructions should be executed without hypervisor intervention.
To keep a hypervisor as small as possible, with the highest level of fidelity to the hardware, a dynamic approach was used. It is used to tackle several issues:
4.1.3.2.1 Conditional execution ARM supports conditional execution. The easiest way to determine whether a condition is fulfilled is to execute a dummy instruction which shares the same condition flags. That instruction could be a simple conditional move of a given value into a given register; reading this register's value gives the state of the flag. An example is given in Fig 4.3.
[Fig 4.3 (fragment): @ initial instruction: bne 0x00508000] The original instruction was a conditional branch. Given the guest context (general registers and CPSR), we can craft an instruction which will conditionally be executed. Knowing its side effect (assigning 1 to r0), we can detect whether the condition was verified or not.
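A C sketch of this trick, assuming the probe is executed with the guest's flags loaded into the CPSR (the encodings below are standard ARM, but execute_probe and the overall shape are illustrative assumptions rather than the prototype's code):

    #include <stdint.h>

    /* hypothetical helper: set r0 to 0, load the guest flags, execute the given
     * instruction word on the CPU, and return the resulting value of r0 */
    extern uint32_t execute_probe(uint32_t insn);

    static int guest_condition_holds(uint32_t guest_insn)
    {
        uint32_t probe = (guest_insn & 0xF0000000u)   /* keep the guest instruction's condition field       */
                       | 0x03A00001u;                 /* MOV r0, #1 (0xE3A00001) with its condition cleared */
        return execute_probe(probe) == 1;             /* r0 == 1 iff the condition was satisfied            */
    }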
Dynamic assignment
Assignment instructions can be used to perform a branch, for example: mov pc, r3 or add pc, pc, #4. There are some implicit rules (documented though) which need to be known. For instance, when executing an ARM instruction, PC reads as the address of the current instruction plus 8. But the instruction may also be much more complex, like ldrls pc, [pc, r2, lsl #2].
In order to preserve the highest fidelity to the underlying processor, every instruction which has PC as a destination register is handled using self-modifying code. Given a guest context, the destination register (PC) is replaced by another register (for instance r0), and the instruction is executed on the processor. Afterwards, we get the actual destination in r0, accurately.
The hypervisor is the recipient of interrupts. It must handle its own interrupt handlers, and sometimes forward an interrupt to the guest. Thus, the hypervisor must keep track of the interrupt vector. On ARM, there are two ways to set up an interrupt vector:
Writing at 0x00000000 or 0xFFFF0000: The ARM interrupt vector can be located either in low memory or in high memory. When a guest writes between this base address and base address + table size, it is setting the interrupt handlers. Setting VBAR: VBAR is the Vector Base Address Register. Modifying its value has the side effect of actually setting new interrupt handlers.
The hypervisor keeps track of the guest interrupt handler updates, but prevents the hardware from being updated. The hypervisor must always receive all the interrupts.
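A sketch of how the arbitration engine could absorb such writes (the names, the guest_state layout and the treatment of a guest VBAR value as just another recorded base are assumptions; the policy itself, record the guest's handlers but never touch the real vector, is the one described above):

    #include <stdint.h>

    #define VECTOR_LOW   0x00000000u
    #define VECTOR_HIGH  0xFFFF0000u
    #define VECTOR_WORDS 8u                 /* reset, undef, svc, pabt, dabt, reserved, irq, fiq */

    struct guest_state { uint32_t vector_base; uint32_t vector[VECTOR_WORDS]; };

    /* Called for a guest store of `value` at `addr`; returns 1 if the store was
     * absorbed (it targeted the guest's vector), 0 to let it go through. */
    static int track_vector_write(struct guest_state *g, uint32_t addr, uint32_t value)
    {
        uint32_t base = g->vector_base;     /* VECTOR_LOW, VECTOR_HIGH or the guest's last VBAR value */
        if (addr >= base && addr < base + 4u * VECTOR_WORDS) {
            g->vector[(addr - base) / 4u] = value;   /* remember what the guest put in this entry        */
            return 1;                                /* the real vector keeps pointing to the hypervisor */
        }
        return 0;
    }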
Handling interrupts
Because it controls the exception vector, the control flow is restored to the hypervisor whenever an exception or an interrupt arises. If the interrupt was targeted to the guest, the hypervisor wraps the guest interrupt-handlers with the analysis mechanism described in 4.1.2 (p 76). That way, even interrupt code is handled by the hypervisor, and no sensitive operation can be performed without prior validation by the hypervisor.
Reset
Reset is invoked on power up or on a warm reset. The exception model treats reset as a special form of exception. When a reset is received, the hypervisor performs the board initialization, sets up memory protection (to prevent the guest from accessing or configuring hardware), sets up a guest context, loads the guest at the specified address, and starts analysing the guest from its start address.
Tracking privilege changes
The trap-and-emulate approach is known to be expensive in terms of context switches from one privilege level to the other. Moreover, ARM processors only have two execution modes: privileged and unprivileged. Because some sensitive instructions are context dependent and do not trap in unprivileged mode (mrs r0, cpsr for instance), it was decided that both the guest and the hypervisor will run in privileged mode. This way, no tricks must be set up to fool the guest about its actual privilege level.
Although some instructions will not trap in unprivileged mode, they cannot have a significant impact on the system's setup, except by accessing specific memory regions. This part of the isolation of the hypervisor is handled by the MMU. Moreover, the guest OS is supposed to protect itself from userspace programs. The hypervisor only provides isolation, and is not supposed to provide additional security to the guest. Thus, unprivileged code (userspace code) is not analyzed. The hypervisor tracks privilege changes in the guest so that when user mode is entered, the hypervisor stops hypervising, and is rescheduled in privileged mode (via interrupt handlers). This way, usermode code runs directly on the CPU without performance degradation.
Metrics
Linux currently ships two hypervisors: Xen and lguest. They both require the guest OS to be modified, which is called paravirtualization.
Lguest is roughly 6,000 lines of code, including userspace code. [START_REF] Steinberg | Nova: A microhypervisor-based secure virtualization architecture[END_REF] states that the Xen hypervisor is 100,000 lines of code, 200,000 lines for the dom0 (the management virtual machine required to perform the administration operations), and some 130,000 lines for Qemu. In version 3.0.2 of Xen, the source tree is separated in ∼ 650 files containing 9,000 lines of assembly and 110,000 lines of C, whereas version 4.7.0 is composed of ∼ 1200 files, 7,000 lines of assembly, and 265,000 lines of C. Xen also requires some adaptation in the guest code. [START_REF] Barham | Xen and the art of virtualization[END_REF] mentions a total of 2995 lines to modify on Linux to support Xen. Nova [START_REF] Steinberg | Nova: A microhypervisor-based secure virtualization architecture[END_REF] is a micro-hypervisor composed of 9,000 lines of code, 7,000 lines of user-level code and 20,000 lines of code for the VMM.
In comparison, our hypervisor is composed of 4,400 lines plus 12,000 generated lines.
CPU description
We have used a pseudo-dedicated language to encode the instruction set. The chosen language was Python because it is easily readable, and has interesting built-in features such as map or filter on lists. Using these two functions made it easy to isolate instructions in the global instruction table.
Because we wanted to extract multiple pieces of information from the same instructions, we have used "behaviors". When an instruction is recognized, the information associated with it is bound to several behaviors. These behaviors provide insights for the arbitration engine. The behavior list is the following: Behavior_OddBranch: targets PC, needs a dynamic execution to resolve the address; Behavior_Link: will update LR;
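A sketch of what an entry of the generated C table could look like, combining mask/value matching with such behaviour flags (only Behavior_OddBranch and Behavior_Link are named in the text, and the mask/value pair is taken from the scanner table shown later in Fig. 5.4; the struct layout, the flag encoding and the flag chosen for the sample entry are assumptions):

    #include <stdint.h>

    #define BEHAVIOR_ODD_BRANCH (1u << 0)   /* targets PC, needs dynamic execution to resolve the address */
    #define BEHAVIOR_LINK       (1u << 1)   /* will update LR */

    struct isn_entry {
        uint32_t    mask;        /* relevant bits of the encoding */
        uint32_t    value;       /* expected value of those bits  */
        const char *name;        /* mnemonic                      */
        uint32_t    behaviors;   /* OR-ed BEHAVIOR_* flags        */
    };

    static const struct isn_entry generated_table[] = {
        { 0xfe000000u, 0xfa000000u, "BLX (immediate)", BEHAVIOR_LINK },
        /* ... roughly 12,000 generated lines ... */
    };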
Implementing the dynamic analysis mechanism was tricky, because of some instructions' side effects, but it is quite easy to isolate. Once written, these features did not need additional work.
In contrast, implementing the static analysis was much more troublesome. The ARM instruction set specification is lengthy and intricate. Exploiting this documentation is hard. First of all, we had to extract the binary encodings from printed documentation. Afterwards, an internal representation had to be found. That piece of work was complex enough for us to decide to write a generator. That generator was written in Python. It is composed of instruction descriptions (90%) and code processing this data to produce C code (the remaining 10%). These 2,300 lines of Python code (2,000 describing the processor, 300 to generate code) produce ∼ 12,000 lines of C code. This represents the hard part of this work. To gain sufficient confidence in the implementation, this part should be extracted from a formalized version of the ARM instruction set, or the evaluation candidate should provide a proof of correctness of that table. This is the real challenge to achieve a genuine, formally proven implementation of a hypervisor, because at the moment, even SeL4 (which is a reference in this domain) assumes correctness of hand-written assembly code, boot code, management of caches, and the hardware [START_REF] Klein | Comprehensive formal verification of an os microkernel[END_REF].
[Figure: benchmark configurations. The benchmarks are run on top of an operating system; on the right side, the underlying operating system is run on top of our hypervisor.]
Performance overview
Benchmarks
On top of an operating system
This configuration is the traditional case of benchmarks running on top of an operating system. In our case, the operating system is a homemade preemptive, multi-task operating system implementing a round-robin scheduler. Five tasks are created, one containing the benchmarks and four doing random (though homogeneous) work. At the beginning, one task per benchmark was created. Because the benchmarks do not all have the same execution time, some tasks terminated before others. As a side effect, the quantum associated with each task was modified: since executions terminate, there are fewer tasks to schedule. Hence, the time spent by the system in the scheduler was modified. Using one unique task for the benchmarks and infinite tasks to feed the scheduling queue solved this issue.
[Table: benchmark results]
Analysis
As described in 4.2.1 (p 86), these benchmarks do not perform operating-system-like instructions; in particular, they do not access hardware (except the UART, which is allowed) nor coprocessors. Because of their nature, these benchmarks are well suited for userspace execution. Where does the overhead come from? As explained in 4.2.3.1 (p 87), each task runs in userspace, and is preempted by a timer (set to 10 ms). Thus, at every tick, the hypervisor takes back control, and performs analysis on the system code responsible for scheduling. In order to quantify the overhead in kernel time, the same benchmarks are run in privileged mode (hence, under hypervision). The corresponding results are presented in the next section.
The NOP benchmark is equivalent to a userspace program in which the scheduler would be invoked before each instruction. A software interrupt is generated, which transfers control to the hypervisor (because the hypervisor handles all the interrupts). A new analysis of the guest's interrupt handler is performed. The guest handler simply performs a "return from interrupt". As a side effect, the processor is switched back to unprivileged mode. Hence, the analysis terminates. The overhead is important, because of:
1. the hypervisor interposes between the guest and the interrupts;
2. the cost induced by the context switches starting and stopping the hypervisor (hence backing up state);
3. the actual hypervision of the guest code.
The same reasons hold for the PUTS benchmark. In that case, more time is spent in kernel mode handling the busy-wait loop on the UART hardware, so (1) and (2) are small(er) compared to (3). Also, the overhead of analysing the guest code (third item) overlaps with the time spent by the kernel waiting for the hardware to be ready: under hypervision, fewer iterations are made waiting for the FIFO to be ready.
In the general case, the hypervisor cannot be compared to hardware-assisted products. However, in specialised applications applied to embedded systems, our approach has shown a reasonable overhead (10% average). Our proposal might be naive compared to the aggressive strategies used by modern hypervision techniques. Nevertheless the performance impact is not critical in light of those benchmarks. This is mainly due to the fact that only the privileged code is hypervised, not the applications.
Those results are good essentially because the MMU does its job. It ensures that userspace will behave correctly: it ensures a proper isolation between userspace and kernelspace, so that userspace cannot compromise the security of the kernel, nor the hypervisor's. This reinforces the need for a high level of hardware certification. Such hardware exists: in 2013, Samsung Electronics Co. got an EAL7 certification for a Memory Management Unit for a RISC microcontroller [111,112].
The worst benchmark result is PUTS, which has a 276% cost. In 2007, a report showed that a paravirtualized Linux running on top of Xen has an overhead of 310% in reception, and 350% in emission [START_REF] Santos | Xen network i/o performance analysis and opportunities for improvement[END_REF]. Also, we have measured the same overhead in the micro-benchmarks presented in the following section.
Hypervising cost - micro-benchmarks overview
Benchmarks
This section discusses the total cost of hypervision. As opposed to the previous section, which qualified the overhead on a system, this one provides an analysis on a bare-metal system. The whole system plus benchmarks will be evaluated under hypervision. In that case, the hypervisor basically does nothing, because no hypervision is needed for those benchmarks to work. Even though the benchmarks do not access hardware, their control flow is much larger. We expect an important overhead.
Results
[Table: micro-benchmark results]
Discussion
Those micro-benchmarks present much worse results than macro-benchmarks. The multiple context-switches between guest and hypervisor become hot-spots.
Full virtualization (as opposed to system virtualization from the previous section) is not realistic. The kernel code should be limited as much as possible. This reflects the micro-kernel design, which reduces the kernel code to the minimum in favor of userspace. That is the credo followed by Minix [START_REF] Tanenbaum | Operating Systems Design and Implementation[END_REF], and also by some embedded systems (Erika [114] in the automotive industry, Camille [START_REF] Deville | Smart card operating systems: Past, present and future[END_REF] for Smart Cards), but not by general purpose systems, where Linux is number one.
Hypervisors are candidates to provide security by isolating security vulnerabilities from bare-metal software such as microcontroller firmware. For this kind of software, no distinction is made between userspace and kernel-space. All the code is run in privileged mode. For such applications, our approach is not feasible: two orders of magnitude of performance degradation is prohibitive. Therefore, reducing that hypervision cost is advisable.
The analysis performed in this work is the simplest possible, in order to ease the certification work. A cache mechanism was introduced to avoid analysing the same instructions several times. If the targeted guest is a monolithic kernel or a bare-metal privileged application, it is necessary to extend the analysis. The context-switch cost is expressed as follows:
C = S * N
where C is the total cost, S the cost of one context switch, and N the number of context switches. Because N is by far the dominant factor, the number of context switches should be addressed first. The analyser would need to be smarter in the detection of sensitive instructions, or in the control flow tracking. That way, it would allow more code to be executed without context switches, hence increasing overall performance.
Conclusion
This chapter presents a new design for a "certifiable" hypervisor. For that purpose, several axes are presented:
Limit the size of the formal model: which implies reusing specifications from one layer to another (the hardware specification in both the compiler and the software layer);
Rely only on provable elements: this is why no virtualization features are used. We argue that using the MMU is fair, because an MMU has already been certified at EAL7 for SmartCards;
Reduce the codebase: and avoid writing specific code. Most of the job should be done on the CPU, even if it seems simpler to emulate the instruction.
With this design in mind, a proof of concept was written in C and assembly. Python code was used to generate C tables. The code base is quite small: less than 5,000 lines of code, excluding the generation part. Despite extensive tests, we cannot ensure that the Python description is error-free. But because the hypervisor uses a large table, the verification job can be done separately on the table (or generator) and on the hypervisor. Currently, the Python table does this job, but it is not sufficient. This should be done either by a formal specification provided by the manufacturer, or provided by the certification candidate.
In favorable cases, the performance overhead is decent (< 15%) and encourages a micro-kernel style architecture for embedded systems. This design is suitable because of the separation of responsibilities between the kernel (privileged code) and the actual tasks (like sensing, processing and communicating). There are two main axes to improve performance: either reduce the cost of a context switch (from guest to hypervisor), or reduce the number of context switches.
Security is often said to be a trade-off between usability and security: having a very secure system will likely be a burden for users (think of password patterns and frequent changes), whereas having an easy-to-use system will probably have fewer security mechanisms. The same kind of comparison can be made between performance improvements and certifiability. The underlying theory used to express formal properties is a limiting factor in this case. For instance, expressing concurrent accesses is a hard problem. The current implementation is (in my opinion) a fair balance between tricky mechanisms and understandability.
CHAPTER 5
In depth implementation details
You are absolutely deluded, if not stupid, if you think that a worldwide collection of software engineers who can't write operating systems or applications without security holes, can then turn around and suddenly write virtualization layers without security holes.
Theo de Raadt
General concepts about the ARM architecture and instruction set
This section describes technical details of the hypervisor. First, an overview of the ARM architecture is given. Next, the instructions encoding schemes are presented. Then, the instruction scanning layer of the hypervisor is detailed. Finally, the dynamic solutions for the issues presented in the previous chapter are explained.
The ARM architecture overview
The ARM core uses a RISC (Reduced Instruction Set Computing) architecture. While RISC emphasizes compiler complexity, a CISC (Complex Instruction Set Computing) architecture emphasizes hardware complexity. Each platform is composed of a processor and peripherals, which include coprocessors. Coprocessors are dedicated hardware mechanisms which perform specific tasks, such as virtual memory handling for example. Coprocessors are addressed using specific instructions (MCR and MRC) while more classical peripherals (such as a network card) are memory-mapped. The ARM processor uses a load-store architecture. Two kinds of instructions exist to transfer data in and out of the processor: load and store. There are no data processing instructions that directly manipulate data in memory. All data processing is carried out in registers. General purpose registers hold either data or an address, and the program status register contains information on the current execution state (such as processor mode, interrupt state, and condition flags). There are 16 general purpose registers (from r0 to r15), but some of them have a dedicated role:
• r13 (or sp) holds the stack pointer;
• r14 (or lr) holds the return address in a subroutine;
• r15 (or pc) holds the program counter.
Depending on the generation used, the ARM processor supports several processor modes, which are either privileged or non-privileged. Switching from one mode to the other exposes new registers (called banked registers). The processor keeps a safe copy of them, and will restore them upon returning to the previous mode. The number of banked registers depends on the target mode. Finally, it is important to recognize the difference between a processor's family name and the Instruction Set Architecture (ISA) version it implements. For example, the first Raspberry Pi includes a 700 MHz ARM1176JZF-S processor. This means that it belongs to the 11th family, and implements cache, an MMU with physical address tagging, Jazelle support (J flag), the TrustZone extension (Z flag), VFP instructions (F flag), and is synthesizable (S flag).
But according to [START_REF] Sloss | ARM System Developer's Guide: Designing and Optimizing System Software[END_REF], the ARM core is not a pure RISC architecture:
Variable cycle execution: not every instruction takes the same number of cycles to execute. In [START_REF] Sloss | ARM System Developer's Guide: Designing and Optimizing System Software[END_REF], appendix D is dedicated to the instruction cycle timings. For example, ALU operations take one cycle, whereas branch instructions take 3.
Accessing coprocessors takes between one and three cycles, plus a number of busy-wait cycles issued by the coprocessor. Inline barrel shifter: the barrel shifter is a hardware component that preprocesses one register before the instruction is executed. This leads to more complex instructions. Thumb instruction set: some ARM processors can execute in ARM mode, or in Thumb mode. Further details are provided in the following sections. Conditional execution: an instruction is only executed if a specific condition has been met. Enhanced instructions: some specific instructions were added to the standard ARM instruction set, such as DSP instructions, which support fast 16x16-bit multiply and saturation operations.
Despite the break with the traditional RISC architecture, those features make the ARM instruction set suitable for efficient embedded applications.
The ARM instruction set families
The ARM instruction set is composed of five instruction classes. These constructor-defined classes can be summarized as follows:
Register to register instructions: all the instructions that perform operations on general purpose registers, such as arithmetic operations, move operations, shift operations. Memory access instructions: instructions that load or store data from (resp. to) memory. Control-flow instructions: all the instructions whose role is to change the program counter. Trapping instructions: which are used (mainly) to perform system calls, and lead to (basically) switching to privileged mode. Hardware configuration instructions: all the instructions which access (reading or writing) hardware configuration, coprocessors, or status registers.
In practice, some instructions can exhibit unexpected behaviours; for example, arithmetic instructions can be used to produce a branch if the destination register is PC.
The ARM instruction set encoding
The ARM instruction set is actually separated into several sub-instruction sets. The following subsections provide an overview of those features. The first two subsections (5.1.3.1 and 5.1.3.2) are relevant and handled by the hypervisor, whereas the other ones are not: they should not be required in kernel code, and the hypervisor could trap on them to prevent their execution.
ARM
The ARM instruction set encodes instructions on 32 bits, as illustrated in Fig. 5.1. The ARM instruction set encoding seems regular and straightforward, thus easy to analyze. But the reality is different. The "Opcode" field (illustrated in Fig. 5.1) is encoded on four bits. Hence, it can store up to 16 distinct values. But an instruction belonging to the "branch instructions" class aforementioned, such as branch with link and change CPU mode (BLX), has an opcode value of 1 (%0001), whereas the immediate branch (B) has an opcode value of 10 (%1010). This misuse of the opcode qualifier is also present among different classes: the opcode 1 (%0001) is also used to encode load-store class instructions, such as LDRD. There also exist unconditional instructions (such as MCR2) that do not have a proper condition value and use 15 (%1111), which is different from the execute-always condition flag 14 (%1110). These cases are illustrated in Fig. 5.2; the encoding is not as regular as one would think. Hence, using a simple "switch/case" on the opcode value is no longer suitable.
[Figure 5.2: examples of irregular encodings]
BLX:  cond | 0 0 0 1 0 0 1 0 | 1 1 1 1 | 1 1 1 1 | 1 1 1 1 | 0 0 1 1 | Rm
LDRD: cond | 0 0 0 1 U 0 W 0 | Rn | Rd | 0 0 0 0 | 1 1 0 1 | Rm
MCR2: 1 1 1 1 | 1 1 1 0 | op1 | 0 | Cn | Cd | copro | op2 | 1 | Cm
Thumb -Thumb32 and ThumbEE
ARMv4T and later define a 16-bit instruction set called Thumb. Most of the functionality of the 32-bit ARM instruction set is available, but some operations require more instructions. The 16-bit Thumb instruction set provides better code density, at the expense of performance.
ARMv6T2 introduces a major enhancement of the 16-bit Thumb instruction set by providing 32-bit Thumb instructions. The 32-bit and 16-bit Thumb instructions together provide almost exactly the same functionality as the ARM instruction set. The 32-bit Thumb instruction set achieves the high performance of ARM code and better code density like 16-bit Thumb code.
ARMv7 defines the Thumb Execution Environment (ThumbEE). The ThumbEE instruction set is based on Thumb, with some changes and additions to make it a better target for dynamically generated code, that is, code compiled on the device either shortly before or during execution.
Whether you start in ARM or Thumb mode depends on your board. The CPU mode can be inferred by reading the CPSR.J and CPSR.T flags; the correspondence is given in Fig. 5.3. There is a (funny) chicken-and-egg problem here: how does the compiler know how to encode the instructions that read the CPU mode, without knowing the CPU state, hence which encoding to choose?
J T  Instruction set state
0 0  ARM
0 1  Thumb
1 0  Jazelle
1 1  ThumbEE
The tricky part of the Thumb instruction set is that it has a variable instruction size. Those instructions can either be 16 or 32 bits. Also, even if they are 32 bits in size, Thumb instructions will be 16-bit aligned. The CPU does not detect automatically whether the data pointed to by PC is ARM or Thumb code. Switches from one mode to the other are tracked in CPSR.T, but one cannot change this bit in the CPSR to switch mode. Basically, almost all branches, function calls, returns, exception entries or returns can switch mode.
Branching to an odd address: Since all ARM instructions align themselves on either a 32-bit or a 16-bit boundary, the LSB of the address is not used in the branch directly. However, if the LSB is 1 when branching from the ARM state, the processor switches to the Thumb state before it begins executing from the new address; if it is 0 when branching from the Thumb state, the processor switches back to the ARM state. This is done with explicit branches, but also with several operations on PC; Using an "exchange" instruction: Some instructions have the side effect of switching mode from ARM to Thumb and back. Those instructions are usually suffixed by X, for instance the BLX instruction. On exception: The same vector table is used for both Thumb-state and ARM-state exceptions. An initial step that switches to the ARM state is added to the exception handling procedure. This means that when an exception occurs, the processor automatically begins executing in the ARM state. Setting the SPSR.T bit: This is used to restore the Thumb state after an interrupt handling. An RFE or SUBS PC, LR, #imm8 will restore the CPSR from the SPSR. So if the latter was set in Thumb, the processor will switch to Thumb.
Neon and VFP
NEON is a SIMD accelerator that is part of the ARM core. It means that during the execution of one instruction, the same operation occurs on several data elements in parallel.
VFP stands for Vector Floating Point. It often comes with NEON on ARM processors. As its name stands, VFP is floating point capable, whereas NEON works on integers. From now on, we consider NEON and VFP as a single computation mode. There are several ways to use such features:
Instructing the compiler: directs the compiler to auto-vectorize, and produce NEON code. Using NEON intrinsics: these compiler macros provide low-level access to NEON operations. Writing assembly code: provides de facto access to NEON mnemonics. This unit comes up disabled at power-on reset. A user willing to use that unit has to manually start it using the coprocessor dialect. This computation mode is not handled by the hypervisor. This limitation can easily be explained: such instructions are not required for an operating system's kernel. Moreover, we could argue that we would detect the probing / enabling of NEON features thanks to the coprocessor accesses, which are caught by the hypervisor.
Analyser's implementation: the instruction scanner
The instruction encodings have been extracted using "human OCR" from the ARM reference documentation [START_REF] Arm | ARM Architecture Reference Manual. ARMv7-A and ARMv7-R edition[END_REF]. A proper specification would have been much faster, easier and less error-prone (because OCR may produce false positives, but as humans, we are even more subject to misreading or miswriting). Figure 5.4 illustrates a fragment of the generated C table. This mechanism is similar to a routing algorithm: the outgoing route is determined by looking up entries in the routing table(s), in which each destination is associated with a netmask.
Mask       Value      Instruction
fff0000    c4f0000    MCRR
fe000000   fa000000   BLX (immediate)
fff8000    8bd8000    POP (ARM)
f000000    a000000    B
fe00010    800000     ADD (register, ARM)
Figure 5.4: Binary-encoded instruction is matched against the mask, expecting value.
Example: decoding the instruction e0823003
Consider the following binary-encoded instruction: e0823003. How does the hypervisor perform the instruction decoding? The relevant bits of the instruction are extracted using a bitwise AND with the mask, then compared to their expected value. If the computed value matches the stored one, then it is a match, and the line entry is returned. Otherwise, the same computation is performed on the next entry until a match is found, or the last entry is reached.
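A C sketch of this linear lookup, using the entries of Fig. 5.4 (the leading zeroes of the masks and values are assumed, and the struct and function names are illustrative, not the actual generated code):

    #include <stddef.h>
    #include <stdint.h>

    struct isn_entry { uint32_t mask, value; const char *name; };

    static const struct isn_entry table[] = {
        { 0x0fff0000u, 0x0c4f0000u, "MCRR" },
        { 0xfe000000u, 0xfa000000u, "BLX (immediate)" },
        { 0x0fff8000u, 0x08bd8000u, "POP (ARM)" },
        { 0x0f000000u, 0x0a000000u, "B" },
        { 0x0fe00010u, 0x00800000u, "ADD (register, ARM)" },
    };

    static const struct isn_entry *decode(uint32_t insn)
    {
        for (size_t i = 0; i < sizeof table / sizeof table[0]; i++)
            if ((insn & table[i].mask) == table[i].value)   /* bitwise AND, then compare */
                return &table[i];
        return NULL;                                        /* no entry matched */
    }

    /* decode(0xe0823003) falls through to the last entry:
     * 0xe0823003 & 0x0fe00010 == 0x00800000, i.e. an ADD (register) instruction. */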
Scanner metrics
Because a matching table is used, we provide some metrics to compare the size of the tables with regard to the size of the instruction set. The ARM documentation includes implementation-defined and unpredictable instructions. Those categories are gathered into a larger "undefined behaviour" one. The following table (5.1) presents those numbers with and without the undefined behaviour (UB) entries:
Without the undefined behaviour instructions, there are about three times more entries in the table than existing instructions. The actual table includes those unpredictable behaviours. In that case, the table is even larger: about seven times more entries for the ARM encoding, and almost five times for the Thumb encoding. Because the table is browsed linearly, it should be ordered by most frequent instructions. But this has not been done, because there may exist relations between several lines in the table: for instance, instructions having PC as a target register are matched first. Hence, this order should be preserved. With the encoding scheme used, there is no easy way to implement this sort. Instead, a cache was used in the analyser, which holds the corresponding line entry. At the beginning, no entries are filled. Upon execution, the cache contains the last destination address known for a source address. The cache entries are cleaned when the memory is written. We have measured the cache hit rate at more than 90% in our benchmarks.
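A C sketch of such a per-address cache (the sizing, the hashing and the names are assumptions; only the behaviour, remembering the entry found for an address and flushing on writes, is taken from the text):

    #include <stdint.h>
    #include <string.h>

    struct isn_entry;                                       /* the matching-table entry type     */
    extern const struct isn_entry *match(uint32_t insn);    /* hypothetical linear table lookup  */

    #define CACHE_SLOTS 256u
    static struct { uint32_t addr; const struct isn_entry *entry; } cache[CACHE_SLOTS];

    static const struct isn_entry *cached_match(uint32_t addr, uint32_t insn)
    {
        uint32_t slot = (addr >> 2) % CACHE_SLOTS;          /* instructions are word aligned      */
        if (cache[slot].entry && cache[slot].addr == addr)  /* hit: >90% in the benchmarks        */
            return cache[slot].entry;
        cache[slot].addr  = addr;                           /* miss: do the expensive lookup once */
        cache[slot].entry = match(insn);
        return cache[slot].entry;
    }

    static void cache_flush(void)                           /* on write accesses to guest code    */
    {
        memset(cache, 0, sizeof cache);
    }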
The next table (5.2) associates the instruction families previously described with the number of instructions in each one.
Care must be taken to flush the cache entries when a write access is performed on the concerned memory region.
Arbitration's implementation
This section provides further details on the dynamic part of the hypervisor. After statically matching an instruction, a trap is inserted in the guest code to switch back to the hypervisor. Afterward, a dynamic arbitration is performed, to decide whether to modify the current instruction or not.
Trap to the hypervisor: the callback mechanism
To switch back to the hypervisor, a trap mechanism must be implemented. The ARM architecture provides several ways to do this: Emit a hypercall (HVC): in a processor that implements the Virtualization Extensions, the HVC instruction causes a Hypervisor Call exception. This means that the processor mode changes to Hyp, the CPSR is saved to the SPSR_hyp, and execution branches to the HVC vector. Emit a supervisor call (SVC): the SVC instruction causes the SVC exception. Emit a breakpoint (BKPT): the BKPT instruction causes the processor to enter Debug state.
Craft an invalid instruction: this technique was used in the earliest version of KVM
(presented in [START_REF] Dall | Kvm for arm[END_REF]). Branching to the hypervisor (B): this instruction causes a branch to the provided address. Loading an address into PC: same as branching, but not limited by the branch range.
Because the implementation requires neither the virtualization nor the debug extensions, HVC and BKPT cannot be used. SVC would cause an exception, and save r13 and r14. The same holds when issuing an invalid instruction (which traps to the "undefined" entry in the interrupt vector). Because both the guest kernel and the hypervisor run in privileged mode, the current implementation uses a simple branch instruction to perform the hypercall. Since the hypervisor is further than 32 MB (the maximum offset encodable in the B instruction) from the guest (see Fig. 5.8), the implementation uses the last item: loading a value into PC.
The trap mechanism (illustrated in Fig. 5.7) is implemented as follows (implementation in Fig. 5
Synthesis
This thesis examines the security guarantees that can be provided by certification. The most frequently used architectures, such as ARM and x86, are not natively virtualizable. That is why current solutions tend to rely on hardware mechanisms which make these architectures virtualizable, by providing a dedicated privilege level for the hypervisor. This hypervisor will be invoked whenever the hardware requires it. This places a heavy responsibility on the proper functioning of these newly introduced hardware features, assuming developers have correctly apprehended the extensive documentation, and the specific errata, which may or may not be freely available.
Based on the principle that this dependency on hardware features is too strong, a new design was proposed to achieve virtualization with minimal hardware requirements. It is believed that a more precise formalisation can be done by expressing all the instructions' specifications and behaviour. The prototype is split in two layers:
• The first one analyses each guest's instruction keeping track of the execution mode.
Either that instruction is benign, in which case the same analysis will be performed on the rest of the code, or it is judged to be sensitive, and a trap will be set up so that the second layer of the hypervisor will be invoked in lieu of that instruction being executed. • The second one consists of deciding whether to execute an instruction or not, given the actual execution context of a guest. This allows the hypervisor to put itself in interposition with the underlying hardware, preventing the guest from doing uncontrolled operations.
In order to provide decent performance, the hypervisor only hypervises privileged code. Assuming a correct protection of memory-mapped hardware, hardware can only be reconfigured in privileged mode. Letting unprivileged code execute unsupervised therefore does not raise security issues for the guest kernel, and hence for the hypervisor. Under those conditions, the measured overhead on actual hardware was around 10% in most cases, which is reasonable considering the simple design and the limited level of optimization applied to the prototype. This approach looks reasonable because most of the intensive tasks could be performed in user-space, by delegating hardware access to kernel tasks, as general purpose systems do.
Another evaluation was performed on the whole system; the benchmarks were running in privileged mode. This use-case corresponds to more specific systems, such as firmware, in which no privilege separation is achieved. This scenario is much less effective: we have a two orders of magnitude overhead, which is not practicable. Considering that the bottleneck is the frequent context switching between guest and hypervisor, the architecture could be improved in order to gain performance. This can be achieved in two ways: reducing the cost or the frequency of such context switches. And because the frequency is the bigger factor, that seems to be the first point to tackle.
To conclude, this thesis demonstrates that writing a tiny hypervisor is achievable. Having the smallest code should reduce the code complexity, which eases the certification process, at the expense of performance. Instead of optimizing the code of the hypervisor, we decided to rely on the kernel/user-space splitting principle, which tends to offload the kernel from the most computationally intensive tasks. Security (integrity and confidentiality) can be enforced by a certified MMU. This work has highlighted the challenges of virtualizing an ARM processor, and resulted in two academic publications [START_REF] François Serman | Achieving virtualization trustworthiness using software mechanisms[END_REF][START_REF] François Serman | Hypervision logiciel et défiance matériel[END_REF] and several communication materials (lab presentations and a poster session).
Future work
This section presents the future work. Three perspectives are described here: increasing the performance, discussing the expectations and foreseeable issues of the formal verification, and finally discussing how the security of the hypervisor should be challenged.
Increasing the hypervisor performance
To achieve that, I claim that hyperblocks should be as large as possible: currently they are a subset of basic blocks, but they should be a superset. The following sub-sections present some insights to accomplish that.
Reducing the context-switch cost
6.2.2.1 Reducing the cost of a single context-switch
Currently, when a guest is switched, all its registers, its CPSR and its SPSR are saved. This makes 34 memory accesses instead of a single instruction. Assuming every instruction has the same cycle consumption, that makes a 34x performance overhead.
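For reference, a plausible layout of the saved context (the exact layout used by the prototype is an assumption) is:

    #include <stdint.h>

    struct guest_context {
        uint32_t r[16];   /* r0-r12, sp (r13), lr (r14), pc (r15) */
        uint32_t cpsr;    /* current program status register      */
        uint32_t spsr;    /* saved program status register        */
    };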
On ARM, FIQ (Fast IRQ) mode banks 7 registers plus the SPSR, while IRQ mode only banks 2 registers plus the SPSR. Adding constraints on the registers used by the hypervisor could let us reduce the number of registers to save, thereby reducing the context-switch cost.
Another solution would be to use an alternative interrupt mechanism, such as SWI, or crafting an undefined instruction. This would produce an exception, caught by the hypervisor. The hardware would have saved the guest context, and the hypervisor would just have to check whether that exception is an interrupt or a call to the hypervisor.
Finally, some overhead might be expected on the MMU. Currently, no operation on the MMU is performed, because at this stage of development, the guest is assumed benevolent. Otherwise more overhead should be expected due to the TLB flush and MMU updates. But some MMUs provide an execute only permission, which would be the best case scenario.
Reducing the number of context-switches
The second axis to reduce the overhead of the hypervision mechanism is to avoid trapping. For instance, in the following code, the loop performs 43 iterations. The hypervisor will not let the branch take place, instead it will set the trap back to the hypervisor in order to track the next executed instruction. In that case, the body of the loop is harmless. The hypervisor could simply skip the loop, and perform the analysis after 0x30.
Another idea would be to perform some computation on the guest binary offline, either off-board or before starting the guest itself. Basically, that would consist in feeding the cache with initial data. Of course, great care must be taken regarding self-modifying code: the guest code should be marked read-only, and a memory fault on write should invalidate the associated cache entries.
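A minimal sketch of that invalidation rule, assuming a hypothetical cache of analysed address ranges (the structure and names are illustrative only), could look like this:

#include <stdint.h>
#include <stdbool.h>

/* Hypothetical cache entry: a range of guest code that has already
 * been analysed, with the address of the next sensitive instruction
 * found in it. */
struct scan_cache_entry {
    uint32_t start;          /* first analysed address          */
    uint32_t end;            /* one past the last analysed byte */
    uint32_t next_sensitive; /* result of the analysis          */
    bool     valid;
};

#define SCAN_CACHE_SIZE 64
static struct scan_cache_entry scan_cache[SCAN_CACHE_SIZE];

/* Called from the write-fault handler when the guest writes into a
 * page that was marked read-only because it contains analysed code:
 * any cached result overlapping the written word becomes stale. */
void scan_cache_invalidate(uint32_t fault_addr)
{
    for (int i = 0; i < SCAN_CACHE_SIZE; i++) {
        if (scan_cache[i].valid &&
            fault_addr >= scan_cache[i].start &&
            fault_addr <  scan_cache[i].end)
            scan_cache[i].valid = false;
    }
}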
Expectations and foreseeable issues with a formal verification
There are two main components in the hypervisor: the instruction scanner and the hypervisor itself. Given an address, the former determines whether the instruction located there is sensitive or not. Given a context (including the trap's location), the latter allows or forbids the instruction from being executed (with or without modification).
The formal verification establishes a link between a model and an implementation. The model verifies some properties; the role of the verification process is to prove that the same properties also hold in the implementation. Those properties can be related to security or correctness. Hence, we expect two models, one for each main component.
An exhaustive test was used to validate the instruction matcher. The principle was to generate all possible instructions from 0 to 2^32 - 1, analyse each of them, and compare the result of the match with the expected one. The obvious problem with this approach is that I did not use another matching table at that time. Another "database" should be used, such as the one from the Unicorn Engine1. This framework is based on Qemu and provides bindings in several languages such as Python, which makes it really handy for performing code analysis or experimenting on small pieces of code. It should be noted that this framework did not exist at that time.
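A sketch of such an exhaustive comparison is given below; the two classification functions are assumed hooks (the second one standing for whatever independent oracle is used, for instance a decoder backed by Unicorn/Qemu), so this is only the enumeration skeleton, not the thesis's actual test harness:

#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

/* Classification produced by the hypervisor's instruction scanner
 * (assumed to exist elsewhere). */
bool scanner_is_sensitive(uint32_t instr);

/* Placeholder for an independent oracle; not part of the hypervisor. */
bool reference_is_sensitive(uint32_t instr);

/* Enumerate every 32-bit encoding and report disagreements. */
int main(void)
{
    uint64_t mismatches = 0;

    for (uint64_t word = 0; word <= 0xFFFFFFFFull; word++) {
        uint32_t instr = (uint32_t)word;
        if (scanner_is_sensitive(instr) != reference_is_sensitive(instr)) {
            mismatches++;
            printf("mismatch on %08x\n", instr);
        }
    }
    printf("%llu mismatches\n", (unsigned long long)mismatches);
    return mismatches != 0;
}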
Verifying the hypervisor
After an instruction is marked as sensitive, it is replaced by a trap to the hypervisor. The main issue to verify for this component is the following: if an instruction is allowed to run (be it modified or not), its behaviour will not break any other assumption. That is, no behaviour should arise at runtime that was not expected during the analysis. This implies a precision-related verification rather than a security one. Also, because the hypervisor does not scan the guest while it runs in user mode, the verification must ensure that the sandbox remains robust. Hence, properties should be verified against an execution model, which would ensure that all the cases that could be used by the guest to escape the sandbox are properly handled.
Security assessment
Security is often described as the CIA triad: Confidentiality, Integrity and Availability. Confidentiality basically states that private data should remain private; integrity is concerned with the non-alteration of data, while availability ensures that a service will always be responsive.
Designing security
Because this hypervisor is designed as a proxy between a single guest operating system and the underlying hardware, only those two subjects are to be considered. Hence, the study is not concerned with non-interference between multiple guests. The security assessment will focus on the guest being the attacker and the hypervisor being the target. The attack scenario is a malicious guest trying to escape its sandbox or to outsmart the hypervisor. No hardware-based attacks are considered.
Because confidentiality and integrity are dual, they rely on the same principle. Enforcing confidentiality consists of preventing the guest from reading the hypervisor's memory, while enforcing integrity consists of preventing the guest from writing the hypervisor's memory. The sealing of the hypervisor relies on a hardware mechanism: an MMU when available, but an MPU could also be considered. A minimal part of the hypervisor should be mapped in the guest address space: this is required to perform the callback on sensitive instructions. This piece of code should be just sufficient to switch into the hypervisor context and restore the appropriate mappings to make the rest of the code actually reachable.
On the other hand, availability consists in preventing the hypervisor from crashing or being subverted. This implies two things: from the analyser's point of view, if an instruction is marked innocuous, then no sensitive behaviour must be achievable through it; from the hypervisor's point of view, the composition of innocuous instructions (i.e. inside a hyperblock) must not exhibit any sensitive behaviour.
Also, care must be taken when handling data from the guest, which might have been maliciously crafted to subvert the hypervisor. For example, memory locations, coprocessor numbers or operations should be validated.
Challenging security
Because the guest runs at the same level of privilege as the hypervisor, an attacker should not expect to increase its privileges. Instead, it would rather try to escape the hypervisor. Several scenarios can be imagined:
• Abusing the hardware configuration: if a guest manages to access some hardware without the hypervisor's consent, it may obtain additional privileges. An actual example in Xen is XSA-182 (CVE-2016-6258), in which a malicious guest can leverage a fast path in the MMU validation code and escalate its privileges to those of the host.
• Abusing the hypervisor's algorithm: the hypervisor traps on branches and on sensitive instructions. If a guest manages to craft a sequence of instructions that seems legitimate to the hypervisor, it can make the hypervisor lose track. Another actual example from Xen is XSA-106 (CVE-2014-7156), in which the emulator misses some privilege checks in the emulation of software interrupts.
• Abusing race conditions: when the hypervisor performs an analysis, it assumes that the memory of the guest will remain the same (otherwise, a trap should arise). If the guest manages to exploit a race condition, or to modify its state behind the hypervisor's back, the latter may be fooled. For instance, if a guest branches to an address A which contains an innocuous instruction, the hypervisor will start an analysis from A and place a trap further away. When the context is given back to the guest, if it manages to change the instruction at address A, it can escape the hypervisor. Such vulnerabilities are also common: for example, XSA-155 (CVE-2015-8550) exploits a double fetch in a shared memory region and abuses it by changing the content of the memory between those two fetches.
• Exploiting traditional vulnerabilities: such as buffer overflows, unsanitized user inputs and so on. This family is not specific to hypervisor security, hence no further details are provided here.
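The double-fetch case mentioned above follows a classic pattern; a condensed, self-contained sketch (not taken from Xen or from this hypervisor, names are illustrative) is:

#include <stdint.h>
#include <string.h>

/* Hypothetical buffer shared with the guest; 'volatile' models the
 * fact that the guest may change 'len' at any time. */
struct request {
    volatile uint32_t len;
    uint8_t payload[64];
};

/* VULNERABLE: the length is fetched twice.  The guest can pass the
 * check with a small value, then enlarge 'len' before the copy and
 * overflow 'local'. */
void handle_request_vulnerable(struct request *req)
{
    uint8_t local[64];
    if (req->len > sizeof(local))            /* first fetch  */
        return;
    memcpy(local, req->payload, req->len);   /* second fetch */
    (void)local;
}

/* SAFER: fetch once into a private copy, validate, then use only the
 * private copy. */
void handle_request_fixed(struct request *req)
{
    uint8_t local[64];
    uint32_t len = req->len;                 /* single fetch */
    if (len > sizeof(local))
        return;
    memcpy(local, req->payload, len);
    (void)local;
}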
To leverage those bugs, one could reverse engineer the hypervisor, or simply have a look at the code. But the compiler may have introduced some changes between the C code and the binary machine code. Hence, for specific details (such as the double fetch previously described) both approaches can be considered. Finally, fuzzing seems suitable for such scenarios. Altering genuine guests could help detect corner cases that are not properly handled by the hypervisor. In practice, such an approach is very effective. Recently, lcamtuf's AFL (American Fuzzy Lop) was modified to support full-system fuzzing using Qemu2. This project has already found several bugs: CVE-2016-4997 and CVE-2016-4998 in the Linux kernel setsockopt implementation.
Personal feedback
For this thesis to succeed, two dual objectives had to be handled. The first one was that I was funded by a company, whose objectives may not be the same as those of a public research institute. The cooperation was easy, but I had to put some distance between the research part and the company in order to avoid interference, especially in publication material. It was nice to be autonomous, but at the same time frustrating not to be part of the engineering tasks. The second objective was to reconcile two different worlds: formal methods and low-level programming. In the academic world, it seems hard for people from different communities to work together, because they do not speak the same language and do not have the same objectives. At Prove&Run, everybody wants to make it work in order to have an actual product, so things worked really well. So in one case, the need for an "immediate application", driven by the economy, is beneficial because it makes people work together. On the other hand, it creates issues when sharing information, because of confidential material.
After quite some time spent experimenting on operating systems and bare-metal programs for x86, I was given a slight variation of my objectives: work on an ARM-based hypervisor, without virtualization extensions. It must be said that very little documentation was available on the subject, but ARM being a Reduced Instruction Set Computer with approximately 200 instructions, it should not have been too hard to decode each instruction. That assumption was wrong, or at least it was not as easy as it should be. In older ARM encoding schemes, the first bits were dedicated to the condition encoding, the following ones to the instruction class (such as arithmetic operations, memory accesses and so on), and the last ones to the operands. That is no longer the case, not in the general case anyway. There were a lot of particularities which forced me to abandon my switch-case in favour of a matching instruction table. All the instructions had to be encoded at once, rather than encoding only the ones assumed to be sensitive.
In the beginning, my ambition was to boot a Linux kernel on the VMM. Linux is a nice piece of (complex) software. The early stage of the kernel decompresses itself to another memory location, jumps to that location, and starts messing with the MMU. Moreover, at that point I discovered that Thumb code was actually used. This made me discover the If-Then instructions, which can conditionally execute up to four following instructions, each predicated on the condition being either true or false. After a lot of time spent on those issues, the kernel could decompress, but it took approximately 15 minutes on top of the hypervisor, against about one second natively on the CPU. It was decided that a smaller target should be considered. Nevertheless, this gave us confidence in our matching table: to dynamically verify the correctness of the operations performed on the guest context, I wrote a concurrent debugging platform which executes, side by side, Linux on top of Qemu and Linux on top of the hypervisor (itself on top of Qemu), both driven by gdb. The native case was stepped instruction by instruction, whereas the virtualized one was stepped until the next guest instruction, in order to let the hypervisor do its work in between. It helped us (Pierre, an intern, and me) solve numerous bugs in the VMM and the match table, and validate the proper handling of IT blocks. Afterwards, work was done on FreeRTOS. First conclusion: do not rush into something too large; split the work into realistic steps. Maybe a proof of concept on MSP430 or MIPS with a tiny operating system would have been a better starting point. Second conclusion: x86 is said to be bloated; from my point of view, so are ARMv6 and ARMv7. It seems that ARMv8 tends to come back to the roots and simplify some design elements. I hope it will get better acceptance than the Intel Itanium architecture, which had the same goals. I do have regrets, though. I used to find the systems community hard to please: you need a good idea, implement it, get benefits from the published results, and try to get accepted by a community which is English-driven and where (often) most accepted papers are American. In their results, they report a couple of years of work with several active contributors. How are (let's say it) French PhD students, who are supposed to get a diploma within three years, supposed to achieve a comparable amount of work?
To conclude on a more positive note, this thesis gave me the opportunity to gain expertise in low-level programming, which was a daily challenge. I had to deal with a wide spectrum of software to get things done, which was very educational.
Figure 2.2: Reading the /etc/passwd file from the command line.
Figure 2.3: Call trace of sys_read inside the Linux kernel.
Figure 2.5: Type 1 hypervisors (left) are bare-metal software with direct access to the hardware. In contrast, Type 2 hypervisors (right) rely on an existing operating system.
Figure 2.6: Reading CPSR into R0 in user mode won't trap.
Some of the most important items of the 270xx series are:
• ISO/IEC 27000 Information security management systems - Overview and vocabulary
• ISO/IEC 27001 Information technology - Security techniques - Information security management systems requirements
• ISO/IEC 27002 Code of practice for information security management
• ISO/IEC 27003 Information security management system implementation guidance
• ISO/IEC 27004 Information security management - Measurement
• ISO/IEC 27005 Information security risk management
• ISO/IEC 27006 Requirements for bodies providing audit and certification of information security management systems
• ISO/IEC 27007 Guidelines for information security management systems auditing (focused on the management system)
• ISO/IEC TR 27008 Guidance for auditors on ISMS controls (focused on the information security controls)
• ISO/IEC 27010 Information security management for inter-sector and inter-organizational communications
• ISO/IEC 27011 Information security management guidelines for telecommunications organizations based on ISO/IEC 27002
• ISO/IEC 27013 Guideline on the integrated implementation of ISO/IEC 27001 and ISO/IEC 20000-1
• ISO/IEC 27014 Information security governance.[9] Mahncke assessed this standard in the context of Australian e-health.[10]
• ISO/IEC TR 27015 Information security management guidelines for financial services
• ISO/IEC 27017 Code of practice for information security controls based on ISO/IEC 27002 for cloud services
• ISO/IEC 27018 Code of practice for protection of personally identifiable information (PII) in public clouds acting as PII processors
• ISO/IEC 27031 Guidelines for information and communication technology readiness for business continuity
• ISO/IEC 27032 Guideline for cybersecurity
• ISO/IEC 27033-1 Network security - Part 1: Overview and concepts
• ISO/IEC 27033-2 Network security - Part 2: Guidelines for the design and implementation of network security
• ISO/IEC 27033-3 Network security - Part 3: Reference networking scenarios - Threats, design techniques and control issues
• ISO/IEC 27033-5 Network security - Part 5: Securing communications across networks using Virtual Private Networks (VPNs)
• ISO/IEC 27034 Application security - Part 1: Guideline for application security
• ISO/IEC 27035 Information security incident management
• ISO/IEC 27036 Information security for supplier relationships
• ISO/IEC 27037 Guidelines for identification, collection, acquisition and preservation of digital evidence
• ISO 27799 Information security management in health using ISO/IEC 27002. The purpose of ISO 27799 is to provide guidance to health organizations and other holders of personal health information on how to protect such information via implementation of ISO/IEC 27002.
Figure 3.1: Two approaches are possible: (i) write the code, write a model, prove the correctness of the model, and prove the equivalence between the code and the model; (ii) write and prove a model, then extract the code from the model with a verified tool.
3.4.2 Assuming a correct VMM source code, the compiler must guarantee that the generated program exhibits the same semantics as the source code. When the VMM is correctly compiled, a binary-form program is produced. This program is loaded in memory and executed by the CPU. Each instruction from that binary should be correctly decoded by the hardware, and no behaviour differing from the specification should arise.

Factorizing formalization: merging the ISA and the hardware specification

Having three layers of models / development / proof / validation is tedious. We claim that the VMM and the hardware should share a large part of the specification. The following schema illustrates the link established by each component: the software, the compiler and the hardware. At the beginning, each layer is independent and no specifications are shared.
Figure 4.1: Main loop performed by the analyser on guest code.
Figure 4.2: At the top of this figure is a piece of C code which performs an arithmetic operation. The example shows that disassembling each instruction one after another can lead to false results: here, the first operand's value is a numeric literal, but it can also be decoded as a CPS instruction. Replacing it with a trap to the hypervisor would change the program's behaviour.
Figure 4.3: The original instruction was a conditional branch. Given the guest context (general registers and CPSR), we can craft an instruction which will be conditionally executed. Knowing its side effect (assigning 1 to r0), we can detect whether the condition was met.
Keeping track of the guest interrupt vector
Figure 4.7: On the left side, the benchmarks are run on top of an operating system. On the right side, the underlying operating system is run on top of our hypervisor.
Figure 4.8: As opposed to the previous ones, the benchmarks are run in privileged mode. Hence, they are executed under hypervision.
Figure 5.1: An ARM instruction encoding. The first part is the condition bits, then comes the opcode, and finally the parameters of the instruction.
Figure 5.2: Examples of inconsistent usage of the opcode part of the encoding scheme.
Figure 5.3: Instruction set state values given CPSR.J and CPSR.T.
Memory virtualization: provides a hardware assist to memory virtualization, which includes the partitioning and allocation of physical memory among VMs. Memory virtualization causes VMs to see a contiguous address space, which is not actually contiguous within the underlying physical memory. The guest OS stores the mapping between virtual and physical memory addresses in page tables. Because the guest OSs do not have native direct access to physical system memory, the VMM must perform another level of memory virtualization in order to accommodate multiple VMs simultaneously. That is, mapping must be performed within the VMM, between the physical memory and the page tables in the guest OSs. In order to accelerate this additional layer of memory virtualization, both Intel and AMD have announced technologies to provide a hardware assist. Intel's is called Extended Page Tables (EPT), and AMD's is called Nested Page Tables (NPT). These two technologies are very similar at the conceptual level.

Interrupt virtualization: the IA-32 architecture provides a mechanism for masking external interrupts, preventing their delivery when the OS is not ready for them. A VMM will likely manage external interrupts and deny the guest operating system the ability to control interrupt masking. This will lead to frequent mask/unmask operations. Moreover, intercepting every guest attempt could significantly affect system performance. Even so, a challenge remains when a VMM has a "virtual interrupt" to deliver to a guest. Intel VT-d Posted-Interrupts and AMD's Advanced Virtual Interrupt Controller (AVIC) provide an effective hardware mechanism which causes no overhead for interrupt handling.
2.2.3.2 The ARM architecture

The ARM architecture exists in multiple revisions; the latest is version 8. Fig. 2.10 illustrates the privilege levels of this new architecture. The architecture is separated into three profiles (where v8 stands for version 8):
• the ARMv8-A "Application profile": suitable for high-performance markets;
• the ARMv8-R "Real-time profile": suitable for embedded applications;
• the ARMv8-M "Microcontroller profile": suitable for embedded and IoT applications.
These architectures are implemented by different processors; for example the Cortex-A7 processor implements the ARMv7-A architecture, and the Cortex-A35 provides full ARMv8-A support. The differences within a CPU family are features such as frequency or the number of cores.
Figure 2.10: The ARM architecture's privilege levels with virtualization and TrustZone features (EL0: applications; EL1: guest operating systems and the secure OS; EL2: virtual machine monitor; EL3: TrustZone monitor).
Figure 2.11: Evaluation assurance level summary (assurance components required of each assurance family for EAL1 to EAL7). Aiming for the highest levels brings the highest requirements in terms of assurance components.
This table is used to generate matching functions for several behaviours. Both ARM and Thumb instructions were encoded. Some metrics are provided in Fig. 4.4.
CPU mode   Instructions   Entries in table   Overhead
ARM        299            976                226.4%
Thumb      276            1111               302.5%

Figure 4.4: Number of entries in the Python table for the ARM and Thumb instruction sets.
4.2.3.2 On top of a hypervised operating system
This configuration is the same as above, except that the operating system does not directly run on the hardware anymore. Instead, it is started in our hypervisor. The results are also given in milliseconds.
Native execution:

BENCH          Result       # Run   Execution Time (ms)
NOP SYSCALL    101          100     1.01
PUTS SYSCALL   3670998      100     36709.98
Fibonacci      5959942      1       5959942
Dhrystone      13852709     200     69263.545
AES            103767253    5       20753450.6

On top of the hypervised operating system:

BENCH          Result       # Run   Execution Time (ms)
NOP SYSCALL    1854375      100     18543.75
PUTS SYSCALL   10161195     100     101611.95
Fibonacci      6002561      1       6002561
Dhrystone      14454528     200     72272.64
AES            108536430    5       21707286

4.2.3.3 Performance overhead

This section compares the results and exhibits the overhead induced by the hypervisor.

BENCH          Native        Hypervised    Overhead
NOP SYSCALL    1.01          18543.75      18360.148
PUTS SYSCALL   36709.98      101611.95     2.768
Fibonacci      5959942       6002561       1.007
Dhrystone      69263.545     72272.64      1.043
AES            20753450.6    21707286      1.046
These very simple examples illustrate that decoding ARM instructions is not as simple as it may appear.
An excerpt of the encodings shows, for instance, that B<cond> is encoded with the bit pattern 1010 followed by a signed 24-bit branch offset.

Table 5.1: Instruction set comparison between the entry count in the specification and in the match table.

Fig. 5.5 illustrates a dummy lookup in the matching table: the instruction e0823003 is decoded as an ADD instruction. The table-browsing algorithm is linear; it may seem inappropriate, but the following section explains why this is necessary.
/* Return the index of the first entry of match_table whose mask/value
 * pair matches the binary-encoded instruction, or -1 if none does.
 * The table and its length are assumed to be generated elsewhere
 * (from the Python description of the instruction set). */
struct match_entry { uint32_t mask; uint32_t value; const char *name; };
extern const struct match_entry match_table[];
extern const int match_table_len;

int match_instruction(uint32_t instr)
{
    for (int idx = 0; idx < match_table_len; idx++) {
        if ((instr & match_table[idx].mask) == match_table[idx].value)
            return idx;
    }
    return -1;
}
Instruction   Count   Branches
add           1       0
bhi           1       0
bic           1       0
ldrls         1       1
lsr           1       0
mrs           1       0
orr           1       0
subs          1       0
uxtb          1       0
beq           2       2
cmp           2       0
cps           2       0
strb          2       0
sub           2       0
umull ldrb udf ldm pop   2 3 0 0 3 0 4 4 4 2
svc           4       0
bx            5       5
push          5       0
tst           6       0
bne           7       7
b             8       8
str           9       0
bl            11      11
mov           11      0
.word         22      0
ldr           30      2
Total         153     42
The main challenges are, on the one hand, analysing the instruction set which, despite the existing documentation, contains ambiguities, implementation-defined particularities and even undefined behaviours; and on the other hand, identifying the intentions of a guest system from a discrete instruction stream in order to remain interposed between the guest and the underlying hardware. To do so, the guest's machine code is analysed, and the instructions threatening the integrity or the confidentiality of the system are replaced by software traps, which allow the context to be analysed in order to decide whether or not to let the instruction execute.
Because a MMU has been successfully evaluated at the highest evaluation level, we decided that we could rely on such features
This can be a way to implement efficient shared memory though.
Courtesy to Amazon (https://aws.amazon.com/what-is-cloud-computing/)
https://www.tuvit.de/cps/rde/xbcr/SID-81880875-27A695D4/tuevit_de/common-criteria.pdf
http://www.ssi.gouv.fr/administration/produits-certifies/cspn/
https://xenbits.xen.org/xsa/
http://sel4.systems/Info/Docs/seL4-brochure.pdf
https://www.apple.com/support/security/commoncriteria/CC_Whitepaper_31605.pdf
http://www.provenrun.com/
http://trust-in-soft.com/
It also provides Cgroup API, which is used for containers, but this is considered out of scope here
Considering the first Xen releases, HVM was not available
Those metrics come from the Xen source tree, using cloc(1) to count lines of code. Cloc separates lines of code, blanks and comments; these metrics are thus code only.
http://www.linaro.org/blog/core-dump/on-the-performance-of-arm-virtualization/
https://www.unicorn-engine.org
https://github.com/nccgroup/TriforceAFL
Acknowledgements
https://www.hopperapp.com
Hypervisor
Because most of the instructions are executed directly on the CPU, almost no emulation is done. Thus, the code size of the hypervisor is small (4,200 lines of C). We have tried to reduce the amount of assembly to the minimum, because it is known to be hard to audit, hence to formally verify. From that verification point of view, it is commonly accepted that projects with fewer than 10,000 lines of code are good candidates to work on.
Discussion
Compared with NOVA, which is one of the smallest hypervisors in the literature, the core of our implementation is roughly two to five times smaller.
Performance overview
Benchmarks
This section presents the benchmarks used to evaluate the performance of the hypervisor. Four benchmarks were used to validate the chosen design:
• Dhrystone: a synthetic benchmark program designed to measure integer computation performance [START_REF] Weicker | Dhrystone: A synthetic systems programming benchmark[END_REF];
• Puts: a simple function that performs an output using a hardware UART;
• Fibonacci: the classical Fibonacci sequence, implemented as a recursive function that makes heavy use of the stack;
• AES: this task performs the self-test function of the mbed TLS [START_REF] Arm | mbed tls implementation[END_REF] (formerly PolarSSL) AES implementation for the ECB, CBC and CTR ciphering modes. This function performs encryption and decryption of data buffers with different cipher modes.
Evaluation
The benchmarks are run in two distinct configurations: first, on top of an operating system, as in existing systems; secondly, on top of the same operating system, running on top of our hypervisor.
The first case evaluates the performance achievable on the actual system in nominal operating mode. The second one evaluates the overhead induced by the hypervisor. Only the privileged code is run under hypervision; unprivileged code is executed directly on the processor, achieving native performance. Since this is one of the main traits of the proposed design, we expect a limited performance overhead (a few tens of percent).
Results
This section provides the results of the benchmarks in various contexts. Unless otherwise stated, the results are given in milliseconds.
General concepts about the ARM architecture and instruction set
It would be easy to return an error code to prevent the guest from using Neon while it's not properly handled by the hypervisor.
Jazelle
Quoting the ARM Architecture Manual, Jazelle is an execution mode in the ARM architecture which "provides architectural support for hardware acceleration of bytecode execution by a Java Virtual Machine (JVM)". It came from the famous Java adage "Write once, run anywhere". BXJ is used to enter Jazelle mode. Jazelle has its own configuration register (JMCR), and Jazelle mode has to be enabled in the CPSR. For the same reason that we do not handle NEON/VFP, we decided not to handle Jazelle in the hypervisor.
Security extensions (TrustZone)
As described in 2.2.3.2 (p 38), TrustZone is an architectural extension targeting security. Because operating systems become more and more complex, it becomes difficult to verify the security and the correctness in the software. The ARM solution is to add a new operating state, where a small, verifiable kernel will run. This kernel will provide services to the "rich-operating system" using a message passing mechanism, built on top of the SMC instruction.
The security extensions define two security states, Secure state and non-secure state. Each security state operates in its own virtual memory address space, with its own translation regime [START_REF] Arm | ARM Architecture Reference Manual. ARMv7-A and ARMv7-R edition[END_REF]. Using security extensions provides more security than systems using the split between the different levels of execution privilege:
• The memory system prevents the non-secure state from accessing regions of the physical memory designated as secured; • System controls that apply from the secure state are not accessible from the nonsecure state; • Entry/exit from the secure state is provided by a small number of mechanisms;
• Many exceptions can be handled without changing security state.
The security extensions are not addressed by this thesis.
Analyser's implementation: the instruction scanner
Instruction behaviors
As explained in 5.1.2 (p 95), the large-grained families are not sufficient. For example, an arithmetic instruction applied to pc changes the control flow; in that case, "branching instruction" would better qualify the actual behaviour of the instruction. Hence, we have identified finer-grained families: reads memory and writes memory, among others.
These newly defined behaviours are more precise and remove some ambiguities. Notice that the first families (read/write memory) are not used in practice because of the performance overhead; instead, a hardware memory-protection facility is used.
Instruction matching
The match table
Because the opcode part of the encoding was not suitable to discriminate the instructions, we have created a matching table which associates a mask/value pair with the associated behaviours. The largest family is the unpredictable one. Without those entries, there would be far fewer instructions to match, but unpredictable behaviour could arise.
Finding the next sensitive instruction
As explained in the state of the art, because the ARM instruction set mixes code and data, scanning the whole binary in 32-bit chunks is not realistic. Hence, the control flow is followed. Finding the next sensitive instruction is easy with the match table and the control flow: it consists of applying the analysis to each instruction and returning when a sensitive behaviour is found in the table. But because the table is large, the analyser keeps a cache:
• At the beginning, the cache is empty, and the entry point (named current_addr) is analysed;
• Following the control flow, the next instruction is scanned;
  - if the instruction is not sensitive, another iteration is performed;
  - otherwise, the current entry is returned, and a cache entry containing that result, associated with current_addr, is appended. A sketch of this loop is given below.
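A compact sketch of this forward scan, assuming hypothetical helpers is_sensitive() and next_address() (which stand for the match-table lookup and the control-flow following described in this chapter) and a simple append-only cache:

#include <stdint.h>
#include <stdbool.h>

/* Assumed helpers, not reproduced here. */
bool     is_sensitive(uint32_t addr);
uint32_t next_address(uint32_t addr);

struct cache_entry { uint32_t from; uint32_t sensitive; };
#define CACHE_MAX 128
static struct cache_entry cache[CACHE_MAX];
static int cache_len;

/* Walk the control flow from 'current_addr' until a sensitive
 * instruction is found, then remember the result for that start
 * address so the walk is not repeated. */
uint32_t find_next_sensitive(uint32_t current_addr)
{
    uint32_t addr = current_addr;

    while (!is_sensitive(addr))
        addr = next_address(addr);

    if (cache_len < CACHE_MAX) {
        cache[cache_len].from = current_addr;
        cache[cache_len].sensitive = addr;
        cache_len++;
    }
    return addr;
}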
Arbitration's implementation
• Given an address addr, and a CPU mode (Thumb or ARM):
• The instructions stored at addr and addr + instr_size (depending on the CPU mode) are saved in the guest context;
• A trap is crafted through an LDR instruction that loads PC from [PC - 4].
Upon the trap, the hypervisor will restore the guest code from the stored instructions (Fig. 5.6). One can notice the -4 offset in the crafted instruction: this is simply due to the pipeline. Also, an additional word is required to store the destination address.
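A sketch of that patch, using the commonly used encoding 0xE51FF004 for ldr pc, [pc, #-4] followed by the literal destination word, could look like this (helper and structure names are hypothetical):

#include <stdint.h>

/* ldr pc, [pc, #-4]: because of the 8-byte pipeline offset, the
 * loaded word is the one immediately following this instruction. */
#define LDR_PC_PC_MINUS_4  0xE51FF004u

struct saved_words { uint32_t first; uint32_t second; };

/* Replace the 32-bit ARM instruction at 'addr' by a trap that jumps
 * to 'handler'.  The two overwritten words are kept so they can be
 * restored before the guest runs again. */
void set_trap(uint32_t *addr, uint32_t handler, struct saved_words *save)
{
    save->first  = addr[0];
    save->second = addr[1];
    addr[0] = LDR_PC_PC_MINUS_4;  /* load pc from the next word     */
    addr[1] = handler;            /* literal destination address    */
}

void restore_trap(uint32_t *addr, const struct saved_words *save)
{
    addr[0] = save->first;
    addr[1] = save->second;
}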
A more graphical representation is given by Fig. 5.7. As we can see, a single instruction is replaced by two instructions (actually, a load instruction and its corresponding value). Those two instructions are saved in the guest context, and restored before scheduling the guest on the processor.
Figure 5.7 (panels: guest code, modified guest, saved context): On the left side, the guest code before analysis; the red box is a sensitive instruction. After analysis, this instruction is replaced by a trap into the hypervisor (blue box). Two instructions are required because of the PC-relative encoding; that is why two instructions are saved.
Tracking indirect branches
The instructions that branch in an unusual way are typically instructions which assign PC a dynamically computed value (using another register, for instance). Trivial cases could have been handled with static analysis.
Instead, because of the processor's complexity, we decided to let the actual processor perform the task. Hence, when an "odd-branch" instruction is reached, the destination register (PC) is replaced by another register (not used within the instruction), and that instruction is executed on the CPU with the current guest context. Upon execution, the hypervisor finds the actual destination address in that register.
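For the data-processing encodings, where the destination register occupies bits 15:12, the rewriting step can be sketched as follows (this covers only that encoding class and uses a caller-chosen scratch register, so it is an illustration rather than the full implementation):

#include <stdint.h>

#define RD_SHIFT   12
#define RD_MASK    (0xFu << RD_SHIFT)
#define PC_REGNUM  15u

/* Rewrite an ARM data-processing instruction whose destination is PC
 * so that it writes a scratch register instead.  Executing the
 * rewritten instruction with the guest's register values leaves the
 * branch target in the scratch register, where the hypervisor can
 * read it. */
uint32_t retarget_to_scratch(uint32_t instr, unsigned scratch_reg)
{
    if (((instr & RD_MASK) >> RD_SHIFT) != PC_REGNUM)
        return instr;                       /* nothing to do */
    return (instr & ~RD_MASK) | ((uint32_t)scratch_reg << RD_SHIFT);
}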
Condition validation
The same kind of instruction crafting is also performed to determine whether a conditional instruction will execute. A dummy instruction is created, and its condition bits are set accordingly:

mov r0, #0
add<COND> r0, r0, #1

Upon execution, the conditional behaviour is determined by reading r0's value: if 0 is read, the condition was not met; if 1 is read, the condition was met.
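Crafting the probe amounts to overwriting the condition field (bits 31:28) of a known-harmless carrier instruction; a sketch using the standard encoding of add r0, r0, #1 is shown below (the helper name is illustrative):

#include <stdint.h>

#define ADD_R0_R0_1  0x02800001u  /* add r0, r0, #1 with the condition field cleared */
#define COND_SHIFT   28

/* Build "add<cond> r0, r0, #1" for the 4-bit condition code taken
 * from the instruction under analysis.  Executed after "mov r0, #0",
 * r0 == 1 afterwards if and only if the condition holds for the
 * current guest CPSR. */
uint32_t craft_condition_probe(uint32_t cond_bits)
{
    return ADD_R0_R0_1 | ((cond_bits & 0xFu) << COND_SHIFT);
}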
Conditional blocks
The IT (If-Then) instruction is a Thumb instruction which makes up to four following instructions conditional (an IT block); otherwise, Thumb mode does not have conditional execution. The hypervisor performs a more involved analysis to handle this instruction:
1. Determine the size of the IT block;
2. For each instruction, store the conditional outcome;
3. Set a trap, if needed, accordingly.
This instruction is defined as follows: IT{x{y{z}}} {cond}, where x, y and z can each be either T (then) or E (else) with respect to cond. In addition, the assembler may warn about unpredictable instructions in an IT block such as B, BL and CPS, but also BX, CBZ and RFE.
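In the 16-bit encoding of IT (0xBFxx), the low byte holds the first condition in bits 7:4 and the mask in bits 3:0; the position of the least significant set bit of the mask gives the block length. A sketch of that decoding is given below (determining whether each following instruction is Then or Else additionally requires comparing the remaining mask bits with the low bit of the first condition, which is omitted here):

#include <stdint.h>

/* Decode a Thumb IT instruction (assumed already identified as such).
 * Returns the number of instructions in the IT block (1 to 4) and
 * stores the first condition code in *firstcond. */
int it_block_size(uint16_t it_instr, unsigned *firstcond)
{
    unsigned mask = it_instr & 0xFu;
    *firstcond = (it_instr >> 4) & 0xFu;

    if (mask & 0x1) return 4;   /* xxx1 -> four conditional instructions */
    if (mask & 0x2) return 3;   /* xx10 -> three */
    if (mask & 0x4) return 2;   /* x100 -> two   */
    return 1;                   /* 1000 -> one   */
}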
Summary
The ARM instruction set is no longer RISC, and it introduces interesting features to achieve performance on embedded systems. But those features are often hard to capture and analyse in software. The lack of a machine-readable specification also makes it hard to have confidence in the matching table, because it is hand-made. Releasing a computer-usable specification of the instruction set would be a huge plus for this kind of work.
Example on a simple guest
Description
Several guests were written to evaluate the hypervisor. One of them was a user-space hello world: test_user_mode_swi.c. It is composed of 105 lines of C code and 35 lines of assembly. This guest performs the following actions:
1. Set up a proper context:
   • switch to system mode;
   • set up a stack for user mode;
   • switch back to supervisor mode.
2. Set up the interrupt vector (to handle SWI).
3. Switch to user mode.
4. Perform a home-made system call to trigger a print on the UART.

Down to assembly code, only 30 distinct instructions are used.
The following table gives the number of occurrences of each of the instructions used, together with the number of branches among them:
Summary
Contrary to what might be assumed, the Thumb encoding is not easier to handle. First, changes in the encoding must be tracked (basically the ..X instructions). Then, the IT instruction makes it harder to follow the control flow. Finally, Thumb32 must also be encoded and analysed. As shown in the metrics, there is as much overhead in the table entries for Thumb as for ARM instructions.
I could only get a PDF description of the instruction set, hence quite some time was needed to encode those instructions. Also, in practice, silicon vendors can implement features, but also add bugs. Hence, the ARM documentation is not enough; the peripheral documentation was also needed. Sometimes, errata must also be consulted to avoid known errors. Again, it would be helpful to have this information released in electronic form, so that automatic extraction tools could be used. That way, altered features and published errata could easily and reliably be included in such implementations.
On a Linaro blog1, Christoffer Dall states that "A primary goal in designing both hypervisor software and hardware support for virtualization is to reduce the frequency and cost of transitions as much as possible." Because of all the cases to be handled (tracking branches, undefined behaviours), a lot of traps are set, which makes the hypervisor rather slow. We have estimated that, without smart improvements, a trap is set every three instructions. Hyperblocks should be as large as possible.
Memory map
For information purposes, the memory map of our hypervisor is given here.
Reset code                                0x00008000
...
Guest callable API                        0x00009000
Hypervisor code, hypervisor data, hypervisor bss
...
Guest code                                0x00508000
...
GPIO Configuration Registers              0x20200000
UART Configuration Registers              0x20201000

In both cases, a difference should be made between security and precision. For instance, a scanner matching all the instructions as sensitive would be correct from a security point of view (though a disaster for performance), but it would not be precise, because of false positives.
Verifying the instruction scanner
The first model would be provided by ARM, while the verification process would demonstrate that the instruction scanner properly decodes chunks of memory. That is not suitable in practice. Indeed, formal methods are effective with a simpler model on which the properties are verified. In this work, the documentation (which can be seen as the model) is more than four thousand pages; the complexity of the model is of the same order of magnitude as the implementation, which makes it very unsuitable for such work.
The only remaining solution would be exhaustive testing, of the kind performed to validate the instruction matcher.
01757907 | en | [
"spi.meca.geme",
"spi.auto"
] | 2024/03/05 22:32:10 | 2017 | https://hal.science/hal-01757907/file/AIM2017_Klimchik_Pashkevich_Caro_Furet_HAL.pdf | Alexandr Klimchik
email: a.klimchik@innopolis.ru
Anatol Pashkevich
email: anatol.pashkevich@imt-atlantique.fr
Stéphane Caro
email: stephane.caro@ls2n.fr
Benoît Furet
email: benoit.furet@univ-nantes.fr
Calibration of Industrial Robots with Pneumatic Gravity Compensators
Keywords: Industrial robot, stiffness modeling, elastostatic calibration, pneumatic gravity compensator, design of calibration experiments
I. INTRODUCTION
Advancements in shipbuilding and aeronautic industries demand high-precision and high-speed machining of huge hulls and fuselage components. For these tasks, industrial robots are more attractive comparing to conventional CNCmachines because of large and easily extendable workspace, capability to process complex-shape parts and high-speed motion capability. However, processing of modern and contemporary materials, which are widely used in these industries, requires high processing forces affecting robot positioning accuracy [START_REF] Zhu | An off-line programming system for robotic drilling in aerospace manufacturing[END_REF][START_REF] Guillo | Impact & improvement of tool deviation in friction stir welding: Weld quality & real-time compensation on an industrial robot[END_REF][START_REF] Denkena | Enabling an Industrial Robot for Metal Cutting Operations[END_REF]. To reduce the external force impact on the positioning accuracy, robotic experts usually apply technique that is based on the compliance error estimation via the manipulator stiffness modelling [START_REF] Dumas | Joint stiffness identification of industrial serial robots[END_REF][START_REF] Nubiola | Absolute calibration of an ABB IRB 1600 robot using a laser tracker[END_REF][START_REF] Alici | Enhanced stiffness modeling, identification and characterization for robot manipulators[END_REF][START_REF] Klimchik | Stiffness modeling for perfect and non-perfect parallel manipulators under internal and external loadings[END_REF] and relevant error compensation in the online or offline mode [START_REF] Schneider | Stiffness modeling of industrial robots for deformation compensation in machining[END_REF][START_REF] Klimchik | Compliance Error Compensation in Robotic-Based Milling[END_REF][START_REF] Klimchik | Compliance error compensation technique for parallel robots composed of non-perfect serial chains[END_REF][START_REF] Slavkovic | A method for off-line compensation of cutting force-induced errors in robotic machining by tool path modification[END_REF]. This approach is very efficient if the stiffness and geometric model parameters of the manipulator as well as the external forces are known. To estimate them, additional experimental studies are usually carried out [START_REF] Dumas | Joint stiffness identification of industrial serial robots[END_REF][START_REF] Klimchik | Identification of the manipulator stiffness model parameters in industrial environment[END_REF][START_REF] Wu | Geometric calibration of industrial robots using enhanced partial pose measurements and design of experiments[END_REF][START_REF] Hollerbach | Model Identification[END_REF], which allow user to obtain an extended geometric model suitable for compliance error compensation.
Another practical solution is based on enhancing the robot stiffness by increasing the link cross-sections. However, this increases the masses of the robot components, causing additional end-effector deflections, which are usually reduced by means of various types of gravity compensators. However, the integration of mechanical compensators into the manipulator kinematics essentially complicates the stiffness modelling, because the conventional serial architecture is transformed into a quasi-serial one that contains a kinematic closed loop.
The stiffness modelling of the industrial manipulators with mechanical gravity compensators is quite a new problem. There are rather limited number of works dealing with the impact of gravity compensators on the manipulator forcedeflection relation [START_REF] Wu | Geometric calibration of industrial robots using enhanced partial pose measurements and design of experiments[END_REF][START_REF] Klimchik | Identification of geometrical and elastostatic parameters of heavy industrial robots[END_REF][START_REF] Klimchik | Stiffness Modeling of Robotic Manipulator with Gravity Compensator[END_REF], while there are some work devoted to the compensator design [START_REF] Arakelian | Gravity compensation in robotics[END_REF][START_REF] Cho | Design of a Static Balancing Mechanism for a Serial Manipulator With an Unconstrained Joint Space Using One-DOF Gravity Compensators[END_REF] and softwarebased balancing solutions [START_REF] De Luca | A PD-type regulator with exact gravity cancellation for robots with flexible joints[END_REF][START_REF] De Luca | PD control with on-line gravity compensation for robots with elastic joints: Theory and experiments[END_REF]. In contrast, for conventional strictly serial manipulators [START_REF] Alici | Enhanced stiffness modeling, identification and characterization for robot manipulators[END_REF][START_REF] Chen | Conservative congruence transformation for joint and Cartesian stiffness matrices of robotic hands and fingers[END_REF][START_REF] Salisbury | Active stiffness control of a manipulator in Cartesian coordinates[END_REF] and strictly parallel mechanisms [START_REF] Klimchik | Stiffness modeling for perfect and non-perfect parallel manipulators under internal and external loadings[END_REF][START_REF] Yan | Stiffness analysis of parallelogram-type parallel manipulators using a strain energy method[END_REF][START_REF] Li | Stiffness analysis for a 3-PUU parallel kinematic machine[END_REF][START_REF] Deblaise | A systematic analytical method for PKM stiffness matrix calculation[END_REF][START_REF] Gosselin | Stiffness analysis of parallel mechanisms using a lumped model[END_REF][START_REF] Merlet | Parallel Mechanisms and Robots[END_REF] there were developed a number of methods for the stiffness analysis. At the same time, only limited number of works deals with stiffness modelling of socalled quasi-serial architectures incorporating internal closed-loops [START_REF] Klimchik | Identification of geometrical and elastostatic parameters of heavy industrial robots[END_REF][START_REF] Klimchik | Serial vs. quasi-serial manipulators: Comparison analysis of elasto-static behaviors[END_REF][START_REF] Subrin | New Redundant Architectures in Machining: Serial and Parallel Robots[END_REF][START_REF] Guo | A multilevel calibration technique for an industrial robot with parallelogram mechanism[END_REF][START_REF] Vemula | Stiffness Based Global Indices for Structural Evaluation of Anthropomorphic Manipulators[END_REF]. To our knowledge, the simplest and efficient way to take into account the influence of gravity compensator is utilization of non-linear virtual springs in the frame of the conventional VJM technique [START_REF] Klimchik | Stiffness modeling for perfect and non-perfect parallel manipulators under internal and external loadings[END_REF][START_REF] Pashkevich | Enhanced stiffness modeling of manipulators with passive joints[END_REF][START_REF] Pashkevich | Stiffness analysis of overconstrained parallel manipulators[END_REF]. 
This approach was originally proposed in our previous works [START_REF] Klimchik | Identification of geometrical and elastostatic parameters of heavy industrial robots[END_REF][START_REF] Klimchik | Stiffness Modeling of Robotic Manipulator with Gravity Compensator[END_REF] and successfully applied to manipulators with spring-based gravity compensators. However, some additional effort is required to adapt it to robots with pneumatic compensators, which are progressively replacing their spring-based counterparts in new models of heavy industrial robots available on the market.
This paper proposes a new modification of the VJM-based stiffness modelling technique for quasi-serial industrial manipulators with a pneumatic gravity compensator, which creates a kinematic closed loop violating the stiffness modelling principles used for purely serial robot architectures. The main attention is paid to the identification of the model parameters and to the planning of the calibration experiments. The developed approach is confirmed by experimental results dealing with compliance error compensation for a robotic cell employed in the manufacturing of large-dimensional components.
To address these problems, the remainder of the paper is organized as follows. Section 2 presents the stiffness modelling for the pneumatic gravity compensator. Section 3 is devoted to the elastostatic calibration and the design of calibration experiments. Section 4 deals with the experimental study. Section 5 summarizes the main contributions of the paper.

The mechanical structure and principal components of the pneumatic gravity compensator considered here are presented in Fig. 1a; its equivalent model is shown in Fig. 1b. The mechanical compensator is a passive mechanism incorporating a constant cross-section cylinder and a constant-volume gas reservoir. The volume occupied by the gas depends linearly on the piston position, which defines the internal pressure of the cylinder. It is clear that this mechanism can be treated as a non-linear virtual spring influencing the manipulator stiffness behavior. It is worth mentioning that, in the general case, the gas temperature has an impact on the pressure inside the tank, which defines the compensating force. Nevertheless, one can assume that in the case of continuous or periodic manipulator movements the gas temperature remains almost constant, i.e. the gas compression-decompression process can be assumed to be isothermal. In the frame of the manipulator model, the compensator is attached to the first and second links, which creates a closed loop acting on the second actuated joint. This particularity allows us to adapt the conventional stiffness model of the serial manipulator (with constant joint stiffness matrix $\mathbf{K}_\theta$) by introducing the configuration-dependent joint stiffness matrix $\mathbf{K}_\theta(\mathbf{q})$ that takes into account the compensator impact and depends on the vector of actuated coordinates $\mathbf{q}$. In this case, the Cartesian stiffness matrix $\mathbf{K}_C$ of the robotic manipulator can be presented in the following form
$\mathbf{K}_C = \left( \mathbf{J}_\theta \, \mathbf{K}_\theta(\mathbf{q})^{-1} \, \mathbf{J}_\theta^{T} \right)^{-1}$    (1)
where $\mathbf{J}_\theta$ is the Jacobian with respect to the virtual joint coordinates $\theta$ (in the case of industrial robots it is usually equivalent to the kinematic Jacobian computed with respect to the actuated coordinates $\mathbf{q}$). Thus, to obtain the stiffness model of the industrial robot with the pneumatic gravity compensator, it is required to determine the non-linear joint stiffness matrix $\mathbf{K}_\theta(\mathbf{q})$ describing the elasticity of both the actuators and the gravity compensation mechanism. It should be mentioned that in the majority of works devoted to the stiffness analysis of serial manipulators the matrix $\mathbf{K}_\theta$ is assumed to be constant and strictly diagonal [START_REF] Salisbury | Active stiffness control of a manipulator in Cartesian coordinates[END_REF][START_REF] Gosselin | Stiffness analysis of parallel mechanisms using a lumped model[END_REF][START_REF] Klimchik | Stiffness matrix of manipulators with passive joints: computational aspects[END_REF].

To find the desired matrix $\mathbf{K}_\theta(\mathbf{q})$, let us consider the compensator geometry of Fig. 1b, in which the distance between the attachment points $P_0$ and $P_2$ varies with the robot motion and depends non-linearly on the angle $q_2$. Below, the relevant distances are denoted as $L = |P_1P_2|$, $a = |P_0P_1|$ and $s = |P_0P_2|$. In addition, let us introduce the parameters $x_a$ and $y_a$ defining the relevant locations of the points $P_0$, $P_1$, $P_2$ (see Fig. 1b). This allows us to compute the compensator length $s$ from the triangle $P_0P_1P_2$ as

$s^2 = a^2 + L^2 - 2\,a\,L\,\cos q_2$

which defines the non-linear function $s(q_2)$. For this geometry, the impact of the gravity compensator can be taken into account by replacing the considered quasi-serial architecture by a serial one, in which the second joint stiffness coefficient is modified to include the elasticity of both the actuator and the compensator. To find the relevant non-linear expression for this coefficient, let us present the static torque in the second joint $M_2$ as the sum of two components: the first is caused by the deflection $\Delta q_2$ in the mechanical transmission of the second actuated joint, and the second by the compensator force $F_S$ acting through the lever arm $a\,L\,\sin q_2 / s$, i.e.

$M_2 = K_{\theta 2}\,\Delta q_2 + F_S\,\frac{a\,L\,\sin q_2}{s}$

where both the force $F_S$ and the compensator length $s$ depend on the joint variable $q_2 = q_2^{0} + \Delta q_2$. To find the compensating force $F_S$, let us use the isothermal process assumption, which yields the relation $P\,V = \mathrm{const}$, where $P$ is the tank pressure, $V = A\,(s_0 - s) + V_0$ is the corresponding internal volume, $A$ is the piston area, $s_0$ is the compensator length corresponding to zero compensating force, and $V_0$ is the tank volume corresponding to the atmospheric pressure. Substituting this pressure into the torque expression, and introducing the constant $s_V = s_0 + V_0 / A$ treated as an equivalent distance, the joint torque can be presented in a more compact form in which the compensator contribution depends on the configuration only through $s(q_2)$. Further, computing the partial derivative of $M_2$ with respect to $q_2$ yields the equivalent joint stiffness coefficient $K_2(q_2)$, which combines the constant actuator stiffness $K_{\theta 2}$ with a configuration-dependent compensator term and is obviously highly non-linear with respect to the manipulator configuration (here, $s$ is also a non-linear function of $q_2$). Nevertheless, it allows us to compute the relevant stiffness coefficient $K_2$ for the equivalent serial chain and to directly apply eq. (1) to evaluate the stiffness of the quasi-serial manipulator with the pneumatic gravity compensator.

It should be mentioned that in practice the compensator parameters $s_0$, $s_V$ and the actuator stiffness coefficients $K_1, K_2, K_3, \ldots, K_6$ are usually not given in the robot datasheets, so they should be identified via a dedicated experimental study. For this reason, the following Section focuses on the identification of this extended set of manipulator elastostatic parameters.
III. ELASTOSTATIC PARAMETERS IDENTIFICATION
A. Methodology
In the frame of the VJM-based modelling approach developed for serial kinematic chains [START_REF] Salisbury | Active stiffness control of a manipulator in Cartesian coordinates[END_REF][START_REF] Pashkevich | Enhanced stiffness modeling of manipulators with passive joints[END_REF] and adapted here to the case of quasi-serial manipulators with pneumatic gravity compensators, the desired stiffness model parameters describe the elasticity of the virtual springs located in the actuated joints of the manipulator, as well as the compensator parameters s_0 and s_V defining the preloading of the compensator and the equivalent distance associated with the tank volume. In the frame of this model, let us denote the manipulator joint compliances as k_1, k_2, ..., k_6 (inverse to the stiffness coefficients K_1, K_2, ..., K_6 used in the previous section) and the compensator elastic parameters as s_0, s_V. To find the desired set of elastic parameters, the robotic manipulator sequentially passes through several measurement configurations where an external loading is applied to the specially designed end-effector presented in Fig. 2 (it allows us to generate both forces and torques applied to the manipulator). Using an absolute measurement system (the laser tracker Leica AT900), the Cartesian coordinates of the reference points are measured twice, before and after loading. To increase the identification accuracy, it is reasonable to have several markers (reference points) on the end-effector and to apply a loading of the maximum allowed magnitude. It should be mentioned that, to avoid singularities in the numerical routines, the external force/torque directions should not be the same for all calibration experiments (while from the practical viewpoint the mass-based gravity loading is the most attractive). Thus, the calibration experiments yield a dataset that includes the values of the manipulator joint coordinates q_i, the applied forces/torques F_i and the corresponding end-effector displacements.
B. Identification algorithm
To take into account the compensator influence while using the classical approach developed for strictly serial manipulators without compensators [START_REF] Alici | Enhanced stiffness modeling, identification and characterization for robot manipulators[END_REF], it is proposed below to use in the second joint an equivalent virtual spring with non-linear stiffness, which depends on the joint coordinate q_2 (see eq. (1)). Using this idea, it is convenient to consider several aggregated compliances k_{2i} corresponding to each distinct value of the angle q_2. This allows us to linearize the identification equations with respect to the extended set of model parameters, which can then be easily solved using a standard least-squares technique.
Let us denote this extended set of desired parameters as k = (k_1, k_21, ..., k_2m_q, k_3, ..., k_6)^T. In this case, the linearized force-deflection relation with respect to this vector can be presented in the following form

Δp_i = B_i^(p)·k

where Δp_i is the vector of the end-effector displacements under the external loading F_i, and the matrix B_i^(p) is obtained from the matrix

A_i = [ J_1i·J_1i^T·F_i, ..., J_6i·J_6i^T·F_i ]

that is usually used in the stiffness analysis of serial manipulators. Here, J_ni denotes the n-th column of the manipulator Jacobian, and the superscript '(p)' stands for the Cartesian position coordinates (position without orientation). The transformation from A_i to B_i^(p) is rather trivial: it is based on extracting from A_i the first three lines and inserting several zero columns, so as to match the extended structure of the vector k.
In this case, the elastostatic parameters identification can be reduced to the following optimization problem

Σ_{i=1..m} ( B_i^(p)·k − Δp_i )^T ( B_i^(p)·k − Δp_i )  →  min over k

which yields the following solution

k = ( Σ_{i=1..m} B_i^(p)T·B_i^(p) )^{-1} · Σ_{i=1..m} B_i^(p)T·Δp_i
where the parameters k_1, k_3, ..., k_6 describe the compliances of the virtual joints #1, #3, ..., #6, while the remaining ones k_21, k_22, ... present an auxiliary dataset allowing us to separate the compliance of joint #2 from the compensator parameters s_0, s_V. Using eq. (7), the desired optimization problem can be written as

Σ_{i=1..m_q} ( 1/k_2i − K_2(q_2i) )²  →  min over { K_{q2}, P_0, s_0, s_V }

where K_2(q_2i) is the equivalent second-joint stiffness defined by the compensator model and m_q is the number of different angles q_2 in the experimental data. It is obvious that eq. (12) is highly non-linear and can be solved numerically only.
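The two-stage identification described above (a linear least-squares step for the extended compliance vector, followed by a non-linear fit of the compensator parameters) can be sketched as follows. This is only an illustrative outline under the assumptions of the model above: the matrices B_i and displacements Δp_i are supposed to be already computed, the function names are hypothetical, and the non-linear stage relies on a generic solver (scipy.optimize.least_squares) rather than on the authors' own routine.

```python
import numpy as np
from scipy.optimize import least_squares

def identify_compliances(B_list, dp_list):
    """Stage 1: linear least squares k = (sum_i Bi^T Bi)^-1 sum_i Bi^T dpi."""
    BtB = sum(B.T @ B for B in B_list)
    Btp = sum(B.T @ dp for B, dp in zip(B_list, dp_list))
    return np.linalg.solve(BtB, Btp)

def k2_model(q2, Kq2, P0A, a, L, s0, sV, h=1e-6):
    """Equivalent joint-2 stiffness predicted by the compensator model,
    using a numerical derivative of the compensator torque term."""
    def mf(q):
        s = np.sqrt(a**2 + L**2 - 2.0 * a * L * np.cos(q))
        return P0A * (s - s0) / (sV - s) * (L * a / s) * np.sin(q)
    return Kq2 + (mf(q2 + h) - mf(q2 - h)) / (2.0 * h)

def fit_compensator(q2_list, k2_list, a, L, x0):
    """Stage 2: non-linear fit of (Kq2, P0*A, s0, sV) to the aggregated
    compliances k2i identified for each distinct angle q2i."""
    def residuals(p):
        Kq2, P0A, s0, sV = p
        return [1.0 / k2 - k2_model(q2, Kq2, P0A, a, L, s0, sV)
                for q2, k2 in zip(q2_list, k2_list)]
    return least_squares(residuals, x0).x
```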
Thus, the proposed modification of the previously developed calibration technique allows us to find the manipulator and compensator parameters. An open question, however, is how to find the set of measurement configurations that ensure the lowest impact of the measurement noise.
C. Design of calibration experiments
The goal of the calibration experiment design is to select a set of robot configurations/external loadings {q_i, F_i} that ensure the best identification accuracy. The key issue is to rank the plans of experiments in accordance with a selected performance measure. This problem is well known in classical regression analysis; however, the available results are not suitable for the non-linear case of the elastostatic calibration and require additional efforts. Here, an industry-oriented performance measure is used, which evaluates the calibration plan quality [START_REF] Wu | Geometric calibration of industrial robots using enhanced partial pose measurements and design of experiments[END_REF][START_REF] Klimchik | Identification of geometrical and elastostatic parameters of heavy industrial robots[END_REF]. Its physical meaning is the robot positioning accuracy (under the loading) achieved after the compliance error compensation based on the identified elastostatic parameters.
Assuming that the experiments include measurement errors ε_i, the covariance matrix of the parameters k can be expressed as

cov(k) = ( Σ_i B_i^(p)T·B_i^(p) )^{-1} · ( Σ_i B_i^(p)T·E[ε_i·ε_i^T]·B_i^(p) ) · ( Σ_i B_i^(p)T·B_i^(p) )^{-1}

Following the assumption of independent identically distributed errors with zero expectation and standard deviation σ, expression (13) can be simplified to

cov(k) = σ² · ( Σ_{i=1..m} B_i^(p)T·B_i^(p) )^{-1}
Hence, the impact of the measurement errors on the accuracy of the identified parameters k is defined by the matrix Σ_{i=1..m} B_i^(p)T·B_i^(p) (in regression analysis it is known as the information matrix).
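For completeness, the confidence intervals reported later in Table 1 can be obtained from this information matrix. The following sketch, with hypothetical function names, estimates σ from the identification residuals and computes cov(k) and the corresponding intervals; it is an assumption that the original study used an equivalent procedure.

```python
import numpy as np

def parameter_confidence(B_list, dp_list, k_hat, z=1.96):
    """Covariance cov(k) = sigma^2 (sum_i Bi^T Bi)^-1, with sigma estimated from
    the identification residuals, and +/- z*std confidence intervals."""
    BtB = sum(B.T @ B for B in B_list)
    res = np.concatenate([B @ k_hat - dp for B, dp in zip(B_list, dp_list)])
    dof = res.size - k_hat.size
    sigma2 = float(res @ res) / dof
    cov = sigma2 * np.linalg.inv(BtB)
    return cov, z * np.sqrt(np.diag(cov))
```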
It is evident that in industrial practice the most important issue is not the parameter identification accuracy itself, but its impact on the robot positioning accuracy. Considering that the end-effector accuracy varies throughout the workspace and highly depends on the manipulator configuration, it is proposed to evaluate the calibration accuracy in a typical manipulator configuration ("test pose") provided by the user. For most applications of heavy industrial robots, the test pose is related to the typical machining configuration q_0 and the corresponding external loading F_0 defined by the technological process. For this test pose, the mean square value of the positioning error will be denoted as σ_0², and the matrix A_i^(p) corresponding to it as A_0^(p).
It should be noted that the proposed approach operates with a specific structure of the parameters included in the vector k, where the second joint is presented by several components k_21, k_22, ..., while each other joint is described by a single parameter k_1, k_3, ..., k_6. This motivates a further rearrangement of the vector k, replacing it by several sub-vectors k^(j) with the corresponding covariance matrices cov(k^(j)). Using these notations, the performance measure σ_0² can be presented as

σ_0² = σ² · Σ_{j=1..m_q} trace( A_0^(j,p) · ( Σ_{i=1..m} B_i^(p)T·B_i^(p) )^{-1} · A_0^(j,p)T )

where A_0^(j,p) denotes the sub-matrix of A_0^(p) associated with the parameter sub-vector k^(j).
Based on this performance measure, the calibration experiment design can be reduced to the following optimization problem

Σ_{j=1..m_q} trace( A_0^(j,p) · ( Σ_{i=1..m} B_i^(p)T·B_i^(p) )^{-1} · A_0^(j,p)T )  →  min over { q_i, F_i }

subject to  ||F_i|| ≤ F_max,  i = 1..m,

whose solution gives a set of the desired manipulator configurations and corresponding external loadings. It is evident that an analytical solution can hardly be obtained here, and a numerical approach is the only reasonable one.
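In practice, this design problem is typically handled by evaluating the test-pose performance measure for a set of candidate plans and keeping the best one. The short sketch below illustrates such an evaluation; it assumes the information-matrix formulation given above, and the names test_pose_accuracy and best_plan are illustrative only.

```python
import numpy as np

def test_pose_accuracy(B_list, A0_blocks, sigma=1.0):
    """Expected mean-square positioning error at the test pose for a given
    calibration plan, i.e. the performance measure defined above."""
    M_inv = np.linalg.inv(sum(B.T @ B for B in B_list))   # inverse information matrix
    return sigma**2 * sum(np.trace(A0 @ M_inv @ A0.T) for A0 in A0_blocks)

def best_plan(candidate_plans, A0_blocks, sigma=1.0):
    """Keep the candidate plan (list of B_i matrices) minimizing the measure,
    assuming the loading-magnitude constraint is enforced when generating plans."""
    return min(candidate_plans, key=lambda plan: test_pose_accuracy(plan, A0_blocks, sigma))
```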
IV. EXPERIMENTAL STUDY
The developed technique was applied to the elastostatic calibration of the robot Kuka KR-120. The parameters to be identified were the compliances k_j of the actuated joints and the gravity compensator parameters s_0, s_V. To generate deflections in the actuated joints, gravity forces produced by a 120 kg mass were applied to the robot end-effector (see Fig. 3). The Cartesian coordinates of three markers located on the tool (see Fig. 2) were measured before and after the loading. To find the optimal measurement configurations for calibration, the design of experiments was used for six different angles q_2 distributed between the joint limits. For each q_2, from three to seven optimal measurement configurations were found, which satisfy the joint limits and the physical constraints related to the possibility of carrying out the experiments. In total, 31 different measurement configurations and 186 measurements were used for the identification, from which 7 physical parameters were obtained. The obtained experimental data were processed using the identification algorithm presented in Section 3. The identified values of the extended set of joint compliances (for 6 different angles q_2) and their confidence intervals are presented in Table 1. As follows from these results, the wrist compliances were identified with lower accuracy. The reason is the smaller lever arm of the applied external forces with respect to the wrist joints compared with the other manipulator joints. The relatively low accuracy for the first joint is due to the smaller number of measurements in which the deflections were generated in this joint. Further, the obtained compliances k_21 ... k_26 were used to estimate the pneumatic compensator parameters by solving the optimization problem [START_REF] Klimchik | Identification of the manipulator stiffness model parameters in industrial environment[END_REF]. The identified joint compliances can be used to predict the robot deformations under external loading.
V. CONCLUSIONS
The paper presents a new approach for the modelling and identification of the elastostatic parameters of heavy industrial robots with a pneumatic gravity compensator. It proposes a methodology and data processing algorithms for the identification of the elastostatic parameters of both the gravity compensator and the manipulator. To increase the identification accuracy, the design of experiments has been used, aimed at a proper selection of the measurement configurations. In contrast to other works, it is based on an industry-oriented performance measure related to the robot accuracy under the loading. The advantages of the developed techniques are illustrated by an experimental study of the industrial robot Kuka KR-120, for which the joint compliances and the parameters of the gravity compensator have been identified.
Figure 1. Pneumatic gravity compensator and its model.
Figure 2. End-effector used for the elastostatic calibration experiments and its model.
Figure 3. Experimental setup for the identification of the elastostatic parameters.
TABLE I. ELASTO-STATIC PARAMETERS OF ROBOT KUKA KR-120
Parameter   Value   CI
k1, [rad×μm/N] 1.13 ±0.15 (13.3%)
k21, [rad×μm/N] 0.34 ±0.004 (1.1%)
k22, [rad×μm/N] 0.36 ±0.005 (1.4%)
k23, [rad×μm/N] 0.35 ±0.005 (1.4%)
k24, [rad×μm/N] 0.28 ±0.007 (2.6%)
k25, [rad×μm/N] 0.32 ±0.011 (3.6%)
k26, [rad×μm/N] 0.26 ±0.007 (2.8%)
k3, [rad×μm/N] 0.43 ±0.007 (1.8%)
k4, [rad×μm/N] 0.95 ±0.31 (31.8%)
k5, [rad×μm/N] 3.82 ±0.27 (7.0%)
k6, [rad×μm/N] 4.01 ±0.35 (8.7%)
ACKNOWLEDGMENTS
The work presented in this paper was partially funded by Innopolis University and the project Partenariat Hubert Curien SIAM 2016 France-Thailand.
01757920 | en | [spi.meca.geme, spi.auto] | 2024/03/05 22:32:10 | 2017 | https://hal.science/hal-01757920/file/ICMIT2017_Gao_Pashkevich_Caro_HAL.pdf
Jiuchun Gao (jiuchun.gao@ls2n.fr), Anatol Pashkevich, Stéphane Caro
Optimal Trajectories Generation in Robotic Fiber Placement Systems
The paper proposes a methodology for optimal trajectories generation in robotic fiber placement systems. A strategy to tune the parameters of the optimization algorithm at hand is also introduced. The presented technique transforms the original continuous problem into a discrete one where the time-optimal motions are generated by using dynamic programming. The developed strategy for the optimization algorithm tuning allows essentially reducing the computing time and obtaining trajectories satisfying industrial constraints. Feasibilities and advantages of the proposed methodology are confirmed by an application example.
Introduction
Robotic fiber placement technology has been increasingly implemented recently in aerospace and automotive industries for fabricating complex composite parts [START_REF] Gay | Composite materials: design and applications[END_REF][START_REF] Gallet-Hamlyn | Multiple-use robots for composite part manufacturing[END_REF]. It is a specific technique that uses robotic workcell to place the heated fiber tows on the workpiece surface [START_REF] Peters | Handbook of composites[END_REF]. Corresponding robotic systems usually include a 6-axis industrial robot and a one-axis positioner (see Figure 1), which are kinematically redundant and provides the user with some freedom in terms of optimization of robot and positioner motions.
To deal with the robotic system redundancy, a common technique based on the pseudo-inverse of kinematic Jacobian is usually applied. However, as follows from relevant studies, this standard approach does not satisfy the real-life industrial requirements of the fiber placement [START_REF] Kazerounian | Redundancy resolution of serial manipulators based on robot dynamics[END_REF][START_REF] Buss | Introduction to inverse kinematics with jacobian transpose, pseudoinverse and damped least squares methods[END_REF]. In literature, there is also an alternative technique (that deals with multi-goal tasks) that is based on conversion of the original continuous problem to the combinatorial one [START_REF] Gueta | Coordinated motion control of a robot arm and a positioning table with arrangement of multiple goals[END_REF][START_REF] Gueta | Hybrid design for multiple-goal task realization of robot arm with rotating table[END_REF], but it only generates trajectories for point-to-point motions, e.g. for spot welding applications. A slightly different method was introduced in [START_REF] Dolgui | Manipulator motion planning for high-speed robotic laser cutting[END_REF][START_REF] Pashkevich | Multiobjective optimization of robot motion for laser cutting applications[END_REF][START_REF] Zhou | Off-line programming system of industrial robot for spraying manufacturing optimization[END_REF], and it was successfully applied to laser-cutting and arc-welding processes where the tool speed was assumed to be constant (which is not valid in the considered problem). Another approach has been proposed in [START_REF] Debout | Tool path smoothing of a redundant machine: Application to Automated Fiber Placement[END_REF], where the authors concentrated on the tool path smoothing in Cartesian space in order to decrease the manufacturing time in fiber placement applications. For the considered process, where the tool speed variations are allowed (in certain degree), a discrete optimization based methodology was proposed in our previous work [START_REF] Gao | Manipulator Motion Planning in Redundant Robotic System for Fiber Placement Process[END_REF]. It allows the user to convert the original continuous problem to the combinatorial one taking into account particularities of the fiber placement technology and to generate time-optimal trajectories for both the robot and the positioner. Nevertheless, there are still a number of open questions related to selection of the optimization algorithm parameters (i.e., its "tuning") that are addressed in this paper and targeted to the improvement of the algorithm efficiency and the reduction of the computing time.
Robotic system model
In practice, the procedure of off-line programming for robotic fiber placement is implemented in the following way. The fiber placement path is firstly generated and discretized in CAM system. Further, the obtained set of task points is transformed into the task graph that describes all probable configurations of the robot and the positioner joints. The motion generator module finds the optimal trajectories that are presented as the "best" path on the graph. Finally, the obtained motions are converted into the robotic system program by the post processor. The core for the programming of this task is a set of optimization routines addressed in this paper.
To describe the fiber placement task, let us present it as a set of discrete task frames F_task^(i), i = 1, 2, ..., n, defined in such a way that the X-axis is directed along the path direction and the Z-axis is normal to the workpiece surface, pointing outside of it (see Figure 1). Using these notations, the task locations can be described by 4×4 homogeneous transformation matrices, and the considered task is formalized as follows:

^w T_task^(i) = [ n^(i)  o^(i)  a^(i)  p^(i) ; 0 0 0 1 ],   i = 1, 2, ..., n      (1)

where all vectors of positions and orientations are expressed with respect to the workpiece frame (see superscript "w"). To execute the given fiber placement task, the robot tool must visit the frames defined by (1) as fast as possible.
The considered robotic system, shown in Figure 1, is composed of an industrial robot and an actuated positioner. Their spatial configurations can be described by the joint coordinates q_R and q_P, respectively. The task frames can be presented in two ways, using the robot and the positioner kinematics expressed as g_R(q_R) and g_P(q_P), respectively. To obtain the kinematic model of the whole system, which forms a closed loop containing the robot, the workpiece and the positioner, a global frame F_0 is selected. Then, the tool frame F_tool and the task frame F_task^(i) are aligned in such a way that: (i) the origins of the two frames coincide; (ii) the Z-axes are opposite; (iii) the X-axes have the same direction. Due to the foregoing closed loop, two paths can be followed to express the transformation matrices from the global frame to the task frames, namely

^0 T_Rbase · g_R(q_R^(i)) · T_Tool^task = ^0 T_Pbase · g_P(q_P^(i)) · ^w T_task^(i),   i = 1, 2, ..., n      (2)

Equation (2) does not lead to a unique solution for q_R and q_P, since the robotic system, i.e. the robot and the positioner, is kinematically redundant. Therefore, the optimum robot and positioner configurations can be searched for based on specific criteria.
Algorithm for trajectories generation
To take advantage of the kinematic redundancy, it is reasonable to partition the desired motion between the robot and the positioner ensuring that the technology tool executes the given task with smooth motion as fast as possible.
To present the problem in a formal way, let us define the functions
q_R(t) and q_P(t) describing the robot and positioner motions as functions of time t ∈ [0, T]. Additionally, a sequence of time instants {t_1, t_2, ..., t_n} corresponds to the instants at which the tool visits the locations defined by (1), with t_1 = 0 and t_n = T. As a result, the problem at hand is formulated as an optimization problem aiming at minimizing the robot processing time

T  →  min over { q_R(t), q_P(t) }      (3)
This problem is subject to the equality constraints

^0 T_Rbase · g_R(q_R(t_i)) · T_Tool^task = ^0 T_Pbase · g_P(q_P(t_i)) · ^w T_task^(i),   i = 1, 2, ..., n

defined in accordance with (2), and to some inequality constraints associated with the capacities of the robot/positioner actuators, which are defined by upper bounds on the joint velocities and accelerations. Besides, collision constraints verifying the intersections between the system components are also taken into account.
For this problem, which aims at finding the desired continuous functions q_R(t) and q_P(t), there is no standard approach that can be applied straightforwardly. The main difficulty here is that the equality constraints are written for the unknown time instants {t_1, t_2, ..., t_n}. Besides, the problem is non-linear and includes a redundant variable. For these reasons, this paper presents a combinatorial-optimization-based methodology to generate the desired trajectories.
For the considered robotic system, there is one redundant variable with respect to the given task. It is convenient here to treat q_P as the redundant one, since this allows us to use the kinematic models of the robot and the positioner independently while satisfying the above equality constraints.
To present the problem in a discrete way, the allowable domain q_P ∈ [q_P^min, q_P^max] is sampled with the step Δq_P as

q_P^(k) = q_P^min + k·Δq_P,   k = 0, 1, ..., m,   where m = (q_P^max − q_P^min) / Δq_P.

Then, applying sequentially the positioner direct kinematics and the robot inverse kinematics, a set of possible configuration states for the robotic system can be obtained as

q_R^(k,i) = g_R^{-1}( (^0 T_Rbase)^{-1} · ^0 T_Pbase · g_P(q_P^(k)) · ^w T_task^(i) · (T_Tool^task)^{-1}, μ ),   i = 1, 2, ..., n

where μ is a configuration index vector corresponding to the robot posture. Therefore, for the i-th task location, a set of candidate configuration states L_task^(k,i) = (q_R^(k,i), q_P^(k)) can be obtained. After presenting the task in joint space in this way, the original task can be converted into the directed graph shown in Figure 2. It should be noted that some of the configuration cells must be excluded because of violation of the collision constraints or the actuator joint limits. These cases are denoted as "inadmissible" in Figure 2 and are not connected to any neighbor. Here, the allowable connections between the graph nodes are limited to the subsequent configuration states L_task^(k,i) → L_task^(k',i+1), and the edge weights correspond to the minimum time required to move between the corresponding configuration states, restricted by the maximum velocities and accelerations of the robot and the positioner.
Using the discrete search space above, the considered problem is transformed into a classic shortest-path search, and the desired solution can be represented as the sequence {(k_1,1)} → {(k_2,2)} → ... → {(k_n,n)}, where the objective function (robot processing time) can be presented as the sum of the edge weights

dist( L_task^(k_i,i), L_task^(k_{i+1},i+1) ) = max_j ( |q_j^(k_{i+1},i+1) − q_j^(k_i,i)| / q̇_j^max ),   j = 0, 1, ..., 6

It should be mentioned that the above expression takes into account the velocity constraints automatically, while the acceleration constraints should be verified by means of the following formula:

| 2·( (q_j^(k_{i+1},i+1) − q_j^(k_i,i))/Δt_i − (q_j^(k_i,i) − q_j^(k_{i-1},i-1))/Δt_{i-1} ) / (Δt_i + Δt_{i-1}) | ≤ q̈_j^max      (5)

where Δt_i = dist( L_task^(k_i,i), L_task^(k_{i+1},i+1) ) and Δt_{i-1} = dist( L_task^(k_{i-1},i-1), L_task^(k_i,i) ).
By discretizing the search space, the original problem is converted into a combinatorial one, which can be solved using conventional graph-search techniques. However, such a straightforward approach is extremely time-consuming and can hardly be accepted for industrial applications. For example, it takes over 20 hours to find a desired solution in a relatively simple case (two-axis robot and one-axis positioner), where the search space is built for 100 task points and a discretization step of 1° (processor Intel® i5 2.67 GHz) [START_REF] Gao | Manipulator Motion Planning in Redundant Robotic System for Fiber Placement Process[END_REF]. Besides, known methods are not able to take into account the acceleration constraints that are necessary here. For these reasons, a problem-oriented algorithm taking into account the particularities of the graph-based search space is proposed in this paper.
The developed algorithm is based on the dynamic programming principle, aiming at finding the shortest path from the first-column nodes L_task^(k_1,1) to the current node L_task^(k,i). The length of this shortest path is denoted as d_{k,i}. Then, the shortest paths to the nodes of the next column L_task^(k,i+1) can be obtained by combining the optimal solutions for the previous column L_task^(k',i) with the distances between the task locations of indices i and i+1:

d_{k,i+1} = min over k' { d_{k',i} + dist( L_task^(k',i), L_task^(k,i+1) ) }      (4)

This formula is applied sequentially from the second column of the task graph to the last one, and the desired optimal path is obtained by selecting the node of minimum length in the final column and backtracking the recorded indices. The proposed algorithm is rather time-efficient: it takes about 30 seconds [START_REF] Gao | Manipulator Motion Planning in Redundant Robotic System for Fiber Placement Process[END_REF] to find the optimal solution for the above-mentioned example.
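A minimal sketch of this dynamic programming search is given below. It assumes that the admissible configuration states have already been generated column by column (as described above), uses the max-over-joints travel-time metric as the edge weight, and omits the heuristic handling of the acceleration constraints; the function names are illustrative and do not correspond to the authors' implementation.

```python
import numpy as np

def edge_time(qa, qb, qdot_max):
    """Minimum travel time between two configuration states, limited by the
    joint velocity bounds (coordinated joint-space motion)."""
    return float(np.max(np.abs(np.asarray(qb) - np.asarray(qa)) / np.asarray(qdot_max)))

def optimal_path(columns, qdot_max):
    """Dynamic programming over the task graph. columns[i] is the list of
    admissible configuration states (joint coordinate vectors) for task point i."""
    n = len(columns)
    d = [np.zeros(len(columns[0]))]              # d[i][k]: shortest time to node (k, i)
    back = [None]
    for i in range(1, n):
        di = np.full(len(columns[i]), np.inf)
        bi = np.zeros(len(columns[i]), dtype=int)
        for k, qk in enumerate(columns[i]):
            for kp, qkp in enumerate(columns[i - 1]):
                t = d[i - 1][kp] + edge_time(qkp, qk, qdot_max)
                if t < di[k]:
                    di[k], bi[k] = t, kp
        d.append(di)
        back.append(bi)
    # backtrack from the best node of the final column
    k = int(np.argmin(d[-1]))
    path = [k]
    for i in range(n - 1, 0, -1):
        k = int(back[i][k])
        path.append(k)
    return list(reversed(path)), float(np.min(d[-1]))
```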
Tuning of trajectories generation algorithm
For the proposed methodology, the discretization step for the redundant variable is a key parameter, which has a big influence on the algorithm efficiency. An unsuitable discretization step may lead either to a bad solution or high computational time. For this reason, a new strategy for the determination of the discretization step is proposed thereafter to tune the optimization algorithm.
Influence of the discretization step
Let us consider a simple case study that deals with a three-axis planar robotic system executing a straight-line task (see Figure 3). For this problem, the fiber placement path is uniformly discretized into 40 segments. The relevant optimization results are presented in Table 1. It is clear that here (as well as in other cases) a smaller discretization step should provide better results, but there exists a reasonable lower bound related to an acceptable computing time.

Table 1. Robot processing time and computing time for different discretization steps Δq_P.
Δq_P: 2° | 1° | 0.75° | 0.5° | 0.25° | 0.1°
Robot processing time (without acceleration constraints): T=1.90s (38s comp.) | T=1.84s (2m comp.) | T=1.54s (4m comp.) | T=1.30s (9m comp.) | T=1.29s (47m comp.) | T=1.29s (5h comp.)
Robot processing time (with acceleration constraints): T=1.90s (67s comp.) | T=2.11s (4m comp.) | T=1.60s (8m comp.) | T=1.30s (17m comp.) | T=1.29s (1.2h comp.) | T=1.29s (9h comp.)

To estimate a reasonable discretization step for the considered fiber placement problem, let us analyze Table 1 in more detail. From Table 1, the discretization steps Δq_P ∈ {2°, 1°, 0.75°} are not acceptable because they lead to a robot processing time 20-50% higher than the optimal one. Moreover, in the case of Δq_P = 2°, the optimization algorithm generates a bad solution that does not take advantage of the positioner motion capabilities (the positioner is locked: q_P = const, q_R = var). The reason for this phenomenon is that the discretization step is so large that the positioner step-time is always higher than the robot moving time between subsequent task points.
In addition, it is noteworthy that in the case of with acceleration constraints, the discretization step reduction from 2° to 1° leads to even worse solution, where the robot processing time is about 10% higher. This phenomenon can be explained by heuristic integration of the acceleration constraints into the optimization algorithm, which may slightly violate the dynamic programming principle. Nevertheless, further reduction of P q allows to restore the expected algorithm behavior. Hence, to apply the developed technique in practice, users need some simple "rules of thumb" that allows setting an initial value of P q . Then, the optimization algorithm can be applied several times (sequentially decreasing P q ) until the objective function convergence. To reduce computing time in the case of small P q , some local optimization techniques have been also developed by the authors.
Initial tuning of the optimization algorithm
To find a reasonable initial value of the discretization step, let us investigate in detail the robot and positioner motions between two sequential task locations. It is clear that, for smooth positioner motions, the corresponding increment of the coordinate q_P should include at least one discretization step Δq_P. To find the maximum acceptable value of Δq_P, let us denote by Δφ the increment of q_P for the movement between two adjacent task locations (P_i, P_{i+1}) and by Δs the length of the corresponding path segment. It is clear that Δs can also be treated as the arc length between P_i and P_{i+1} around the positioner joint axis. Let us assume that the distance from a path point to the rotational axis is r, and that r_max corresponds to the furthest task location with respect to the positioner axis. To avoid undesired intermittent positioner rotations, the constraint Δφ ≥ Δs / r_max should be verified, since the positioner velocity is usually smaller than the velocity of the robot. The latter requirement can be rewritten in terms of the robot/positioner motion times as Δs / v_R^max ≥ Δq_P / q̇_P^max, which ensures that the number of positioner steps per path segment is not less than one. Hence, the initial value of Δq_P can be set to Δq_P^0 = Δs·q̇_P^max / v_R^max, the largest step that still provides acceptable motions of the robot and the positioner. For instance, for the previous case study, this expression gives a discretization step of about 0.5°, which allows generating trajectories that are very close to the optimal ones, namely, the robot processing time is only 1% higher than the minimum value.
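Assuming the time-based bound reconstructed above, this rule of thumb reduces to a one-line helper (the name initial_step and the argument names are illustrative only):

```python
def initial_step(ds, qdotP_max, vR_max):
    """Rule-of-thumb initial discretization step (rad): the positioner should be
    able to perform at least one step while the tool travels one path segment."""
    return ds * qdotP_max / vR_max

# example with illustrative values: segment length 5 mm, positioner limit 1 rad/s,
# tool speed limit 0.5 m/s  ->  0.01 rad (about 0.57 deg)
step0 = initial_step(ds=0.005, qdotP_max=1.0, vR_max=0.5)
```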
Conclusions
This paper contributes to optimization of robot/positioner motions in redundant robotic systems for the fiber placement process. It proposes a new strategy for the optimization algorithm tuning. The developed technique converts the continuous optimization into a combinatorial one where dynamic programming is applied to find time-optimal motions. The proposed strategy of the optimization algorithm tuning allows essentially decreasing the computing time and generating desired motions satisfying industrial constraints. Feasibilities and advantages of the presented technique are confirmed by a case study. Future research will focus on application of those results in real-life industrial environments.
Figure 1. Typical robotic fiber placement system (6-axis robot and one-axis positioner).
Figure 2. Graph-based representation of the discrete search space.
Figure 3. Three-axis planar robotic system and straight-line task.
Acknowledgments
This work has been supported by the China Scholarship Council (Grant N° 201404490018). The authors also acknowledge CETIM for the motivation of this research work. |
01757864 | en | [sdv.gen.gh, sdv.aen, sdv] | 2024/03/05 22:32:10 | 2018 | https://amu.hal.science/hal-01757864/file/Desmarchelier%20et%20al%20final%20version.pdf
Charles Desmarchelier, Véronique Rosilio (veronique.rosilio@u-psud.fr), David Chapron, Ali Makky, Damien P. Prévéraud, Estelle Devillard, Véronique Legrand-Defretin, Patrick Borel
Molecular interactions governing the incorporation of cholecalciferol and retinyl-palmitate in mixed taurocholate-lipid micelles
Keywords: bioaccessibility, surface pressure, bile salt, compression isotherm, lipid monolayer, vitamin A, vitamin D, phospholipid
Introduction
Retinyl esters and cholecalciferol (D 3 ) (Figure 1) are the two main fat-soluble vitamins found in foods of animal origin. There is a renewed interest in deciphering their absorption mechanisms because vitamin A and D deficiency is a public health concern in numerous countries, and it is thus of relevance to identify factors limiting their absorption to tackle this global issue. The fate of these vitamins in the human upper gastrointestinal tract during digestion is assumed to follow that of dietary lipids [START_REF] Borel | Vitamin D bioavailability: state of the art[END_REF]. This includes emulsification, solubilization in mixed micelles, diffusion across the unstirred water layer and uptake by the enterocyte via passive diffusion or apical membrane proteins [START_REF] Reboul | Proteins involved in uptake, intracellular transport and basolateral secretion of fat-soluble vitamins and carotenoids by mammalian enterocytes[END_REF].
Briefly, following consumption of vitamin-rich food sources, the food matrix starts to undergo degradation in the acidic environment of the stomach, which contains several enzymes, leading to a partial release of these lipophilic molecules and to their transfer to the lipid phase of the meal. Upon reaching the duodenum, the food matrix is further degraded by pancreatic secretions, promoting additional release from the food matrix, and both vitamins then transfer from oil-in-water emulsions to mixed micelles (and possibly other structures, such as vesicles, although not demonstrated yet). As it is assumed that only free retinol can be taken up by enterocytes, retinyl esters are hydrolyzed by pancreatic enzymes, namely pancreatic lipase, pancreatic lipase-related protein 2 and cholesterol ester hydrolase [START_REF] Desmarchelier | The distribution and relative hydrolysis of tocopheryl acetate in the different matrices coexisting in the lumen of the small intestine during digestion could explain its low bioavailability[END_REF].
Bioaccessible vitamins are then taken up by enterocytes via simple passive diffusion or facilitated diffusion mediated by apical membrane proteins (Desmarchelier et al. 2017). The apical membrane protein(s) involved in retinol uptake by enterocytes is(are) yet to be identified but in the case of D 3 , three proteins have been shown to facilitate its uptake: NPC1L1 (NPC1 like intracellular cholesterol transporter 1), SR-BI (scavenger receptor class B member 1) and CD36 (Cluster of differentiation 36) [START_REF] Reboul | Proteins involved in uptake, intracellular transport and basolateral secretion of fat-soluble vitamins and carotenoids by mammalian enterocytes[END_REF]. Both vitamins then transfer across the enterocyte towards the basolateral side. The transfer of vitamin A is mediated, at least partly, by the cellular retinol-binding protein, type II (CRBPII), while that of vitamin D is carried out by unknown mechanisms. Additionally, a fraction of retinol is re-esterified by several enzymes (Borel & Desmarchelier 2017). Vitamin A and D are then incorporated in chylomicrons in the Golgi apparatus before secretion in the lymph.
The solubilization of vitamins A and D in mixed micelles, also called micellarization or micellization, is considered as a key step for their bioavailability because it is assumed that the non-negligible fraction of fat-soluble vitamin that is not micellarized is not absorbed [START_REF] Desmarchelier | The distribution and relative hydrolysis of tocopheryl acetate in the different matrices coexisting in the lumen of the small intestine during digestion could explain its low bioavailability[END_REF]. Mixed micelles are mainly made of a mixture of bile salts, phospholipids and lysophospholipids, cholesterol, fatty acids and monoglycerides [START_REF] Hernell | Physical-chemical behavior of dietary and biliary lipids during intestinal digestion and absorption. 2. Phase analysis and aggregation states of luminal lipids during duodenal fat digestion in healthy adult human beings[END_REF]). These compounds may form various self-assembled structures, e.g., spherical, cylindrical or disk-shaped micelles [START_REF] Walter | Intermediate structures in the cholate-phosphatidylcholine vesicle-micelle transition[END_REF][START_REF] Leng | Kinetics of the micelle-to-vesicle transition ; aquous lecithin-bile salt mixtures[END_REF] or vesicles, depending on their concentration, the bile salt/phospholipid ratio [START_REF] Walter | Intermediate structures in the cholate-phosphatidylcholine vesicle-micelle transition[END_REF], the phospholipid concentration, but also the ionic strength, pH and temperature of the aqueous medium (Madency & Egelhaaf 2010;[START_REF] Salentinig | Self-assembled structures and pKa value of oleic acid in systems of biological relevance[END_REF][START_REF] Cheng | Mixtures of lecithin and bile salt can form highly viscous wormlike micellar solutions in water[END_REF]. Fat-soluble micronutrients display large variations with regards to their solubility in mixed micelles [START_REF] Sy | Effects of physicochemical properties of carotenoids on their bioaccessibility, intestinal cell uptake, and blood and tissue concentrations[END_REF][START_REF] Gleize | Form of phytosterols and food matrix in which they are incorporated modulate their incorporation into mixed micelles and impact cholesterol micellarization[END_REF] and several factors are assumed to account for these differences (Desmarchelier & Borel 2017, for review).
The mixed micelle lipid composition has been shown to significantly affect vitamin absorption. For example, the substitution of lysophospholipids by phospholipids diminished the lymphatic absorption of vitamin E in rats [START_REF] Koo | Phosphatidylcholine inhibits and lysophosphatidylcholine enhances the lymphatic absorption of alpha-tocopherol in adult rats[END_REF]. In rat perfused intestine, the addition of fatty acids of varying chain length and saturation degree, i.e. butyric, octanoic, oleic and linoleic acid, resulted in a decrease in the rate of D 3 absorption [START_REF] Hollander | Vitamin D-3 intestinal absorption in vivo: influence of fatty acids, bile salts, and perfusate pH on absorption[END_REF]. The effect was more pronounced in the ileal part of the small intestine following the addition of oleic and linoleic acid. It was suggested that unlike short-and medium-chain fatty acids, which are not incorporated into micelles, long-chain fatty acids hinder vitamin D absorption by causing enlargement of micelle size, thereby slowing their diffusion towards the enterocyte.
Moreover, the possibility that D 3 could form self-aggregates in water [START_REF] Meredith | The Supramolecular Structure of Vitamin-D3 in Water[END_REF], although not clearly demonstrated, has led to question the need of mixed micelles for its solubilization in the aqueous environment of the intestinal tract lumen [START_REF] Rautureau | Aqueous solubilisation of vitamin D3 in normal man[END_REF][START_REF] Maislos | Bile salt deficiency and the absorption of vitamin D metabolites. In vivo study in the rat[END_REF].
This study was designed to compare the relative solubility of D 3 and RP in the aqueous phase rich in mixed micelles that exists in the upper intestinal lumen during digestion, and to dissect, by surface tension and surface pressure measurements, the molecular interactions existing between these vitamins and the mixed micelle components that explain the different solubility of D 3 and RP in mixed micelles.
Materials and methods
Chemicals
2-oleoyl-1-palmitoyl-sn-glycero-3-phosphocholine (POPC) (phosphatidylcholine, ≥99%; Mw 760.08 g/mol), 1-palmitoyl-sn-glycero-3-phosphocholine (Lyso-PC) (lysophosphatidylcholine, ≥99%; Mw 495.63 g/mol), free cholesterol (≥99%; Mw 386.65 g/mol), oleic acid (reagent grade, ≥99%; Mw 282.46 g/mol), 1-monooleoyl-rac-glycerol (monoolein, C18:1, cis-9, Mw 356.54 g/mol), taurocholic acid sodium salt hydrate (NaTC) (≥95%; Mw 537.68 g/mol), cholecalciferol (>98%; Mw 384.64 g/mol; melting point 84.5°C; solubility in water: 10⁻⁴–10⁻⁵ mg/mL; logP 7.5) and retinyl palmitate (>93.5%; Mw 524.86 g/mol; melting point 28.5°C; logP 13.6) were purchased from Sigma-Aldrich (Saint-Quentin-Fallavier, France). Chloroform and methanol (99% pure) were analytical grade reagents from Merck (Germany). Ethanol (99.9%), n-hexane, chloroform, acetonitrile, dichloromethane and methanol were HPLC grade reagents from Carlo Erba Reagent (Peypin, France). Ultrapure water was produced by a Milli-Q ® Direct 8 Water Purification System (Millipore, Molsheim, France). Prior to all surface tension, and surface pressure experiments, all glassware was soaked for an hour in a freshly prepared hot TFD4 (Franklab, Guyancourt, France) detergent solution (15% v/v), and then thoroughly rinsed with ultrapure water. Physico-chemical properties of D 3 and RP were retrieved from PubChem (https://pubchem.ncbi.nlm.nih.gov/).
Micelle formation
The micellar mixture contained 0.3 mM monoolein, 0.5 mM oleic acid, 0.04 mM POPC, 0.1 mM cholesterol, 0.16 mM Lyso-PC, and 5 mM NaTC [START_REF] Reboul | Lutein transport by Caco-2 TC-7 cells occurs partly by a facilitated process involving the scavenger receptor class B type I (SR-BI)[END_REF]. Total component concentration was thus 6.1 mM, with NaTC amounting to 82 mol%. Two vitamins were studied: crystalline D 3 and RP.
Mixed micelles were formed according to the protocol described by [START_REF] Desmarchelier | The distribution and relative hydrolysis of tocopheryl acetate in the different matrices coexisting in the lumen of the small intestine during digestion could explain its low bioavailability[END_REF]. Lipid digestion products (LDP) (monoolein, oleic acid, POPC, cholesterol and Lyso-PC, total concentration 1.1 mM) dissolved in chloroform/methanol (2:1, v/v), and D 3 or RP dissolved in ethanol were transferred to a glass tube and the solvent mixture was carefully evaporated under nitrogen. The dried residue was dispersed in Tris buffer (Tris-HCl 1mM, CaCl 2 5mM, NaCl 100 mM, pH 6.0) containing 5 mM taurocholate, and incubated at 37 °C for 30 min. The solution was then vigorously mixed by sonication at 25 W (Branson 250W sonifier; Danbury, CT, U.S.A.) for 2 min, and incubated at 37 °C for 1 hour. To determine the amount of vitamin solubilized in structures allowing their subsequent absorption by enterocytes (bioaccessible fraction), i.e. micelles and possibly small lipid vesicles, whose size is smaller than that of mucus pores [START_REF] Cone | Barrier properties of mucus[END_REF], the solutions were filtered through cellulose ester membranes (0.22 µm) (Millipore), according to [START_REF] Tyssandier | Processing of vegetable-borne carotenoids in the human stomach and duodenum[END_REF]. The resulting optically clear solution was stored at -20 °C until vitamin extraction and HPLC analysis. D 3 and RP concentrations were measured by HPLC before and after filtration. For surface tension measurements and cryoTEM experiments, the mixed micelle systems were not filtered.
Self-micellarization of D 3
Molecular assemblies of D 3 were prepared in Tris buffer using the same protocol as for mixed micelles. D 3 was dissolved into the solvent mixture and after evaporation, the dry film was hydrated for 30 min at 37°C with taurocholate-free buffer. The suspension was then sonicated. All D 3 concentrations reported in the surface tension measurements were obtained from independent micellarization experiments -not from the dilution of one concentrated D 3 solution.
Surface tension measurements
Mixed micelle solutions were prepared as described above, at concentrations ranging from 5.5 nM to 55 mM, with the same proportion of components as previously mentioned. The surface tension of LDP mixtures hydrated with a taurocholate-free buffer, and that of pure taurocholate solutions, were also measured at various concentrations. The solutions were poured into glass cuvettes. The aqueous surface was cleaned by suction, and the solutions were left at rest under saturated vapor pressure for 24 hours before measurements. For penetration studies, glass cuvettes with a side arm were used, allowing injection of NaTC beneath a spread LDP or vitamin monolayer. Surface tension measurements were performed by the Wilhelmy plate method, using a thermostated automatic digital tensiometer (K10 Krüss, Germany). The surface tension γ was recorded continuously as a function of time until equilibrium was reached. All experiments were performed at 25 ±1°C under saturated vapor pressure to maintain a constant level of liquid. The reported values are means of three measurements. The experimental uncertainty was estimated to be 0.2 mN/m. Surface pressure (π) values were deduced from the relationship π = γ_0 − γ, with γ_0 the surface tension of the subphase and γ the surface tension in the presence of a film.
Surface pressure measurements
Surface pressure-area π-A isotherms of the LDP and LDP-vitamin mixtures were obtained using a thermostated Langmuir film trough (775.75 cm 2 , Biolin Scientific, Finland) enclosed into a Plexiglas box (Essaid et al. 2016). Solutions of lipids in a chloroform/methanol (9:1, v/v) mixture were spread onto a clean buffer subphase. Monolayers were left at rest for 20 minutes to allow complete evaporation of the solvents. They were then compressed at low speed (6.5 Å 2 .molecule -1 .min -1 ) to minimize the occurrence of metastable phases. The experimental uncertainty was estimated to be 0.1 mN/m. All experiments were run at 25 ±1°C. Mean isotherms were deduced from at least three compression isotherms. The surface compressional moduli K of monolayers were calculated using Eq. 1:
K = −A·(∂π/∂A)_T      (Eq. 1)

Excess free energies of mixing were calculated according to Eq. 2:

ΔG_EXC = ∫_0^π [ A_12 − (X_L·A_L + X_VIT·A_VIT) ] dπ      (Eq. 2)

with A_12 the mean molecular area in the mixed monolayer, X_L and A_L the molar fraction and molecular area of the lipid molecules, and X_VIT and A_VIT the molar fraction and molecular area of the vitamin molecules, respectively [START_REF] Ambike | Interaction of self-assembled squalenoyl gemcitabine nanoparticles with phospholipid-cholesterol monolayers mimicking a biomembrane[END_REF].
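For illustration, Eq. 1 and Eq. 2 can be evaluated numerically from digitized π-A isotherms, e.g. as in the following sketch (the function names are hypothetical, and the pure and mixed isotherms are assumed to be interpolated on a common surface-pressure grid beforehand):

```python
import numpy as np

def compressional_modulus(area, pi):
    """Eq. 1: K(pi) = -A * dpi/dA along a measured pi-A isotherm
    (area in A^2/molecule, pi in mN/m)."""
    dpi_dA = np.gradient(pi, area)
    return -area * dpi_dA

def excess_free_energy(pi_grid, A_mix, A_lip, A_vit, x_lip, x_vit):
    """Eq. 2: Delta G_EXC = integral_0^pi [A12 - (xL*AL + xVIT*AVIT)] dpi,
    evaluated with the trapezoidal rule (result per molecule; convert units and
    multiply by Avogadro's number to obtain molar values)."""
    ideal = x_lip * A_lip + x_vit * A_vit
    return np.trapz(A_mix - ideal, pi_grid)
```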
Cryo-TEM analysis
A drop (5 µL) of LDP-NaTC micellar solution (15 mM), LDP-NaTC-D 3 (3:1 molar ratio) or pure D 3 "micellar suspension" (5 mM, theoretical concentration) was deposited onto a perforated carbon-coated, copper grid (TedPella, Inc); the excess of liquid was blotted with a filter paper. The grid was immediately plunged into a liquid ethane bath cooled with liquid nitrogen (−180 °C) and then mounted on a cryo holder [START_REF] Da Cunha | Overview of chemical imaging methods to address biological questions[END_REF]. Transmission electron microscopy (TEM) measurements were performed just after grid preparation using a JEOL 2200FS (JEOL USA, Inc., Peabody, MA, U.S.A.) working under an acceleration voltage of 200 kV (Institut Curie). Electron micrographs were recorded by a CCD camera (Gatan, Evry, France).
2.7. Vitamin analysis
2.7.1. Vitamin extraction
D 3 and RP were extracted from 500 µL aqueous samples using the following method [START_REF] Desmarchelier | The distribution and relative hydrolysis of tocopheryl acetate in the different matrices coexisting in the lumen of the small intestine during digestion could explain its low bioavailability[END_REF]: retinyl acetate was used as an internal standard and was added to the samples in 500 µL ethanol. The mixture was extracted twice with two volumes of hexane.
The hexane phases obtained after centrifugation (1200 × g, 10 min, 10°C) were evaporated to dryness under nitrogen, and the dried extract was dissolved in 200 µL of acetonitrile/dichloromethane/methanol (70:20:10, v/v/v). A volume of 150 µL was used for HPLC analysis. Extraction efficiency was between 75 and 100%. Sample whose extraction efficiency was below 75% were re-extracted or taken out from the analysis.
2.7.2. Vitamin HPLC analysis.
D 3 , RP and retinyl acetate were separated using a 250 × 4.6-mm RP C18, 5-µm Zorbax
Eclipse XDB column (Agilent Technologies, Les Ulis, France) and a guard column. The mobile phase was a mixture of acetonitrile/dichloromethane/methanol (70:20:10, v/v/v). Flow rate was 1.8 mL/min and the column was kept at a constant temperature (35 °C). The HPLC system comprised a Dionex separation module (P680 HPLC pump and ASI-100 automated sample injector, Dionex, Aix-en-Provence, France). D 3 was detected at 265 nm while retinyl esters were detected at 325 nm and were identified by retention time compared with pure (>95%) standards. Quantification was performed using Chromeleon software (version 6.50, SP4 Build 1000) comparing the peak area with standard reference curves. All solvents used were HPLC grade.
Statistical analysis
Results are expressed as means ± standard deviation. Statistical analyses were performed using Statview software version 5.0 (SAS Institute, Cary, NC, U.S.A.). Means were compared by the non-parametric Kruskal-Wallis test, followed by Mann-Whitney U test as a post hoc test for pairwise comparisons, when the mean difference using the Kruskal-Wallis test was found to be significant (P<0.05). For all tests, the bilateral alpha risk was α = 0.05.
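As an illustration of this analysis pipeline, the following sketch reproduces the test sequence described above with scipy.stats; the function name compare_groups and the data layout (one array of measurements per group) are assumptions, not part of the original analysis scripts.

```python
from scipy import stats

def compare_groups(groups, alpha=0.05):
    """Kruskal-Wallis test across groups followed, if significant, by pairwise
    Mann-Whitney U tests (bilateral), as described in the statistical analysis."""
    h, p = stats.kruskal(*groups)
    pairwise = {}
    if p < alpha:
        for i in range(len(groups)):
            for j in range(i + 1, len(groups)):
                pairwise[(i, j)] = stats.mannwhitneyu(groups[i], groups[j],
                                                      alternative='two-sided').pvalue
    return p, pairwise
```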
Results
Solubilization of D 3 and RP in aqueous solutions rich in mixed micelles
D 3 and RP at various concentrations were mixed with micelle components (LDP-NaTC).
D 3 and RP concentrations were measured by HPLC before and after filtration through 0.22 µm membranes, so that only structures with a diameter smaller than 0.22 µm were analyzed after filtration (Figure 2). D 3 and RP solubilization in the mixed micelle-rich solution followed different curves: D 3 solubilization was linear (R² = 0.98, regression slope = 0.71) and significantly higher than that of RP, which reached a plateau at a maximum concentration of around 125 µM.
The morphology of the LDP-NaTC and LDP-NaTC-D 3 samples before filtration was analyzed by cryo-TEM. In Figure 3, micelles are too small to be distinguished from ice. At high LDP-NaTC concentration (15 mM), small and large unilamellar vesicles (a), nano-fibers (b) and aggregates (c) are observed (Figure 3A). Both nano-fibers and aggregates seem to emerge from the vesicles. In the presence of D 3 at low micelle and D 3 concentrations (5 mM LDP-NaTC + 1.7 mM D 3 ) (Figures 3B and 3C), the morphology of the nano-assemblies is greatly modified.
Vesicles are smaller and deformed, with irregular and more angular shapes (a'). They are also more abundant. A difference in contrast in the bilayers is observed, which would account for leaflets with asymmetric composition. Some of them coalesce into larger structures, extending along the walls of the grid (d). Fragments and sheets are also observed (Figure 3B). They exhibit irregular contours and unidentified membrane organization. The bilayer structure is not clearly observable. New organized assemblies appear, such as disk-like nano-assemblies (e) and emulsion-like droplets (f). At higher concentration (15 mM LDP-NaTC + 5 mM D 3 , Figure 3D), the emulsion-like droplets and the vesicles with unidentified membrane structure (g) are enlarged. They coexist with small deformed vesicles.
Compression properties of LDP components, the LDP mixture and the vitamins
To better understand the mechanism of D 3 and RP interaction with LDP-NaTC micelles, we focused on the interfacial behavior of the various components of the system. We first determined the interfacial behavior of the LDP components and their mixture in proportions similar to those in the micellar solution, by surface pressure measurements. The π-A isotherms are plotted in Figure 4A. Based on the calculated compressibility modulus values, the lipid monolayers can be classified into poorly organized (K < 100 mN/m, for lyso-PC, monoolein, and oleic acid), liquid condensed (100 < K < 250 mN/m, for POPC and the LDP mixture) and highly rigid monolayers (K > 250 mN/m, for cholesterol) [START_REF] Davies | Interfacial phenomena 2nd ed[END_REF].
The interfacial behavior of the two studied vitamins is illustrated in Figure 4B. D 3 shows a similar compression profile to that of the LDP mixture, with comparable surface area and surface pressure at collapse (A c = 35 Å 2 , π c = 38 mN/m) but a much higher rigidity, as inferred from the comparison of their maximal K values (187.4 mN/m and 115.4 mN/m for D 3 and LDP, respectively). RP exhibits much larger surface areas and lower surface pressures than D 3 . The collapse of its monolayer is not clearly identified from the isotherms, and is estimated to occur at π c = 16.2 mN/m (A c = 56.0 Å 2 ), as deduced from the slope change in the π-A plot.
Self-assembling properties of D 3 in an aqueous solution
Since D 3 showed an interfacial behavior similar to that of the lipid mixture, and since it could be solubilized at very high concentrations in an aqueous phase rich in mixed micelles (as shown in Figure 2), its self-assembling properties were more specifically investigated. Dried D 3 films were hydrated with the sodium taurocholate-free buffer. Surface tension measurements at various D 3 concentrations revealed that the vitamin could adsorb at the air/solution interface and significantly lower the surface tension of the buffer to γ_cmc = 30.6 mN/m. A critical micellar concentration (cmc = 0.45 µM) could be deduced from the γ-log C relationships and HPLC assays. Concentrated D 3 samples were analyzed by cryo-TEM (Figures 3E and 3F).
Different D 3 self-assemblies were observed, including circular nano-assemblies (h) coexisting with nano-fibers (i), and large aggregates (j) with unidentified structure. In-depth analysis of the circular nano-assemblies allowed us to conclude that they were disk-like nano-assemblies rather than nanoparticles.
Interaction of LDP with NaTC
To better understand how the two studied vitamins interacted with the mixed micelles, we compared the interfacial behaviors of the pure NaTC solutions, LDP mixtures hydrated by NaTC-free buffer, and LDP mixtures hydrated by the NaTC buffered solutions (full mixed micelle composition). The LDP mixture composition was maintained constant, while its concentration in the aqueous medium was increased. The concentration of NaTC in the aqueous phase was also calculated so that the relative proportion of the various components (LDP and NaTC) remained unchanged in all experiments. From the results plotted in Figure 5A, the critical micellar concentration (cmc) of the LDP-NaTC mixture was 0.122 mM (γ cmc = 29.0 mN/m), a concentration 50.8 times lower than the concentration used for vitamin solubilization.
The cmc values for the LDP mixture and the pure NaTC solutions were 0.025 mM (γ cmc = 24.0 mN/m), and 1.5 mM (γ cmc = 45.3 mN/m), respectively.
Experiments modeling the insertion of NaTC into the LDP film during rehydration by the buffer suggested that only few NaTC molecules could penetrate in the condensed LDP film (initial surface pressure: π i = 28 mN/m) and that the LDP-NaTC mixed film was not stable, as
shown by the decrease in surface pressure over time (Figure 5B).
Interaction of D 3 and RP with NaTC
The surface tension of the mixed NaTC-LDP micelle solutions was only barely affected by the addition of 0.1 or 1 mM D 3 or RP: the surface tension values increased by no more than 2.8 mN/m. Conversely, both vitamins strongly affected the interfacial behavior of the NaTC micellar solution, as inferred from the significant surface tension lowering observed (-7.0 and -8.1 mN/m for RP and D 3 , respectively).
Interaction of D 3 and RP with lipid digestion products
The interaction between the vitamins and LDP molecules following their insertion into LDP micelles was modeled by compression of LDP/D 3 and LDP/RP mixtures at a 7:3 molar ratio. This ratio was chosen arbitrarily, to model a system in which LDP was in excess. The π-A isotherms are presented in Figures 6A and 6B. They show that both vitamins modified the isotherm profile of the lipid mixture, however not in the same way. In the LDP/D 3 mixture, the surface pressure and molecular area at collapse were controlled by LDP. For LDP/RP, despite the high content in LDP, the interfacial behavior was clearly controlled by RP. From the isotherms in Figures 6A and 6B, compressibility moduli and excess free energies of mixing were calculated and compared (Figures 6C and 6D). D 3 increased the rigidity of LDP monolayers, whereas RP disorganized them. The negative ∆G_EXC values calculated for the LDP-D 3 monolayers at all surface pressures account for the good mixing properties of D 3 and the lipids in all molecular packing conditions. Conversely, for RP, the positive ∆G_EXC values, increasing with the surface pressure, demonstrate that its interaction with the lipids was unfavorable.
Discussion
The objective of this study was to compare the solubility of RP and D3 in aqueous solutions containing mixed micelles, and to decipher the molecular interactions that explain their different extents of solubilization. Our first experiment revealed that the two vitamins exhibit very different solubilities in an aqueous medium rich in mixed micelles. Furthermore, the solubility of D3 was so high that we did not observe any limit, even when D3 was introduced at a concentration > 1 mM in the aqueous medium. To our knowledge, this is the first time that such a difference is reported. Cryo-TEM pictures showed that D3 dramatically altered the organization of the various components of the mixed micelles. The spherical vesicles were deformed, with angular shapes. The nano-fibers initiating from the vesicles were no longer observed. Large vesicles and sheets of irregular shape, disk-like nano-assemblies and emulsion-like droplets appeared only in LDP-NaTC-D3 mixtures. The existence of so many different assemblies suggests a distinct interaction of D3 with the various components of the mixed micelles, and a reorganization of these components. D3 could insert in the bilayer of vesicles and deform them, but also form emulsion-like droplets with fatty acids and monoglyceride. It is noteworthy that these emulsion-like droplets were observed neither in pure D3 samples nor in mixed micelles. Since previous studies have shown that both bile salts and some mixed micelle lipids, e.g. fatty acids and phospholipids, can modulate the solubility of fat-soluble vitamins in these vehicles [START_REF] Yang | Vitamin E and vitamin E acetate solubilization in mixed micelles: physicochemical basis of bioaccessibility[END_REF], we decided to study the interactions of these two vitamins with either bile salts or micelle lipids to assess the specific role of each component on vitamin solubility in mixed micelles.
The characteristics of pure POPC, Lyso-PC, monoolein, and cholesterol isotherms were in agreement with values published in the literature [START_REF] Pezron | Monoglyceride Surface-Films -Stability and Interlayer Interactions[END_REF]Flasinsky et al. 2014;[START_REF] Huynh | Structural properties of POPC monolayers under lateral compression: computer simulations analysis[END_REF]. For oleic acid, the surface pressure at collapse was higher (π c = 37 mN/m) and the corresponding molecular area (A c = 26 Å 2 ) smaller than those previously published [START_REF] Tomoaia-Cotisel | Insoluble mixed monolayers[END_REF], likely due to the pH of the buffer solution (pH 6) and the presence of calcium.
The interfacial properties of D 3 were close to those deduced from the isotherm published by [START_REF] Meredith | The Supramolecular Structure of Vitamin-D3 in Water[END_REF] for a D 3 monolayer spread from a benzene solution onto a pure water subphase. The molecular areas at collapse are almost identical in the two studies (about 36 Å 2 ), but the surface pressures differ (30 mN/m in Meredith and coworkers' study, and 38 mN/m in ours). Compressibility modulus values show that D 3 molecules form monolayers with higher molecular order than the LDP mixture, which suggests that they might easily insert into LDP domains.
As could be expected from its chemical structure, RP exhibited a completely different interfacial behavior compared to D 3 and the LDP, even to lyso-PC which formed the most expanded monolayers of the series, and displayed the lowest collapse surface pressure. The anomalous isotherm profile of lyso-PC has been attributed to monolayer instability and progressive solubilization of molecules into the aqueous phase [START_REF] Heffner | Thermodynamic and kinetic investigations of the release of oxidized phospholipids from lipid membranes and its effect on vascular integrity[END_REF]. The molecular areas and surface pressures for RP have been compared to those measured by [START_REF] Asai | Formation and stability of the dispersed particles composed of retinyl palmitate and phosphatidylcholine[END_REF] for RP monolayers spread from benzene solutions at 25°C onto a water subphase. Their values are much lower than ours, accounting for even more poorly organized monolayers. The low collapse surface pressure could correspond to molecules partially lying onto the aqueous surface, possibly forming multilayers above 16 mN/m as inferred from the continuous increase in surface pressure above the change in slope of the isotherm. The maximal compressibility modulus confirms the poor monolayer order. The significant differences in RP surface pressure and surface area compared to the LDP mixture might compromise its insertion and stability into LDP domains.
The dogma in nutrition is that fat-soluble vitamins need to be solubilized in bile salt micelles to be transported to the enterocyte and then absorbed. It is also well known that although NaTC primary micelles can be formed at 2-3 mM with a small aggregation number, concentrations as high as 10-12 mM are usually necessary for efficient lipid solubilization in the intestine [START_REF] Baskin | Bile salt-phospholipid aggregation at submicellar concentrations[END_REF]. Due to their chemical structure bile salts have a facial arrangement of polar and non-polar domains (Madency & Egelhaaf 2010). Their selfassembling (dimers, multimers, micelles) is a complex process involving hydrophobic interaction and cooperative hydrogen bonding, highly dependent on the medium conditions, and that is not completely elucidated. The cmc value for sodium taurocholate in the studied buffer was particularly low compared to some of those reported in the literature for NaTC in water or sodium chloride solutions (3-12 mM) [START_REF] Kratohvil | Concentration-dependent aggregation patterns of conjugated bile-salts in aqueous sodiumchloride solutions -a comparison between sodium taurodeoxycholate and sodium taurocholate[END_REF][START_REF] Meyerhoffer | Critical Micelle Concentration Behavior of Sodium Taurocholate in Water[END_REF][START_REF] Madenci | Self-assembly in aqueous bile salt solutions[END_REF]. At concentrations as high as 10-12 mM, NaTC molecules form elongated cylindrical "secondary" micelles [START_REF] Madenci | Self-assembly in aqueous bile salt solutions[END_REF][START_REF] Bottari | Structure and composition of sodium taurocholate micellar aggregates[END_REF]. The cryoTEM analysis did not allow to distinguish micelles from the ice. In our solubilization experiment, the concentration of NaTC did not exceed 5 mM. Nevertheless, the micelles proved to be very efficient with regards to vitamin solubilization.
When bile salts and lipids are simultaneously present in the same environment, they form mixed micelles [START_REF] Hernell | Physical-chemical behavior of dietary and biliary lipids during intestinal digestion and absorption. 2. Phase analysis and aggregation states of luminal lipids during duodenal fat digestion in healthy adult human beings[END_REF]. Bile salts solubilize phospholipid vesicles and transform into cylindrical micelles [START_REF] Cheng | Mixtures of lecithin and bile salt can form highly viscous wormlike micellar solutions in water[END_REF]. [START_REF] Walter | Intermediate structures in the cholate-phosphatidylcholine vesicle-micelle transition[END_REF] suggested that sodium cholate cylindrical micelles evolved from the edge of lecithin bilayer sheets. Most published studies were performed at high phospholipid/bile salt ratio. In our system, the concentration of the phospholipids was very low compared to that of NaTC. We observed however the presence of vesicles, and nano-fiber structures emerging from them. In their cryoTEM analysis, [START_REF] Fatouros | Colloidal structures in media simulating intestinal fed state conditions with and without lypolysis products[END_REF] compared bile salt/phospholipid mixtures to bile salt/phospholipid/fatty acid/monoglyceride ones at concentrations closer to ours. They observed only micelles in bile salt/phospholipid mixtures. However, in the presence of oleic acid and monoolein, vesicles and bilayer sheets were formed. This would account for a reorganization of the lipids and bile salts in the presence of the fatty acid and the monoglyceride.
We therefore decided to study the interactions between bile salts and LDP. The results obtained
show that the surface tension, the effective surface tension lowering concentration, and cmc values were very much influenced by LDP. The almost parallel slopes of the Gibbs adsorption isotherms for pure NaTC and mixed NaTC-LDP suggest that LDP molecules inserted into NaTC domains, rather than the opposite. This was confirmed by penetration studies, which showed that NaTC (0.1 mM) could hardly penetrate into a compact LDP film. So, during lipid hydration, LDP molecules could insert into NaTC domains. The presence of LDP molecules improved NaTC micellarization.

After having determined the interfacial properties of each micelle component and measured the interactions between NaTC and LDP, we assessed the ability of D3 and RP to solubilize in either NaTC or NaTC-LDP micelles. Surface tension values clearly show that both vitamins could insert in between NaTC molecules adsorbed at the interface, and affected the surface tension in the same way. The interfacial behavior of the molecules being representative of their behavior in the bulk, it is reasonable to think that both D3 and RP can be solubilized into pure NaTC micelles. For the mixed NaTC-LDP micelles, the change in surface tension was too limited to draw conclusions, but the solubilization experiments clearly indicated that the two vitamins were not solubilized to the same extent.

Solubilization experiments and the analysis of vitamin-NaTC interaction cannot explain why the LDP-NaTC mixed micelles solubilize D3 better than RP. Therefore, we studied the interfacial behavior of the LDP mixture in the presence of each vitamin, to determine the extent of their interaction with the lipids. The results obtained showed that D3 penetrated into LDP domains and remained in the lipid monolayer throughout compression. At large molecular areas, the π-A isotherm profile of the mixture followed that of the LDP isotherm with a slight condensation due to the presence of D3 molecules. Above 10 mN/m, an enlargement of the molecular area at collapse and a change in the slope of the mixed monolayer were observed.
However, the surface pressure at collapse was not modified, and the shape of the isotherm accounted for the insertion of D 3 molecules into LDP domains. This was confirmed by the surface compressional moduli. D 3 interacted with lipid molecules in such manner that it increased monolayer rigidity (K max = 134.8 mN/m), without changing the general organization of the LDP monolayer. The LDP-D 3 mixed monolayer thus appeared more structured than the LDP one. D 3 behavior resembles that of cholesterol in phospholipid monolayers, however without the condensing effect of the sterol [START_REF] Ambike | Interaction of self-assembled squalenoyl gemcitabine nanoparticles with phospholipid-cholesterol monolayers mimicking a biomembrane[END_REF]. The higher rigidity of LDP monolayer in the presence of D 3 could be related to the cryo-TEM pictures showing the deformed, more angular vesicles formed with LDP-NaTC-D 3 . The angular shape would account for vesicles with rigid bilayers [START_REF] Kuntsche | Cryogenic transmission electron microscopy (cryo-TEM) for studying the morphology of collidal drug delivery systems[END_REF].
For RP, the shape of the isotherms shows evidence that lipid molecules penetrated into RP domains, rather than the opposite. Indeed, the π-A isotherm profile of the LDP-RP monolayer is similar to that of RP alone. The insertion of lipid molecules into RP domains is also attested by the increase in the collapse surface pressure from 16 to 22 mN/m. Partial collapse is confirmed by the decrease in the compressibility modulus above 22 mN/m. Thus, RP led to a disorganization of the LDP mixed monolayer, and when the surface density of the monolayer increased, the vitamin was partially squeezed out of the interface. The calculated ∆G EXC values for both systems suggest that insertion of D3 into LDP domains was controlled by favorable (attractive) interactions, whereas mixing of RP with LDP was limited due to unfavorable (repulsive) interactions, even at low surface pressures. According to [START_REF] Asai | Formation and stability of the dispersed particles composed of retinyl palmitate and phosphatidylcholine[END_REF], RP can be partially solubilized in the bilayer of phospholipids (up to 3 mol%), and the excess is separated from the phospholipids and dispersed as emulsion droplets stabilized by a phospholipid monolayer.
On the whole, the information obtained regarding the interactions of the two vitamins with NaTC and LDP explain why D 3 is more soluble than RP in an aqueous medium rich in mixed micelles. Both vitamins can insert into pure NaTC domains, but only D 3 can also insert into the LDP domains in LDP-enriched NaTC micelles.
Furthermore, the results obtained suggest that this is not the only explanation. Indeed, since it has been suggested that D3 could form cylindrical micelle-like aggregates [START_REF] Meredith | The Supramolecular Structure of Vitamin-D3 in Water[END_REF], we hypothesize that the very high solubility of D3 in the aqueous medium rich in mixed micelles was partly due to the solubilization of a fraction of D3 as self-aggregates. Indeed, we observed that D3, at concentrations higher than 0.45 µM, could self-assemble into various structures including nano-fibers. To our knowledge, no such structures, especially nano-fibers, have been reported for D3 so far. The rod diameter was smaller than 10 nm, much smaller than for the rods formed by lithocholic acid, for example [START_REF] Terech | Self-assembled monodisperse steroid nanotubes in water[END_REF]. They were similar to those observed in highly concentrated LDP-NaTC mixtures, which seemed to form via disorganization of lipid vesicles. The disk-like assemblies and the aggregates with unidentified structure, also observed in concentrated D3 samples, could be related to these nano-fibers.
In our solubilization experiments, which were performed at much higher D 3 concentrations, both insertion of D 3 molecules into NaTC and LDP domains, and D 3 self-assembling could occur, depending on the kinetics of insertion of D 3 into the NaTC-DLP mixed micelles.
Conclusion
The solubilization of a hydrophobic compound in bile salt-lipid micelles is dependent upon its chemical structure and its ability to interact with the mixed micelles components. Most hydrophobic compounds are expected to insert into the bile salt-lipid micelles. The extent of the solubilizing effect is, however, much more difficult to predict. As shown by others before us, mixed micelles components form a heterogeneous system with various molecular assemblies differing in shape and composition. The conditions of the medium (pH, ionic strength and temperature) affect the formation of these molecular assemblies, although we did not study this effect on our system. Our results showed that D 3 displayed a higher solubility in mixed micelle solutions than RP. This difference was attributed to the different abilities of the two vitamins to insert in between micelle components, but it was also explained by the propensity of D 3 , contrarily to RP, to self-associate into structures that are readily soluble in the aqueous phase. It is difficult to predict the propensity of a compound to self-association. We propose here a methodology that was efficient to distinguish between two solubilizing behaviors, and could be easily used to predict the solubilization efficiency of other hydrophobic compounds. Whether the D 3 self-assemblies are available for absorption by the intestinal cells needs further studies.
Figure 1: Chemical structures for D3 and RP.

Figure 2: Solubilization of D3 and RP in aqueous solutions rich in mixed micelles: (l),

Figure 3: Cryo-TEM morphology of (A) 15 mM mixed LDP-NaTC micelles, (B) and (C) 5

Figure 4: Mean compression isotherms for (A) the pure micelle components and the LDP

Figure 5: (A) Adsorption isotherms for LDP hydrated in NaTC-free buffer (○), LDP hydrated

Figure 6: π-A isotherms (A, B), compressibility moduli (C) and excess free energies (D) for
Acknowledgements:
The authors are grateful to Dr Sylvain Trépout (Institut Curie, Orsay, France) for his contribution to cryoTEM experiments and the fruitful discussions.
Funding: This study was funded by Adisseo France SAS.
Conflicts of interest: DP, ED and VLD are employed by Adisseo. Adisseo markets formulated vitamins for animal nutrition. |
01757936 | en | [
"spi.meca.geme",
"spi.auto"
] | 2024/03/05 22:32:10 | 2017 | https://hal.science/hal-01757936/file/CK2017_Nayak_Caro_Wenger_HAL.pdf | Abhilash Nayak
email: abhilash.nayak@irccyn.ec-nantes.fr
Stéphane Caro
email: stephane.caro@ls2n.fr
Philippe Wenger
email: philippe.wenger@ls2n.fr
Local and Full-cycle Mobility Analysis of a 3-RPS-3-SPR Series-Parallel Manipulator
Keywords: series-parallel manipulator, mobility analysis, Jacobian matrix, screw theory, Hilbert dimension
without any proof, and shown to be five in [4] and [3] with an erroneous proof. Screw theory is used to derive the kinematic Jacobian matrix and the twist system of the mechanism, leading to the determination of its local mobility. It turns out that this local mobility is found to be six in several arbitrary configurations, which indicates a full-cycle mobility equal to six. This full-cycle mobility is confirmed by calculating the Hilbert dimension of the ideal made up of the set of constraint equations. It is also shown that the mobility drops to five in some particular configurations, referred to as impossible output singularities.
Introduction
A series-parallel manipulator (S-PM) is composed of parallel manipulators mounted in series and has the merits of both serial and parallel manipulators. The 3-RPS-3-SPR S-PM is such a mechanism, with the proximal module being the 3-RPS parallel mechanism and the distal module being the 3-SPR PM. Hu et al. [START_REF] Hu | Analyses of inverse kinematics, statics and workspace of a novel 3-RPS-3-SPR serial-parallel manipulator[END_REF] analyzed the workspace of this manipulator. Hu formulated the Jacobian matrix for S-PMs as a function of the Jacobians of the individual parallel modules [START_REF] Hu | Formulation of unified Jacobian for serial-parallel manipulators[END_REF]. In the former paper, it was assumed that the number of local dof of the 3-RPS-3-SPR mechanism is equal to six, whereas Gallardo et al. found it to be equal to five [START_REF] Gallardo-Alvarado | Mobility and velocity analysis of a limited-dof series-parallel manipulator[END_REF][START_REF] Gallardo-Alvarado | Kinematics of a series-parallel manipulator with constrained rotations by means of the theory of screws[END_REF]. As a matter of fact, it is not straightforward to find the local mobility of this S-PM due to the third-order twist systems of each individual module. It is established that the 3-RPS PM performs a translation and two non-pure rotations about non-fixed axes, which induce two translational parasitic motions [START_REF] Hunt | Structural kinematics of in-parallel-actuated robot-arms[END_REF]. The 3-SPR PM also has the same type of dof [START_REF] Nayak | Comparison of 3-RPS and 3-SPR parallel manipulators based on their maximum inscribed singularity-free circle[END_REF]. In addition, these mechanisms are known as zero-torsion mechanisms. When they are mounted in series, the axis about which the torsional motion is constrained is, in a general configuration of the S-PM, different for the two modules. Gallardo et al. did not consider this fact and analyzed only those special configurations in which the axes coincide, resulting in a mobility equal to five. This paper aims at clarifying that the full-cycle mobility of the 3-RPS-3-SPR S-PM is equal to six with the help of screw theory and some algebraic geometry concepts. Although the considered S-PM has double spherical joints and two sets of three coplanar revolute joint axes, the proposed methodology to calculate the mobility of the manipulator at hand is general and can be applied to any series-parallel manipulator.
The paper is organized as follows : The manipulator under study is described in Section 2. The kinematic Jacobian matrix of a general S-PM with multiple modules is expressed in vector form in Section 3. Section 4 presents some configurations of the 3-RPS-3-SPR S-PM with the corresponding local mobility. Section 5 deals with the full-cycle mobility of the 3-RPS-3-SPR S-PM.
Manipulator under study
The architecture of the 3-RPS-3-SPR S-PM under study is shown in Fig. 1. It consists of a proximal 3-RPS PM module and a distal 3-SPR PM module. The 3-RPS PM is composed of three legs each containing a revolute, a prismatic and a spherical joint mounted in series, while the legs of the 3-SPR PM have these lower pairs in reverse order. Thus, the three equilateral triangular shaped platforms are the fixed base, the coupler and the end effector, coloured brown, green and blue, respectively. The vertices of these platforms are named A i , B i and C i , i = 0, 1, 2. Here after, the subscript 0 corresponds to the fixed base, 1 to the coupler platform and 2 to the end-effector. A coordinate frame F i is attached to each platform such that its origin O i lies at its circumcenter. The coordinate axes, x i points towards the vertex A i , y i is parallel to the opposite side B i C i and by the right hand rule, z i is normal to platform plane. Besides, the circum-radius of the i-th platform is denoted as h i . p i and q i , i = 1, ..., 6 are unit vectors along the prismatic joints while u i and v i , i = 1, ..., 6 are unit vectors along the revolute joint axes.
Kinematic modeling of series-parallel manipulators
Keeping in mind that the two parallel mechanisms are mounted in series, the end effector twist (angular velocity vector of a body and linear velocity vector of a point on the body) for the 3-RPS-3-SPR S-PM with respect to base can be represented as follows:
$$
{}^{0}\mathbf{t}_{2/0} = {}^{0}\mathbf{t}^{PROX}_{2/0} + {}^{0}\mathbf{t}^{DIST}_{2/1}
\;\Longrightarrow\;
\begin{bmatrix} {}^{0}\boldsymbol{\omega}_{2/0} \\ {}^{0}\mathbf{v}_{O_2/0} \end{bmatrix}
=
\begin{bmatrix} {}^{0}\boldsymbol{\omega}^{PROX}_{2/0} \\ {}^{0}\mathbf{v}^{PROX}_{O_2/0} \end{bmatrix}
+
\begin{bmatrix} {}^{0}\boldsymbol{\omega}^{DIST}_{2/1} \\ {}^{0}\mathbf{v}^{DIST}_{O_2/1} \end{bmatrix}
\quad (1)
$$
where ${}^{0}\mathbf{t}^{PROX}_{2/0}$ is the end effector twist with respect to the base (2/0) due to the proximal module motion and ${}^{0}\mathbf{t}^{DIST}_{2/1}$ is the end effector twist with respect to the coupler (2/1) due to the distal module motion. These twists are expressed in the base frame $F_0$, hence the left superscript.

Fig. 1: A 3-RPS-3-SPR series-parallel manipulator

Fig. 2: n parallel mechanisms (named modules) arranged in series

The terms on the right-hand side of Eq. (1) are not known, but can be expressed in terms of the known twists using screw transformations. To do so, the known twists are first noted down. If the proximal and distal modules are considered individually, the twist of their respective moving platforms with respect to their fixed base will be expressed as a function of the actuated joint velocities:
$$
\mathbf{A}_{PROX}\,{}^{0}\mathbf{t}^{PROX}_{1/0} = \mathbf{B}_{PROX}\,\dot{\boldsymbol{\rho}}_{13}
\;\Longrightarrow\;
\begin{bmatrix}
({}^{0}\mathbf{r}_{O_1A_1}\times{}^{0}\mathbf{p}_1)^T & {}^{0}\mathbf{p}_1^T\\
({}^{0}\mathbf{r}_{O_1B_1}\times{}^{0}\mathbf{p}_2)^T & {}^{0}\mathbf{p}_2^T\\
({}^{0}\mathbf{r}_{O_1C_1}\times{}^{0}\mathbf{p}_3)^T & {}^{0}\mathbf{p}_3^T\\
({}^{0}\mathbf{r}_{O_1A_1}\times{}^{0}\mathbf{u}_1)^T & {}^{0}\mathbf{u}_1^T\\
({}^{0}\mathbf{r}_{O_1B_1}\times{}^{0}\mathbf{u}_2)^T & {}^{0}\mathbf{u}_2^T\\
({}^{0}\mathbf{r}_{O_1C_1}\times{}^{0}\mathbf{u}_3)^T & {}^{0}\mathbf{u}_3^T
\end{bmatrix}
\begin{bmatrix} {}^{0}\boldsymbol{\omega}^{PROX}_{1/0} \\ {}^{0}\mathbf{v}^{PROX}_{O_1/0} \end{bmatrix}
=
\begin{bmatrix} \mathbf{I}_{3\times3} \\ \mathbf{0}_{3\times3} \end{bmatrix}
\begin{bmatrix} \dot{\rho}_1 \\ \dot{\rho}_2 \\ \dot{\rho}_3 \end{bmatrix}
\quad (2)
$$

$$
\mathbf{A}_{DIST}\,{}^{1}\mathbf{t}^{DIST}_{2/1} = \mathbf{B}_{DIST}\,\dot{\boldsymbol{\rho}}_{46}
\;\Longrightarrow\;
\begin{bmatrix}
({}^{1}\mathbf{r}_{O_2A_1}\times{}^{1}\mathbf{q}_1)^T & {}^{1}\mathbf{q}_1^T\\
({}^{1}\mathbf{r}_{O_2B_1}\times{}^{1}\mathbf{q}_2)^T & {}^{1}\mathbf{q}_2^T\\
({}^{1}\mathbf{r}_{O_2C_1}\times{}^{1}\mathbf{q}_3)^T & {}^{1}\mathbf{q}_3^T\\
({}^{1}\mathbf{r}_{O_2A_1}\times{}^{1}\mathbf{v}_1)^T & {}^{1}\mathbf{v}_1^T\\
({}^{1}\mathbf{r}_{O_2B_1}\times{}^{1}\mathbf{v}_2)^T & {}^{1}\mathbf{v}_2^T\\
({}^{1}\mathbf{r}_{O_2C_1}\times{}^{1}\mathbf{v}_3)^T & {}^{1}\mathbf{v}_3^T
\end{bmatrix}
\begin{bmatrix} {}^{1}\boldsymbol{\omega}^{DIST}_{2/1} \\ {}^{1}\mathbf{v}^{DIST}_{O_2/1} \end{bmatrix}
=
\begin{bmatrix} \mathbf{I}_{3\times3} \\ \mathbf{0}_{3\times3} \end{bmatrix}
\begin{bmatrix} \dot{\rho}_4 \\ \dot{\rho}_5 \\ \dot{\rho}_6 \end{bmatrix}
\quad (3)
$$
where ${}^{0}\mathbf{t}^{PROX}_{1/0}$ is the twist of the coupler with respect to the base expressed in $F_0$ and ${}^{1}\mathbf{t}^{DIST}_{2/1}$ is the twist of the end effector with respect to the coupler expressed in $F_1$. $\mathbf{A}_{PROX}$ and $\mathbf{A}_{DIST}$ are called forward Jacobian matrices and they incorporate the actuation and constraint wrenches of the 3-RPS and 3-SPR PMs, respectively [START_REF] Joshi | Jacobian analysis of limited-DOF parallel manipulators[END_REF]. $\mathbf{B}_{PROX}$ and $\mathbf{B}_{DIST}$ are called inverse Jacobian matrices and they are the result of the reciprocal product between wrenches of the mechanism and twists of the joints for the 3-RPS and 3-SPR PMs, respectively. $\dot{\boldsymbol{\rho}}_{13} = [\dot{\rho}_1, \dot{\rho}_2, \dot{\rho}_3]^T$ and $\dot{\boldsymbol{\rho}}_{46} = [\dot{\rho}_4, \dot{\rho}_5, \dot{\rho}_6]^T$ are the prismatic joint velocities for the proximal and distal modules, respectively. ${}^{k}\mathbf{r}_{PQ}$ denotes the vector pointing from a point P to a point Q expressed in frame $F_k$. Considering Eq. (1), the unknown twists ${}^{0}\mathbf{t}^{PROX}_{2/0}$ and ${}^{0}\mathbf{t}^{DIST}_{2/1}$ can be expressed in terms of the known twists ${}^{0}\mathbf{t}^{PROX}_{1/0}$ and ${}^{1}\mathbf{t}^{DIST}_{2/1}$ using the following screw transformation matrices [START_REF] Murray | A Mathematical Introduction to Robotic Manipulation[END_REF][START_REF] Binaud | The kinematic sensitivity of robotic manipulators to joint clearances[END_REF].
$$
\begin{bmatrix} {}^{0}\boldsymbol{\omega}^{PROX}_{2/0} \\ {}^{0}\mathbf{v}^{PROX}_{O_2/0} \end{bmatrix}
= {}^{2}\mathbf{Ad}_{1}
\begin{bmatrix} {}^{0}\boldsymbol{\omega}^{PROX}_{1/0} \\ {}^{0}\mathbf{v}^{PROX}_{O_1/0} \end{bmatrix}
\quad (4)
\qquad\text{with}\qquad
{}^{2}\mathbf{Ad}_{1} =
\begin{bmatrix} \mathbf{I}_{3\times3} & \mathbf{0}_{3\times3} \\ -{}^{0}\hat{\mathbf{r}}_{O_1O_2} & \mathbf{I}_{3\times3} \end{bmatrix},
\qquad
{}^{0}\hat{\mathbf{r}}_{O_1O_2} =
\begin{bmatrix}
0 & -{}^{0}z_{O_1O_2} & {}^{0}y_{O_1O_2}\\
{}^{0}z_{O_1O_2} & 0 & -{}^{0}x_{O_1O_2}\\
-{}^{0}y_{O_1O_2} & {}^{0}x_{O_1O_2} & 0
\end{bmatrix}
$$

${}^{2}\mathbf{Ad}_{1}$ is called the adjoint matrix. ${}^{0}\hat{\mathbf{r}}_{O_1O_2}$ is the cross-product matrix of the vector ${}^{0}\mathbf{r}_{O_1O_2} = [{}^{0}x_{O_1O_2}, {}^{0}y_{O_1O_2}, {}^{0}z_{O_1O_2}]^T$, pointing from point $O_1$ to point $O_2$ and expressed in frame $F_0$.
Similarly, for the distal module, the velocities 1 ω DIST 2/1 and 1 v DIST O 2 /1 can be transformed from frame F 1 to F 0 just by multiplying each of them by the rotation matrix 0 R 1 from frame F 0 to frame F 1 :
$$
\begin{bmatrix} {}^{0}\boldsymbol{\omega}^{DIST}_{2/1} \\ {}^{0}\mathbf{v}^{DIST}_{O_2/1} \end{bmatrix}
= {}^{0}\underline{\mathbf{R}}_{1}
\begin{bmatrix} {}^{1}\boldsymbol{\omega}^{DIST}_{2/1} \\ {}^{1}\mathbf{v}^{DIST}_{O_2/1} \end{bmatrix}
\qquad\text{with}\qquad
{}^{0}\underline{\mathbf{R}}_{1} =
\begin{bmatrix} {}^{0}\mathbf{R}_{1} & \mathbf{0}_{3\times3} \\ \mathbf{0}_{3\times3} & {}^{0}\mathbf{R}_{1} \end{bmatrix}
\quad (5)
$$
0 R 1 is called the augmented rotation matrix between frames F 0 and F 1 . Consequently from Eqs. ( 4) and (5),
$$
{}^{0}\mathbf{t}_{2/0} = {}^{2}\mathbf{Ad}_{1}\,{}^{0}\mathbf{t}^{PROX}_{1/0} + {}^{0}\underline{\mathbf{R}}_{1}\,{}^{1}\mathbf{t}^{DIST}_{2/1}
\quad (6)
$$
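A compact numerical sketch of the bookkeeping behind Eqs. (4)-(6) is given below: it builds the adjoint matrix from the vector pointing from O1 to O2, builds the augmented (block-diagonal) rotation matrix from the 3×3 rotation 0R1, and sums the two transformed module twists. All numerical inputs are assumed to be supplied by the user; this illustrates the matrix structure only, not the specific 3-RPS-3-SPR geometry.

```python
import numpy as np

def skew(r):
    """Cross-product (skew-symmetric) matrix of a 3-vector r."""
    x, y, z = r
    return np.array([[0.0, -z,   y],
                     [  z, 0.0, -x],
                     [ -y,   x, 0.0]])

def adjoint(r_O1O2):
    """6x6 adjoint matrix [[I, 0], [-hat(r_O1O2), I]] acting on twists [omega; v]."""
    ad = np.eye(6)
    ad[3:, :3] = -skew(r_O1O2)
    return ad

def augmented_rotation(R01):
    """6x6 block-diagonal matrix rotating a twist from frame F1 to frame F0."""
    aug = np.zeros((6, 6))
    aug[:3, :3] = R01
    aug[3:, 3:] = R01
    return aug

def end_effector_twist(t_prox_1_0, t_dist_2_1, r_O1O2, R01):
    """Twist composition of Eq. (6): transformed proximal twist plus rotated distal twist."""
    return adjoint(r_O1O2) @ t_prox_1_0 + augmented_rotation(R01) @ t_dist_2_1
```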
Note that Eq. ( 6) amounts to the twist equation derived in [START_REF] Hu | Formulation of unified Jacobian for serial-parallel manipulators[END_REF] whereas Gallardo et al. add the twists of individual modules directly without considering the screw transformations. It is noteworthy that Equation [START_REF] Murray | A Mathematical Introduction to Robotic Manipulation[END_REF] in [START_REF] Gallardo-Alvarado | Mobility and velocity analysis of a limited-dof series-parallel manipulator[END_REF] is incorrect, so are any further conclusions based on this equation. Following Eqs. ( 2) and ( 3), with the assumption that the proximal and distal modules are not in a parallel singularity 1 or in other words, matrices A PROX and A DIST are invertible,
$$
{}^{0}\mathbf{t}_{2/0}
= {}^{2}\mathbf{Ad}_{1}\,\mathbf{A}_{PROX}^{-1}\mathbf{B}_{PROX}\,\dot{\boldsymbol{\rho}}_{13}
+ {}^{0}\underline{\mathbf{R}}_{1}\,\mathbf{A}_{DIST}^{-1}\mathbf{B}_{DIST}\,\dot{\boldsymbol{\rho}}_{46}
=
\begin{bmatrix}
{}^{2}\mathbf{Ad}_{1}\,\mathbf{A}_{PROX}^{-1}\mathbf{B}_{PROX} &
{}^{0}\underline{\mathbf{R}}_{1}\,\mathbf{A}_{DIST}^{-1}\mathbf{B}_{DIST}
\end{bmatrix}
\begin{bmatrix} \dot{\boldsymbol{\rho}}_{13} \\ \dot{\boldsymbol{\rho}}_{46} \end{bmatrix}
= \mathbf{J}_{S\text{-}PM}
\begin{bmatrix} \dot{\boldsymbol{\rho}}_{13} \\ \dot{\boldsymbol{\rho}}_{46} \end{bmatrix}
\quad (7)
$$
J S-PM is the kinematic Jacobian matrix of the 3-RPS-3-SPR S-PM under study. The rank of this matrix provides the local mobility of the S-PM. Equations ( 6), ( 7) and ( 8) can be extended to a series-parallel manipulator with n number of parallel mechanisms, named modules in this paper, in series as shown in Fig. 2. Thus, the twist of the end effector with respect to the fixed base expressed in frame F 0 can be expressed as follows :
$$
{}^{0}\mathbf{t}_{n/0} = \sum_{i=1}^{n} {}^{0}\underline{\mathbf{R}}_{(i-1)}\,{}^{n}\mathbf{Ad}_{i}\,{}^{(i-1)}\mathbf{t}^{M_i}_{i/(i-1)}
= \mathbf{J}_{6\times3n}
\begin{bmatrix} \dot{\boldsymbol{\rho}}_{M_1} \\ \dot{\boldsymbol{\rho}}_{M_2} \\ \vdots \\ \dot{\boldsymbol{\rho}}_{M_n} \end{bmatrix}
\qquad\text{with}\qquad
{}^{0}\underline{\mathbf{R}}_{i} =
\begin{bmatrix} {}^{0}\mathbf{R}_{i} & \mathbf{0}_{3\times3} \\ \mathbf{0}_{3\times3} & {}^{0}\mathbf{R}_{i} \end{bmatrix},
\qquad
{}^{n}\mathbf{Ad}_{i} =
\begin{bmatrix} \mathbf{I}_{3\times3} & \mathbf{0}_{3\times3} \\ -{}^{(i-1)}\hat{\mathbf{r}}_{O_iO_n} & \mathbf{I}_{3\times3} \end{bmatrix}
$$

and

$$
\mathbf{J}_{6\times3n} =
\begin{bmatrix}
{}^{n}\mathbf{Ad}_{1}\,\mathbf{A}_{M_1}^{-1}\mathbf{B}_{M_1} &
{}^{0}\underline{\mathbf{R}}_{1}\,{}^{n}\mathbf{Ad}_{2}\,\mathbf{A}_{M_2}^{-1}\mathbf{B}_{M_2} &
\cdots &
{}^{0}\underline{\mathbf{R}}_{n-1}\,\mathbf{A}_{M_n}^{-1}\mathbf{B}_{M_n}
\end{bmatrix}
\quad (8)
$$
where, J 6×3n is the 6 × 3n kinematic Jacobian matrix of the n-module hybrid manipulator. M i stands for the i-th module, A M i and B M i are the forward and inverse Jacobian matrices of M i of the series-parallel manipulator, respectively. ρM i is the vector of the actuated prismatic joint rates for the i-th module.
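To make the structure of Eq. (8) concrete, the following sketch assembles the 6 × 3n Jacobian of an n-module stack from per-module blocks and returns its rank, which is read as the local mobility. The per-module forward and inverse Jacobians, augmented rotations and adjoint matrices are assumed to be computed beforehand (for the first module the rotation is the identity, for the last one the adjoint is the identity); the dictionary keys are illustrative.

```python
import numpy as np

def serial_parallel_jacobian(modules):
    """Assemble the 6 x 3n kinematic Jacobian of n parallel modules mounted in series.

    Each element of `modules` is a dict with (illustrative keys):
      'A'  : 6x6 forward Jacobian of the module,
      'B'  : 6x3 inverse Jacobian of the module,
      'R'  : 6x6 augmented rotation expressing the module twist in F0,
      'Ad' : 6x6 adjoint transferring the module platform twist to the end effector.
    """
    blocks = [m['R'] @ m['Ad'] @ np.linalg.solve(m['A'], m['B']) for m in modules]
    return np.hstack(blocks)

def local_mobility(jacobian, tol=1e-9):
    """Local mobility of the series-parallel manipulator = rank of its Jacobian."""
    return np.linalg.matrix_rank(jacobian, tol=tol)
```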
Twist system of the 3-RPS-3-SPR S-PM
Each leg of the 3-RPS and 3-SPR parallel manipulators are composed of three joints, but the order of the limb twist system is equal to five and hence there exist five twists associated to each leg. Thus, the constraint wrench system of the i-th leg reciprocal to the foregoing twists is spanned by a pure force W i passing through the spherical joint center and parallel to the revolute joint axis. Therefore, the constraint wrench systems of the proximal and distal modules are spanned by three zero-pitch wrenches, namely,
$$
{}^{0}\mathcal{W}^{PROX} = \bigcup_{i=1}^{3} {}^{0}\mathcal{W}^{PROX}_{i}
= \mathrm{span}\left\{
\begin{bmatrix} {}^{0}\mathbf{u}_1 \\ {}^{0}\mathbf{r}_{O_2A_1}\times{}^{0}\mathbf{u}_1 \end{bmatrix},
\begin{bmatrix} {}^{0}\mathbf{u}_2 \\ {}^{0}\mathbf{r}_{O_2B_1}\times{}^{0}\mathbf{u}_2 \end{bmatrix},
\begin{bmatrix} {}^{0}\mathbf{u}_3 \\ {}^{0}\mathbf{r}_{O_2C_1}\times{}^{0}\mathbf{u}_3 \end{bmatrix}
\right\}
$$
$$
{}^{1}\mathcal{W}^{DIST} = \bigcup_{i=1}^{3} {}^{1}\mathcal{W}^{DIST}_{i}
= \mathrm{span}\left\{
\begin{bmatrix} {}^{1}\mathbf{v}_1 \\ {}^{1}\mathbf{r}_{O_2A_1}\times{}^{1}\mathbf{v}_1 \end{bmatrix},
\begin{bmatrix} {}^{1}\mathbf{v}_2 \\ {}^{1}\mathbf{r}_{O_2B_1}\times{}^{1}\mathbf{v}_2 \end{bmatrix},
\begin{bmatrix} {}^{1}\mathbf{v}_3 \\ {}^{1}\mathbf{r}_{O_2C_1}\times{}^{1}\mathbf{v}_3 \end{bmatrix}
\right\}
\quad (9)
$$
Due to the serial arrangement of the parallel mechanisms, the constraint wrench system of the S-PM is the intersection of the constraint wrench systems of each module. Alternatively, the twist system of the S-PM is the direct sum (disjoint union) of the twist systems of each module. Therefore, the nullspace of the 3 × 6 matrix containing the basis screws of 0 W PROX and 1 W DIST leads to the screws that form the basis of the twist system of each module, 0 T PROX = span{ 0 ξ 1 , 0 ξ 2 , 0 ξ 3 } and 1 T DIST = span{ 1 ξ 4 , 1 ξ 5 , 1 ξ 6 }, respectively. The augmented rotation matrix derived in Eq. ( 5) is exploited to ensure that all the screws are expressed in one frame (F 0 in this case). Therefore, the total twist system of the S-PM can be obtained as follows :
$$
{}^{0}\mathcal{T}^{S\text{-}PM} = {}^{0}\mathcal{T}^{PROX} \oplus {}^{0}\mathcal{T}^{DIST}
= \mathrm{span}\{\,{}^{0}\boldsymbol{\xi}_1,\; {}^{0}\boldsymbol{\xi}_2,\; {}^{0}\boldsymbol{\xi}_3,\;
{}^{0}\underline{\mathbf{R}}_{1}\,{}^{1}\boldsymbol{\xi}_4,\;
{}^{0}\underline{\mathbf{R}}_{1}\,{}^{1}\boldsymbol{\xi}_5,\;
{}^{0}\underline{\mathbf{R}}_{1}\,{}^{1}\boldsymbol{\xi}_6\,\}
\quad (10)
$$
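As a numerical counterpart of Eqs. (9)-(10), the sketch below stacks the constraint wrenches of a module, extracts a basis of reciprocal twists through a nullspace computation (using the reciprocal-product pairing of screw coordinates), and measures the order of the union of two twist systems by the rank of the concatenated bases. Wrenches and twists are assumed to be given as 6-vectors expressed in a common frame; the code is schematic.

```python
import numpy as np
from scipy.linalg import null_space

def swap_blocks(screws):
    """Swap the first and last three components of each 6-vector (one screw per row)."""
    return np.hstack([screws[:, 3:], screws[:, :3]])

def twist_basis_from_wrenches(wrenches):
    """Twists reciprocal to a set of wrenches.

    wrenches: k x 6 array, rows [f; m] (force direction, moment about the origin).
    Returns a 6 x (6-k) basis of twists [omega; v] satisfying f.v + m.omega = 0.
    """
    return null_space(swap_blocks(wrenches))

def twist_system_order(*twist_bases, tol=1e-9):
    """Order of the union of several twist systems expressed in the same frame."""
    return np.linalg.matrix_rank(np.hstack(twist_bases), tol=tol)
```

In this setting, the local mobility of the S-PM in a given configuration would be obtained by calling `twist_system_order` on the proximal twist basis and the rotated distal twist basis.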
The order of the twist system 0 T S-PM yields the local mobility of the whole manipulator. Some general and singular configurations of the 3-RPS-3-SPR S-PM with h 0 = 2, h 1 = 1 and h 2 = 2 are considered and its mobility is listed based on the rank of the Jacobian and the order of the twist system in Table 1. For general configurations like 2 and 3, the mobility is found to be six. The mobility reduces only when some singularities are encountered. For a special configuration when the three platform planes are parallel to each other as shown in the first row of this table, the rotations of the coupler generate translational motions of the end effector. Yet, the torsional axes of both mechanisms coincide and hence, the mechanism cannot perform any rotation about an axis of vertical direction leading to a mobility equal to five. Moreover, a configuration in which any revolute joint axis in the end effector is parallel to its corresponding axis in the fixed base results in a mobility lower than six for the S-PM. For instance, for the 4th configuration in the table, there exists a constraint force f , parallel to the two parallel revolute joint axes resulting in a five dof manipulator locally. Configurations 1 and 4 are the impossible output singularities as identified by Zlatanov et al. [START_REF] Zlatanov | A unifying framework for classification and interpretation of mechanism singularities[END_REF]. It should be noted that if one of the modules is in a parallel singularity, the motion of the moving-platform of the manipulator becomes uncontrollable. A detailed singularity analysis of series-parallel manipulators will be performed in a future work for a better understanding of their behaviour in singular configurations.
Full-cycle mobility of the 3-RPS-3-SPR S-PM
Table 1: Mobility of the 3-RPS-3-SPR S-PM in different configurations

No. | Study parameters | Rank of J_S-PM | Order of 0T_S-PM
1 | x_i = (1 : 0 : 0 : 0 : 0 : 0 : 0 : 0.75), y_i = (1 : 0 : 0 : 0 : 0 : 0 : 0 : 0.8) | 5 | 5
2 | x_i = (0.35 : -0.9 : 0.25 : 0 : 0.57 : 0.27 : -1.76 : -1.33), y_i = (1 : 0 : 0 : 0 : 0 : 0 : 0 : -0.8) | 6 | 6
3 | x_i = (0.99 : 0 : -0.10 : 0 : 0 : 0.21 : 0 : 1.92), y_i = (-0.79 : -0.59 : 0.16 : 0 : -0.16 : -0.13 : -1.25 : -2.04) | 6 | 6
4 | x_i = (0.99 : 0 : -0.10 : 0 : 0 : 0.21 : 0 : 1.92), y_i = (-0.39 : 0 : 0.92 : 0 : 0 : -1.88 : 0 : 0.12) | 5 | 5

The full-cycle mobility can be obtained by calculating the Hilbert dimension of the set of constraint equations of the mechanism [START_REF] Husty | A Proposal for a New Definition of the Degree of Freedom of a Mechanism[END_REF]. Two Study transformation matrices are considered: ${}^{0}\mathbf{X}_{1}$ from $F_0$ to $F_1$ and ${}^{1}\mathbf{Y}_{2}$ from $F_1$ to $F_2$, composed of the Study parameters $x_i$ and $y_i$, $i = 0, 1, \ldots, 7$, respectively. Thus, the coordinates of points $A_j$, $B_j$ and $C_j$, $j = 0, 1, 2$, and vectors $\mathbf{u}_k$ and $\mathbf{v}_k$, $k = 1, 2$, can be represented in $F_0$ to yield sixteen constraint equations (six for the 3-RPS PM, six for the 3-SPR PM, and the Study quadric and normalization equations for each transformation). It was established that the 3-RPS and the 3-SPR parallel mechanisms have two operation modes each, characterized by $x_0 = 0$, $x_3 = 0$ and $y_0 = 0$, $y_3 = 0$, respectively [START_REF] Schadlbauer | The 3-RPS parallel manipulator from an algebraic viewpoint[END_REF][START_REF] Nayak | Comparison of 3-RPS and 3-SPR parallel manipulators based on their maximum inscribed singularity-free circle[END_REF]. For the S-PM, four ideals of the constraint equations are considered: $K_1$ when $x_0 = y_0 = 0$, $K_2$ when $x_3 = y_0 = 0$, $K_3$ when $x_0 = y_3 = 0$, and $K_4$ when $x_3 = y_3 = 0$. The Hilbert dimension of these ideals over the ring $\mathbb{C}[h_0, h_1, h_2]$ is found to be six, and hence the global mobility of the 3-RPS-3-SPR S-PM:
$$
\dim K_i = 6, \quad i = 1, 2, 3, 4. \quad (11)
$$
Conclusions and future work
In this paper, the full-cycle mobility of a 3-RPS-3-SPR PM was elucidated to be six. The kinematic Jacobian matrix of the series-parallel manipulator was calculated with the help of screw theory and the result was extended to n-number of modules. Moreover, the methodology for the determination of the twist system of series-parallel manipulators was explained. The rank of the Jacobian matrix or the order of the twist system gives the local mobility of the S-PM. Global mobility was calculated as the Hilbert dimension of the ideal of the set of constraint equations.
In the future, we intend to solve the inverse and direct kinematics using algebraic geometry concepts and to enlist all possible singularities of series-parallel mechanisms. Additionally, it is challenging to consider n-modules (n > 2) and to work on the trajectory planning of such manipulators the number of output parameters is equal to six and lower than the number of actuated joints, which is equal to 3n.
Parallel singularity can be an actuation singularity, constraint singularity or a compound singularity[START_REF] Nurahmi | Dimensionally homogeneous extended jacobian and condition number[END_REF][START_REF] Maraje | Operation modes comparison of a reconfigurable 3-PRS parallel manipulator based on kinematic performance[END_REF][START_REF] Amine | Classification of 3T1R parallel manipulators based on their wrench graph[END_REF]
The pdf file of the Maple sheet with calculation of Hilbert dimension can be found here : https://www.dropbox.com/s/3bqsn45rszvgdax/Mobility3RPS3SPR.pdf?dl=0
Acknowledgements This work was conducted with the support of both the École Centrale de Nantes and the French National Research Agency (ANR project number: ANR-14-CE34-0008-01). |
01757941 | en | [
"scco.ling",
"scco.psyc",
"info.info-cl"
] | 2024/03/05 22:32:10 | 2016 | https://amu.hal.science/hal-01757941/file/GalaZiegler_CL4LC-2016.pdf | N Úria Gala
email: nuria.gala@univ-amu.fr
Johannes Ziegler
email: johannes.ziegler@univ-amu.fr
Reducing lexical complexity as a tool to increase text accessibility for children with dyslexia
Lexical complexity plays a central role in readability, particularly for dyslexic children and poor readers because of their slow and laborious decoding and word recognition skills. Although some features to aid readability may be common to many languages (e.g., the majority of 'easy' words are of low frequency), we believe that lexical complexity is mainly language-specific. In this paper, we define lexical complexity for French and we present a pilot study on the effects of text simplification in dyslexic children. The participants were asked to read out loud original and manually simplified versions of a standardized French text corpus and to answer comprehension questions after reading each text. The analysis of the results shows that the simplifications performed were beneficial in terms of reading speed and they reduced the number of reading errors (mainly lexical ones) without a loss in comprehension. Although the number of participants in this study was rather small (N=10), the results are promising and contribute to the development of applications in computational linguistics.
Introduction
It is a fact that lexical complexity must have an effect on the readability and understandability of text for people with dyslexia [START_REF] Hyönä | Eye fixation patterns among dyslexic and normal readers : effects of word length and word frequency[END_REF]. Yet, many of the existing tools have only focused on the visual presentation of text, such as the use of specific dyslexia fonts or increased letter spacing [START_REF] Zorzi | Extra-large letter spacing improves reading in dyslexia[END_REF]. Here, we investigate the use of text simplification as a tool for improving text readability and comprehension.
It should be noted that comprehension problems in dyslexic children are typically a consequence of their problems in basic decoding and word recognition skills. In other words, children with dyslexia have typically no comprehension problems in spoken language. However, when it comes to reading a text, their decoding is so slow and strenuous that it takes up all their cognitive resources. They rarely get to the end of a text in a given time, and therefore fail to understand what they read. Long, complex and irregular words are particularly difficult for them. For example, it has been shown that reading times of children with dyslexia grow linearily with each additional letter [START_REF] Spinelli | Length effect in word naming in reading : role of reading experience and reading deficit in italian readers[END_REF] [START_REF] Ziegler | Developmental dyslexia in different languages : Language-specific or universal[END_REF]. Because children with dyslexia fail to establish the automatic procedures necessary for fluent reading, they tend to read less and less. Indeed, a dyslexic child reads in one year what a normal reader reads in two days [START_REF] Cunningham | What reading does for the mind[END_REF]) -a vicious circle for a dyslexic child because becoming a fluent reader requires extensive training and exposure to written text [START_REF] Ziegler | Modeling reading development through phonological decoding and self-teaching : Implications for dyslexia[END_REF] In this paper, we report an experiment comparing the reading performance of dyslexic children and poor readers on original and simplified corpora. To the best of our knowledge, this is the first time that such an experiment is undertaken for French readers. Our aim was to reduce the linguistic complexity of ten standardized texts that had been developped to measure reading speed. The idea was to identify the words and the structures that were likely to hamper readability in children with reading deficits. Our hypothesis was that simplified texts would not only improve reading speed but also text comprehension.
A lexical analysis of the reading errors enabled us to identify what kind of lexical complexity was particularly harmful for dyslexic readers and define what kind of features should be taken into account in order to facilitate readability.
Experimental Study
Procedure and participants
We tested the effects of text simplification by contrasting the reading performance of dyslexic children on original and manually simplified texts and their comprehension by using multiple choice questions at the end of each text. The children were recorded while reading aloud. They read ten texts, five original and five simplified in a counter-balanced order. Each text was read in a session with their speech therapists. The texts were presented on a A4 sheet printed in 14 pt Arial font. The experiment took place between december 2014 and march 2015.
After each text, each child had to answer the three multiple-choice comprehension questions without looking at the texts (the questions were the same for the original and the simplified versions of the text). Three possible answers were provided in a randomized order : the correct one, a plausible one taking into account the context, and a senseless one. Two trained speech therapists collected the reading times and comprehension scores, annotated the reading errors, and proposed a global analysis of the different errors (cf. 3.1) [START_REF] Brunel | Simplification de textes pour faciliter leur lisibilité et leur compréhension[END_REF].
Ten children aged between 8 and 12 attending regular school took part in the present study (7 male, 3 female). The average age of the participants was 10 years and 4 months. The children had been formally diagnosed with dyslexia through a national reference center for the diagnosis of learning disabilities. Their reading age1 corresponds to 7 years and 6 months, which meant that they had an average reading delay of 2 years and 8 months.
Data set
The corpus used to test text simplification is a collection of ten equivalent standardized texts (IReST, International Reading Speed Texts 2 ). The samples were designed for different languages, keeping the same difficulty and linguistic characteristics, to assess reading performance in different situations (low-vision patients, normal subjects under different conditions, developmental dyslexia, etc.). The French collection consists of nine descriptive texts and a short story (more narrative in style).
The texts were analyzed using TreeTagger [START_REF] Schmid | Probabilistic part-of-speech tagging using decision trees[END_REF], a morphological analyzer which performs lemmatization and part-of-speech tagging. The distribution in terms of part-of-speech categories is roughly the same in original and simplified texts, although simplified ones have more nouns and less verbs and adjectives. Table 1 shows the average number of tokens per text and per sentence, the average number of sentences per text, the distribution of main content words and the total number of lemmas :
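As an illustration of how the statistics reported in Table 1 can be derived from the tagger output, the sketch below counts tokens, sentences, content-word categories and distinct lemmas from a list of (token, POS, lemma) triples. The tagging step itself (here TreeTagger) is assumed to have been run beforehand, and the tag names are whatever the tagger provides.

```python
from collections import Counter

def corpus_stats(tagged_sentences):
    """Compute simple corpus statistics from tagged text.

    tagged_sentences: list of sentences, each a list of (token, pos, lemma)
    triples, e.g. produced by TreeTagger or any POS tagger for French.
    """
    tokens = [t for sent in tagged_sentences for t in sent]
    pos_counts = Counter(pos for _, pos, _ in tokens)
    lemmas = {lemma for _, _, lemma in tokens}
    return {
        "n_sentences": len(tagged_sentences),
        "n_tokens": len(tokens),
        "tokens_per_sentence": len(tokens) / max(len(tagged_sentences), 1),
        "n_lemmas": len(lemmas),
        "pos_distribution": pos_counts,
    }
```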
Simplifications
Each corpus was manually simplified at three linguistic levels (lexical, syntactic, discursive). It is worth mentioning that, in previous work, text simplifications are commonly considered to be lexical and syntactic [START_REF] Carroll | Simplifying Text for Language Impaired readers[END_REF]; little attention is generally paid to discourse simplification, with a few exceptions. In this study, we decided to perform three kinds of linguistic transformations because we made the hypothesis that all of them would have an effect on reading performance. However, at the time being, only the lexical simplifications have been analyzed in detail (cf. section 3.2).
The manual simplifications were made according to a set of criteria. Because of the absence of previous research on this topic, the criteria were defined by three annotators following the recommendations for readers with dyslexia [START_REF] Ecalle | Des difficultés en lecture à la dyslexie : problèmes d'évaluation et de diagnostic[END_REF] for French and [START_REF] Rello | DysWebxia. A Text Accessibility Model for People with Dyslexia[END_REF] for Spanish.
Lexical simplifications. At the lexical level, priority was given to high-frequency words, short words and regular words (high grapheme-phoneme consistency). Content words were replaced by a synonym 3 . The lexical difficulty of a word was determined on the basis of two available resources : Manulex [START_REF] Lété | Manulex : A grade-level lexical database from French elementary-school readers[END_REF] 4 , a grade-level lexical database from French elementary school readers, and FLELex (Franc ¸ois et al., 2014) 5 , a graded lexicon for French as a foreign language reporting frequencies of words across different levels.
If the word in the original text had a simpler synonym (an equivalent in a lower level) the word was replaced. For instance, the word consommer ('to consume') has a frequency rate of 3.55 in Manulex, it was replaced by manger ('to eat') that has 30.13. In most of the cases, a word with a higher frequency is also a shorter word : elle l'enveloppe dans ses fils collants pour le garder et le consommer plus tard > ... pour le garder et le manger plus tard ('she wraps it in her sticky net to keep it and eat it later').
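A minimal sketch of the replacement rule just described is given below: a content word is replaced by a synonym only if that synonym is graded as easier, i.e. has a higher frequency in a children's lexicon such as Manulex. The synonym dictionary and frequency table are placeholders; in the study the candidates came from synonym resources and were validated manually by the annotators.

```python
def simplify_word(word, synonyms, frequency, margin=1.0):
    """Return the easiest synonym of `word`, or `word` itself if none is easier.

    synonyms:  dict mapping a word to a list of candidate synonyms
    frequency: dict mapping a word to its frequency in a graded lexicon
    margin:    minimal frequency ratio required to accept a replacement
    """
    best, best_freq = word, frequency.get(word, 0.0)
    for cand in synonyms.get(word, []):
        f = frequency.get(cand, 0.0)
        if f > best_freq * margin:
            best, best_freq = cand, f
    return best

# Illustrative values (the Manulex figures quoted in the text)
frequency = {"consommer": 3.55, "manger": 30.13}
synonyms = {"consommer": ["manger"]}
print(simplify_word("consommer", synonyms, frequency))  # -> "manger"
```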
Adjectives or adverbs were deleted if there was an agreement among the three annotators, i.e. if it was considered that the information provided by the word was not relevant to the comprehension of the sentence. To give an example, inoffensives ('harmless') was removed in Il y a des mouches inoffensives qui ne piquent pas ('there are harmless flies that do not sting').
In French, lexical replacements often entail morphological or syntactic modifications of the sentence, in these cases the words or the phrases were also modified to keep the grammaticality of the sentence (e.g. determiner and noun agreement) and the same content (meaning). Example, respectively with number and gender agreement : une partie des plantes meurt and quelques plantes meurent ('some plants die'), or la sécheresse ('drought') and au temps sec ('dry wheather').
Syntactic simplifications. Structural simplifications imply a modification on the order of the constituents or a modification of the sentence structure (grouping, deletion, splitting [START_REF] Brouwers | Syntactic French Simplification for French[END_REF]). In French, the canonical order of a sentence is SVO, we thus changed the sentences where this order was not respected (for stylistic reasons) : ensuite poussent des buissons was transformed into ensuite des buissons poussent ('then the bushes grow'). The other syntactic reformulations undertaken on the IReST corpora are the following : passive voice to active voice, and present participle to present tense (new sentence through ponctuation or coordinate conjunction).
Discursive simplifications. As for transformations dealing with the coherence and the cohesion of the text, given that the texts were short, we only took into account the phenomena of anaphora resolution, i.e. expliciting the antecedent of a pronoun (the entity which it refers to). Although a sentence where the pronouns have been replaced by the antecedents may be stylistically poorer, we made the hypothesis that it is easier to understand. For instance : leurs traces de passage ('their traces') was replaced by les traces des souris ('the mice traces').
The table 2 gives an idea of the transformations performed in terms of quantity. As clearly showed, the majority of simplifications were lexical :
3. The following reference resources were used: the database www.synonymes.com and the Trésor de la Langue Française informatisé (TLFi), http://atilf.atilf.fr/tlf.htm.
Results
Two different analyses were performed : one for quantitatively measuring the reading times, the number of errors and the comprehension scores. The second one took specifically into account the lexicon : the nature of the words incorrectly read.
Behavioral data analysis
Reading Times. The significance of the results was assessed with a pairwise (Student) t-test; the results are shown in Table 3. From this table it can be seen that the overall reading times of simplified texts were significantly shorter than the reading times of original texts. While this result can be attributed to the fact that simplified texts were slightly shorter than original texts, it should be emphasized that reading speed (words per minute), which is independent of the length of a text, was significantly greater in simplified texts than in original texts.
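The comparison reported here is a paired (within-subject) test, since each child contributes one value per condition. A minimal sketch with SciPy is shown below; the numbers are placeholders, not the data of the study.

```python
from scipy.stats import ttest_rel

# Reading speed (words per minute) per child: original vs simplified texts
speed_original  = [55, 62, 70, 58, 66, 71, 60, 68, 73, 65]   # placeholder values
speed_simplified = [60, 69, 75, 63, 72, 78, 66, 74, 80, 74]  # placeholder values

t_stat, p_value = ttest_rel(speed_original, speed_simplified)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
```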
Number of errors. The total number of errors included :
- (A) the total number of skipped words, repeated words (words read twice), interchanged words, line breaks, repeated lines (line read twice)
- (B) the total number of words incorrectly read for lexical reasons (the word read is a pseudo-word or a different word)
- (C) the total number of words incorrectly read for grammatical reasons (the word read has the same grammatical category (part-of-speech) but varies on number, gender, tense, mode, person)
First of all, it should be noted that participants made fewer errors in simplified texts than in original ones (5.5% vs. 7.7%). Table 4 shows the distribution of all the errors. It can be noted that lexical and grammatical errors occurred equally often.
6. ** Significant results with p < 0.01.
7. This difference was significant in a t-test (t = 2.3, p < 0.05).
8. A more detailed analysis of these errors is proposed in Section 3.2.

Comprehension scores
The results of the comprehension questionnaire are better for simplified than for original texts (marginal gain 9 ) as shown on table 5 : These results entail that dyslexic children read the simplified version of the corpus without a significant loss of comprehension. If anything, they showed a marginal increase in comprehension scores for simplified texts.
Lexical analysis
As we were interested in the lexicon of the corpus, an analysis of the content words (i.e. nouns, verbs, adjectives, adverbs) incorrectly read was undertaken in order to better target the reading pitfalls. From our study, we identified 404 occurrences that were incorrectly read, corresponding to 213 different lemmas (to be precise, 235 tokens, of which 22 were inflected variants, e.g. arbre and arbres, or restaient, restent, rester). These 404 incorrectly read words correspond to 26.81% of the content words of the corpora, which means that more than one content word out of four was incorrectly read.
It is worth mentioning that we did not count monosyllabic grammatical words such as determiners, pronouns or prepositions, although an important number of errors also occurred on those tokens, e.g. le read as la ('the'), ces read as des ('these'), pour read as par ('for'). We make the hypothesis that the readers concentrate their efforts on decoding content words, and not grammatical ones, because content words carry the semantic information and are thus important for text comprehension. Besides, as grammatical words are usually very short and frequent in French, they have a higher number of orthographic neighbours, and people with dyslexia tend to confuse short similar words.
We distinguished the words that were replaced by a pseudo-word (29.46%) and those replaced by other existing words on French vocabulary (70.37%). These figures can be compared with those obtained by Rello and collaborators [START_REF] Rello | A First Approach to the Creation of a Spanish Corpus of Dyslexic Texts[END_REF]. Non-word errors are pronunciations that do not result in an existing word, real-word errors are pronunciations that result in an incorrect but existing word. Non-word errors appear to be higher in English (83%) and in Spanish (79%), but not in French where real-word errors were clearly a majority 10 : Grammatical variants concern variations on gender and number for nouns, and for person, tense and mode for verbs. Lexical remplacements are words read as if they were other words with orthographical similarities (lieu > île, en fait > enfin, commun > connu, etc.). Morphological variants are words of 9. p < 0.1 10. This finding will deserve more attention in future work.
the same morphological family (baisse > basse, malchanceux > malchance). As for orthographical neighbours, we specifically distinguish word pairs where the difference is only of one letter (raisins > raisons, bon > don).
Concerning word length for all the mentionned features, 36.88% of the words read were replaced by words of strictly the same length (forment > formant, catégorie > *calégorie), 14.11% were replaced by longer ones (utile > utilisé, suffisant > suffisamment), 49.01% were replaced by shorter ones (nourriture > nature, finie > fine, empilées > empli). The average length of the 404 words incorrectly read is 7.65 characters (the shortest has three characters, bon, and the longest 16, particulièrement).
The average number of orthographical neighbours is 3.24, with eight tokens having more than ten neighbours : bon, bois, basse, foule, fine, fils, garde, sont ('good, wood, low, crowd, thin, thread, keeps, are').
As far as the grammatical categories are concerned, the majority of the errors were on verbs. They concerned grammatical variants of person, tense (past imparfait > present) and mode (present > present participle). The distribution on part-of-speech tags errors is shown on table 8 In French, it is stated that the more frequent (and easier) structure is CV and V. In our results, 58,69% of the words contain this common structure, while 41,31% present a more complex structure (CVC, CVCC, CYC 11 , etc.) We finally analyzed the consistency of grapheme-to-phoneme correspondences which is particularly irregular in French (silent letters, nasal vowels, etc.) 12 . As mentioned above, the average length of the words incorrectly read is 7.65 and their average in number of phonemes is 4.95. This means that the 11. C is a consonant, V is a vowel, Y is a semi-vowel, i.e. [j] in essayait [e-se-je], [w] in doivent [dwav] 12. This is not the case for other languages, e.g. the Spanish writing system has consistent grapheme-to-phoneme correspondences.
average difference between the number of letters and the number of real phonemes is 2.71. Only four tokens were regular (same number of phonemes than letters : existe, mortel, partir, plus ('exists, mortal, leave, plus')). The highest difference is 6 in apparaissent, épargneaient ('appear, saved') with 12 letters and 6 phonemes each, and mangeaient ('ate') with 10 letters and 4 phonemes. All the words incorrectly read were thus irregular as far as grapheme-to-phoneme consistency is concerned.
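The grapheme-to-phoneme "irregularity" used here is simply the gap between the number of letters and the number of phonemes, which requires a phonemic transcription of each word (e.g. from a pronunciation lexicon). The sketch below reproduces the three examples quoted in the text; the transcriptions are simplified placeholders.

```python
def letter_phoneme_gap(word, phonemes):
    """Difference between the number of letters and the number of phonemes."""
    return len(word) - len(phonemes)

# Placeholder phonemic transcriptions (one symbol per phoneme)
examples = {
    "apparaissent": ["a", "p", "a", "R", "E", "s"],   # 12 letters, 6 phonemes
    "mangeaient":   ["m", "@~", "Z", "E"],            # 10 letters, 4 phonemes
    "partir":       ["p", "a", "R", "t", "i", "R"],   # 6 letters, 6 phonemes
}
for word, phon in examples.items():
    print(word, letter_phoneme_gap(word, phon))
```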
4 Discussion: determining where complexity is

According to the literature, complexity for children with dyslexia should be found in long and less frequent words. More precisely, from the analysis of the reading errors obtained in our first pilot study, the errors mainly occur on verbs and nouns with complex syllable structure, i.e. irregular grapheme-to-phoneme correspondences, words with many orthographic neighbours or many morphological family members which are more frequent. Visual similarity is a source of error, especially for certain letter pairs (see Table 11). In all the replacements we can observe visual similarities. As shown in Table 12, the word that is actually read tends to be, in most cases, shorter and more frequent than the original one. To sum up, lexical complexity for dyslexic readers in French is to be found in verbs and nouns longer than seven characters, containing letters with visually similar equivalents, with complex syllables and irregular phoneme-to-grapheme consistency. Lexical replacements of words incorrectly read should consider shorter and more frequent words and words with higher grapheme-to-phoneme consistency.
Conclusion
In this paper we have presented the results of a first pilot study aiming at testing the effects of text simplification on children with dyslexia. From our results, reading speed is increased without a loss of comprehension. It is worth mentioning that reading errors were lower on simplified texts (in this experiment, simplified texts contained a majority of lexical simplifications). The comprehensive analyses of reading errors allow us to propose a detailed description of lexical complexity for dyslexic children. The causes of lexical complexity were mainly related to word length (words longer than seven characters), irregular spelling-to-sound correspondences and infrequent syllable structures.

13. Other possible similar pairs (not found in our corpora): t/f, u/v, a/o.
14. The frequencies have been extracted from the Manulex database (column including the five levels).
The insights obtained as a result of this first pilot-study are currently being integrated into a model aiming at providing better accessibility of texts for children with dyslexia. We are currently working in a new study with children in French schools to refine the features that are to be taken into account in our model. These results will be integrated into a tool that will automatically simplify texts by replacing complex lexical items with simpler ones.
TABLE 1 - IReST corpora features before and after manual simplifications.
TABLE 2 - Linguistic transformations on the IReST French corpora.
Lexical Simplifications 85.91%
Direct replacements 57.04%
Removals 13.38%
Replacements with morphological changes 4.93%
Replacements with syntactical changes 10.56%
Syntactic Simplifications 9.86%
Reformulations 7.75%
Constituent order 2.11%
Discursive Simplifications 4.23%
Total 100 %
. http://www.manulex.com 5. http://cental.uclouvain.be/flelex/
TABLE 3 - Significance of the results obtained.
Variables Original texts Simplified texts T value Significance
Reading times (sec) 159.94 134.70 -3.528 0.006**
Reading speed (words per minute) 64.85 71.10 4.105 0.003**
TABLE 4 - Distribution of the types of errors in original and simplified texts.
TABLE 5 - Significance of the results obtained.
TABLE 6 - Error typology compared across languages.
The overall error typology that we propose is shown in table 7:
Type of lexical replacement | N | % | Original word > word read | English translations
Pseudo-word 119 29.46% grenouille > *greniole frog, *
Grammatical variant 135 33.42 % oubliaient > oublient forgot, forget
Lexical replacement 84 20.79% attendent > attaquent wait, attack
Morphological variant 43 10.64% construction > construire build, to build
Orthographical neighbour 23 5.69% jaunes > jeunes yellow, young
Total 404 100%
TABLE 7 - Error typology.
Part-of-speech tags of tokens incorrectly read:
VERB 196 48.51 %
NOUN 115 28.47%
ADJECTIVE 48 11.88%
ADVERB 25 6.19%
Other categories (determiners excluded) 20 4.95%
TABLE 8 - Part-of-speech distribution of the tokens in the corpora.
We analyzed the syllable structure of the 404 tokens. The average number of syllables is 2.09; the distribution is shown in table 9:
Number of syllables
1 syllable 72 30.64%
2 syllables 96 40.85%
3 syllables 47 20.00%
4 syllables 15 6.38%
5 syllables 5 2.13%
Total 235 100.00%
TABLE 9 - Syllable distribution of the tokens in the corpora.
The distribution of syllable structures is shown in table 10:
Syllable structure
CV 230 47.03%
V 57 11.66%
CVC 107 21.88%
CVCC, CCVC, CYVC 47 9.61%
CYV, CCV, VCC, CVY 34 6.95%
VC, YV 10 2.04%
VCCC, CCYV, CCVCC 4 0.82%
Total 489 100.00%
TABLE 10 - Syllable structure.
TABLE 11 - Graphical alternations.
TABLE 12 - Lexical replacement typology with frequencies of the tokens.
We used standardized reading tests to assess the reading level of each child, i.e. l'Alouette[START_REF] Lefavrais | Test de l'alouette[END_REF] and PM47[START_REF] Raven | Pm47 : Standard progressive matrices : Sets a[END_REF], and a small battery of tests to assess general cognitive abilities.
http://www.vision-research.eu
Acknowledgements
We deeply thank the speech therapists Aurore and Mathilde Combes for collecting the reading data and providing a first analysis of the data. We also thank Luz Rello for her valuable insights on parts of the results. |
01757946 | en | [
"spi.meca.geme",
"spi.auto"
] | 2024/03/05 22:32:10 | 2017 | https://hal.science/hal-01757946/file/CK2017_WuBaiCaro_HAL.pdf | Guanglei Wu
Shaoping Bai
Stéphane Caro
email: stephane.caro@ls2n.fr
Transmission Quality Evaluation for a Class of Four-limb Parallel Schönflies-motion Generators with Articulated Platforms
Keywords: Schönflies motion, Jacobian, pressure angle, transmission.
This paper investigates the motion/force transmission quality for a class of parallel Schönflies-motion generators built with four identical RRΠRR-type limbs. It turns out that the determinant of the forward Jacobian matrices for this class of parallel robots can be expressed as the scalar product of two vectors, the first vector being the cross product of the four unit vectors along the parallelograms, the second one being related to the rotation of the mobile platform. The pressure angles, derived from the determinants of the forward and inverse Jacobians, respectively, are used for the evaluation of the transmission quality of the robots. Four robots are compared based on the proposed method as illustrative examples.
Introduction
Parallel robots performing Schönflies motions are well adapted to high-speed pickand-place (PnP) operations [START_REF] Pierrot | Optimal design of a 4-dof parallel manipulator: From academia to industry[END_REF][START_REF] Amine | Singularity conditions of 3T1R parallel manipulators with identical limb structures[END_REF], thanks to their lightweight architecture and high stiffness. A typical robot is the Quattro robot [START_REF]Adept Quattro Parallel Robots[END_REF] by Adept Technologies Inc., the fastest industrial robot available. Its latest version can reach an acceleration up to 15 G with a 2 kg payload, allowing to accomplish four standard PnP cycles per second. Its similar version is the H4 robot [START_REF] Pierrot | H4: a new family of 4-dof parallel robots[END_REF] that consists of four identical limbs and an articulated traveling plate [START_REF] Company | Internal singularity analysis of a class of lower mobility parallel manipulators with articulated traveling plate[END_REF]. Recently, the Veloce. robot [START_REF] Veloce | [END_REF] with a different articulated platform that is connected by a screw pair has been developed. Besides, the four-limb robots with single-platform architecture have also been reported [START_REF] Wu | Architecture optimization of a parallel schönflies-motion robot for pick-and-place applications in a predefined workspace[END_REF][START_REF] Xie | Design and development of a high-speed and high-rotation robot with four identical arms and a single platform[END_REF]. Four-limb parallel robots with an articulated mobile platform are displayed in Fig. 1. It is noteworthy that the H4 robot with the modified mobile platform can be mounted vertically instead of the horizontal installation for the reduced mounting space, to provide a rotation around an axis of vertical direction, which is named as "V4" for convenience in the following study.
In the design and analysis of a manipulator, its kinematic Jacobian matrix plays an important role, since the dexterity/manipulability of the robot can be evaluated by the condition number of Jacobians as well as the accuracy/torque capability [START_REF] Merlet | Jacobian, manipulability, condition number, and accuracy of parallel robots[END_REF] be-tween the actuators and end-effector. On the other hand, a problem usually encountered in this procedure is that the parallel manipulators with mixed input or/and output motions, i.e., compound linear and angular motions, will result in dimensionally inhomogeneous Jacobians, thus, the conventional performance indices associated with the Jacobian matrix, such as norm or condition number, will lack in physical significance [START_REF] Kim | New dimensionally homogeneous Jacobian matrix formulation by three end-effector points for optimal design of parallel manipulators[END_REF]. As far as Schönflies-motion generators are concerned, their endeffector generates a mixed motion of three translations and one rotation (3T1R), for which the terms of the kinematic Jacobian matrix do not have the same units. A common approach to overcome this problem is to introduce a characteristic length [START_REF] Altuzarra | Multiobjective optimum design of a symmetric parallel Schönflies-motion generator[END_REF] to homogenize the Jacobian matrix, whereas, the measurement significantly depends on the choice of the characteristic length that is not unique, resulting in biased evaluation, although a "best" one can be found by optimization technique [START_REF] Angeles | Is there a characteristic length of a rigid-body displacement?[END_REF]. Alternatively, an efficient approach to accommodate this dimensional inhomogeneity is to adopt the concept of the virtual coefficient, namely, the transmission index, which is closely related to the transmission/pressure angle. The pressure angle based transmission index will be adopted in this work.
This paper presents a uniform evaluation approach for the transmission quality of a family of four-limb 3T1R parallel robots with articulated mobile platforms. The pressure angles, derived directly from the forward and inverse Jacobians, are used for the evaluation of the transmission quality of the robots. The defined transmission index is illustrated with four robot counterparts for performance evaluation and comparison.
Manipulator Architecture
Figure 2(a) depicts a simplified CAD model of the parallel Schönflies-motion generator, which is composed of four identical RRΠRR-type limbs connecting the base and an articulated mobile platform (MP). The generalized base platform and the different mobile platforms of the four robots are displayed in Figs. 2(b) and 2(c), respectively.
The global coordinate frame F_b is built with its origin located at the geometric center of the base platform. The x-axis is parallel to the segment A_2A_1 (A_3A_4), and the z-axis is normal to the base-platform plane, pointing upwards. The moving coordinate frame F_p is attached to the mobile platform with its origin at the geometric center, where the X-axis is parallel to segment C_2C_1 (C_3C_4). Vectors i, j and k represent the unit vectors of the x-, y- and z-axis, respectively. The axis of rotation of the ith actuated joint is parallel to the unit vector u_i = R_z(α_i)i, where R stands for the rotation matrix, and α_1 = -α_2 = α - π/2, α_3 = -α_4 = β + π/2. Moreover, unit vectors v_i and w_i are parallel to the segments A_iB_i and B_iC_i, respectively, namely, the unit vectors along the proximal and distal links, respectively.
Kinematics and Jacobian Matrix of the Robots
The Cartesian coordinates of points A i and B i expressed in the frame F b are respectively derived by
a_i = R [cos η_i  sin η_i  0]^T    (1)
b_i = b v_i + a_i,   v_i = R_z(α_i) R_x(θ_i) j    (2)
where η i = (2i -1)π/4, i = 1, ..., 4, and θ i is the input angle.
Let the mobile platform pose be denoted by χ = [p^T  φ]^T, with p = [x  y  z]^T; the Cartesian coordinates of point C_i in frame F_b are expressed as
c_i = { sgn(cos η_i) r R_z(φ) i + sgn(sin η_i) c j + p,    Quattro (H4)
        -sgn(cos η_i) r R_y(φ) i + sgn(cos η_i) c j + p,   V4
        r R_z(η_i) i + mod(i, 2) h φ/(2π) k + p,           Veloce.    (3)
where sgn(•) stands for the sign function of (•), and mod stands for the modulo operation, h being the lead of the screw pair of the Veloce. robot. The inverse geometric problem has been well documented [START_REF] Pierrot | Optimal design of a 4-dof parallel manipulator: From academia to industry[END_REF]. It can be solved from the following the kinematic constraint equations:
(c_i - b_i)^T (c_i - b_i) = l^2,   i = 1, ..., 4    (4)
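As a quick numerical sanity check of Eqs. (1)–(4), the following Python/NumPy sketch (ours, not part of the original paper) evaluates a_i, b_i and c_i for the Quattro-type platform and sweeps θ_1 to locate the input angle that satisfies the constraint equation. The link dimensions follow the illustrative parameters given later in the paper (Table 1); the test pose is arbitrary.

```python
import numpy as np

def Rz(t):
    c_, s_ = np.cos(t), np.sin(t)
    return np.array([[c_, -s_, 0], [s_, c_, 0], [0, 0, 1]])

def Rx(t):
    c_, s_ = np.cos(t), np.sin(t)
    return np.array([[1, 0, 0], [0, c_, -s_], [0, s_, c_]])

i_, j_ = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])

# Illustrative geometry (mm) and an arbitrary test pose.
R, b, l, r, c = 275.0, 375.0, 800.0, 80.0, 70.0
alpha_, beta_ = -np.pi/4, 3*np.pi/4                       # Quattro-type base angles
alphas = [alpha_ - np.pi/2, -(alpha_ - np.pi/2),
          beta_ + np.pi/2, -(beta_ + np.pi/2)]            # alpha_1 .. alpha_4
p, phi = np.array([50.0, -30.0, -700.0]), 0.2             # platform position, rotation

def a_i(i):                                               # Eq. (1)
    eta = (2*i - 1)*np.pi/4
    return R*np.array([np.cos(eta), np.sin(eta), 0.0])

def c_i(i):                                               # Eq. (3), Quattro case
    eta = (2*i - 1)*np.pi/4
    return np.sign(np.cos(eta))*r*(Rz(phi) @ i_) + np.sign(np.sin(eta))*c*j_ + p

def residual(i, theta):                                   # left side of Eq. (4) minus l^2
    v = Rz(alphas[i-1]) @ Rx(theta) @ j_                  # Eq. (2)
    d = c_i(i) - (b*v + a_i(i))
    return d @ d - l**2

thetas = np.linspace(-np.pi/2, np.pi/2, 2001)             # brute-force inverse geometry
res = np.abs([residual(1, t) for t in thetas])
print("theta_1 ≈", thetas[res.argmin()], "constraint residual", res.min())
```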
Differentiating Eq. ( 4) with respect to time, one obtains
φ̇ r w_i^T s_i + w_i^T ṗ = θ̇_i b w_i^T (u_i × v_i)    (5)
with
w_i = (c_i - b_i)/l,
s_i = { sgn(cos η_i) R_z(φ) j,    Quattro (H4)
        sgn(cos η_i) R_y(φ) k,    V4
        mod(i, 2) h φ/(2π) k,     Veloce.    (6)
Equation ( 5) can be cast in a matrix form, namely,
A χ̇ = B θ̇    (7)
with
A = [e_1  e_2  e_3  e_4]^T,    χ̇ = [ẋ  ẏ  ż  φ̇]^T    (8a)
B = diag[h_1  h_2  h_3  h_4],    θ̇ = [θ̇_1  θ̇_2  θ̇_3  θ̇_4]^T    (8b)
where A and B are the forward and inverse Jacobian matrices, respectively, and
e_i = [w_i^T   r w_i^T s_i]^T,    h_i = b w_i^T (u_i × v_i)    (9)
As long as A is nonsingular, the kinematic Jacobian matrix is obtained as
J = A^(-1) B    (10)
According to the inverse Jacobian matrix, each limb can have two working modes, which is characterized by the sign "-/+" of h i . In order for the robot not to reach any serial singularity, the mode h i < 0, i = 1, ..., 4, is selected as the working mode for all the robots.
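The assembly of Eqs. (7)–(10) reduces to a few dot and cross products once the limb unit vectors are known. The following sketch (our own, for illustration only) builds A, B and J from given unit vectors; the random inputs are placeholders meant to show the matrix shapes, not a physically meaningful configuration.

```python
import numpy as np

def jacobians(u, v, w, s, r, b):
    """Assemble A (4x4), B (4x4 diagonal) and J = A^-1 B from Eqs. (8)-(10).

    u, v, w, s: lists of four 3-vectors (unit vectors of each limb and the
    platform-dependent vector s_i); r, b: platform radius and proximal length.
    """
    A = np.zeros((4, 4))
    h = np.zeros(4)
    for i in range(4):
        A[i, :3] = w[i]                              # w_i^T
        A[i, 3] = r * (w[i] @ s[i])                  # r w_i^T s_i
        h[i] = b * (w[i] @ np.cross(u[i], v[i]))     # h_i, Eq. (9)
    B = np.diag(h)
    return A, B, np.linalg.solve(A, B)               # J = A^-1 B, Eq. (10)

# Placeholder unit vectors, for shape checking only.
rng = np.random.default_rng(0)
vecs = lambda: [x / np.linalg.norm(x) for x in rng.normal(size=(4, 3))]
A, B, J = jacobians(vecs(), vecs(), vecs(), vecs(), r=0.08, b=0.375)
print(J.shape)   # (4, 4): maps actuated joint rates to (xdot, ydot, zdot, phidot)
```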
Transmission Quality Analysis
Our interest is the transmission quality, which is related to the robot Jacobian. The determinant |B| of the inverse Jacobian matrix B is expressed as
|B| = ∏_{i=1}^{4} h_i = b^4 ∏_{i=1}^{4} w_i^T (u_i × v_i)    (11)
Accordingly, the pressure angle µ_i associated with the motion transmission in the ith limb, i.e., the motion transmitted from the actuated link to the parallelogram, is defined as:
µ_i = cos^(-1)[w_i^T (u_i × v_i)],   i = 1, ..., 4    (12)
namely, the pressure angle between the velocity of point B_i along the vector u_i × v_i and the pure force applied to the parallelogram along w_i, as shown in Fig. 3(a). In the following, let w_mn = w_m × w_n. Taking the Quattro robot as an example, the pressure angle σ amongst the limbs, namely, for the force transmitted from the end-effector to the passive parallelograms in the other limbs, provided that the actuated joints in these limbs are locked, is derived below:
σ = cos^(-1)[ (w_14 × w_23)^T s / ‖w_14 × w_23‖ ]    (14)
wherefrom the geometrical meaning of angle σ can be interpreted as the angle between the minus Y -axis (s is normal to segment P 1 P 2 ) and the intersection line of planes B 1 P 1 B 4 and B 2 P 2 B 3 , where plane B 1 P 1 B 4 (B 2 P 2 B 3 ) is normal to the common perpendicular line between the two skew lines along w 1 and w 4 (w 2 and w 3 ), as depicted in Fig. 3(b). To illustrate the angle σ physically, (w 14 × w 23 ) T s can be rewritten in the following form:
(w_14 × w_23)^T s = w_14^T [w_3 (w_2 · s) - w_2 (w_3 · s)]
                  = w_23^T [w_4 (w_1 · s) - w_1 (w_4 · s)]    (15)
The angle σ can now be interpreted as the pressure angle between the velocity in the direction of w_1 × w_4 and the forces along w_2 × w_3 imposed by the parallelograms in limbs 2 and 3 on point P, under the assumption that the actuated joints in limbs 1 and 4 are locked simultaneously. The same explanation is applicable for the case when the actuated joints in limbs 2 and 3 are locked. By the same token, the pressure angle for the remaining robot counterparts can be defined. Consequently, the motion κ and force ζ transmission indices (TI) at a prescribed configuration are defined as the minimum value of the cosine of the pressure angles, respectively,
κ = min(|cos µ_i|), i = 1, ..., 4;    ζ = |cos σ|    (16)
To this end, the local transmission index (LTI) [START_REF] Wang | Performance evaluation of parallel manipulators: Motion/force transmissibility and its index[END_REF] is defined as
η = min{κ, ζ } = min{| cos µ i |, | cos σ |} ∈ [0, 1] (17)
The larger the value of the index η, the better the transmission quality of the manipulator. This index is also applicable to singularity measurement, where η = 0 corresponds to a singular configuration.
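The indices of Eqs. (12), (14), (16) and (17) can be evaluated with a few vector operations. The sketch below (ours, for illustration) computes the LTI for one configuration of a Quattro-type robot; the random unit vectors in the usage line are placeholders, and in practice they would come from the limb geometry of Section "Kinematics and Jacobian Matrix of the Robots".

```python
import numpy as np

def lti(w, u, v, s):
    """Local transmission index of Eq. (17) for one configuration.

    w, u, v: lists of the four unit vectors w_i, u_i, v_i of the limbs;
    s: the unit vector entering Eq. (14) (Quattro-type platform).
    """
    cos_mu = [abs(w[i] @ np.cross(u[i], v[i])) for i in range(4)]   # Eq. (12)
    n = np.cross(np.cross(w[0], w[3]), np.cross(w[1], w[2]))        # w_14 x w_23
    cos_sigma = abs(n @ s) / np.linalg.norm(n)                      # Eq. (14)
    kappa, zeta = min(cos_mu), cos_sigma                            # Eq. (16)
    return min(kappa, zeta)                                         # Eq. (17)

# Placeholder configuration, only to show the call signature.
rng = np.random.default_rng(1)
unit = lambda x: x / np.linalg.norm(x)
rand_set = lambda: [unit(x) for x in rng.normal(size=(4, 3))]
print(lti(rand_set(), rand_set(), rand_set(), unit(rng.normal(size=3))))
```

Sweeping this function over a grid of platform positions and orientations is how LTI isocontour maps such as those of Fig. 4 can be produced.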
Transmission Evaluation of PnP Robots
In this section, the transmission index over the regular workspace, for the Quattro, H4, Veloce. and V4 robots, will be mapped to analyze their motion/force transmission qualities. According to the technical parameters of the Quattro robot [START_REF]Adept Quattro Parallel Robots[END_REF], the parameters of the robots' base and mobile platforms are given in Table 1, and the other parameters are set to R = 275 mm, b = 375 mm and l = 800 mm, respectively.
Table 1 Geometrical parameters of the base and mobile platforms of the four-limb robots.
The LTI isocontours of the four robots with different rotation angles of the mobile platform are visualized in Fig. 4, from which it is seen that the minimum LTIs of the Quattro and Veloce. robots are much higher than those of H4 and V4. Moreover, the volumes of the former with LTI ≥ 0.7 are larger, forming a larger operational workspace with high transmission quality. This means that the four-limb robots with a fully symmetrical structure have much better transmission performance than the asymmetric robot counterparts. Another observation is that the transmission performance of the robots decreases with increasing MP rotation angle.
As displayed in Fig. 4(a), the transmission index of the Quattro robot has larger values in the central region, which admits a singularity-free workspace with rotational capability φ = ±45°. Similarly, Fig. 4(c) shows that the Veloce. robot can also have a high-transmission workspace free of singularity with a smaller screw-pair lead, which means that this type of mobile platform allows the robot to achieve high performance in terms of transmission quality and rotational capability of full-circle rotation. By contrast, the asymmetric H4 and V4 robots result in a relatively small operational workspace and relatively low transmission performance, as illustrated in Figs. 4(b) and 4(d), despite a similar mechanism footprint ratio with the same link dimensions and similar platform shapes.
Conclusions
This paper presents the transmission analysis for a class of four-limb parallel Schönflies-motion robots with articulated mobile platforms, in close connection with two pressure angles derived from the forward and inverse Jacobian matrices, wherein the determinant of the forward Jacobian matrix was simplified in an elegant manner, i.e., as the scalar product of two vectors, through the Laplace expansion. The cosine functions of the pressure angles are used as indices to evaluate the transmission quality. It appears that the robot with the screw-pair-based mobile platform, namely the Veloce., is the best in terms of transmission quality for any orientation of the mobile platform.
Fig. 1 The four-limb PnP robots with different base and mobile platforms: (a) Quattro [1]; (b) H4 [9]; (c) Veloce. [2]; (d) "V4" [12].
Fig. 2 The parameterization of the four-limb robots: (a) simplified CAD model; (b) a generalized base platform; (c) three different mobile platforms for the four robots.
Fig. 3 The pressure angles of the four-limb robots in the motion/force transmission: (a) µ_i for all robots; (b) σ for Quattro.
robots   | base                 | mobile platform
Quattro  | α = -π/4, β = 3π/4   | r = 80 mm, c = 70 mm
H4, V4   | α = 0, β = π/2       | r = 80 mm, c = 70 mm
Veloce.  | α = -π/4, β = 3π/4   | r = 100 mm, γ = (2i-1)π/4, lead h
Fig. 4 The LTI isocontours of the robots: (a) Quattro, φ = 0 and φ = 45°; (b) H4, φ = 0 and φ = 45°; (c) Veloce. with φ = 2π, screw lead h = 20 and h = 50; (d) V4, φ = 0 and φ = 45°.
Acknowledgements The reported work is partly supported by the Fundamental Research Funds for the Central Universities (DUT16RC(3)068) and by Innovation Fund Denmark (137-2014-5). |
01757949 | en | [
"info.info-fl"
] | 2024/03/05 22:32:10 | 2018 | https://inria.hal.science/hal-01757949/file/HMMSuspicious.pdf | Loïc Hélouët
email: loic.helouet@inria.fr
John Mullins
email: john.mullins@polymtl.ca
Hervé Marchand
email: herve.marchand@inria.fr
Concurrent secrets with quantified suspicion
A system satisfies opacity if its secret behaviors cannot be detected by any user of the system. Opacity of distributed systems was originally set as a boolean predicate before being quantified as measures in a probabilistic setting. This paper considers a different quantitative approach that measures the efforts that a malicious user has to make to detect a secret. This effort is measured as a distance w.r.t a regular profile specifying a normal behavior. This leads to several notions of quantitative opacity. When attackers are passive that is, when they just observe the system, quantitative opacity is brought back to a language inclusion problem, and is PSPACEcomplete. When attackers are active, that is, interact with the system in order to detect secret behaviors within a finite depth observation, quantitative opacity turns to be a two-player finitestate quantitative game of partial observation. A winning strategy for an attacker is a sequence of interactions with the system leading to a secret detection without exceeding some profile deviation measure threshold. In this active setting, the complexity of opacity is EXPTIME-complete.
I. INTRODUCTION
Opacity of a system is a property stating that occurrences of runs from a subset S of runs of the system (the secret) can not be detected by malicious users. Opacity [START_REF] Bryans | Opacity generalised to transition systems[END_REF], [START_REF] Badouel | Concurrent secrets[END_REF] can be used to model several security requirements like anonymity and non-interference [START_REF] Goguen | Security policies and security models[END_REF]. In the basic version of non-interference, actions of the system are divided into high (classified) actions and low (public) ones, and a system is non-interferent iff one can not infer from observation of low operations that highlevel actions were performed meaning that occurrence of high actions cannot affect "what an user can see or do". This implicitly means that users have, in addition to their standard behavior, observation capacities.
Non-interference is characterized as an equivalence between the system as it is observed by a low-level user and an ideally secure version of it where high-level actions, and hence any information flow, are forbidden. This generic definition can be instantiated in many ways, by considering different modeling formalisms (automata, Petri nets, process algebra,...), and equivalences (language equivalence, bisimulation(s),...) representing the discriminating power of an attacker. (see [START_REF] Sabelfeld | Language-based information-flow security[END_REF] for a survey).
Opacity generalizes non-interference. The secrets to hide in a system are sets of runs that should remain indistinguishable from other behaviors. A system is considered as opaque if, as observed, one can not deduce that the current execution belongs to the secret. In the standard setting, violation of opacity is a passive process: attackers only rely on their partial observation of runs of the system. Checking whether a system is opaque is a PSPACE-complete problem [START_REF] Cassez | Synthesis of opaque systems with static and dynamic masks[END_REF].
As such, opacity does not take into account information that can be gained by active attackers. Indeed, a system may face an attacker having the capability not only to observe the system but also to interact with it in order to eventually disambiguate observation and detect a secret. A second aspect usually ignored is the quantification of opacity: the more executions leaking information are costly for the attacker, the more secure is the system. In this paper we address both aspects. A first result of this paper is to consider active opacity, that is opacity in a setting where attackers of a system perform actions in order to collect information on secrets of the system. Performing actions in our setting means playing standard operations allowed by the system, but also using observation capacities to infer whether a sensitive run is being performed. Checking opacity in an active context is a partial information reachability game, and is shown EXPTIME-complete.
We then address opacity in a quantitative framework, characterizing the efforts needed for an attacker to gain hidden information with a cost function. Within this setting, a system remains opaque if the cost needed to obtain information exceeds a certain threshold. This cost is measured as a distance of the attacker's behavior with respect to a regular profile, modeling that deviations are caught by anomaly detection mechanisms. We use several types of distances, and show that quantitative and passive opacity remains PSPACE-complete, while quantitative and active opacity remains EXPTIMEcomplete.
Opacity with passive attackers has been addressed in a quantitative setting by [START_REF] Bérard | Quantifying opacity[END_REF]. They show several measures for opacity. Given a predicate φ characterizing secret runs, a first measure quantifies opacity as the probability of a set of runs whose observation suffices to claim the run satisfies φ. A second measure considers observation classes (sets of runs with the same observation), and defines the restrictive probabilistic opacity measure as a harmonic mean (weighted by the probability of observations) of the probability that φ is false in a given observation class. Our setting differs from the setting of [START_REF] Bérard | Quantifying opacity[END_REF] in the sense that we do not measure secrecy as the probability to leak information to a passive attacker, but rather quantify the minimal efforts required by an active attacker to obtain information.
The paper is organized as follows: Section II introduces our model for distributed systems, and the definition of opacity. Section III recalls the standard notion of opacity usually found in the literature and its PSPACE-completeness, shows how to model active attackers with strategies, and proves that active opacity can be solved as a partial information game over an exponential size arena, and is EXPTIME-complete. Section IV introduces quantification in opacity questions, by measuring the distance between the expected behavior of an agent and its current behavior, and solves the opacity question with respect to a bound on this distance. Section V enhances this setting by discounting distances, first by defining a suspicion level that depends on evolution of the number of errors within a bounded window, and then, by averaging the number of anomalies along runs. The first window-based approach does not change the complexity classes of passive/active opacity, but deciding opacity for averaged measures is still an open problem.
II. MODEL
Let Σ be an alphabet, and let Σ' ⊆ Σ. A word of Σ* is a sequence of letters w = σ_1 ... σ_n. We denote by w^{-1} the mirror of w, i.e., w^{-1} = σ_n ... σ_1. The projection of w on Σ' ⊆ Σ is defined by the morphism π_{Σ'} : Σ* → Σ'* with π_{Σ'}(ε) = ε, π_{Σ'}(a.w) = a.π_{Σ'}(w) if a ∈ Σ' and π_{Σ'}(a.w) = π_{Σ'}(w) otherwise. The inverse projection of w is the set of words whose projection is w, and is defined as π^{-1}_{Σ'}(w) = {w' ∈ Σ* | π_{Σ'}(w') = w}. For a pair of words w, w' defined over alphabets Σ and Σ', the shuffle of w and w' is denoted by w||w' and is defined as the set of words w||w' = {w'' | π_Σ(w'') = w ∧ π_{Σ'}(w'') = w'}. The shuffle of two languages L_1, L_2 is the set of words obtained as a shuffle of a word of L_1 with a word of L_2.
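These word operations are straightforward to implement. The Python sketch below is our own illustration (the encodings are ours, not taken from the paper): the projection π_{Σ'}, a membership test for the inverse projection, and the shuffle of two words.

```python
from itertools import combinations

def project(word, sub):
    """pi_{Sigma'}(w): erase letters outside the sub-alphabet."""
    return tuple(a for a in word if a in sub)

def in_inverse_projection(candidate, w, sub):
    """Does `candidate` belong to pi^{-1}_{Sigma'}(w)?"""
    return project(candidate, sub) == tuple(w)

def shuffle(u, v):
    """All interleavings of u and v (the w || w' of the text)."""
    n, out = len(u) + len(v), set()
    for pos in combinations(range(n), len(u)):     # positions of u's letters
        w, iu, iv = [], 0, 0
        for k in range(n):
            if k in pos:
                w.append(u[iu]); iu += 1
            else:
                w.append(v[iv]); iv += 1
        out.add(tuple(w))
    return out

print(project("abcab", {"a", "b"}))   # ('a', 'b', 'a', 'b')
print(in_inverse_projection("acb", "ab", {"a", "b"}))   # True
print(shuffle("ab", "c"))             # three interleavings of 'ab' and 'c'
```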
Definition 1: A concurrent system S = (A, U ) is composed of:
• A finite automaton A = (Σ, Q, →, q_0, F);
• A finite set of agents U = u_1, ..., u_n, where each u_i is a tuple u_i = (A_i, P_i, S_i, Σ_o^i), where A_i, P_i, S_i are automata and Σ_o^i an observation alphabet.
Agents behave according to their own logic, depicted by a finite automaton A_i = (Σ_i, Q_i, →_i, q_0^i, F_i) over an action alphabet Σ_i. We consider that agents' moves synchronize with the system when performing their actions. This allows modeling situations such as entering critical sections. We consider that in A and in every A_i, all states are accepting. This way, every sequence of steps of S that conforms to the transition relations is a behavior of S. An agent u_i observes a subset of actions, defined as an observation alphabet Σ_o^i ⊆ Σ (a particular case is Σ_o^i = Σ_i, meaning that agent u_i observes only what it is allowed to do). Every agent u_i possesses a secret, defined as a regular language L(S_i) recognized by automaton
S i = (Σ, Q S i , -→ S i , q S 0,i , F S i ).
Not all states of secret automata are accepting, i.e. some behaviors of an agent u_i are secret, some are not. We equip every agent u_i with a profile P_i = (Σ, Q_i^P, δ_i^P, s_{0,i}^P, F_i^P), that specifies its "normal" behavior. The profile of an agent is prefix-closed. Hence, F_i^P = Q_i^P, and if w.a belongs to profile L(P_i) then w is also in user u_i's profile. In profiles, we mainly want to consider actions of a particular agent. However, for convenience, we define profiles over alphabet Σ, and build them in such a way that L(P_i) = L(P_i) || (Σ \ Σ_i)*.
We assume that the secret S i of an user u i can contain words from Σ * , and not only words in Σ * i . This is justified by the fact that an user may want to hide some behavior that are sensible only if they occur after other agents actions (u 1 plays b immediately after a was played by another agent). For consistency, we furthermore assume that Σ i ⊆ Σ i o , i.e., an user observes at least its own actions. Two users may have common actions (i.e., Σ i ∩ Σ j = ∅), which allows synchronizations among agents. We denote by Σ U = ∪ i∈U Σ i the possible actions of all users. Note that Σ U ⊆ Σ as the system may have its own internal actions.
Intuitively, in a concurrent system, A describes the actions that are feasible with respect to the current global state of the system (available resources, locks, access rights, ...). The overall behavior of the system is a synchronized product of the agents' behaviors, intersected with L(A). Hence, within a concurrent system, agents perform moves that are allowed by their current state if they are feasible in the system. If two or more agents can perform a transition via the same action a, then all agents that can execute a move conjointly to the next state in their local automaton. More formally, a configuration of a concurrent system is a tuple C = (q, q_1, ..., q_{|U|}), where q ∈ Q is a state of A and each q_i ∈ Q_i is a local state of user u_i. The first component of a configuration C is denoted state(C). We consider that the system starts in an initial configuration C_0 = (q_0, q_0^1, ..., q_0^{|U|}). A move from a configuration C = (q, q_1, ..., q_{|U|}) to a configuration C' = (q', q'_1, ..., q'_{|U|}) via action a is allowed
• if a ∉ Σ_U and (q, a, q') ∈ →, or
• if a ∈ Σ_U, (q, a, q') ∈ →, there exists at least one agent u_i such that (q_i, a, q'_i) ∈ →_i, and for every q_j such that some transition labeled by a is firable from q_j, (q_j, a, q'_j) ∈ →_j.
The local state of agents that cannot execute a remains unchanged, i.e., if agent u_k is such that a ∉ Σ_k or no transition labeled by a is firable from q_k, then q'_k = q_k. A run of S = (A, U) is a sequence of moves ρ = C_0 -a_1→ C_1 ... C_k. Given a run ρ = C_0 -a_1→ C_1 ... -a_k→ C_k, we denote by l(ρ) = a_1 ⋯ a_k its corresponding word.
The set of runs of S is denoted by Runs(S), while the language L(S) = l(Runs(S)) is the set of words labeling runs of S. We denote by Conf(S) the configurations reached by S starting from C_0. The size |S| of S is the size of its set of configurations. Given an automaton A, P_i, or S_i, we denote by δ(q, A, a) (resp. δ(q, P_i, a), δ(q, S_i, a)) the set of states that are successors of q by a transition labeled by a, i.e. δ(q, A, a) = {q' | q -a→ q'}. This relation extends to sets of states in the obvious way, and to words, i.e. δ(q, A, w.a) = δ(δ(q, A, w), A, a) with δ(q, A, ε) = {q}. Last, for a given sub-alphabet Σ' ⊆ Σ and a letter a ∈ Σ', we define Δ_{Σ'}(q, A, a) as the set of states that are reachable from q in A by sequences of moves whose observation is a. More formally, Δ_{Σ'}(q, A, a) = {q' | ∃w ∈ (Σ \ Σ')*, q' ∈ δ(q, A, w.a)}.
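For a nondeterministic automaton stored as an explicit set of transitions, δ and Δ_{Σ'} are simple fixpoint computations. The sketch below is our own illustration; the data structures (sets of triples) are ours and are not prescribed by the paper.

```python
def delta(states, trans, a):
    """delta(X, A, a): successors of states in X by a transition labelled a."""
    return {q2 for (q1, b, q2) in trans if q1 in states and b == a}

def delta_obs(states, trans, sigma, sub, a):
    """Delta_{Sigma'}(X, A, a): states reached by a word of (Sigma \\ Sigma')* . a."""
    closure, changed = set(states), True
    while changed:                                  # saturate by unobservable moves
        new = {q2 for (q1, b, q2) in trans
               if q1 in closure and b in sigma - sub}
        changed = not new <= closure
        closure |= new
    return delta(closure, trans, a)                 # then one observable a-step

# toy automaton over {a, u}: u is unobservable for Sigma' = {a}
trans = {(0, 'u', 1), (1, 'a', 2), (0, 'a', 3)}
print(delta_obs({0}, trans, {'a', 'u'}, {'a'}, 'a'))   # {2, 3}
```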
III. OPACITY FOR CONCURRENT SYSTEMS
The standard Boolean notion of opacity introduced by [START_REF] Bryans | Opacity generalised to transition systems[END_REF], [START_REF] Badouel | Concurrent secrets[END_REF] says that the secret of u i in a concurrent system S is opaque to u j if, every secret run of u i is equivalent with respect to u j 's observation to a non-secret run. In other words, u j cannot say with certainty that the currently executed run belongs to L(S i ).
Implicitly, opacity assumes that the specification of the system is known by all participants. In the setting of concurrent system with several agents and secrets, concurrent opacity can then be defined as follows:
Definition 2 (Concurrent Opacity):
A concurrent system S is opaque w.r.t. U (noted U-Opaque) if ∀i ≠ j, ∀w ∈ L(S_i) ∩ L(S), π^{-1}_{Σ_o^j}(π_{Σ_o^j}(w)) ∩ L(S) ⊈ L(S_i)
Clearly, U-opacity is violated if one can find a pair of users u_i, u_j and a run labeled by a word w ∈ L(S_i) ∩ L(S) such that π^{-1}_{Σ_o^j}(π_{Σ_o^j}(w)) ∩ L(S) ⊆ L(S_i), i.e. after playing w, there is no ambiguity for u_j on the fact that w is the label of a run contained in u_i's secret. Unsurprisingly, checking opacity can be brought back to a language inclusion question, and is hence PSPACE-complete. This property was already shown in [START_REF] Cassez | Synthesis of opaque systems with static and dynamic masks[END_REF] with a slightly different model (with a single agent j whose behavior is Σ_j* and a secret defined as a sub-language of the system A).
Theorem 3 ( [START_REF] Cassez | Synthesis of opaque systems with static and dynamic masks[END_REF]): Deciding whether S is U -opaque is PSPACE-complete.
Proof:[sketch] The proof of PSPACE-completeness consists in first showing that one can find a witness run in polynomial space. One can choose a pair of users u_i, u_j in logarithmic space with respect to the number of users, and then find a run after which u_j can estimate without error that u_i is in a secret state. Then, an exploration has to maintain u_j's estimation of the possible configurations of the system together with the status of u_i's secret, with |Conf(S)| * |S_i| bits. It is also useless to consider runs of length greater than 2^{|Conf(S)| * |S_i|}. So finding a witness is in NPSPACE and, using Savitch's lemma [START_REF] Walter | Relationships between nondeterministic and deterministic tape complexities[END_REF] and closure of PSPACE by complementation, opacity is in PSPACE. Hardness comes from a reduction from the universality question for regular languages. We refer interested readers to the appendix for a complete proof.
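The decision procedure sketched above amounts to a joint subset construction: track, for each observation of u_j, the set of pairs (configuration, secret state) compatible with it, and look for a reachable estimate whose secret components are all accepting. The simplified Python sketch below (our own encoding, over an explicit finite LTS) explores estimates breadth-first rather than using the nondeterministic polynomial-space search of the proof, so it uses exponential space, but it illustrates the same test.

```python
from collections import deque

def passive_opacity_violated(trans, init, sigma_o, sec_step, sec_init, sec_final):
    """True iff some observation by u_j certifies that the current run is in S_i.

    trans: finite set of (conf, a, conf') triples of S; sigma_o: letters u_j observes;
    sec_step(s, a): transition function of a complete DFA for the secret S_i.
    """
    def unobs_closure(pairs):
        pairs, frontier = set(pairs), set(pairs)
        while frontier:
            frontier = {(c2, sec_step(s, b))
                        for (c, s) in frontier
                        for (c1, b, c2) in trans
                        if c1 == c and b not in sigma_o} - pairs
            pairs |= frontier
        return pairs

    start = frozenset(unobs_closure({(init, sec_init)}))
    seen, todo = {start}, deque([start])
    while todo:
        est = todo.popleft()
        if est and all(s in sec_final for (_, s) in est):
            return True                                   # no ambiguity left for u_j
        for a in sigma_o:                                 # one observable step, then closure
            step = {(c2, sec_step(s, a))
                    for (c, s) in est
                    for (c1, b, c2) in trans if c1 == c and b == a}
            nxt = frozenset(unobs_closure(step))
            if nxt and nxt not in seen:
                seen.add(nxt); todo.append(nxt)
    return False

# toy system: unobservable secret action 'h' then 'a', or directly 'a'
trans = {(0, 'h', 1), (1, 'a', 2), (0, 'a', 3)}
sec = lambda s, a: 1 if (s == 1 or a == 'h') else 0       # secret state 1: 'h' occurred
print(passive_opacity_violated(trans, 0, {'a'}, sec, 0, {1}))   # False: 'a' stays ambiguous
```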
The standard notion of opacity considers accidental leakage of secret information to an honest user u_j that is passive, i.e. that does not behave in order to obtain this information. One can also consider an active setting, where a particular agent u_j behaves in order to obtain information on a secret S_i. In this setting, one can see opacity as a partial information reachability game, where player u_j tries to reach a state in which his estimation of S_i's states is contained in F_i^S. Following the definition of non-interference by Goguen & Meseguer [START_REF] Goguen | Security policies and security models[END_REF], we also equip our agents with observation capacities.
CX \γ=v = {C ∈ CX | a γ (state(C)) = v}.
We allow observation from any configuration for every user, hence a behavior of a concurrent system with active attackers shuffles behaviors from L(S), observation actions from Σ * Γ and the obtained answers. To simplify notations, we assume that a query and its answer are consecutive transitions. The set of queries of a particular agent u j will be denoted by Σ Γ j . Adding the capacity to observe states of a system forces to consider runs of S containing queries followed by their answers instead of simply runs over Σ * .
We will denote by S Γ the system S executed in an active environment Formally, a run of S Γ in an active setting is a sequence
ρ = C 0 e1 -→ S Γ C 1 . . . e k -→ S Γ C k where C 0 , . . . , C k are usual configurations, each e i is a letter from Σ ∪ Σ Γ ∪ {tt, ff }, such that • if e k ∈ Σ Γ then C k+1 ∈ δ(C k , S, e k ).
• if e k = a γ ∈ Σ Γ , then e k+1 = a γ (q k-1 )2 , and C k-1 = C k+1 . Intuitively, testing the value of a proposition does not change the current state of the system. Furthermore, playing action a γ from C k-1 leaves the system in the same configuration, but remembering that an agent just made the query a γ . We will write C k = C k-1 (a γ ) to denote this situation. The semantics of S Γ can be easily obtained from that of S. It can be defined as a new labeled transition system LT S Γ (S) = (Conf (S Γ ), -→ S Γ , C 0 ) over alphabet Σ ∪ Σ Γ ∪ {tt, ff } recognizing runs of S Γ . If LT S(S) = (Conf (S), -→) is an LTS defining runs of S, then LT S(S Γ ) can be built by adding a loop of the form
C k aγ -→ S Γ C k (a γ ) aγ (q k ) -→ S Γ C k from each configuration C k in Conf (S).
We denote by Runs(S Γ ) the set of runs of system S in an active setting with observation actions Σ Γ . As usual, ρ is a secret run of agent u i iff l(ρ) is recognized by automaton S i . The observation of a run ρ by user u j is a word l j (ρ) obtained by projection of l(ρ) on Σ j ∪ Σ Γ j ∪ {tt, ff }. Hence, an observation of user j is a word l j (ρ) = α 1 . . . α k where α m+1 ∈ {tt, ff } if α m ∈ Σ Γ j (α m is a query followed by the corresponding answer).
Let w ∈ (Σ j .(Σ Γ j .{tt, ff }) * ) * . We denote by l -1 j (w) the set of runs of S Γ which observation by u j is w. A malicious agent can only rely on his observation of S to take the decisions that will provide him information on other users secret. Possible actions to achieve this goals are captured by the notion of strategy.
Definition 4: A strategy for a user u_j is a map µ_j from Runs(S^Γ) to Σ_j ∪ Σ_j^Γ ∪ {ε}. We assume that strategies are observation based, that is, if l_j(ρ) = l_j(ρ'), then µ_j(ρ) = µ_j(ρ').
A run ρ = C_0 -e_1→ C_1 ... C_k conforms to strategy µ_j iff, ∀i, µ_j(l(C_0 → ... C_i)) ≠ ε implies e_{i+1} = µ_j(l(C_0 → ... C_i)) or e_{i+1} ∉ Σ_j ∪ Σ_j^Γ.
Intuitively, a strategy indicates to player u j the next move to choose (either an action or an observation or nothing. Even if a particular action is advised, another player can play before u j does. We will denote by Runs(S, µ j ) the runs of S that conform to µ j . Let µ j be a strategy of u j and ρ ∈ Runs(S Γ ) be a run ending in a configuration C = (q, q 1 , . . . q |U | ), we now define the set of all possible configurations in which S can be after observation l j (ρ) under strategy µ j . It is inductively defined as follows:
• Δ_µj(X, S^Γ, ε) = X for every set of configurations X;
• Δ_µj(X, S^Γ, w.e) =
   – Δ_{Σ_o^j}(Δ_µj(X, S^Γ, w), S^Γ, e) if e ∈ Σ_o^j,
   – Δ_µj(X, S^Γ, w) if e = a_γ ∈ Σ_j^Γ,
   – Δ_µj(X, S^Γ, w')_{|γ=e} if e ∈ {tt, ff} and w = w'.a_γ for some γ ∈ Γ.
Now, Δ_µj({C_0}, S^Γ, w) is the estimation of the possible set of reachable configurations that u_j can build after observing w. We can also define the set of plausible runs leading to observation w ∈ (Σ_o^j)* by u_j. A run is plausible after w if its observation by u_j is w and, at every step of the run ending in some configuration C_k, a test performed by u_j refines u_j's estimation to a set of configurations that contains C_k. More formally, the set of plausible runs after w under strategy µ_j is Pl_j(w) = {ρ ∈ Runs(S, µ_j) | l_j(ρ) = w ∧ ρ is a run from C_0 to a configuration C ∈ Δ_µj({C_0}, S^Γ, w)}.
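The operator Δ_µj above combines three cases: an observed action, a query (which leaves the estimate unchanged until its answer arrives), and an answer, which refines the estimate. A minimal Python sketch of this update is given below; the encoding of configurations (tuples whose first component is the A-state) and of the map O is our own choice, not imposed by the paper.

```python
def state(conf):
    """First component of a configuration (the state of A)."""
    return conf[0]

def refine(est, O, gamma, value):
    """X_{|gamma=value}: keep configurations whose A-state satisfies gamma iff value."""
    return {c for c in est if (gamma in O[state(c)]) == value}

def update(est, event, pending, step_obs, O):
    """One step of the attacker's estimate (the three cases of Delta_mu_j).

    est: current set of configurations; event: what u_j just observed;
    pending: the proposition gamma of a query awaiting its answer (or None);
    step_obs(est, a): the Delta_{Sigma_o^j} update for an observed action a.
    """
    if event in (True, False):                              # an answer tt/ff: refine
        return refine(est, O, pending, event), None
    if isinstance(event, tuple) and event[0] == "query":    # a query a_gamma: no change
        return est, event[1]
    return step_obs(est, event), None                       # an ordinary observed action

# toy run: two candidate configurations, proposition 'g' holds only in state 1
O = {0: set(), 1: {'g'}}
est, pending = update({(0,), (1,)}, ("query", 'g'), None, None, O)
est, pending = update(est, True, pending, None, O)
print(est)    # {(1,)}: the answer tt rules out configurations where g is false
```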
We now redefine the notion of opacity in an active context. A strategy µ j of u j to learn S i is not efficient if despite the use of µ j , there is still a way to hide S i for an arbitrary long time.
In what follows, we assume that there is only one attacker of the system.
Definition 5 (Opacity with active observation strategy): A secret S_i is opaque for any observation strategy to user u_j in a system S iff there exists no µ_j and bound K ∈ ℕ such that ∀ρ ∈ Runs(S, µ_j), ρ has a prefix ρ_1 of size ≤ K with l(Pl(ρ_1)) ⊆ L(S_i). A system S is opaque for any observation strategy iff ∀i ≠ j, secret S_i is opaque for any observation strategy of u_j.
Let us comment differences between passive (def. 2) and active opacity (def. 5). A system that is not U-opaque may leak information while a system that not opaque with active observation strategy cannot avoid leaking information if u j implements an adequate strategy. U-opaque systems are not necessarily opaque with strategies, as active tests give additional information that can disambiguate state estimation. However, if a system is U-opaque, then strategies that do not use disambiguation capacities do not leak secrets. Note also that a non-U-opaque system may leak information in more runs under an adequate strategy. Conversely, a non-opaque system can be opaque in an active setting, as the system can delay leakage of information for an arbitrary long time. Based on the definition of active opacity, we can state the following result:
Theorem 6: Given a system S = (A, U) with n agents, a set of secrets S_1, ..., S_n, observation alphabets Σ_o^1, ..., Σ_o^n and observation capacities Σ_1^Γ, ..., Σ_n^Γ, deciding whether S is opaque with active observation strategies is EXPTIME-complete.
Proof:[sketch] An active attacker u j can claim that the system is executing a run ρ that is secret for u i iff it can claim with certainty that ρ is recognized by S i . This can be achieved by maintaining an estimation of the system's current configuration, together with an estimation of S i 's possible states. We build an arena with nodes of the form n = (b, C, s, ES) contains a player's name b (0 or 1): intuitively, 0 nodes are nodes where all agents but u j can play, and 1 nodes are nodes where only agent u j plays. Nodes also contain the current configuration C of S, the current state s of S i , an estimation ES of possible configurations of the system with secret's current state by
u j , ES j = {(C 1 , s 1 ), ...(C k , s k )}.
The attacker starts with an initial estimation ES_0 = {(C_0, q_{0,i}^S)}. Then, at each occurrence of an observable move, the state estimation is updated as follows: given a letter a ∈ Σ_o^j, for every pair (C_k, s_k), we compute the set of pairs (C'_k, s'_k) such that there exists a run from C_k to C'_k that is labeled by a word w accepted from s_k and leading to s'_k in S_i, and such that l_j(w) = a. The new estimation is the union of all pairs computed this way.
Moves in this arena represent actions of player u_j (from nodes where b = 1) and actions from the rest of the system (see appendix for details). Obviously, this arena is of exponential size w.r.t. the size of configurations of S.
A node n = (b, C, s, ES) is not secret if s ∉ F_i^S, and secret otherwise. A node is ambiguous if there exist (C_p, s_p) and (C_m, s_m) in ES such that s_p ∈ F_i^S is secret and s_m ∉ F_i^S. If the restriction of ES to its second components is contained in F_i^S, n leaks secret S_i.
The set of winning nodes in the arena is the set of nodes that leak S i . Player u j can take decisions only from its state estimation, and wins the game if it can reach a node in the winning set. This game is hence a partial information reachability game. Usually, solving such games requires computing an exponentially larger arena containing players beliefs, and then apply polynomial procedures for a perfect information reachability game. Here, as nodes already contain beliefs, there is no exponential blowup, and checking active opacity is hence in EXPTIME.
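Once such a (belief-)arena is built, deciding the game amounts to a classical attractor computation for the attacker: iterate backwards from the leaking nodes, adding a u_j-node as soon as one successor is winning and a system node as soon as all successors are winning. The generic sketch below is ours (graph encoding included) and is only meant to illustrate the final step of the decision procedure.

```python
def attractor(nodes, edges, owner, target):
    """Nodes from which player 1 (the attacker) can force reaching `target`.

    edges: dict node -> set of successor nodes; owner[n] in {0, 1}.
    Player 1 wins from n if n is owned by 1 and some successor is winning,
    or owned by 0 and all successors are winning.
    """
    win = set(target)
    changed = True
    while changed:
        changed = False
        for n in nodes:
            if n in win or not edges.get(n):
                continue
            succ = edges[n]
            if (owner[n] == 1 and succ & win) or (owner[n] == 0 and succ <= win):
                win.add(n); changed = True
    return win

# toy 4-node arena: from n0 the attacker (player 1) moves to n1, from which
# the system (player 0) can only move to the leaking node.
edges = {'n0': {'n1', 'n2'}, 'n1': {'leak'}, 'n2': {'n2'}, 'leak': set()}
owner = {'n0': 1, 'n1': 0, 'n2': 0, 'leak': 0}
print('n0' in attractor(set(edges), edges, owner, {'leak'}))   # True
```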
For the hardness part, we use a reduction from the problem of language emptiness for alternating automata to an active opacity problem. (see appendix for details)
Moving from opacity to active opacity changes the complexity class from PSPACE-complete to EXPTIME-complete. This is due to the game-like nature of active opacity. However, using observation capacities does not influence complexity: even if an agent u_j has no capacity, the arena built to verify opacity of S_i w.r.t. u_j is of exponential size, and the reduction from alternating automata used to prove hardness does not assume that observation capacities are used.
IV. OPACITY WITH THRESHOLD DISTANCES TO PROFILES
So far, we have considered passive opacity, i.e. whether a secret can be leaked during normal use of a system, and active opacity, i.e. whether an attacker can force secret leakage with an appropriate strategy and with the use of capacities. In this setting, the behavior of agents is not constrained by any security mechanism. This means that attackers can perform illegal actions with respect to their profile without being discovered, as long as they are feasible in the system.
We extend this setting to systems where agents behaviors are monitored by anomaly detection mechanisms, that can raise alarms when an user's behavior seems abnormal. Very often, abnormal behaviors are defined as difference between observed actions and a model of normality, that can be a discrete event model, a stochastic model,.... These models or profiles can be imposed a priori or learnt from former executions. This allows for the definition of profiled opacity, i.e. whether users that behave according to predetermined profile can learn a secret, and active profiled opacity, i.e. a setting where attackers can perform additional actions to refine their knowledge of the system's sate and force secret leakage in a finite amount of time without leaving their normal profile.
One can assume that the behavior of an honest user u_j in a distributed system is predictable, and specified by his profile P_j. The definitions of opacity (def. 2) and active opacity (def. 5) do not consider these profiles, i.e. agents are allowed to perform legally any action allowed by the system to obtain information. In our opinion, there is a need for a distinction between what is feasible in a system and what is considered as normal. For instance, changing the access rights of one of his files should always be legal for an agent, but changing access rights too many times within a few seconds should be considered as an anomaly. In what follows, we will assume that honest users behave according to their predetermined regular profile, and that deviating from this profile could be an active attempt to break the system's security. Yet, even if a user is honest, he might still have possibilities to obtain information about other users' secrets. This situation is captured by the following definition of opacity w.r.t. a profile. Definition 7:
A system S = (A, U) is opaque w.r.t. profiles P_1, ..., P_n if ∀i ≠ j, ∀w ∈ L(S_i) ∩ L(S), w ∈ L(P_j) ⇒ π^{-1}_{Σ_o^j}(π_{Σ_o^j}(w)) ∩ L(S) ⊈ L(S_i).
Intuitively, a system is opaque w.r.t. the profiles of its users if it does not leak information when users stay within their profiles. If this is not the case, i.e. when w ∉ L(P_j), then one can assume that an anomaly detection mechanism that compares users' actions with their profiles can raise an alarm. Definition 7 can be rewritten as ∀i ≠ j, ∀w ∈ L(S_i) ∩ L(P_j) ∩ L(S), π^{-1}_{Σ_o^j}(π_{Σ_o^j}(w)) ∩ L(S) ⊈ L(S_i). Hence, PSPACE-completeness of opacity in Theorem 3 extends to opacity with profiles: it suffices to find witness runs in L(S) ∩ L(S_i) ∩ L(P_j).
Corollary 8: Deciding whether a system S is opaque w.r.t. a set of profiles P 1 , . . . P n is PSPACE complete.
If a system is U-opaque, then it is opaque w.r.t its agents profiles. Using profiles does not change the nature nor complexity of opacity question. Indeed, opacity w.r.t. a profile mainly consists in considering regular behaviors in L(P j ) instead of L(A j ). In the rest of the paper, we will however use profiles to measure how much users deviate from their expected behavior and quantify opacity accordingly.
One can similarly define a notion of active opacity w.r.t. profiles, by imposing that choices performed by an attacker are actions that does not force him to leave his profile. This can again be encoded as a game. This slight adaptation of definition 5 does not change the complexity class of the opacity question (as it suffices to remember in each node of the arena a state of the profile of the attacker). Hence active opacity with profiles is still a partial information reachability game, and is also EXPTIME-complete. Passive opacity (profiled or not) holds iff certain inclusion properties are satisfied by the modeled system, and active opacity holds if an active attacker has no strategy to win a partial information reachability game. Now, providing an answer to these opacity questions returns a simple boolean information on information leakage. It is interesting to quantify the notions of profiled and active opacity for several reasons. First of all, profiles can be seen as approximations of standard behaviors: deviation w.r.t. a standard profile can be due to errors in the approximation, that should not penalize honest users. Second, leaving a profile should not always be considered as an alarming situation: if profiles are learned behaviors of users, one can expect that from time to time, with very low frequency, the observed behavior of a user differs from what was expected. An alarm should not be raised as soon as an unexpected event occurs. Hence, considering that users shall behave exactly as depicted in their profile is a too strict requirement. A sensible usage of profiles is rather to impose that users stay close to their prescribed profile. The first step to extend profiled and active opacity to a quantitative setting is hence to define what "close" means.
Definition 9: Let u, v be two words of Σ * . An edit operation applied to word u consists in inserting a letter a ∈ Σ in u at some position i, deleting a letter a from u at position i, or substituting a letter a for another letter b in u at position i.
Let OP s(Σ) denote the set of edit operations on Σ, and ω(.) be a cost function assigning a weight to each operation in OP s(Σ). The edit distance d(u, v) between u and v is the minimal sum of costs of operations needed to transform u in v. Several edit distances exist, the most known ones are
• the Hamming distance ham((u, v)), that assumes that OP s(Σ) contains only substitutions, and counts the number of substitutions needed to obtain u from v (u, v are supposed of equal lengths).
• the Levenshtein distance lev((u, v)) is defined as the distance obtained when ω(.) assigns a unit to every operation (insertion, substitution, deletion). One can notice that lev((u, v)) is equal to lev((v, u)), and that max(|u|, |v|) ≥ lev((u, v)) ≥ ||u| -|v||. For a particular distance d(.) among words, the distance between a word u ∈ Σ * and a language R ⊆ Σ * is denoted d(u, R) and is defined as
d(u, R) = min{d(u, v) | v ∈ R}.
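For reference, the Levenshtein distance between two words is computed by the classical dynamic program sketched below; this is a standard textbook implementation, not specific to this paper, and the distance d(u, R) to a regular language is obtained not by enumerating R but through the automaton construction of Lemma 12 below.

```python
def levenshtein(u, v):
    """Unit-cost edit distance between words u and v."""
    prev = list(range(len(v) + 1))
    for i, a in enumerate(u, 1):
        cur = [i]
        for j, b in enumerate(v, 1):
            cur.append(min(prev[j] + 1,               # deletion of a
                           cur[j - 1] + 1,            # insertion of b
                           prev[j - 1] + (a != b)))   # substitution
        prev = cur
    return prev[-1]

assert levenshtein("oubliaient", "oublient") == 2
assert levenshtein("abc", "abc") == 0
```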
We can now quantify opacity. An expected secure setting is that no secret is leaked when users have behaviors that are within or close enough from their expected profile. In other words, when the observed behavior of agents u 1 , . . . u k resemble the behavior of their profiles P 1 , . . . , P k , no leakage should occur. Resemblance of u i 's behavior in a run ρ labeled by w can be defined as the property d(w, L(P i ))) ≤ K for some chosen notion of distance d(.) and some threshold K fixed by the system designers. In what follows, we will use the Hamming and Levenshtein distances as a proximity measures w.r.t. profiles. However, we believe that this notion of opacity can be extended to many other distances. We are now ready to propose a quantified notion of opacity.
Definition 10 (threshold profiled opacity): A system S is opaque w.r.t. profiles P_1, ..., P_n with tolerance K for a distance d iff ∀i ≠ j, ∀w ∈ L(S_i) ∩ L(S), d(w, L(P_j)) ≤ K ⇒ π^{-1}_{Σ_o^j}(π_{Σ_o^j}(w)) ∩ L(S) ⊈ L(S_i).
Threshold profiled opacity is again a passive notion of opacity. In some sense, it provides a measure of how well anomaly detection mechanisms comparing users' behaviors with their profiles are able to detect passive leakage. Consider the following situation: the system S is opaque w.r.t. profiles P_1, ..., P_n with threshold K but not with threshold K+1. Then it means there exists a run of the system leaking a secret with K+1 anomalies of some user u_j w.r.t. profile P_j, but no leaking run with at most K anomalies. If anomaly detection mechanisms are set to forbid execution of runs with more than K anomalies, then the system remains opaque.
We can also extend the active opacity with thresholds. Let us denote by Strat K j the set of strategies that forbid actions leaving a profile P j if the behavior of the concerned user u j is already at distance K from P j (the distance can refer to any distance, e.g., Hamming or Levenshtein).
Definition 11 (active profiled opacity): A system S is opaque w.r.t. profiles P_1, ..., P_n with tolerance K iff ∀i ≠ j, there exists no µ_j ∈ Strat_j^K such that it is unavoidable for u_j to reach a correct state estimation X ⊆ F_i^S in all runs of Runs(S, µ_j). Informally, definition 11 says that a system is opaque if no attacker u_j of the system has a strategy that leaks a secret S_i and costs less than K units to reach this leakage. Again, we can propose a game version for this problem, where attacker u_j is not only passive, but also has to play his best actions in order to learn u_i's secret.
A player u_j can attack u_i's secret iff it has a strategy µ_j to force a word w ∈ L(S_i) that conforms to µ_j, such that d(w, L(P_j)) ≤ K and π^{-1}_{Σ_o^j}(π_{Σ_o^j}(w)) ∩ L(S) ⊆ L(S_i). This can be seen as a partial information game between u_j and the rest of the system, where the exact state of each agent is partially known to others. The system wins if it can stay forever in states where u_j's estimate does not allow it to know that the secret automaton S_i is in one of its accepting states. The arena is built in such a way that u_j stops playing differently from its profile as soon as it reaches penalty K. This is again a partial information reachability game, which is decidable on finite arenas [START_REF] Chatterjee | The complexity of partial-observation parity games[END_REF]. Fortunately, we can show (in lemma 12 below) that the information to add to nodes with respect to the games designed for active opacity (in theorem 6) is finite.
Lemma 12: For a given automaton G, one can compute an automaton G K that recognizes words at distance at most K of L(G), where the distance is either the Hamming or Levenshtein distance.
Proof: Let us first consider the Hamming distance. For an automaton
G R = (Q R , -→ R , q 0 R , F R ), we can design an automaton G K ham = (Q K , -→ K , q K 0 , F K )
that recognizes words at a distance at most K from the reference language L(G). We have
Q_K = Q_R × {0..K}, F_K = F_R × {0..K}, and q_0^K = (q_0, 0). Last, we give the transition function: we have ((q, i), a, (q', i)) ∈ →_K iff (q, a, q') ∈ →_R, and ((q, i), a, (q', i+1)) ∈ →_K if (q, a, q') ∉ →_R, i+1 ≤ K, and there exists b ≠ a such that (q, b, q') ∈ →_R. This way, G_ham^K recognizes sequences of letters that end on a state (q_f, i) such that q_f is an accepting state of G_R and i ≤ K. One can easily show that for any accepting path in G_ham^K ending on state (q_f, i) recognizing word w, there exists a path in G_R of identical length recognizing a word w' that is at Hamming distance at most K of w. Similarly, let us consider any accepting path
ρ = q_0^R -a_1→_R q_1 ... -a_n→_R q_f of G_R. Then, every path of the form ρ_k = (q_0^R, 0) ... -a_{i_1}→_K (q_{i_1}, 1) ... (q_{i_k-1}, k-1) -a_{i_k}→_K (q_{i_k}, k) ... -a_n→_K (q_f, i), such that i ≤ K and, for every (q_{i_j-1}, j-1) -a_{i_j}→_K (q_{i_j}, j), a_{i_j} is not allowed in state q_{i_j} of G_R, is a path that recognizes a word at distance i of a word in R and is also a word of G_ham^K. One can show by induction on the length of paths that the set of all paths recognizing words at distance at most k can be obtained by random insertion of at most k such letter changes in each path of G_R. The size of G_ham^K is exactly |G_R| × K.
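The construction of G_ham^K is easy to implement over an explicit NFA. The sketch below is our own illustration (an NFA encoded as a set of triples); it builds the product of the state space with the error counter {0..K} exactly as described above, together with a small acceptance test.

```python
def hamming_k_automaton(states, trans, init, finals, alphabet, K):
    """Build G^K_ham: recognizes words at Hamming distance <= K from L(G).

    trans: set of (q, a, q') triples of G; the counter in the second state
    component counts substitutions.
    """
    new_trans = set()
    for (q, a, q2) in trans:
        for i in range(K + 1):
            new_trans.add(((q, i), a, (q2, i)))              # letter kept unchanged
            for b in alphabet - {a}:                          # substitution of a by b
                if i + 1 <= K and (q, b, q2) not in trans:
                    new_trans.add(((q, i), b, (q2, i + 1)))
    new_states = {(q, i) for q in states for i in range(K + 1)}
    new_finals = {(q, i) for q in finals for i in range(K + 1)}
    return new_states, new_trans, (init, 0), new_finals

def accepts(trans, init, finals, word):
    cur = {init}
    for a in word:
        cur = {q2 for (q1, b, q2) in trans if q1 in cur and b == a}
    return bool(cur & finals)

# G recognizes the word 'ab' over {a, b, c}
G = {(0, 'a', 1), (1, 'b', 2)}
S, T, q0, F = hamming_k_automaton({0, 1, 2}, G, 0, {2}, {'a', 'b', 'c'}, K=1)
print(accepts(T, q0, F, "cb"))   # True: 'cb' is at Hamming distance 1 of 'ab'
print(accepts(T, q0, F, "cc"))   # False with K = 1 (distance 2)
```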
Let us now consider the Levenshtein distance. Similarly to the Hamming distance, we can compute an automaton G_Lev^K that recognizes words at distance at most K from L(G). Namely, G_Lev^K = (Q_lev, →_lev, q_{0,lev}, F_lev) where Q_lev = Q × {0..K}, q_{0,lev} = (q_0, 0), F_lev = F × {0..K}. Last, the transition relation is defined as ((q, i), a, (q', i)) ∈ →_lev if (q, a, q') ∈ →; ((q, i), a, (q, i+1)) ∈ →_lev if there is no q' with (q, a, q') ∈ → (this transition simulates the insertion of letter a in a word); ((q, i), a, (q', i+1)) ∈ →_lev if ∃(q, b, q') ∈ → with b ≠ a (this transition simulates the substitution of a character); ((q, i), ε, (q', i+1)) ∈ →_lev if ∃(q, a, q') ∈ → (this last move simulates the deletion of a character from a word in L(G)).
One can notice that this automaton contains transition, but after and -closure, one obtains an automaton without epsilon that recognizes all words at distance at most K from L(G). The proof of correctness of the construction follows the same lines as for the Hamming distance, with the particularity that one can randomly insert transitions in paths, by playing letters that are not accepted from a state, leaving the system in the same state, and simply increasing the number of differences. Notice that if a word w is recognized by G K Lev with a path ending in a state (q, i) ∈ F Lev , this does not mean that the Levenshtein distance from L(G) is i, as w can be recognized by another path ending in a state (q , j) ∈ F Lev with j < i.
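For intuition on the distance itself, the word-to-word Levenshtein distance is the classic dynamic program below; this is a generic sketch, not a construction taken from the paper.

```python
def levenshtein(u, v):
    """Minimal number of insertions, deletions and substitutions turning u into v."""
    prev = list(range(len(v) + 1))
    for i, a in enumerate(u, 1):
        cur = [i]
        for j, b in enumerate(v, 1):
            cur.append(min(prev[j] + 1,              # delete a
                           cur[j - 1] + 1,           # insert b
                           prev[j - 1] + (a != b)))  # substitute a by b (free if equal)
        prev = cur
    return prev[-1]
```

The automaton G K Lev accepts exactly the words u for which the minimum of this value over v ∈ L(G) is at most K.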
Fig. 1: An automaton G and the automaton G 3 Ham that recognizes words at Hamming distance ≤ 3 of L(G).
One can notice that the automata built in the proof of lemma 12 are of size in O(K.|G|), even after ε-closure. Figure 1 represents an automaton G that recognizes the prefix closure of a.a * .b.(a + c) * , and the automaton G 3 Ham .
Theorem 13: Deciding threshold opacity for the Hamming and Levenshtein distance is PSPACE-complete.
Proof: First of all, one can remark that, for a distance d(.), a system S is not opaque if there exists a pair of users u i , u j and a word w in L(S) ∩ L(S i ) such that d(w, L(P j )) ≤ K, and π -1 Σ j o (π Σ j o (w)) ∩ L(S) ⊆ L(S i ).
As already explained in the proof of theorem 3, w belongs to L(S i ) if a state q w reached by S i after reading w belongs to F S i . Still referring to the proof of Theorem 3, one can maintain online, when reading letters of w, the set reach j (w) of possible configurations and states of S i that are reached by a run whose observation is the same as π Σ j o (w). One can also notice that lev(w, L(P j )) ≤ K iff w is recognized by P K j,Lev , the automaton that accepts words at Levenshtein distance at most K from a word in P j . Again, checking online whether w is recognized by P K j,Lev consists in maintaining a set of states that can be reached by P K j,Lev when reading w. We denote by reach K j,Lev (w) this set of states. When no letter is read yet, reach K j,Lev (ε) = {q K 0 }, and if lev(w, L(P j )) > K, we have reach K j,Lev (w) = ∅, meaning that the sequence of actions played by user u j has left the profile. We can maintain similarly a set of states reach K j,Ham (w) for the Hamming distance. In what follows, we will simply use reach K j (w) to denote a state estimation using the Levenshtein or Hamming distance.
Hence, non-opacity can be rephrased as the existence of a run, labeled by a word w, such that reach j (w) ⊆ F S i and reach K j (w) ≠ ∅. The contents of reach j (w) and reach K j (w) after reading a word w can be recalled with a vector of h = |S| + |P K j | bits. Following the same arguments as in Theorem 3, it is also useless to consider runs of size greater than 2 h . One can hence non-deterministically explore the whole set of states reached by reach j (w) and reach K j (w) during any run of S by remembering h bits and a counter whose value is smaller than or equal to 2 h , and which can hence be encoded with at most h bits. So, finding a witness for non-opacity is in NPSPACE, and by Savitch's theorem and closure of PSPACE under complementation, opacity with a threshold K is in PSPACE.
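The non-deterministic witness search can be pictured with the following sketch, which assumes that the observed word is given directly and that both estimations are driven by dict-based transition tables; this simplification ignores unobservable moves and is only meant to illustrate the h bits of memory used above.

```python
def is_witness(word, delta_secret, secret_init, secret_finals,
               delta_profileK, profileK_init):
    """Check reach_j(w) ⊆ F_{S_i} and reach^K_j(w) ≠ ∅ for a candidate word w."""
    reach = {secret_init}          # estimation of S_i's states
    reach_k = {profileK_init}      # states of the distance-K profile automaton
    for a in word:
        reach = {q2 for q in reach for q2 in delta_secret.get((q, a), ())}
        reach_k = {q2 for q in reach_k for q2 in delta_profileK.get((q, a), ())}
    return bool(reach) and reach <= secret_finals and bool(reach_k)
```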
For the hardness part, it suffices to remark that profiled opacity is exactly threshold profiled opacity with K = 0.
Theorem 14: Deciding active profiled opacity for the Hamming and Levenshtein distance is EXPTIME-complete. Proof: [sketch] Let us first consider the Hamming distance. One can build an arena for a pair of agents u i , u j as in the proof of theorem 6. This arena is made of nodes of the form (b, C, s, spjk, ES, d) that contain: a bit b indicating if it is u j 's turn to play and choose the next move, C the current configuration of S, s the current state of S i , the estimation ES of possible pairs (C, s) of current configuration and current state of the secret by player u j , and spjk a set of states of the automaton P K j,ham that recognizes words that are at Hamming distance at most K from P j . In addition to this information, a node contains the distance d of the currently played sequence w.r.t. profile P j . This distance can be easily computed: if all states of P K j,ham memorized in spjk are pairs of state and distance, i.e., spjk = {(q 1 , i 1 ), (q 2 , i 2 ), . . . , (q k , i k )}, then d = min{i 1 , . . . , i k }. User u j (the attacker) has partial knowledge of the current state of the system (i.e. a configuration of S and of the state of S i ), and perfect knowledge of d. User u j wins if it can reach a node in which his estimation of the current state of secret S i is contained in F S i (a non-ambiguous and secret node), without exceeding threshold K. The rest of the system wins if it can prevent player u j from reaching a non-ambiguous and secret node of the arena. We distinguish a particular node ⊥, reached as soon as the distance w.r.t. profile P j is greater than K. We consider this node as ambiguous, and every action from it gets back to ⊥. Hence, after reaching ⊥, player u j has no chance to learn S i anymore. The moves from a node to another are the same as in the proof of theorem 6, with additional moves from any node of the form n = (1, q, s, spjk, ES, d) to ⊥ using action a if the cost of using a from n exceeds K.
We add an equivalence relation ∼, such that n = (b, q, s, spjk, ES, d) ∼ n' = (b', q', s', spjk', ES', d') iff b = b', spjk = spjk', d = d', and ES = ES'. Obviously, u j has a strategy to violate u i 's secret without exceeding distance K w.r.t. its profile P j iff there is a strategy for player u j , with partial information that does not differentiate states in the equivalence classes of ∼, to reach W in = {(b, q, s, spjk, ES, d) | ES ⊆ F S i }. This is a partial information reachability game over an arena of size in O(2 . |Conf (S)| . |S i | . 2 |Conf (S)|.|S i |.K.|P j | ), that is exponential in the size of S, of the secret S i and of the profile P j . As in the Boolean setting, the nodes of the arena already contain a representation of the beliefs that are usually computed to solve such games, and hence transforming this partial information reachability game into a perfect information game does not yield an exponential blowup. Hence, solving this reachability game is in EXPTIME.
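Once beliefs are part of the nodes, the winning set can be computed with the usual attractor fixed point for reachability games; the sketch below is a generic perfect-information solver with our own naming, not the paper's notation.

```python
def attractor(nodes, edges, win, attacker_owns):
    """Nodes from which the attacker can force a visit to `win`.
    attacker_owns[n] is True when the attacker picks the successor of n."""
    succ = {n: set() for n in nodes}
    for (n, m) in edges:
        succ[n].add(m)
    attr = set(win)
    changed = True
    while changed:
        changed = False
        for n in nodes:
            if n in attr:
                continue
            if attacker_owns[n]:
                good = bool(succ[n] & attr)               # one good move suffices
            else:
                good = bool(succ[n]) and succ[n] <= attr  # every move leads into attr
            if good:
                attr.add(n)
                changed = True
    return attr
```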
The hardness part is straightforward: the emptiness problem for the alternating automata used in the proof of theorem 6 can be recast in a profiled and quantified setting by setting each profile P i to an automaton that recognizes (Σ Γ i ) * (i.e., users have the right to do anything they want as far as they always remain at distance 0 from their profile).
V. DISCOUNTING ANOMALIES
Threshold opacity is a first step to improve the standard Boolean setting. However, this form of opacity supposes that anomaly detection mechanisms memorize all suspicious moves of users and never revise their opinion that a move was unusual. This approach can be too restrictive. In what follows, we propose several solutions to discount anomalies. We first start by counting the number of substitutions in a bounded suffix with respect to the profile of an attacker. A suspicion score is computed depending on the number of differences within the suffix. This suspicion score increases if the number of errors in the considered suffix is above a maximal threshold, and it is decreased as soon as this number of differences falls below a minimal threshold. As in former sections, this allows for the definition of passive and active notions of opacity, that are respectively PSPACE-complete and EXPTIME-complete. We then consider the mean number of discrepancies w.r.t. the profile as a discounted Hamming distance.
A. A Regular discounted suspicion measure
Let u ∈ Σ K .Σ * and let v ∈ Σ * . We denote by d K (u, v) the distance between the last K letters of word u and any suffix of v, i.e. d K (u, v) = min{d(u [|u|-K,|u|] , v') | v' is a suffix of v}. Given a regular language R, we define d K (u, R) = min{d K (u, v) | v ∈ R}.
Lemma 15: Let R be a regular language. For a fixed K ∈ N, and for every k ∈ [0..K], one can compute an automaton C k that recognizes words whose suffixes of length K are at Hamming distance k from a suffix of a word of R.
We now define a cost model, that penalizes users that get too far from their profile, and decreases this penalty when getting back closer to a normal behavior. For a profile P j and fixed values α, β ≤ K we define a suspicion function Ω j for words in Σ * inductively:
Ω j (w) = 0 if |w| ≤ K
Ω j (a.w.b) = Ω j (a.w) + 1 if d K (w.b, P j ) ≥ β
Ω j (a.w.b) = max(Ω j (a.w) - 1, 0) if d K (w.b, P j ) ≤ α
Ω j (a.w.b) = Ω j (a.w) otherwise
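A direct reading of this definition gives the sketch below; the oracle d_K is assumed to be provided (for instance by the automata C k of Lemma 15), and the score is left unchanged between the two thresholds, as the informal description above suggests.

```python
def suspicion(word, d_K, alpha, beta, K):
    """Return the successive values of Ω_j along `word`."""
    omega, history = 0, []
    for i in range(1, len(word) + 1):
        if i <= K:
            omega = 0                      # too short to compare a K-letter suffix
        else:
            d = d_K(word[:i])              # substitutions in the last K letters w.r.t. P_j
            if d >= beta:
                omega += 1
            elif d <= alpha:
                omega = max(omega - 1, 0)
            # alpha < d < beta: score unchanged
        history.append(omega)
    return history
```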
As an example, let us take as profile P j the automaton G of Figure 1; Figure 2 shows the evolution of the suspicion function when reading the word w = a.a.a.c.b.b.a.c.b.a.a.
Fig. 2: Evolution of suspicion w.r.t. the profile of Figure 1 when reading word w = a.a.a.c.b.b.a.c.b.a.a: distance d K (w [i,i+5] , P j ) at each letter of w (plain line), and evolution of the suspicion function (dashed line).
One can easily define a notion of passive opacity with respect to a suspicion threshold T . Again, verifying this property supposes finding a witness run of the system that leaks information without exceeding the suspicion threshold, which can be done in PSPACE (assuming that T is smaller than 2 |Conf | ). As for profiled opacity, we can define Strat T as the set of strategies of a user that never exceed suspicion level T . This immediately gives us the following definitions and results.
Definition 16: Let K ∈ N be a suffix size, α, β ≤ K and T ∈ N be a suspicion threshold. S is opaque with suspicion threshold T iff ∀i ≠ j, ∀w ∈ L(S i ) ∩ L(S), Ω j (w) < T implies π -1 Σ j o (π Σ j o (w)) ∩ L(S) ⊈ L(S i ).
Theorem 17: Opacity with suspicion threshold for the Hamming distance is PSPACE-complete.
Definition 18: Let K ∈ N be a suffix size, α, β ≤ K and T ∈ N. S is actively opaque with suspicion threshold T iff ∀i ≠ j, there exists no strategy µ j ∈ Strat T such that it is unavoidable for u j to reach a correct state estimation X ⊆ F S i in all runs of Runs(S, µ j ).
Theorem 19: Active opacity with suspicion threshold for the Hamming distance is EXPTIME-complete. Proof: We build an arena that contains nodes of the form n = (b, C, ES, EC 0 , . . . , EC K , sus). C is the actual current configuration of S Γ , and ES is the set of pairs (C, s) of configuration and secret state in which S Γ could be according to the actions observed by u j and according to the belief refinement actions performed by u j . Sets EC 0 , . . . , EC K remember sets of states of the cost automata C 0 , . . . , C K . Each EC i memorizes the states in which C i could be after reading the current word. If EC i contains a final state, then the K last letters of the sequence of actions executed so far contain exactly i differences. Note that only one of these sets can contain an accepting state. Suspicion sus is a suspicion score between 0 and T . When reading a new letter, denoting by p the number of discrepancies of the K last letters w.r.t. the profile, one can update the suspicion score using the definition of Ω j above, depending on whether p ∈ [0, α], p ∈ [α, β] or p ∈ [β, K].
The winning condition in this game is the set W in = {(b, C, ES, EC 0 , . . . , EC K , sus) | ES ⊆ Conf (S) × F S i }. We partition the set of nodes into V 0 = {(b, C, ES, EC 0 , . . . , EC K , sus) | b = 0} and V 1 = {(b, C, ES, EC 0 , . . . , EC K , sus) | b = 1}. We define moves from (b, C, ES, EC 0 , . . . , EC K , sus) to (1 - b, C, ES, EC 0 , . . . , EC K , sus), symbolizing the fact that it is user u j 's turn to perform an action. There is a move from n = (b, C, ES, EC 0 , . . . , EC K , sus) to n' = (b', C', ES, EC 0 , . . . , EC K , sus) if there is a transition (C, a, C') in S Γ performed by a user u i ≠ u j , and a is not observable by u j . There is a move from n = (b, C, ES, EC 0 , . . . , EC K , sus) to n' = (b', C', ES', EC 0 , . . . , EC K , sus) if there is a transition (C, a, C') in S Γ performed by a user u i ≠ u j and a is observable by u j . We have ES' = ∆ Σ j o (ES, S Γ , a). Suspicion and discrepancy observations (sets EC i ) remain unchanged, as this move does not represent an action played by u j .
There is a move from n = (b, C, ES, EC 0 , . . . , EC K , sus) to n' = (1 - b, C', ES', EC' 0 , . . . , EC' K , sus') if b = 1 and there is a transition (C, a, C') in S Γ performed by user u j from the current configuration. Set ES is updated as before, ES' = ∆ Σ j o (ES, S Γ , a), and sets EC i are updated according to the transition relation δ suf i of automaton C i , i.e. EC' i = δ suf i (EC i , a). Similarly, sus' is the new suspicion value obtained after reading a. Last, there is a move from n = (b, C, ES, EC 0 , . . . , EC K , sus) to n' = (b, C, ES', EC' 0 , . . . , EC' K , sus') if there is a sequence of moves (C, a, C(a γ )).(C(a γ ), a γ(q) , C) in S Γ , ES' = ES /a γ(q) , and the EC' i 's and sus' are computed as in the former case.
As for the proofs of theorems 6 and 14, opacity can be brought back to a reachability game of partial information, and no exponential blowup occurs to solve it. For the hardness, there is a reduction from active profiled opacity. Indeed, active profiled opacity can be expressed as a suspicion threshold opacity, by setting α = β = K = 0, to disallow attackers to leave their profile.
B. Discounted Opacity : an open problem
A frequent interpretation of discounting is that weights or penalties attached to a decision should decrease progressively over time, or according to the length of runs. This is captured by averaging contribution of individual moves.
Definition 20: The discounted Hamming distance between a word u and a language R is the value d(u, R) = ham(u, R)/|u|. This distance measures the average number of substitutions in a word u with respect to the closest word in R. The next quantitative definition considers a system as opaque if an active attacker cannot obtain a secret while maintaining a mean number of differences w.r.t. its expected behavior below a certain threshold. Let λ ∈ Q be a positive rational value. We denote by Strat λ (R) the set of strategies that do not allow an action a after a run ρ labeled by a sequence of actions w if d(w.a, R) > λ.
Definition 21 (Discounted active Opacity): A system S is opaque w.r.t. profiles P 1 , . . . , P n with discounted tolerance λ iff ∀i ≠ j, there exists no strategy µ j ∈ Strat λ (P j ) of agent u j such that it is unavoidable for u j to reach a correct state estimation X ⊆ F S i in all runs of Runs(S, µ j ).
A system is not opaque in a discounted active setting iff one can find a strategy for u j to reach a state estimation that reveals the secret S i while maintaining a discounted distance w.r.t. P j smaller than λ. At first sight, this setting resembles discounted games with partial information, already considered in [START_REF] Zwick | The complexity of mean payoff games[END_REF]. It was shown that finding optimal strategies for such mean payoff games is in NP ∩ co-NP. The general setting for mean payoff games is that average costs are values of nodes in an arena, i.e. the minimal average reward along infinite runs that one can achieve with a strategy starting from that node. As a consequence, values of nodes are mainly values on connected components of an arena, and costs of moves leading from a component to another have no impact. In our setting, the game is not a value minimization over infinite runs, but rather a co-reachability game, in which at any moment in a run, one shall not exceed a mean number of unexpected moves.
For a fixed pair of users u i , u j , we can design an arena with nodes of the usual form n = (b, C, ES, l, su) in which b indicates whether it is u j 's turn to play, C is the current configuration of the system, ES the estimation of the current configuration and of the current state of secret S i reached, l the number of moves played so far, and su the number of moves that differ from what was expected in P j . As before, the winning states for u j are the states where all couples in the state estimation refer to an accepting state of S i . In this arena, player u j loses if it can never reach a winning node, or if it plays an illegal move from a node n = (b, C, ES, l, su) such that (su + 1)/(l + 1) > λ. One can immediately notice that, defined this way, our arena is not finite anymore.
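The threshold used in this arena is a running constraint: after every move, the proportion of deviating moves must stay below λ. A tiny sketch of that check (our own helper, not the paper's):

```python
def respects_mean_threshold(deviations, lam):
    """deviations[i] is 1 when move i+1 differs from the profile, 0 otherwise."""
    su = 0
    for l, d in enumerate(deviations, start=1):
        su += d
        if su / l > lam:       # the mean number of discrepancies exceeds λ
            return False
    return True
```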
Consider the arena used in theorem 6, i.e. composed of nodes of the form n = (b, C, ES) that only build estimations of the attacker. Obviously, when ignoring mean number of discrepancies, one can decide whether the winning set of nodes is reachable from the initial node under some strategy in polynomial time (wrt the size of the arena). The decision algorithm builds an attractor for the winning set (see for instance [START_REF] Grädel | Automata Logics, and Infinite Games : A Guide to Current Research[END_REF] for details), but can also be used to find short paths under an adequate strategy to reach W in (without considering mean number of discrepancies). If one of these paths keeps the mean number of discrepancies lower or equal to λ at each step, then obviously, this is a witness for non-opacity. However, if no such path exists, there might still be a way to play longer runs that decrease the mean number of discrepancies before moving to a position that requires less steps to reach the winning set.
We can show an additional sufficient condition. Let ρ = n 0 .n 1 . . . n w be a path of the arena in theorem 6 (without length nor mean number of discrepancies recalled) from n 0 to a winning node n w . Let d i denote the number of discrepancies with respect to profile P j at step i. Let n i be a node of ρ such that d i /i ≤ λ and d i+1 /(i + 1) > λ. We say that u j can enforce a decreasing loop β = n j .n j+1 . . . n j at node n j if β is a cycle that u j can enforce with an appropriate strategy, if the mean number of discrepancies is smaller in ρ β = n 0 . . . n j .β than in n 0 . . . n j , and if the mean cost of any prefix of β is smaller than λ. A consequence is that the mean cost M β of cycle β is smaller than λ. We then have a sufficient condition:
Proposition 22: Let ρ be a winning path in an arena built to check active opacity for users u i , u j such that d i /i > λ for some i ≤ |ρ|. If there exists a node n b in ρ such that d k /k ≤ λ for every k ≤ b and u j can enforce a decreasing loop at n b , then u j has a strategy to learn S i without exceeding mean number of discrepancies λ. Similarly, if B is large enough, playing any prefix of n b+1 . . . n w to reach the winning set does not increase the mean number of discrepancies enough to exceed λ. A lower bound for B such that λ is never exceeded in n 0 . . . n b .β B .n b+1 . . . n w can be easily computed. Hence, if one can find a path in a simple arena without mean discrepancy counts, and a decreasing loop in this path, then u j has a strategy to learn S i without exceeding threshold λ.
VI. CONCLUSION
We have shown several ways to quantify opacity with passive and active attackers. In all cases, checking passive opacity can be brought back to a language inclusion question, and is hence PSPACE-complete. In active settings, opacity violation is brought back to the existence of strategies in reachability games over arenas whose nodes represent beliefs of agents, and is EXPTIME-complete.
Suspicion can be discounted or not. Non-discounted suspicions simply counts the number of anomalies w.r.t. a profile, and raises an alarm when a maximal number K of anomalies is exceeded. We have shown that when anomalies are substitutions, deletions and insertions of actions, words with less than K anomalies w.r.t. the considered profile (words at Hamming or Levenshtein distance ≤ K) are recognized by automata of linear size. This allows to define active and passive profiled opacity, with the same PSPACE/EXPTIME-complete complexities. A crux in the proofs is that words at distance lower than K of a profile are recognized by automata. A natural extension of this work is to see how regular characterization generalizes to other distances.
Discounting the number of anomalies is a key issue to avoid constantly raising false alarms. It is reasonable to consider that the contribution to suspicion raised by each anomaly should decrease over time. The first solution proposed in this paper computes a suspicion score depending on the number of discrepancies found during the last actions of an agent. When differences are only substitutions, one can use finite automata to maintain online the number of differences. This allows to enhance the arenas used in the active profiled setting without changing the complexity class of the problem (checking regular discounted suspicion remains EXPTIME-complete). Again, we would like to see if other distances (e.g. the Levenshtein distance) and suspicion scores can be regular, which would allow for the definition of new opacity measures.
Discounted suspicion weights discrepancies between the expected and actual behavior of an agent according to run length. This suspicion measure can be seen as a quantitative game, where the objective is to reach a state leaking information without exceeding an average distance of λ ∈ Q. In our setting, the mean payoff has to be compared to a threshold at every step. This constraint can be recast as a reachability property for timed automata with one stopwatch and linear diagonal constraints on clock values. We do not know yet if this question is decidable but we provide a sufficient condition for discounted opacity violation.
In the models we proposed, discounting is performed according to run length. However, it seems natural to consider discrepancies that have occurred during the last ∆ seconds, rather than over the whole run. This requires in particular considering timed systems and their timed runs. It is not sure that adding timing to our setting preserves decidability, as opacity definitions rely a lot on language inclusion, which is usually undecidable for timed automata [START_REF] Alur | A theory of timed automata[END_REF]. If time is only used to measure durations elapsed between actions of an attacker, then we might be able to recast the quantitative opacity questions in a decidable timed setting, using decidability results for timed automata with one clock [START_REF] Ouaknine | On the language inclusion problem for timed automata: Closing a decidability gap[END_REF] or event-clock timed automata.
APPENDIX
PROOF OF THEOREM 3
Proof: Let us first prove that U-opacity is in PSPACE. A system is not opaque if one can find a pair of users u i , u j , and a run w of S such that w ∈ L(S i ) and π -1 Σ j o (π Σ j o (w)) ∩ L(S) ⊆ L(S i ). One can non-deterministically choose a pair of users u i , u j in space logarithmic in n, and check that i ≠ j in logarithmic space. To decide whether a run of S belongs to S i , it is sufficient to know the set of states reached by S i after recognizing w. A word w belongs to L(S i ) if the state q w reached by S i after reading w belongs to F S i . Now, observe that a user u j does not have access to w, but can only observe π Σ j o (w), and may hence believe that the run actually played is any run with identical observation, i.e. any run of π -1 Σ j o (π Σ j o (w)) ∩ L(S).
Let ρ be a run of S; one can build online the set of states reach j (w) that are reached by a run whose observation is the same as π Σ j o (w). We have reach j (ε) = {q ∈ Q S i | ∃w, q S 0,i w -→ q ∧ π Σ j o (w) = ε} and reach j (w.a) = {q ∈ Q S i | ∃q' ∈ reach j (w), ∃w', q' w' -→ q ∧ π Σ j o (w') = a}. Obviously, a word w witnesses a secret leakage from S i to u j if reach j (w) ⊆ F S i . To play a run of S, it is hence sufficient to remember a configuration of S and a subset of states of S i . Let q ρ denote the pair (q, X) reached after playing run ρ.
Now we can show that witness runs with at most K 1 = |Conf |.2 |S i | letters observable by u j suffice. Let us assume that there exists a witness ρ of size ≥ K 1 . Then, ρ can be partitioned into ρ = ρ 1 .ρ 2 .ρ 3 such that q ρ1 = q ρ1.ρ2 . Hence, ρ 1 .ρ 3 is also a run that witnesses a leakage of secret S i to u j , but of smaller size.
Hence one can find a witness of secret leakage by a non-deterministic exploration of size at most |Conf |.2 |S i | . To find such a run, one only needs to remember a configuration of S (which can be done with log(|S|) bits), all states of reach j (ρ) for the current run ρ followed in S (which can be done with |S i | bits of information), and an integer of value at most K 1 , which can be encoded with log(K 1 ) bits. Finding a witness can hence be done in NPSPACE, and by Savitch's theorem it is in PSPACE. As PSPACE is closed under complement, deciding opacity of a system is in PSPACE.
Let us now consider the hardness part. We will reduce the non-universality of any regular language to an opacity problem. As universality is in PSPACE, non-universality is also in PSPACE. The language of an automaton B defined over an alphabet Σ is not universal iff L(B) ≠ Σ * , or equivalently if Σ * ⊈ L(B). For any automaton B, one can design a system S B with two users u 1 , u 2 such that S 1 = B, L(S 2 ) = a.Σ * for some letter a, A accepts all actions, i.e. is such that L(A) = Σ * , and Σ 2 o = Σ 1 o = ∅. Clearly, for every run of S, u 1 observes ε, and hence leakage cannot occur from u 2 to u 1 (one cannot know whether a letter, and in particular a, was played). So the considered system is opaque iff ∀w ∈ L(S 1 ) ∩ L(S), π -1 Σ 2 o (π Σ 2 o (w)) ⊈ L(S 1 ). However, as Σ 2 o = ∅, for every w, π -1 Σ 2 o (π Σ 2 o (w)) = Σ * . That is, the system is opaque iff Σ * ⊈ L(B).
PROOF OF THEOREM 6
Proof: An active attacker u j can claim that the system is executing a run ρ that is secret for u i iff it can claim with certainty that ρ is recognized by S i . This can be achieved by maintaining an estimation of the system's current configuration, together with an estimation of S i 's possible states. We build an arena with nodes N 0 ∪ N 1 . Each node of the form n = (b, C, s, ES) contains :
• a player's name b (0 or 1). Intuitively, 0 nodes are nodes where all agents but u j can play, and 1 nodes are nodes where only agent u j plays.
• the current configuration C of S
• the current state s of S i
• an estimation ES of the system's configuration and secret's current state by u j , ES j = {(C 1 , s 1 ), . . . , (C k , s k )}
We write C σ =⇒ C' iff there exists a sequence of transitions of S whose observation by u j is σ, and s σ =⇒ S i s' if there is such a sequence from s to s' in S i . Then we define moves among nodes as a relation δ G ⊆ (N 0 ∪ N 1 ) × (N 0 ∪ N 1 ).
• (n, n') ∈ δ G if n and n' differ only w.r.t. their player's name
• (n, n') ∈ δ G if n = (0, C, s, ES), n' = (1, C', s', ES'), and there exists σ ∈ (Σ \ Σ j ) ∩ Σ j o such that C σ =⇒ C', s σ =⇒ S i s', and ES' is the set of pairs (C m , s m ) such that there exists a pair (C p , s p ) in ES, and a sequence ρ of transitions from C p to C m , labeled by a word w such that Π j (w) = σ, and one can move in S i from s p to s m by reading w. Note that this set of sequences needs not be finite, but one can find in O(|Conf |) the set of possible pairs that are accessible while reading σ.
• (n, n') ∈ δ G if n = (1, C, s, ES), n' = (1, C', s', ES'), and there exists σ ∈ Σ j , a transition C σ -→ C' in S, a transition (s, σ, s') ∈ -→ S i , and ES' is the set of pairs of the form (C' m , s' m ) such that there exists (C m , s m ) ∈ ES with (C m , σ, C' m ) ∈ -→ and (s m , σ, s' m ) ∈ -→ S i .
• (n, n') ∈ δ G , n = (1, C, s, ES), n' = (1, C, s, ES'), if there exists γ ∈ Σ Γ j such that ES' is the refinement of ES by a γ (state(C)).
We assume that checking the status of a proposition does not affect the secrets of other users. We say that a node n = (b, C, s, ES) is not secret if s ∉ F S i , and say that n is secret otherwise. We say that a node is ambiguous if there exist (C p , s p ) and (C m , s m ) in ES such that s p is secret and s m is not. If the restriction of ES to its second components is contained in F S i , we say that n leaks secret S i .
We equip the arena with an equivalence relation ∼ ⊆ (N 0 × N 0 ) ∪ (N 1 × N 1 ), such that n = (b, C, s, ES) ∼ n' = (b', C', s', ES') iff b = b' = 1 and ES = ES'. Intuitively, n ∼ n' if and only if they are nodes of agent u j , and u j cannot distinguish n from n' using the knowledge it has on the executions leading to n and to n'.
Clearly, secret S i is not opaque to agent u j in S iff there exists a strategy to make a leaking node accessible. This can be encoded as a partial information reachability game G = (N 0 ∪ N 1 , δ G , ∼, W in), where W in is the set of all leaking nodes. In these games, the strategy must be the same for every node in the same class of ∼ (i.e. where u j has the same state estimation). Usually, partial information games are solved at the cost of an exponential blowup, but we can show that in our case, complexity is better. First, let us compute the maximal size of the arena. A node is of the form n = (b, C, s, ES), hence the size of the arena |G| is in O(2.|Conf |.|S i |.2 |Conf |.|S i | ) (and it can be built in time O(|Conf |.|G|)). Partial information reachability games are known to be EXPTIME-complete [START_REF] Reif | Universal games of incomplete information[END_REF]. Note here that only one player is blind, but this does not change the overall complexity, as recalled by [START_REF] Chatterjee | The complexity of partial-observation parity games[END_REF]. However, solving games of partial information consists in computing a "belief" arena G B that explicitly represents players' beliefs (a partial information on a state is transformed into a full knowledge of a belief), and then solving the complete information game on arena G B . This usually yields an exponential blowup. In our case, this blowup is not needed, and the belief that would be computed to solve a partial information game simply duplicates the state estimation that already appears in the partial information arena. Hence, deciding opacity with active observation strategies can be done with |U | 2 opacity tests (one for each pair of users) of exponential complexity, and is hence in EXPTIME.
Let us now prove the hardness of opacity with active attackers. We reduce the problem of emptiness of alternating automata to an opacity question. An alternating automaton is a tuple
A alt = (Q, Σ, δ, s 0 , F ), where Q contains two distinct subsets of states Q ∀ , Q ∃ . Q ∀ is a set of universal states, Q ∃ is a set of existential states, Σ is an alphabet, δ ⊆ (Q ∀ ∪ Q ∃ ) × Σ × (Q ∀ ∪ Q ∃ ) is a transition relation, s 0 is the initial state and F is a set of accepting states. A run of A alt over a word w ∈ Σ * is an acyclic graph G A alt ,w = (N, -→) where nodes in N are elements of Q × {1 . . . |w|}. Edges in the graph connect nodes from a level i to a level i + 1. The root of the graph is (s 0 , 1). Every node of the form (q, i) such that q ∈ Q ∃ has a single successor (q', i + 1) such that q' ∈ δ(q, w i ), where w i is the i-th letter of w. For every node of the form (q, i) such that q ∈ Q ∀ , and for every q' such that q' ∈ δ(q, w i ), ((q, i), (q', i + 1)) is an edge. A run is complete if all its nodes with index in 1..|w| - 1 have a successor. It is accepting if all paths of the graph end in a node in F × {|w|}. Notice that due to the non-deterministic choice of a successor for existential states, there can be several runs of A alt for a word w. The emptiness problem asks whether there exists a word w ∈ Σ * that has an accepting run. We will consider, without loss of generality, that alternating automata are complete, i.e. all letters are accepted from any state. If there is no transition of the form (q, a, q') from a state q, one can nevertheless create a transition to a non-accepting absorbing state while preserving the language recognized by the alternating automaton.
Let us now show that the emptiness problem for alternating automata can be recast in an active opacity question. We will design three automata A, A 1 , A 2 . The automata A 1 and A 2 are agents. Agent 1 performs actions from universal states and agent 2 chooses the next letter to recognize and performs actions from existential states. The automaton A serves as a communication medium between agents, indicates the next letter to recognize, and synchronizes agents 1 and 2 when switching the current state of the alternating automaton from an existential state to a universal state or conversely.
We define A = (Q s , -→ s , Σ s ) with Σ s = {(end, 2→A); (end, A→1)} ∪ Σ × {2→A, A→1} ∪ Σ × (Q ∃ ∪ {U }) × {1→A, A→2, 2→A, A→2}.
To help readers, the general shape of automaton A is given in Figure 3.
States of A are of the form U , (U, σ), W , dU , dq i , wq i for every state in Q, and Eq i for every existential state q i ∈ Q ∃ . The initial state of A is state U if s 0 is an universal state, or s 0 if s 0 is existential. State U has |Σ| outgoing transitions of the form (U, < σ, 2 A >, (U, σ), indicating that the next letter to recognize is σ. It also has a transition of the form (U, < end, 2 A >, end 1 ) indicating that A 2 has decided to test whether A 1 is in a secret state (i.e. simulates an accepting state of A alt ). There is a single transition (end 1 , < end, A 2 >, end 2 ) from state end 1 , and a single transition (end 2 , < Ackend, A 1 >, end 3 ) indicating to A 2 that A 1 has acknowledged end of word recognition.
There is a transition ((U, σ), < σ, A → 1 >, (W, σ)) for any state (U, σ), indicating to A 1 that the next letter to recognize from its current universal state is σ. In state W , A is waiting for an universal move from A 1 . Then from W , A can receive the information that A 1 has moved to an universal state, which is symbolized by a pair of transitions (W, < σ, U, 1 A >, dU )) and (dU, < again, A 2 >, U ).
There is a transition (W, < σ, q i , 1 → A >, dq i ) for every existential state q i ∈ Q ∃ , followed by a transition (dq i , < σ, q i , A 2 >, Eq i ), indicating to A 2 that the system has moved to recognition of a letter from an existential state q i .
There is a transition (Eq i , < σ, 2 A >, (Eq i , σ)) from every state Eq i with q i ∈ Q ∃ and every σ ∈ Σ to indicate that the next letter to recognize is σ. Then, there is a transition ((Eq i , σ), < σ, q j , 2 A >, (W q j , σ)) for every existential move (q i , σ, q j ) ∈ δ. From every state (W q j , σ), there is a transition of the form ((W q j , σ), < σ, q j , A → 1 >, (dq j , σ)) to inform A 1 of A 2 's move. Then, from (Dq j , σ) if q j ∈ Q ∃ , there is a transition of the form ((Dq j , σ), < again, A 1 >, Eq j ) and if q j ∈ Q ∀ , a transition of the form ((dq j , σ), < again, A 1 >, U ), indicating to A 1 that the simulation of the current transition recognizing a letter is complete, and from which state the rest of the simulation will resume.
Let us now detail the construction of A 2 . A description of all its transitions is given in Figure 4. This automaton has one universal state U , a state W , states of the form (U, σ), a pair of states Eq i and W q i and a state (Eq i , σ) for every σ ∈ Σ and every q i ∈ Q ∃ . Last, A 2 has two states End 1 and End 2 .
There is a transition (U, < σ, 2→A >, (U, σ)) from U for every σ ∈ Σ, symbolizing the choice of letter σ as the next letter to recognize when the system simulates a universal state. Note that A 2 needs not know which universal state is currently simulated. Then, there is also a transition ((U, σ), again, U ) returning to U , symbolizing the end of a transition of the alternating automaton that returns to a universal state (hence owned by A 2 ).
Fig. 3: Automaton A in the proof of theorem 6.
From every state (U, σ) there is a transition ((U, σ), again, U ) and a transition ((U, σ), < σ, q i , A→2 >, Eq i ) for every existential state q i that has a universal predecessor q with (q, σ, q i ) ∈ δ. From a state Eq i and for every σ ∈ Σ, there is a transition (Eq i , < σ, 2→A >, (Eq i , σ)) symbolizing the choice to recognize σ as the next letter. Then, from every state (Eq i , σ), for every transition of the form (q i , σ, q j ) ∈ δ where q j is existential, there is a transition ((Eq i , σ), < σ, q j , 2→A >, W q j ).
For every transition of the form (q i , σ, q j ) ∈ δ where q j is universal, there is a transition ((Eq i , σ), < σ, q j , 2 → A >, W ). Last, transitions ((W q j , σ), again, Eq j ) and (W, again, U ) complete simulation of recognition of the current letter. Last, A 2 has a transition (U, < end, 2 A >, End 1 ), a transition (Eq i , < end, 2 A >, End 1 ) for every existential state q i ∈ Q ∃ and a transition (end 1 , ackend, End 2 ), symbolizing the decision to end recognition of a word.
Let us detail the construction of A 1 . The general shape of this automaton is described in Figure 5. This automaton has two states of the form U q i , (U q i , σ) per universal state and for each σ ∈ Σ. Similarly A 1 has a state Eq i , (Eq i , σ) per existential state and for each σ ∈ Σ. From state U q i there is a transition (U q i , < σ, A → 1 >, (U q i , σ)) to acknowledge the decision to recognize σ.
Fig. 4: Automaton A 2 in the proof of theorem 6, simulating existential moves.
From state (U q i , σ) there exist two types of transitions. For every universal state q j such that (q i , σ, q j ) ∈ δ, there is a transition ((U q i , σ), < σ, U, 1→A >, U q j ), symbolizing a move to universal state q j . For every existential state q j such that (q i , σ, q j ) ∈ δ, there is a transition ((U q i , σ), < σ, q j , 1→A >, Eq j ).
Similarly, from a state Eq i , there exists a transition (Eq i , < σ, A 1 >, (Eq i , σ)) indicating to A 1 the letter chosen by A 2 . From state (Eq i , σ), there is a transition ((Eq i , σ), < σ, q j , A → 1 >, Eq j ) for every existential state q j such that (q i , σ, q j ) ∈ δ. There is also a transition ((Eq i , σ), < σ, U, 1 A >, U q j ) for every universal state q j such that (q i , σ, q j ) ∈ δ. Notice that the universal state reached is not detailed when A 1 sends the confirmation of a move to A.
The remaining transitions are transitions of the form (Eq i , < End, A 1 >, S) and (U q i , < End, A 1 >, Sec) for every accepting state q i ∈ F . We also create transitions of the form Eq i , < End, A 1 >, Sec and U q i , < End, A 1 >, Sec for states that are not accepting. Reaching Sec indicates the failure to recognize a word chosen by A 1 along a path in which universal moves were played by A 1 and existential moves by A 2 .
We define agent u 1 's secret S 1 as the automaton that recognizes all words that allow A 1 to reach state Sec.
Fig. 5: Automaton A 1 in the proof of theorem 6, simulating universal moves.
Now, we can prove that if a word w is accepted by A alt , then the strategy in which A 2 chooses letter w i at its i-th passage through a letter choice state (U or Eq i ), plays the existential transitions appearing in the accepting run of A alt , and then plays transition < end, 2→A > at the (i + 1)-th choice, is a strategy to force A 1 to reach the secret state. Conversely, one can associate to every run of A, A 1 , A 2 a word w that is read, and a path in some run that is used to recognize w. If A 2 has a strategy to force A 1 's secret leakage, then all paths following this strategy lead to a winning configuration. As a consequence, there is a choice of existential moves such that all states simulated along a run of the alternating automaton with these existential moves end in an accepting state. Hence, L(A alt ) is empty iff the system composed of A, A 1 , A 2 is opaque. Now, the system built to simulate A alt is of polynomial size in |A alt |, so there is a polynomial size reduction from the emptiness problem for alternating automata to the active opacity question, and active opacity is EXPTIME-complete.
PROOF OF LEMMA 15
Proof: One can first recall that for the Hamming and Levenshtein distances, we have d(u, v) = d(u -1 , v -1 ), where u -1 is the mirror of u. Similarly, we have d K (u, R) = d(u -1 [1,K] , R -1 ). Let G R = (Σ, Q, q 0 , δ, F ) be the automaton recognizing language R. We can build an automaton C k that recognizes words of length at least K, whose suffixes of length K are at Hamming distance at most k from suffixes of length K of words in R. We define C k = (Σ, Q suf k , q suf 0,k , δ suf k , F suf k ).
This automaton can be computed as follows: first build G -1 R , the automaton that recognizes mirrors of suffixes of R. This can be easily done by setting as initial states the final states of R, and then reversing the transition relation. Then, by adding a K-bounded counter to states of G -1 R , and setting as accepting states the states of the form (q, K), we obtain an automaton B -1 that recognizes mirrors of suffixes of R of length K. Then, for every k ∈ [0..K], we can compute B k , the automaton that recognizes mirrors of words of length K that are at distance k from words in B -1 , by adding another counter to states that counts substitutions, and whose final states are of the form (q, K, k). Then we can build (by sequential composition of automata, for instance) the automaton C k that reads any word in Σ * and then recognizes a word in (B k ) -1 .
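The mirror operation used in the first step of this construction is elementary; the sketch below reverses a full language (for suffixes, as in the proof, one would in addition make every state accepting). It uses our own set-based encoding, not the paper's notation.

```python
def reverse_nfa(states, delta, inits, finals):
    """Swap initial and final states and reverse every transition: L becomes its mirror."""
    rdelta = {(q2, a, q) for (q, a, q2) in delta}   # reversed transition relation
    return states, rdelta, set(finals), set(inits)
```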
PROOF OF PROPOSITION 22
Proof: The winning path is of the form ρ = n 0 .n 1 . . . n b .n b+1 . . . n w . Let d b be the number of discrepancies in n 0 .n 1 . . . n b and λ b = d b /b. Player u j can choose any integer value B and enforce the path ρ B = n 0 .n 1 . . . n b .β B . The mean number of discrepancies in ρ B is equal to (d b + B.d β )/(b + B.|β|), i.e. as B increases, this number tends towards M β .
Footnote: This entails that we assume that queries are faster than the rest of the system, i.e. no event can occur between a query and its answer. Hence we have L(S Γ ) ⊆ L(S) (Σ Γ .{tt, ff }) * . We could easily get rid of this hypothesis, by remembering in states of S Γ which query (if any) was sent by a user, and returning the answer at any moment.
01758006 | en | [
"spi.meca.geme",
"spi.auto"
] | 2024/03/05 22:32:10 | 2017 | https://hal.science/hal-01758006/file/EUCOMES2016_Nayak_Nurahmi_Caro_Wenger_HAL.pdf | Abhilash Nayak
email: abhilash.nayak@irccyn.ec-nantes.fr
Latifah Nurahmi
email: latifah.nurahmi@gmail.com
Philippe Wenger
email: philippe.wenger@irccyn.ec-nantes.fr
Stéphane Caro
email: stephane.caro@irccyn.ec-nantes.fr
Comparison of 3-RPS and 3-SPR Parallel Manipulators based on their Maximum Inscribed Singularity-free Circle
Keywords: 3-RPS parallel manipulator, 3-SPR parallel manipulator, operation modes, singularity analysis, maximum inscribed circle radius 1
. Then, the parallel singularities of the 3-SPR and 3-RPS parallel manipulators are analyzed in order to trace their singularity loci in the orientation workspace. An index, named Maximum Inscribed Circle Radius (MICR), is defined to compare the two manipulators under study. It is based on their maximum singularity-free workspace and the ratio between their circum-radius of the movingplatform to that of the base.
Introduction
Zero torsion parallel mechanisms have proved to be interesting and versatile. In this regard, the three degree of freedom lower mobility 3-RPS parallel manipulator (PM) has many practical applications and has been analyzed by many researchers [START_REF] Schadlbauer | Husty : A Complete Kinematic Analysis of the 3-RPS Parallel Manipulator[END_REF][START_REF] Schadlbauer | The 3-RPS Parallel Manipulator from an Algebraic Viewpoint[END_REF]. Interchanging the free moving platform and the fixed base in 3-RPS manipulator results in the 3-SPR manipulator as shown in figure 1, retaining three degrees of freedom.
The study of 3-SPR is limited in the literature. An optimization algorithm was used in [START_REF] Lukanin | Inverse Kinematics, Forward Kinematics and Working Space Determination of 3-DOF Parallel Manipulator with S-P-R Joint Structure[END_REF] to compute the forward and inverse kinematics of 3-SPR manipulator. After the workspace generation it is proved that the 3-SPR has a bigger working space volume compared to the 3-RPS manipulator. The orthogonality of rotation matrices is exploited in [START_REF] Lu | Position and Workspace Analysis of 3-SPR and 3-RPS Parallel Manipulators[END_REF] to perform the forward and inverse kinematics along with the simulations of 3-SPR mechanism. Control of a hydraulic actuated 3-SPR PM is demonstrated in [START_REF] Mark | Kinematic Modeling of a Hydraulically Actuated 3-SPR-Parallel Manipulator for an Adaptive Shell Structure[END_REF] with an interesting application on adaptive shell structure.
This paper focuses on the comparison of kinematics and singularities of the 3-RPS and 3-SPR parallel manipulators and is organized as follows: initially, the de-sign of 3-SPR PM is detailed and the design of the 3-RPS PM is recalled. The second section describes the derivation of the constraint equations of the 3-SPR manipulator based on the algebraic geometry approach [START_REF] Schadlbauer | Husty : A Complete Kinematic Analysis of the 3-RPS Parallel Manipulator[END_REF][START_REF] Nurahmi | Operation modes and singularities of 3-PRS parallel manipulators with different arrangements if P-joints[END_REF]. The primary decomposition is computed over these constraint equations and it shows that the 3-SPR has identical operation modes as the 3-RPS PM. Moreover, the actuation and constraint singularities are described with singularity loci plots in the orientation workspace. Finally, an index called the singularity-free maximum inscribed circle radius is introduced to compare the maximum singularity free regions of 3-RPS and 3-SPR manipulators from their home position. In [START_REF] Briot | Singularity Analysis of Zero-Torsion Parallel Mechanisms[END_REF], maximum tilt angles for any azimuth for 3-RPS PM are plotted for different ratios of platform to base circumradii. However, these plots correspond to only one operation mode since the notion of operation modes was not considered in this paper. That being the case, this paper offers a complete singularity analysis in terms of MICR for both the manipulators. These plots are useful in the design choice of a manipulator based on their platform to base circumradii ratios and their operation modes.
Manipulator architectures
Fig. 1 3-SPR parallel manipulator (coordinate frames Σ 0 , Σ 1 ; joint centres A i , B i ; revolute axes s i ; prismatic lengths r i ; circum-radii h 1 , h 2 ; origins O 0 , O 1 ).
Fig. 2 3-RPS parallel manipulator Figure 1 shows a general pose of the 3-SPR parallel manipulator with 3 identical legs each comprising of a spherical, prismatic and revolute joints. The triangular base and the platform of the manipulator are equilateral.
Σ 0 is the fixed co-ordinate frame attached to the base with the origin O 0 coinciding with the circum-centre of the triangular base. The centres of the spherical joints, namely A 1 , A 2 and A 3 bound the triangular base. x 0 -axis of Σ 0 is considered along O 0 A 1 which makes the y 0 -axis parallel to A 2 A 3 and the z 0 -axis normal to the triangular base plane. h 2 is the circum-radius of the triangular base.
The moving platform is bounded by three points B 1 , B 2 and B 3 that lie on the revolute joint axes s 1 , s 2 and s 3 . Moving co-ordinate frame Σ 1 is attached to the moving platform whose x 1 -axis points from the origin O 1 to B 1 , y 1 -axis being orthogonal to the line segment B 2 B 3 and the z 1 -axis normal to the triangular platform. Circum-radius of this triangle with B i (i = 1, 2, 3) as vertices is defined as h 2 .
The prismatic joint of the i-th (i = 1, 2, 3) leg is always perpendicular to the respective revolute joint axis in each leg. Hence the prevailing orthogonality of A i B i to s i (i = 1, 2, 3) no matter the motion of the platform is a constraint of the manipulator. The distance between the points A i and B i (i = 1, 2, 3) is defined by the prismatic joint variables r i .
The architecture of the 3-SPR PM is similar to that of the 3-RPS PM except that the order of the joints in each leg is reversed. The architecture of 3-RPS is recalled in figure 2 where the revolute joints are attached to the fixed triangular base with circum-radius h 1 while the spherical joints are attached to the moving platform with circum-radius h 2 .
Constraint equations of the 3-SPR parallel manipulator
The homogeneous coordinates of A i and B i in the frames Σ 0 and Σ 1 respectively are expressed as follows:
r 0 A 1 = [1, h 1 , 0, 0] T , r 0 A 2 = [1, - 1 2 h 1 , - 1 2 √ 3 h 1 , 0] T , r 0 A 3 = [1, - 1 2 h 1 , 1 2 √ 3 h 1 , 0] T r 1 B 1 = [1, h 2 , 0, 0] T , r 1 B 2 = [1, - 1 2 h 2 , - 1 2 √ 3 h 2 , 0] T , r 1 B 3 = [1, - 1 2 h 2 , 1 2 √ 3 h 2 , 0] T (1)
To express the coordinates of B i in the frame Σ 0 , a coordinate transformation matrix must be used. In this context, the Study parametrization of a spatial Euclidean transformation matrix M ∈ SE(3) is utilized and is represented as:
M = x 0 2 + x 1 2 + x 2 2 + x 3 2 0 T 3×1 M T M R , M T = -2 x 0 y 1 + 2 x 1 y 0 -2 x 2 y 3 + 2 x 3 y 2 -2 x 0 y 2 + 2 x 1 y 3 + 2 x 2 y 0 -2 x 3 y 1 -2 x 0 y 3 -2 x 1 y 2 + 2 x 2 y 1 + 2 x 3 y 0 , M R = x 0 2 + x 1 2 -x 2 2 -x 3 2 -2 x 0 x 3 + 2 x 1 x 2 2 x 0 x 2 + 2 x 1 x 3 2 x 0 x 3 + 2 x 1 x 2 x 0 2 -x 1 2 + x 2 2 -x 3 2 -2 x 0 x 1 + 2 x 3 x 2 -2 x 0 x 2 + 2 x 1 x 3 2 x 0 x 1 + 2 x 3 x 2 x 0 2 -x 1 2 -x 2 2 + x 3 2 (2)
where M T and M R represent the translational and rotational parts of the transformation matrix M respectively. The parameters x i , y i , i ∈ {0, ..., 3} are called the Study parameters. Matrix M maps every displacement SE(3) to a point in a 7dimensional projective space P 7 and this mapping is known as Study s kinematic mapping.
An Euclidean transformation will be represented by a point P∈ P 7 if and only if the following equation and inequality are satisfied:
x 0 y 0 + x 1 y 1 + x 2 y 2 + x 3 y 3 = 0 (3)
x 0 2 + x 1 2 + x 2 2 + x 3 2 ≠ 0 (4)
All the points that satisfy equation ( 3) belong to the 6-dimensional Study quadric.
The points that do not satisfy the inequality (4) lie in the exceptional generator x 0 = x 1 = x 2 = x 3 = 0. To derive the constraint equations, we can express the direction of the vectors s 1 , s 2 and s 3 in homogeneous coordinates in frame Σ 1 as:
s 1 1 = [1, 0, -1, 0] T , s 1 2 = [1, - 1 2 √ 3 , 1 2 , 0] T , s 1 3 = [1, 1 2 √ 3 , 1 2 , 0] T (5)
In the fixed coordinate frame Σ 0 , B i and s i can be expressed using the transformation matrix M :
r 0 B i = M r 1 B i ; s 0 i = M s 1 i i = 1, 2, 3 (6)
As it is clear from the manipulator architecture, the vector along A i B i , namely r 0 B i -r 0 A i is orthogonal to the axis s i of the i-th revolute joint which after simplification yields the following three equations:
(r 0 B i -r 0 A i ) T s i = 0 =⇒ g 1 := x 0 x 3 = 0 g 2 := h 1 x 1 2 -h 1 x 2 2 -2 x 0 y 1 + 2 x 1 y 0 + 2 x 2 -2 x 3 y 2 = 0 g 3 := 2 h 1 x 0 x 3 + h 1 x 1 x 2 + x 0 y 2 + x 1 y 3 -x 2 y 0 -x 3 y 1 = 0 (7)
The actuation of prismatic joints leads to three additional constraint equations. The Euclidean distance between A i and B i must be equal to r i for the i-th leg of the manipulator. As a result, A i B i 2 = r 2 i leads to three additional equations g 4 = g 5 = g 6 = 0, which are quite lengthy and are not displayed in this paper due to space limitation.
Two other equations are considered such that the solution represents a transformation in SE(3). The study-equation g 7 = 0 in Equation (3) constrains the solutions to lie on the Study quadric. g 8 = 0 is the normalization equation respecting the inequality [START_REF] Lu | Position and Workspace Analysis of 3-SPR and 3-RPS Parallel Manipulators[END_REF]. Solving these eight constraint equations provides the direct kinematic solutions for the 3-SPR parallel manipulator.
g 7 := x 0 y 0 + x 1 y 1 + x 2 y 2 + x 3 y 3 = 0 ; g 8 := x 0 2 + x 1 2 + x 2 2 + x 3 2 -1 = 0 (8)
Operation modes
Algebraic geometry offers an organized and an effective methodology to deal with the eight constraint equations. A polynomial ideal consisting of equations g i (i = 1, ..., 8) is defined with variables {x 0 , x 1 , x 2 , x 3 , y 0 , y 1 , y 2 , y 3 } over the coefficient ring
C[h 1 , h 2 , r 1 , r 2 , r 3 ]
as follows:
I =< g 1 , g 2 , g 3 , g 4 , g 5 , g 6 , g 7 , g 8 > (9)
The vanishing set or the variety V (I) of this ideal I consists of the solution to direct kinematics as points in P 7 . However, in this context, only the number of operation modes are of concern irrespective of the joint variable values. Hence, the sub-ideal independent of the prismatic joint length, r i is considered:
J =< g 1 , g 2 , g 3 , g 7 > (10)
The primary decomposition of ideal J is calculated to obtain three simpler ideals J i (i = 1, 2, 3). The intersection of the resulting primary ideals returns the ideal J . From a geometrical viewpoint, the variety V (J ) can be written as the union of the varieties of the primary ideals V (J i ), i = 1, 2, 3 [START_REF] Cox | Shea: Ideals, Varieties, and Algorithms (Series: An Introduction to Computational Algebraic Geometry and Commutative Algebra[END_REF].
J = 3 i=1 J i or V (J ) = 3 i=1 V (J i ) (11)
Among the three primary ideals obtained as a result of primary decomposition, it is important to note that J 1 and J 2 contain x 0 and x 3 as their first elements, respectively. The third ideal, J 3 is obtained as J 3 =< x 0 , x 1 , x 2 , x 3 > and is discarded as the variety V (J 3 ∪ g 8 ) is null over the field of interest C. As a result, the 3-SPR PM has two operation modes, represented by x 0 = 0 and x 3 = 0. In fact, g 1 = 0 in Equation [START_REF] Briot | Singularity Analysis of Zero-Torsion Parallel Mechanisms[END_REF] shows the presence of these two operation modes. It is noteworthy that the 3-RPS PM also has two operation modes as described in [START_REF] Schadlbauer | The 3-RPS Parallel Manipulator from an Algebraic Viewpoint[END_REF]. The analysis is completed by adding the remaining constraint equations to the primary ideals J 1 and J 2 . Accordingly, two ideals K 1 and K 2 are obtained. As a consequence, the ideals K i correspond to the two operation modes and can be studied separately.
K i = J i ∪ < g 4 , g 5 , g 6 , g 8 > i = 1, 2 (12)
The system of equations in the ideals K 1 and K 2 can be solved for a particular set of joint variables to obtain the Study parameters and hence the pose of the manipulator. These Study parameters can be substituted back in equation ( 2) to obtain the transformation matrix M. According to the theorem o f Chasles this matrix now rep-resents a discrete screw motion from the identity position (when the fixed frame Σ 0 and the moving frame Σ 1 coincide) to the moving-platform pose. The displacement about the corresponding discrete screw axis (DSA) defines the pose of the moving platform.
4.1 Ideal K 1 : Operation mode 1 : x 0 = 0 For operation mode 1, the moving platform is always found to be displaced about a DSA by 180 degrees [START_REF] Kong | Reconfiguration analysis of a 3-DOF parallel mechanism using Euler parameter quaternions and algebraic geometry method[END_REF]. Substituting x 0 = 0 and solving for y 0 , y 1 , y 3 from the ideal K 1 shows that the translational motions can be parametrized by y 2 and the rotational motions by x 1 , x 2 and x 3 [START_REF] Schadlbauer | Operation Modes in Lower-Mobility Parallel Manipulators[END_REF].
4.2 Ideal K 2 : Operation mode 2 :
x 3 = 0
For operation mode 2, the moving platform is displaced about a DSA with a rotation angle α calculated from cos( α 2 ) = x 0 . It is interesting to note that the DSA in this case is always parallel to the xy-plane [START_REF] Kong | Reconfiguration analysis of a 3-DOF parallel mechanism using Euler parameter quaternions and algebraic geometry method[END_REF]. Substituting x 3 = 0 and solving for y 0 , y 2 , y 3 from the ideal K 2 shows that the translational motions can be parametrized by y 1 and the rotational motions by x 0 , x 1 and x 2 [START_REF] Schadlbauer | Operation Modes in Lower-Mobility Parallel Manipulators[END_REF].
Singularity analysis
The Jacobian of the 3-SPR manipulator in this context is defined as Ji, and the manipulator reaches a singular position when its determinant vanishes:
J_i = [∂g_j/∂x_k , ∂g_j/∂y_k],   where i = 1, 2; j = 1, ..., 8; k = 0, ..., 3    (13)
Actuation and constraint singularities
Computing the determinant S i : det(J i ) results in a hyper-variety of degree 8 in both the operation modes:
S1: x3 · p7(x1, x2, x3, y0, y1, y2, y3) = 0   and   S2: x0 · p7(x0, x1, x2, y0, y1, y2, y3) = 0    (14)
The degree-7 polynomials p7 describe the actuation singularities that exist within each operation mode when the prismatic joints are actuated, whereas x0 = x3 = 0 describes the constraint singularity that marks the transition between K1 and K2.
Singularity Loci
The actuation singularities can be expressed in the orientation workspace by parametrizing the orientation of the platform in terms of Euler angles. In particular, the Study parameters can be expressed in terms of the Euler angles azimuth (φ ), tilt (θ ) and torsion (ψ) [?]:
x0 = cos(θ/2) cos(φ/2 + ψ/2),   x1 = sin(θ/2) cos(φ/2 − ψ/2),
x2 = sin(θ/2) sin(φ/2 − ψ/2),   x3 = cos(θ/2) sin(φ/2 + ψ/2)    (15)
Since K1 and K2 are characterized by x0 = 0 and x3 = 0, substituting them in equation (15) makes the torsion angle (ψ) null, verifying the fact that, like its 3-RPS counterpart, the 3-SPR parallel manipulator is a zero-torsion manipulator.
Accordingly, the xi parameters can be written in terms of tilt (θ) and azimuth (φ) only. The following method is used to calculate the determinant of Ji in terms of θ, φ and Z, the altitude of the moving platform above the fixed base. The elements of the translational part MT of matrix M in equation (2) are considered as MT = [X, Y, Z]^T, representing the translational displacements along the coordinate axes x, y and z, respectively. Then, the constraint equations are derived in terms of X, Y, Z, x0, x1, x2, x3, r1, r2, r3. From these equations, the variables X, Y, r1, r2 and r3 are expressed as functions of Z and xi and are substituted into the determinant of the Jacobian. Finally, the corresponding xi are expressed in terms of Euler angles, which yields a single equation describing the actuation singularity of the 3-SPR PM in terms of Z, θ and φ. Fixing the value of Z and plotting the determinant of the Jacobian for φ ∈ [−180°, 180°] and θ ∈ [0°, 180°] depicts the singularity loci. The green curves in Figures 3(a) and 3(b) show the singularity loci for operation mode 1 and operation mode 2, respectively, with h1 = 1, h2 = 2 and Z = 1.
Maximum Inscribed Circle Radius for 3-RPS and 3-SPR PMs
From the home position of the manipulator (θ = φ = 0), a circle is drawn that has the maximum tilt value for any azimuth within the singularity-free region [START_REF] Briot | Singularity Analysis of Zero-Torsion Parallel Mechanisms[END_REF]. The radius of this circle is called the Maximum Inscribed Circle Radius (MICR). In Figure 3, the red circle denotes the maximum inscribed circle where the value of MICR is expressed in degrees.
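Once the actuation-singularity expression det(Ji)(θ, φ, Z) is available, the MICR defined above can be extracted numerically by sweeping the azimuth and finding, for each azimuth, the first tilt at which the expression vanishes. The sketch below assumes a placeholder function det_j for that expression (it is not derived here), and the grid resolutions and the dummy example are arbitrary choices.

```python
import numpy as np

def micr_degrees(det_j, z, theta_max=180.0, n_phi=181, n_theta=721):
    """Singularity-free Maximum Inscribed Circle Radius, in degrees of tilt.
    det_j(theta_deg, phi_deg, z) stands for the actuation-singularity expression
    (determinant of the constraint Jacobian) of the chosen operation mode."""
    thetas = np.linspace(0.0, theta_max, n_theta)
    micr = theta_max
    for phi in np.linspace(-180.0, 180.0, n_phi):
        vals = np.array([det_j(t, phi, z) for t in thetas])
        crossings = np.nonzero(np.sign(vals[1:]) != np.sign(vals[:-1]))[0]
        if crossings.size:                     # first tilt at which det(J) vanishes
            micr = min(micr, thetas[crossings[0]])
    return micr

# usage with a dummy expression, purely for illustration
dummy = lambda t, p, z: np.cos(np.radians(t)) - 0.7 + 0.1*np.cos(np.radians(p))
print(micr_degrees(dummy, z=1.0))   # ~36.9 degrees for this dummy expression
```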
The MICR is used as a basis to compare the 3-SPR and 3-RPS parallel manipulators, as they are analogous to each other in aspects such as the number of operation modes and the direct kinematics. The 3-SPR PM has higher MICR values and hence larger singularity-free regions than the 3-RPS PM, in agreement with [START_REF] Lukanin | Inverse Kinematics, Forward Kinematics and Working Space Determination of 3-DOF Parallel Manipulator with S-P-R Joint Structure[END_REF][START_REF] Lu | Position and Workspace Analysis of 3-SPR and 3-RPS Parallel Manipulators[END_REF]. For the 3-RPS parallel manipulator, there is hardly any difference in the MICR values between the operation modes, whereas for the 3-SPR PM the second operation mode has higher MICR values than operation mode 1. For the 3-SPR PM, the MICR values range from 0° to 130° in operation mode 1, but from 0° to 160° in operation mode 2.
In addition, for 3-RPS PM, the ratio h 1 : h 2 influences operation mode 1 more than operation mode 2. It is apparent that the MICR values have a smaller range for different ratios in case of operation mode 2. On the contrary, for 3-SPR PM, high MICR values can be seen for operation mode 2, for lower ratios of h 1 : h 2 . Therefore, the MICR plots can be exploited in choosing the ratio of the platform to the base in accordance with the required application.
Conclusions
In this paper, 3-RPS and 3-SPR parallel manipulators were compared based on their operation modes and singularity-free workspace. Initially, the operation modes of the 3-SPR PM were enumerated. It turns out that the 3-SPR parallel manipulator has two operation modes, similar to the 3-RPS PM. The parallel singularities were computed for both manipulators and the singularity loci were plotted in their orientation workspace. Furthermore, an index called the singularity-free maximum inscribed circle radius was defined. MICR was plotted as a function of the Z coordinate of the moving platform for different ratios of the platform circum-radius to the base circum-radius. It shows that, compared to the 3-RPS PM, the 3-SPR PM has higher MICR values and hence a larger singularity-free workspace for a given altitude. For the ratios of the platform to base size, higher values of MICR are observed in operation mode 2 than in operation mode 1 for the 3-SPR mechanism, and vice versa for the 3-RPS mechanism. In fact, the singularity-free MICR curves open up many design possibilities for both mechanisms suited to a particular application. It will also be interesting to plot the MICR curves for constraint singularities and for other actuation modes of the 3-RPS and 3-SPR manipulators, and to consider the parasitic motions of the moving platform within the maximum inscribed circles. The investigation of MICR for home configurations other than the identity condition (θ = φ = 0 degrees) has to be considered too. Future work will deal with these issues.
Fig. 3 3-SPR singularity loci and the maximum inscribed singularity-free circle (a) Operation mode 1 (b) Operation mode 2
MICR vs. Z/h1 is plotted for different ratios of h2 : h1 in Figures 4 and 5. The maximum value of MICR is limited to 160 degrees for all the figures and Z/h1 varies from 0 to 4, while eight ratios of h2 : h1 are considered. The data cursors in Figures 5(a) and 5(b) correspond to the red circles with MICR = 25.22 and 30.38 degrees in Figures 3(a) and 3(b), respectively. The MICR plots give useful information on the design choice of 3-RPS or 3-SPR parallel manipulators.
Fig. 4 MICR vs. Z/h1 for the 3-RPS manipulator (a) Operation mode 1 (b) Operation mode 2
Fig. 5 MICR vs. Z/h1 for the 3-SPR manipulator (a) Operation mode 1 (b) Operation mode 2 |
01758030 | en | [
"chim.inor",
"chim.coor",
"chim.cata",
"chim.mate",
"phys.phys.phys-chem-ph"
] | 2024/03/05 22:32:10 | 2017 | https://theses.hal.science/tel-01758030/file/LE_QUANG_2017_diffusion.pdf | Dr Isabelle Gautier-Luneau
Michael Holzinger
Rajaa
Lucy
Matthew
Deborah
Youssef
In order to complete my thesis, I have luckily received generous evaluation, advice and support from many people. I
CHAPTER 1 GENERAL INTRODUCTION
This thesis focuses on the use of TiO2 nanoparticles (NPs) modified with a [Ru(bpy)3] 2+ (bpy = 2,2'-bipyridine) photosensitizer (PS) and a functional complex / organic entity for applications in photocatalysis and photocurrent generation. TiO2 is a semiconductor which has been extensively investigated in photocatalysis and photodegradation of organic pollutants in water. Here we propose to use TiO2 NPs (i) as a platform to immobilize different entities proximally, and (ii) as an electron relay between immobilized redox PS and an electron acceptor (A) to obtain a PS/TiO2/A hybrid triad. In this thesis, [Ru(bpy)3] 2+ is employed as the visible PS and a metal complex is used as the electron acceptor. [Ru(bpy)3] 2+ is one of the most investigated redox PS, notably because of its good absorption in the visible domain, long-lived excited state and reversible redox properties in both oxidation and reduction. In this chapter we will first highlight some important properties of TiO2 semiconductor and [Ru(bpy)3] 2+ PS that are relevant to our studies. Afterwards, some examples of hybrid systems containing TiO2, [Ru(bpy)3] 2+ and a functional entity for photocatalytic applications will be presented.
I.1. Semiconducting TiO2 nanoparticles
TiO2 is probably the most studied semiconductor, particularly in the field of photocatalysis. One of the earliest works on TiO2 as a photocatalyst dates back to 1921, by Renz 1 . The author observed a color change of TiO2 powder from white to a darker color when TiO2 was irradiated with sunlight in the presence of glycerol. In this section we will highlight some of the most important properties of the TiO2 semiconductor, together with a few relevant works using TiO2 as a photocatalyst. We will not discuss the quantum confinement effect of TiO2, as the NPs mentioned in this section are sufficiently large. For more information on the properties and applications of TiO2 NPs, the reader is kindly referred to recent reviews by Burda 2 , Henderson 3 and Fujishima 4 .
I.1.1. Crystal phases
Three main types of TiO2 crystal phases are often mentioned in literature namely anatase, rutile and brookite. They can be described as distorted TiO6 octahedra with different symmetries or arrangements (Figure I-1). In the anatase structure, two TiO6 octahedra are connected via a common edge, whereas in rutile and brookite structures both corner and edge are shared between two TiO6 units. As a result, the three structures differ in Ti-O bond length which plays a critical role in their electronic properties. The phase transformation in the bulk TiO2 has been extensively studied by Diebold. 5 The author concluded that rutile is the only thermodynamically stable phase, while anatase and brookite are metastable and can be transformed to rutile when heated. However, the phase stability of TiO2 NPs heavily depends on the particle size. For example, in the range of 325-750 0 C, anatase is the most stable phase for particles smaller than 11 nm, brookite is the most stable phase for particles with diameter from 11 nm to 35 nm, and rutile is the most stable phase for particles larger than 35 nm. 6
I.1.2. Photophysical properties of TiO2 particles
The electronic structure of a semiconductor consists of a valence band (VB) and a conduction band (CB). They are separated by a bandgap where there are no electron states.
The bandgap thus corresponds to the energy difference between the upper edge of the VB and the lower edge of the CB. In the ground state, all the electrons are located in the VB. Upon absorption of a photon with energy higher than the bandgap, an electron can be promoted to the CB, leaving a positively charged hole in the VB. Consequently, the material becomes conductive due to the electrons in the CB. 4 When the electron and hole recombine, a photon can be emitted. The positions of the CB and VB of several semiconductors are shown in Figure I-2. The bandgap of bulk TiO2 depends on the crystal phase. Anatase and rutile phases show a bandgap Eg = 3.2 eV and 3.0 eV, respectively. 3 The bandgap in anatase and rutile phases is also indirect and direct, respectively. The bandgap in brookite is reported in the range of 3.1-3.4 eV 8 and is indirect 2 . A direct bandgap corresponds to the promotion of an electron from the VB to the CB without changing its momentum, whereas an indirect bandgap requires a change in the electron momentum. Consequently, when the electron and hole recombine, a photon is directly emitted in the rutile phase, while in the anatase phase the electron needs to pass through an intermediate state to transfer the momentum to the crystal lattice. The intermediate state is also referred to as an "electron trap" or "hole trap", which is located inside the bandgap at energy levels higher or lower than the Fermi level EF = (ECB + EVB)/2, respectively. The traps are induced by crystal defects, which create extra states inside the bandgap. 2 In the anatase phase, the recombination between CB electrons and trapped holes to emit photons is conventionally referred to as "type 1" emission, while the recombination between trapped electrons and VB holes is called "type 2" emission. Both type 1 and type 2 emissions depend strongly on the energy of the defect sites, meaning the way the TiO2 particles are synthesized. Usually the emission wavelength maximum is observed at ~ 550 nm for the anatase phase and ~ 840 nm for the rutile phase. Direct observations of CB electrons, trapped electrons and trapped holes can be achieved with a variety of spectroscopic techniques. For example, infrared (IR) spectroscopy can detect the CB electrons, and electron paramagnetic resonance (EPR) spectroscopy can be used to observe trapped holes in oxygen atoms (denoted as O -) and trapped electrons in Ti atoms (denoted as Ti 3+ ). 2 The dynamics of these charge carriers have also been studied with transient absorption spectroscopy (TAS). Tamaki et al. 10 reported that upon the bandgap excitation, photo-induced holes are trapped within 100 fs at sites near the surface of anatase TiO2 NPs, while photoinduced electrons are first trapped at shallow sites near the surface before relaxing into deeper trapping sites in the bulk within ~ 500 ps. Kinetics of the charge recombination is complicated. An early work by Gratzel et al. 11 in 1987 showed that the radiative recombination occurs within 30 ns for anatase TiO2 colloid with diameter smaller than 12 nm.
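As a quick numerical check of the bandgap values quoted above, the corresponding absorption onset wavelengths can be estimated from λ = hc/(e·Eg). This is only an order-of-magnitude illustration.

```python
# Photon wavelength corresponding to the bandgap: lambda(nm) = h*c/(e*Eg) ~ 1240/Eg(eV)
h, c, e = 6.626e-34, 2.998e8, 1.602e-19

def absorption_onset_nm(eg_ev):
    return h * c / (e * eg_ev) * 1e9

for phase, eg in [("anatase", 3.2), ("rutile", 3.0)]:
    print(phase, round(absorption_onset_nm(eg)))   # ~387 nm and ~413 nm, i.e. UV light is required
```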
Recombination rate also depends on the size of TiO2 NPs and the number of e -/h + pairs that have been generated, meaning the power of the excitation light. For example, the recombination rate was estimated at 1 × 10 11 cm 3 .s -1 for 2 nm anatase NPs and 3 × 10 7 cm 3 .s -1 for 27 nm NPs. 3 Another approach to study the charge recombination process was reported by Colbeau-Justin et al. 12 using time-resolved microwave photoconductivity method. The authors concluded that there is a competition between a fast recombination process and a fast trapping process of a part of the charge carriers in anatase. The trapping process reduces the number of holes, thus increases the lifetime of the electrons on the CB. All of these information are important for the applications of TiO2 as a photocatalyst. Generally, a fast excitation rate and a slow charge recombination rate are required so that the electrons and holes will have enough time to react with substrates in solution.
I.1.3. TiO2 as a photocatalyst
As mentioned above, the photoexcitation of TiO2 results in the free electrons and holes as well as O -radicals, which are all reactive. They offer TiO2 capability to reduce or oxidize a variety of organic and inorganic species on the surface of the NPs. We choose three types of reactions, namely water oxidation, CO2 reduction and decomposition of aqueous pollutants, where TiO2 NPs have been tested as a photocatalyst.
Water oxidation reaction
The light-induced water splitting reaction consists of two half-reactions: oxidation of water to O2 at the photoanode and reduction of water to H2 at the photocathode. As shown in Figure I-2, it is thermodynamically more favorable for the positively charged holes on the TiO2 VB to oxidize water than for the CB electrons to reduce water. The water (or OH -) oxidation by VB holes can occur as follows:
2 H2O + 4 h+ → O2 + 4 H+
The first attempt to construct a photoelectrochemical cell for water splitting was proposed in 1972 by Fujishima and Honda using a n-type TiO2 electrode as photoanode and a
Pt counter electrode. 13 Under UV light, the holes generated on the VB of TiO2 are used to oxidize water whereas the electrons on the CB are collected through an external circuit to the Pt electrode where protons are reduced to form H2. Despite the fact that the process is highly thermodynamically favorable, no publication has shown direct experimental evidence for the water oxidation reaction on TiO2. It has been proposed that the required nucleophilic attack of water to a trapped surface hole is not achieved. 3 The mechanism of the water oxidation reaction by TiO2 photocatalyst was investigated by Kavan,Gratzel et al. 9 using anatase TiO2(101) and rutile TiO2(001) single crystal deposited onto an electrode. An anodic current was observed when the electrodes were irradiated with UV light at zero bias. Cyclic voltammograms were taken in the dark for the electrodes before and after they were exposed to UV irradiation. The authors observed an irreversible cathodic peak in the CVs of both anatase and rutile electrodes after UV light exposure, which was attributed to the reduction of a surface-bound peroxide intermediate. However, no exhaustive water oxidation experiment was mentioned.
CO2 photoreduction
Although many works on the photoreduction of CO2 have been made based on TiO2, they require a co-catalyst or thermal activation of TiO2 and not the direct photoexcitation of TiO2. 3 The only publication using TiO2 as a photocatalyst to reduce CO2 was conducted by Anpo et al. 14 The authors studied anatase TiO2 NPs and rutile NPs in the presence of CO2 + H2O gas mixture. UV irradiation of the solid TiO2 NPs at 275 K resulted in the formation of CH4 as the main product together with traces of C2H4 and C2H6 in the gas phase. These gas products were not present without the UV light. The photocatalytic activity of TiO2 was found to depend on the crystal phase and particle size of TiO2 NPs, as well as the ratio of CO2/H2O in the gas phase. Table I-1 summarizes the amount of CH4 produced and some relevant characteristic parameters of these TiO2 photocatalysts. The anatase phase was found to be more catalytically active than the rutile phase for small NPs. Increasing the particle size to several hundred nanometers lead to the decrease in the amount of CH4, probably due to the reduced surface area and, consequently, reduced number of available sites for CO2 molecules to be adsorbed.
Decomposition of aqueous pollutants
A wide variety of aqueous pollutants can be decomposed by TiO2 particles under UV irradiation, such as alkanes, aliphatic alcohols, carboxylic acids, alkenes, aromatics, polymers, surfactants, herbicides, pesticides and dyes. 4 The decomposition of pollutants occurs via the formation of highly oxidative hydroxyl (OH•) and hydroperoxyl (HO2•) radicals in water following the UV irradiation of TiO2 (Scheme I-1). These radicals can oxidize organic compounds to CO2 and H2O.
Scheme I-1. Generation of OH and HO2 radicals in water by photoinduced e -and h + on TiO2
Only UV light and O2 are required for this type of photocatalysis; the reactions can occur at room temperature. However, the quantum yield and efficiency depend on many factors such as the crystal phase and size of TiO2 photocatalyst, concentration, pH, light intensity, etc. 4 For example, when studying the quantum yields of liquid-solid photocatalytic reactions using TiO2 slurry catalysts, Serpone et al. 15 found that the maximum quantum yield was obtained with low light intensity and high substrate concentration. The maximum quantum yield of 14 % was reached at 365 nm for phenol degradation using Degussa P25 TiO2 NPs (rutile/anatase = 3/1, d = 25 nm) at pH 3.
To summarize, although TiO2 shows some photocatalytic activities, it only works under UV light and the selectivity is low. The visible domain of the solar spectrum is not efficiently utilized. To address the former issue, an additional PS capable of absorbing visible light can be anchored on TiO2 NPs to better harvest the solar light. A suitable PS is [Ru(bpy)3] 2+ , which will be described in Section I.2. Meanwhile the selective issue can be solved by incorporating a more efficient and selective catalyst onto the surface of TiO2 NPs. This TiO2/catalyst system will be mentioned in Section I.3.
I.2. Ru(II) tris-bipyridine: a prototypical photosensitizer
A PS is a molecule capable of absorbing photons, leading to an excited state which is able to interact with its surrounding media. The [Ru(bpy)3] 2+ (Figure I-4) has been one of the most studied PS in the last decades due to its excellent properties such as chemical stability, redox reversibility, long-lived excited state and strong luminescence. In this section we will discuss in detail about these properties, which are fundamental for further studies in this thesis.
I.2.1. Electrochemical properties
The redox properties of [Ru(bpy)3] 2+ complex was first studied with cyclic voltammetry by Matsumoto et al. 16 in MeCN solution containing tetraethylammonium perchlorate (TEAP)
as electrolyte (Figure I-5). In the anodic part, a reversible one-electron oxidation wave is observed at E1/2 = 1.25 V vs NHE, which is attributed to the metal oxidation to the Ru III species with a low-spin 4d 5 configuration. 17 In the cathodic part, three successive one-electron reduction processes are observed, with the first one centered at around -1.40 V vs NHE. All the added electrons are localized on the bpy ligands. 17 No ligand dissociation has been reported. Therefore, the [Ru(bpy)3] 2+ PS exhibits rich and reversible redox properties. To characterize the excited state, it is important to introduce two parameters, namely the lifetime τ and the emission quantum yield Φ. They are defined by the following equations: 18 τ = 1/(kr + knr) and Φ = kr/(kr + knr), where kr and knr are the radiative and non-radiative decay rate constants of the excited state.
The Dexter mechanism, also known as the exchange mechanism, involves a double electron exchange between the donor and the acceptor molecules. An electron moves from the LUMO of the donor to the LUMO of the acceptor, accompanied by a simultaneous electron transfer from the HOMO of the acceptor to the HOMO of the donor. This electron exchange requires a strong overlap between the orbitals of the donor and the acceptor, hence it strongly depends on the nature of the linker connecting the two entities. The rate constant of the Dexter energy transfer process kDexter depends on the distance r between the donor and the acceptor as kDexter = K J exp(−βr), where K is related to specific orbital interactions, J is the normalized spectral overlap integral between the emission spectrum of the donor and the absorption spectrum of the acceptor, and β is the attenuation factor which is specific for each linker.
The Forster mechanism, also called Coulombic mechanism, is a long range mechanism, which means that there is no requirement for a physical contact between the donor and the acceptor. In this context, the electronic nature of the linker between the donor and the acceptor, if present, does not affect the energy transfer rate. The rate constant of the Forster energy transfer process kForster is expressed by Equation I-4:
kForster ∝ κ² ΦD J / (τD r⁶)    (Eq I-4)
where ΦD and τD are the emission quantum yield and luminescence lifetime of the donor, κ² is an orientation factor, and r is the distance between the donor and the acceptor.
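The practical difference between the two mechanisms is their distance dependence: exponential for Dexter, r⁻⁶ for Förster. The short sketch below only illustrates this contrast; the attenuation factor β = 1 Å⁻¹ and the 10 Å and 20 Å distances are assumed, typical-order values, not parameters of any specific system discussed here.

```python
import numpy as np

beta = 1.0            # 1/angstrom, assumed attenuation factor for a saturated linker
r1, r2 = 10.0, 20.0   # donor-acceptor distances in angstrom (illustrative)

# relative drop of the energy-transfer rate when the distance doubles
dexter_ratio = np.exp(-beta * r2) / np.exp(-beta * r1)   # ~4.5e-5 (short-range mechanism)
forster_ratio = (r1 / r2) ** 6                            # ~1.6e-2 (long-range mechanism)
print(dexter_ratio, forster_ratio)
```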
I.2.3.2. Photo-induced electron transfer mechanisms
Electron transfer reactions involve an exchange of electron(s) between two molecules in which one is oxidized while the other is simultaneously reduced. Photo-induced electron transfer reactions are only possible when the redox potentials of the two molecules are well aligned so that the overall process is thermodynamically favorable (ΔG < 0). The process is governed by the redox potential of the excited state of the PS and the oxidation / reduction potential of the electron donor / acceptor, respectively. The redox potential of the PS excited state is estimated by the Rehm-Weller equation: 24 E*1/2 = E1/2 ± E00 (i.e., E*1/2(ox) = E1/2(ox) − E00 and E*1/2(red) = E1/2(red) + E00), where E*1/2 is the reduction or oxidation potential of the excited state, E1/2 is the reduction or oxidation potential of the ground state and E00 is the difference in energy between the zeroth vibrational states of the ground and excited states. The values of E1/2 can be experimentally determined from a cyclic voltammetry experiment. The zero-zero excitation energy E00 is usually approximated by the emission energy Eem of the compound at 77 K (Figure I-9). As an approximation, the E00 value can be estimated from the emission energy at room temperature in the case of [Ru(bpy)3] 2+, i.e. E00 = hc/(e·λem,77K) ≈ hc/(e·λem,RT), where h is the Planck constant, c is the speed of light in vacuum, e is the charge of an electron, and λem,77K and λem,RT are the emission wavelength maxima at 77 K and room temperature, respectively.
The free enthalpy of the photo-induced electron transfer between a donor D and an acceptor A can then be written as ΔG = e[Eox(D) − Ered(A)] − E00 − e²/(4πε0εsRC) + ΔGsolv. In this equation, ΔG represents the free enthalpy of the reaction, Eox(D) and Ered(A) are the oxidation potential of the donor and the reduction potential of the acceptor respectively, εs is the solvent dielectric constant, rD and rA are the ionic radii and RC is the center-to-center separation distance between the donor and the acceptor. E00 is the singlet or triplet state energy determined by the emission maximum at 77 K. The third term represents the Coulombic interaction between the two charged moieties. The last term, which depends on rD, rA and the dielectric constants, corrects the difference in the ion-pair solvation with respect to the solvent in which the redox potentials were measured. This expression can also be written in a more simplified form as ΔG ≈ e[Eox(D) − Ered(A)] − E00.
Grafting a metal complex onto a surface can be achieved by modifying the complex ligands with suitable anchoring groups, which depend on the nature of the surface. Table I-2 summarizes some common anchoring groups reported in literature for several surfaces such as TiO2, Au, SiO2, ZrO2 and ITO. The surfaces can be either a macroelectrode or NPs. In this thesis, we focus on using the TiO2 NPs as a platform to immobilize different species (PS, D, A). Although some early works about grafting [Ru(bpy)3] 2+ PS on TiO2 NPs employed carboxylic acid, 34 in recent years the phosphonic acid has emerged as a more suitable anchoring group. Since the pioneering work by Gao et al. 26 in 1996, numerous publications have been made using phosphonic acid to graft different species onto TiO2
NPs. 30 Herein, we summarize some critical details for the grafting of a species onto TiO2 NPs using the phosphonic acid groups.
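Before moving on, the Rehm-Weller estimate discussed above can be illustrated numerically for [Ru(bpy)3]2+, using the ground-state potentials quoted earlier in this chapter (+1.25 V and about -1.40 V vs NHE) and approximating E00 from the room-temperature emission maximum (~608 nm). Treating the RT emission as E00 is only the approximation mentioned in the text, so the outputs below are indicative values, not measured ones.

```python
# Excited-state redox potentials of [Ru(bpy)3]2+ from E*(ox) = E(ox) - E00 and E*(red) = E(red) + E00
h, c, e = 6.626e-34, 2.998e8, 1.602e-19

lambda_em = 608e-9                 # m, RT emission maximum used as an approximation for E00
E00 = h * c / (e * lambda_em)      # ~2.04 eV
E_ox, E_red = 1.25, -1.40          # V vs NHE, ground-state potentials quoted in the text

E_ox_star = E_ox - E00             # Ru3+/Ru2+*  ~ -0.79 V: the excited state is a strong reductant
E_red_star = E_red + E00           # Ru2+*/Ru+   ~ +0.64 V: and a moderate oxidant
print(round(E00, 2), round(E_ox_star, 2), round(E_red_star, 2))
```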
From a computational point of view, the binding mode between -P(O)(OH)2 and the TiO2 surface is still under debate. Using phosphonic acid as a model to study the grafting on anatase and rutile TiO2, Luschtinetz et al. 35 concluded that the bidentate mode was more stable than the tridentate one. However, Nilsing et al. 36 claimed that the most stable binding mode was the monodentate one when they used another computational method. It is also experimentally challenging to distinguish between the three binding modes.
From an experimental point of view, a recent publication by Hirsch et al. 27 compares the adsorption process of different derivatives of phosphonic acid or carboxylic acid on TiO2
anatase NPs (d ~ 34 nm). Thermogravimetric analysis (TGA) was employed to quantitatively determine adsorption parameters. As shown in Table I-3, the grafting density is slightly lower when phosphonic acid is used to graft the species on TiO2 instead of carboxylic acid. However, the phosphonic acid groups exhibit significantly stronger bonds with the surface as indicated by the adsorption constant Kads and free energy G. Negligible differences are found when a phenyl or a hexyl chain is grafted on TiO2 NPs. This work provides experimental evidence for the advantages of using phosphonic acid as anchoring group compared with carboxylic acid.
I.2.5. Photo-induced charge transfer processes on TiO2 NPs sensitized with [Ru(bpy)3] 2+
In this section we will highlight some relevant studies on the kinetics of photo-induced charge transfer processes on TiO2 NPs modified with [Ru(bpy)3] 2+ PS (denoted as TiO2/Ru II ).
The TiO2/Ru II NPs can be either dispersed in colloidal solutions or deposited on a nanocrystalline TiO2 thin film electrode. The carboxylic or phosphonic acid is used as anchoring group. For a comprehensive overview in this topic, the reader is kindly referred to several excellent reviews for studies in colloidal solutions 37,38 and on mesoporous thin film electrodes 2,38
I.2.5.1. In colloidal solutions
Scheme I-2. Photosensitizers described in Section I.2.5.1.
One of the pioneering works on the kinetics of photo-induced charge transfer processes in TiO2/Ru II colloidal solution was published by Desilvestro et al. 34 The authors grafted
[Ru(dcb)3] 2+ (dcb=2,2'-bipyridine-4,4'-dicarboxylic, Scheme I-2a) on anatase TiO2 NPs (d = 6 nm) and studied the charge transfer in aqueous solution at pH 2 by means of time-resolved emission and transient absorption spectroscopies (TAS). Following an excitation at 530 nm, TiO2/Ru II* exhibited a rate of electron injection to TiO2 of 3.2 × 10 7 s -1 . The acidic media of 2.5 < pH < 6 allowed for the existence of the complex. In comparison, a mixture of [Ru(bpy)3] 2+ (Scheme I-2b) and TiO2 NPs did not show any quenching of the excited state Ru II* in acidic solution. At pH 10 [Ru(bpy)3] 2+ could be physisorbed to TiO2 due to electrostatic interaction between the cation dye molecules and negatively charged surface.
The electron injection rate was significantly slower, at only 1.5 × 10 5 s -1 . The recombination rate of (e -)TiO2 and Ru 3+ was similarly slow in both cases, around 4 × 10 5 s -1 . The work emphasized the importance of covalently linking the dye to TiO2 semiconductor to enhance the electron injection to TiO2. with kb ~ 10 5 -10 6 s -1 . In the presence of phenothiazine (PTZ) as an electron donor, the oxidized dye was efficiently reduced at a rate of kD = 3 × 10 8 s -1 before the back electron transfer with (e -)TiO2. The charge separated state (e -)TiO2 and PTZ + remained stable in a millisecond lifetime.
The nature of photo-induced, trapped electrons on anatase TiO2 sites (Ti 3+ ) has been investigated by Rittmann-Frank et al. 40
I.2.5.2. On TiO2 mesoporous thin film electrodes
Scheme I-3. Photosensitizers described in Section I.2.5.2.
The breakthrough in mesoporous, nanocrystalline TiO2 thin film electrodes sensitized with a metal complex dye is the report by O'Regan and Gratzel in 1991, 41 where they deposited a monolayer of [Ru(dcb)2(µ-(CN)Ru(CN)(bpy)2)2] 6+ complex (Scheme I-3a) on 10µm thick layer of TiO2 NPs. The modified electrode combined the high area surface of the film and excellent UV-vis absorption properties of the dye to yield an incident photon-tocurrent efficiency (IPCE) of > 80 % and light-to-electric energy conversion of > 7 %, together with great stability. The rate of charge injection was estimated to exceed 10 12 s -1 in this photovoltaic cell, while that of complex decomposition was smaller than 2 × 10 4 s -1 .
Since the O'Regan and Gratzel's paper, there have been numerous extensive studies on the photo-induced kinetics of dye-sensitized solar cells (DSSC). Wang et al. 42 constructed a series of conducting, rigid-rod linkers to immobilize [Ru(bpy)3] 2+ on TiO2 NPs thin films to study the dependence of charge transfer processes on the distance between Ru 2+ sites and the semiconductor. The interfacial electron transfer was strongly pH-dependent: kinj > 10 8 s -1 (pH 1, unable to be time-resolved) and kinj = 1×10 7 s -1 (pH 11). The injection quantum yield was found to decrease as the distance increased. Back electron transfer was, on the contrary, independent of the linker length, remaining at ~ 5×10 5 s -1 . The size of TiO2 NPs has been shown to be critical for the kinetics of electron injection. 47 Using N3 dye anchored on a 15-µm thick layer of TiO2 NPs with various diameters, the authors studied the photo-induced charge separation and recombination processes by means of time-resolved emission spectroscopy and femtosecond TAS. The injection rates lied on ps-ns time scale for all the sizes, but the injection efficiency significantly decreased from ~ 90 % (diameter < 50 nm) to 30-70 % (diameter > 50 nm). The drop was attributed to the dye aggregation in the space between the large particles. This work highlights the importance of using small TiO2 NPs (d < 50 nm) to achieve good injection efficiency.
Gillaizeau
To summarize, the photo-induced charge transfer processes on TiO2/Ru II dyads have been well studied in colloidal solution and on mesoporous thin film electrodes. In recent years the phosphonic acid to anchor Ru dyes on TiO2 NPs has gained more attention due to its higher stability than carboxylic acid. However, to the best of our knowledge, there is no publication on the photo-induced charge transfer process in solution when the phosphonic acid is used. We are interested in this topic as it is fundamental for further charge transfer studies in this thesis when another component like an electron donor or acceptor is also immobilized on TiO2/Ru II NPs.
I.3. Incorporation of a catalyst on TiO2/Ru II nanoparticles
As mentioned at the end of Section I.1, the photocatalytic reactions induced by TiO2 suffer from low yields and low selectivity. Therefore, a catalyst can be anchored on TiO2 NPs for obtaining desired products. For example, a hybrid system comprising metal oxide NPs functionalized with the enzyme carbon monoxide dehydrogenase (CODH) and [Ru(bpy)3] 2+
visible PS was synthesized for CO2 photoreduction (Scheme I-5). 48 Under irradiation at > 420 nm, the Ru II /TiO2/CODH hybrid system produced CO as the only product. The ability to store multiple reduction equivalents in the [Cr(ttpy)2] 3+ complex in homogeneous solution and on the triad will be described.
Chapter 4 will show the electrocatalytic and photocatalytic activity for CO2 reduction using [Mn(ttpy)(CO)3Br] catalyst in homogeneous solution. The Mn catalyst and [Ru(bpy)3] 2+
PS are then both anchored on TiO2 NPs to form the Ru II /TiO2/Mn I triad. The triad is tested in the photocatalytic CO2 reduction under visible irradiation, in the presence of a sacrificial electron donor. Some mechanistic studies will also be described.
Finally, in Chapter 5 we will present the use of the oxidative equivalents stored on the Ru III sites of the photoinduced charge separated state (e -)TiO2/Ru III for the polymerization of pyrrole units. The synthesis of a hybrid system consisting of a [Ru(bpy)3] 2+ PS bearing two pyrrole moieties immobilized on TiO2 via phosphonic acid groups will be discussed. excited state of [Ru(bpy)3] 2+* to the conduction band of TiO2 occurs on the nanosecond time scale to form the charge-separated state (e -)TiO2/Ru III , whereas charge recombination occurs on the millisecond time scale. The identification of the electron on the surface Ti 3+ sites of the TiO2 matrix was performed by EPR spectroscopy. The long lifetime of the charge-separated state (e -)TiO2/Ru III offers the possibility of engaging the electron or the "hole" in predefined redox reactions, which will be discussed in the following chapters.
II.1. Introduction
Despite the various works on the photo-induced charge transfer processes of nanocrystalline TiO2/Ru II electrodes and TiO2/Ru II colloidal solution mentioned in Section I.2.5, to the best of our knowledge there are no investigations of the kinetics of a [Ru(bpy)3] 2+ dye grafted on TiO2 NPs via a phosphonic acid group in solution. We are interested in the colloids as they resemble the media in photocatalytic studies. Herein, we will present the kinetics of photo-induced charge transfer processes of a phosphonic-derivatized [Ru(bpy)3] 2+ dye immobilized on TiO2 NPs. The phosphonic anchoring group is chosen due to its good affinity with metal oxide surfaces and better stability than carboxylic group as described in Chapter 1. The asymmetric complex bearing only one phosphonic acid group prevents the possible binding to two adjacent nanoparticles, thus it may be potentially helpful for the charge injection to TiO2 semiconductor and simplifies the kinetics studies. It has also been demonstrated that additional phosphonic groups decrease the rate of photo-induced electron injection to TiO2 as mentioned in Chapter 1. 3 A long alkyl chain separating the [Ru(bpy)3] 2+ photosensitizer and TiO2 substrate may reduce both electron injection and back electron transfer rates between the PS and TiO2. 4 It is however noted that the Ru-complex with such additional methylene groups may be less electrochemically stable than that bearing a 4,4'-(PO3H2)2-bpy ligand under an applied potential of 1.5 V vs Ag/AgCl for 16 hours. 3 In order to study the effect of the phosphonic functional group on the electrochemical The extinction coefficient for the MLCT transition exhibits a minor variation: = 13000 M -1 .cm -1 for [Ru(bpy)3] 2+,5 and 14100 M -1 .cm -1 for [Ru-PO3Et2] 2+ .
II.2. Phosphonate-derivatized
After photoexcitation at 450 nm, the complexes exhibit a strong emission, with maxima at 608 nm for [Ru(bpy)3] 2+ and about 615 nm for the other two complexes. The maximum of emission of [Ru(bpy)3] 2+ is in accordance with literature. 6 The shape of the emission spectrum is broad and identical for all the complexes. The result indicates that the phosphonate group has negligible effect on the emission of this photosensitizer. The substituted alkyl chain is an electron donating group to the bpy ligand. A similar red shift in the emission spectrum has been reported for the methylene group attached to [Ru(bpy)3] 2+ complex. 3 Maestri et al. 7 explained the shift as follows: after the photo-induced MLCT process, the HOMO (t2g) metal orbital is destabilized causing the HOMO-LUMO gap to reduce. Therefore the emission maximum is shifted to longer wavelengths. II-1).
b) Time-resolved emission spectroscopy
The time-resolved emission of Ru This lifetime is in accordance with the one recorded by time-resolved emission spectroscopy.
II.2.3. Electrochemical properties
The [RuP] 2+ and [Ru-PO3Et2] 2+ complexes exhibit in the anodic part a one-electron reversible oxidation wave centered on the metal at E1/2 = 0.91 V. In the cathodic part, they show three successive one-electron reduction processes centered on the bpy ligands at E1/2 = -1.66, -1.86 and -2.12 V. The reduction waves are reversible, with the first one producing [Ru II (bpy -)(bpy)2] + , denoted hereafter as Ru I .
In the anodic scan, the Ru II complexes bearing a dimethyl bipyridine group show a shift to lower oxidation potentials compared with the [Ru(bpy)3] 2+ reference in accordance with the complex bearing an electron-donating substituted group on bpy ligands. 3 In the cathodic scan, the third reduction potential of the functionalized complexes is significantly shifted to a more negative potential (-2.12 V compared with -2.07 V) whereas the first two reduction potentials are comparable with that of [Ru(bpy)3] 2+ . This change could be explained by the charge localization on the bpy ligand, while the metal oxidation state remains unchanged. 5 Since the dimethyl is an electron donating group, the first and second reduction processes likely occur on the un-substituted bpy ligands, and the last reduction is on the bpy bearing the dimethyl group. To study the TiO2/Ru II NPs, we first prepared the analogous SiO2/Ru II as a reference. In this system there are no charge injections to the SiO2 due to an extremely negative potential of the conduction band (CB) of SiO2. The SiO2 NPs was synthesized from tetraethyl orthosilicate (TEOS) precursor following the Stober's sol-gel procedure 11 with some modifications. The procedure is schematically presented in Scheme II-3. The sonication was employed instead of mechanical stirring in order to obtain a higher dispersity. 12 The rate of addition of NH3 catalyst has been proved to determine the NPs size. 12 With NH3 dropwise addition, the NPs possess a nearly spherical shape, as shown in the Transmission Electron The synthesis of TiO2 NPs also followed a sol-gel procedure in an acidic condition with titanium(IV) isopropoxide (TTIP) as precursor. 13 The synthesis route is schematically demonstrated in Scheme II-4. TiO2 exhibits three main crystalline phases: anatase, rutile and brookite. The synthesis was carried out with a high water-to-titanium ratio, aiming at obtaining uniform, sub-20 nm anatase TiO2 particles. 13 The slow addition of HCl catalyst has been shown to be critical for obtaining nanometer-sized particles. The anatase phase of TiO2 was chosen in this study for (i) being a better photocatalyst than rutile, 14 and (ii) having less types of surface charge traps than P25 Degussa TiO2 NPs, facilitating the study of charge transfer mechanisms on/through the semiconducting particles. It should however be noted that the P25 TiO2 comprising ~75 % anatase and 25 % rutile has been demonstrated as the best TiO2 photocatalyst. 15 The anatase TiO2 was readily formed by annealing the as-synthesized powder at 450 0 C for 2 hours. For a comparative study, we also prepared the anatase/rutile mixed phase TiO2 by annealing at 700 0 C. It has been reported that the anatase to rutile phase transformation occurs at around 700 -800 0 C 16 because the rutile is more thermodynamically stable than anatase. 17 37.9 0 , 48.1 0 , 54.0 0 and 55.0 0 which could be ascribed to the (101), ( 103)+(004)+( 112), ( 200), ( 105) and (211) lattice planes of the anatase phase, respectively. The sample annealed at 700 0 C exhibits, beside the mentioned peaks of anatase, additional peaks at 2 = 27.4 0 , 36.1 0 , 41.2 0 , 54.3 0 and 56.6 0 (marked with asterisks). These peaks could be assigned to the (110), ( 101), ( 111), ( 211) and ( 220) lattice planes of the rutile phase, respectively. Integrating the area under the anatase and rutile characteristic peaks shows the ratio of anatase/rutile = 8/2. In both samples, the characteristic peak of brookite (30.8 0 ) is not observed.
The crystallite size of the NPs can be estimated from their XRD patterns using the Scherrer equation:
d = 0.9 λ / [(β − β0) cos θ]    (Eq II-2) 17
where λ is the X-ray wavelength (1.5406 Å for CuKα), β0 is the instrumental line broadening (β0 = 0.12 rad), β (rad) is the line broadening at half of the maximum intensity (FWHM) and θ (degree) is the Bragg angle. The XRD peaks become narrower as the crystallite size gets larger. The crystallite size is, therefore, estimated as 10 nm for anatase TiO2 NPs. For anatase/rutile TiO2
NPs, the sizes of the anatase and rutile domains are 91 nm and 204 nm, respectively. The QCM-D measurements give frequency shifts of Δf = −1.3 and −14.4 Hz for the SiO2 and TiO2 sensors, respectively. Using Equation II-3, one can calculate the increased mass of 23 ng.cm-2 (SiO2) and 255 ng.cm-2 (TiO2). The surface coverage is then calculated as Γ = 2.3 × 10-11 mol.cm-2 and 2.5 × 10-10 mol.cm-2 for SiO2 and TiO2, respectively, provided that the mass of coupled solvent is neglected. They are equal to 0.1 molecule.nm-2 for SiO2 and 1.5 molecules.nm-2 for TiO2. The contact angle of the modified surfaces with water is found to be quite identical: (24.4 ± 1.6)° for SiO2/Ru II and (22.4 ± 1.7)° for TiO2/Ru II , implying a similar outer layer. It also suggests a similar degree of solvation for both films. Given that the initial surfaces have a contact angle of ~10° (SiO2) and ~4° (TiO2), the modified ones become clearly more hydrophobic, due to the bpy ligands. The experiments prove that the dye adsorption onto SiO2 and TiO2 surfaces could be achieved under ambient conditions within several hours. The TiO2 surface coverage is an order of magnitude higher than on SiO2, as the mass of coupled solvent is expected to be the same for both films. It could be explained by the higher stability of the Ti-O-P bond compared with the Si-O-P one. 19
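Returning to the Scherrer estimate of Eq. II-2, the calculation can be sketched as follows. The FWHM and instrumental-broadening values used here are illustrative assumptions chosen to reproduce a crystallite size of about 10 nm; they are not the measured ones.

```python
import numpy as np

def scherrer_size_nm(two_theta_deg, fwhm_deg, fwhm_instr_deg=0.0, wavelength_nm=0.15406, k=0.9):
    """Crystallite size from Eq. II-2: d = k*lambda / ((beta - beta0) * cos(theta))."""
    theta = np.radians(two_theta_deg / 2)
    beta = np.radians(fwhm_deg - fwhm_instr_deg)   # corrected line broadening in rad
    return k * wavelength_nm / (beta * np.cos(theta))

# anatase (101) reflection at 2*theta ~ 25.3 deg; an assumed FWHM of ~0.85 deg
# (minus ~0.03 deg of instrumental broadening) gives d ~ 10 nm
print(round(scherrer_size_nm(25.3, 0.85, 0.03), 1))
```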
II.3.3. Dye sensitization with [RuP] 2+ on colloidal nanoparticles
The [RuP] 2+ -dye sensitization of TiO2 and SiO2 NPs was achieved by mixing the particles and the complex in ethanol/acetone (8/2 v/v) solution at room temperature. The NPs were then centrifuged and washed with the solution multiple times until the ungrafted complex in the supernatant solution cannot be detected anymore with UV-vis spectroscopy.
By measuring the UV-vis absorbance of the solution after each centrifugation, we estimate the Ru II loading as 0.19 mmol.g -1 TiO2 (both anatase and anatase/rutile) and 0.02 mmol.g -1 SiO2
(< 10 nm) NPs. It is in good agreement with the QCM-D experiment on the flat surfaces where the surface coverage on TiO2 is about ten times higher than SiO2.
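The QCM-D numbers quoted earlier can be reproduced with the Sauerbrey relation Δm = −C·Δf, assuming the standard mass-sensitivity constant C = 17.7 ng cm⁻² Hz⁻¹ of a 5 MHz crystal and a molar mass of roughly 1000 g mol⁻¹ for the grafted dye; both values are assumptions made only for this illustration.

```python
C = 17.7          # ng cm-2 Hz-1, Sauerbrey constant assumed for a 5 MHz sensor
M = 1.0e3         # g mol-1, approximate molar mass assumed for the grafted [RuP]2+ dye

for surface, df in [("SiO2", -1.3), ("TiO2", -14.4)]:
    dm = -C * df                        # adsorbed mass, ng cm-2  -> ~23 and ~255
    gamma = dm * 1e-9 / M               # surface coverage, mol cm-2 -> ~2.3e-11 and ~2.5e-10
    per_nm2 = gamma * 6.022e23 * 1e-14  # molecules per nm2 (1 cm2 = 1e14 nm2)
    print(surface, round(dm), f"{gamma:.1e}", round(per_nm2, 1))
```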
The main advantage of the heterogenization of the photosensitizer complex on NPs is their high surface area and high dispersibility in a variety of solvents. However, both the intact and sensitized NPs with diameter less than 10 nm show rapid precipitation in many solvents such as MeCN, ethanol, acetone, DMF and water. Large aggregates of bare NPs were observed with SEM and TEM. The precipitation prohibits the formation of a stable colloid for further studies.
As a comparative study, we then employed commercially available nanopowders of SiO2 (d < 25 nm) and anatase TiO2 (d < 20 nm) produced by Aldrich, which show better dispersibility in MeCN and water. After the dye sensitization with [RuP] 2+ , the particles still retain their dispersibility, forming a colloid stable for several days. The loading of Ru II on these NPs is estimated as 0.21 mmol.g -1 TiO2 and 0.076 mmol.g -1 SiO2. Thanks to the great dispersibility of these NPs, they were chosen for further studies in this thesis. The spectral changes are characteristic of the grafting of [RuP] 2+ on TiO2 surface.
In the case of SiO2/Ru II , a strong, broad Si-O stretching band at about 1100 cm -1 precludes the detection of the phosphonate surface linkage, which is expected at ~1150 cm -1 . 21
However the spectrum of SiO2/Ru II displays a broader signal in this region compared to that of naked SiO2 NPs. The P=O stretch at 1250 cm -1 partially remains on SiO2/Ru II . The grafting of [RuP] 2+ is then expected to be more stable on TiO2 than on SiO2. Samples were measured in KBr pellets at room conditions.
II.3.5. Electrochemical properties
a) [RuP] 2+ grafted on FTO
The electronic communication between grafted [RuP] 2+ and a metal oxide electrode was studied using a Fluorine-doped Tin Oxide (FTO) coated glass electrode. FTO was chosen because (i) the phosphonic group is suitable to be grafted on the surface, 22 and (ii) it is rather redox inactive in the range of Ru A quasi-reversible reduction peak at -1.25 V associated by an oxidation on the reversed scan at -0.65 V was detected. This system is attributed to the reduction of phosphonate shows that the charge injection to the CB occurs at around -0.6 V vs SCE (-0.9 V vs Ag/AgNO3 0.01 M). 25 In the CV of TiO2/Ru II NPs, the peak due to the reduction of TiO2 NPs is so intense that the detection of a small quantity of [Ru(bpy)3] + centered at -1.6 V, if any, is unfeasible.
a) UV-vis absorption spectroscopy and emission spectroscopy
The UV-vis absorption of SiO2/Ru II and TiO2/Ru II NPs was investigated in solid state using an integration sphere. The powders were dispersed in a KBr pellet as a transparent substrate in the UV-vis region. Both SiO2/Ru II and TiO2/Ru II NPs show a similar broad emission spectrum after being excited at 450 nm. While the emission peak of the former (612 nm) is comparable to the [Ru-PO3Et2] 2+ complex (615 nm), that of the latter is red-shifted to 627 nm. Since the Ru II loading on TiO2 is 3 times higher than on SiO2 NPs, the redshift could be a consequence of a selfquenching process between adjacent Ru II species on the same NP. Using Equation II-1, the oxidation potential of grafted Ru II* excited state is calculated as -1.11 V for SiO2/Ru II and -1.00 V for TiO2/Ru II . free complex in solution, which may be a consequence of an additional energy transfer deactivation pathway between adjacent Ru II centers on the NPs. The time-resolved luminescence decay shows that the energy quenching is more pronounced for TiO2/Ru II than SiO2/Ru II . It is probably due to the higher loading of Ru II on TiO2 than on SiO2: about 2 molecules/nm 2 TiO2 surface area compared with 0.5 molecules/nm 2 SiO2 surface area.
Comparing the lifetimes of TiO2/Ru II* state with or without BPA spacers, one can observe a significant drop in the lifetime value (32 to 8 ns) associated with an increase in the fractional amplitude. It could be explained by the separation of grafted Ru II species favoring the electron injection to TiO2 and diminishing the energy transfer pathway. than on TiO2 (0.21 mmol.g -1 ). The charge injection rates of TiO2/Ru II NPs to TiO2 are significantly lower than that reported for a phosphonic-derivatized Ru-dye immobilized on a nanocrystalline TiO2 electrode, 27 but they are in accordance with previous works in colloidal solutions. 28,29 The rates of energy transfer are comparable to a Ru-carboxylic dye grafted on a TiO2 thin film electrode. 30 In the presence of a lateral spacer, the electron injection is significantly enhanced in terms of kinetics and percentage. where D is the quantum yield of the donor, n is the refractive index of the solvent, is the orientation factor (usually equal to 2/3 for a chromophore free to rotate in solution) and J (M -1 .cm 3 ) is the overlap integral between the absorbance and emission spectra. The overlap integral J is thus determined to be 4.4 × 10 -15 M -1 .cm 3 and the Forster distance Ro is 1.9 nm. The FRET efficiency and Ru-Ru distance on SiO2 and TiO2 are summarized in Table II-6. Excitation light was chosen at 532 nm and a UV filter was placed in front of the lamp to avoid direct bandgap excitation of TiO2 NPs.
We first investigated the TA spectrum of [RuP] 2+ complex grafted on SiO2 NPs and compare it with the spectrum of [Ru-PO3Et2] 2+ free complex which has been presented in
(Eq II-11)
This equation is conveniently fit with a power function.
k_cr = [ (τKWW / βKWW) Γ(1/βKWW) ]⁻¹    (Eq II-12)
The recombination kinetics is estimated as (140 ± 30) s -1 . This result indicates recombination kinetics 5 orders of magnitude slower than the kinetics of electron injection to TiO2. It also shows that a longer linker between the [Ru(bpy)3] 2+ PS and the phosphonic group reduces the charge recombination rate more significantly than shorter linkers: for example, without methylene groups the [Ru(dtb)2(dcb)] 2+ grafted on a TiO2 thin film exhibits recombination kinetics of ~1 × 10 3 s -1 . 32 Such a very slow recombination process allows efficient charge injection and utilization of the photo-generated CB electrons and oxidized "hole" dye molecules on the TiO2 surface in successive redox reactions.
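For illustration, the sketch below shows how a stretched-exponential (KWW) fit of a recombination trace yields a single characteristic rate. The synthetic data, τKWW and βKWW are illustrative values only, and quoting k_cr as the inverse of the mean relaxation time is one common convention rather than necessarily the one used for the value above.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import gamma

def kww(t, a0, tau, beta):
    """Stretched-exponential (KWW) decay."""
    return a0 * np.exp(-(t / tau) ** beta)

t = np.linspace(1e-5, 5e-2, 500)                          # s, synthetic time axis
data = kww(t, 1.0, 5e-3, 0.6) + 0.01*np.random.randn(t.size)
(a0, tau, beta), _ = curve_fit(kww, t, data, p0=(1.0, 1e-3, 0.5))

# mean relaxation time <tau> = (tau/beta)*Gamma(1/beta); k_cr = 1/<tau>
k_cr = 1.0 / ((tau / beta) * gamma(1.0 / beta))
print(round(tau, 4), round(beta, 2), round(k_cr))          # k_cr on the order of 1e2 s-1
```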
II.3.7. Electron paramagnetic resonance spectroscopy
Electron paramagnetic resonance (EPR) spectroscopy has been employed to identify the sites of photo-induced trapped electrons on TiO2 NPs as they are paramagnetic. Upon photon absorption at 455 nm by the [Ru(bpy)3] 2+ dye, an electron is injected to the CB of TiO2 and then trapped in Ti 3+ sites, which will be called here "electron traps". The charge separated state (e -)TiO2/Ru 3+ was then created and directly observed with the EPR spectroscopy at 20 K.
We first examined different irradiation methods:
- Ex situ irradiating an EPR tube containing TiO2/Ru II colloid in water or MeCN by a 455 nm LED for 2-3 hours, then freezing the tube with liquid nitrogen
- In situ irradiating an EPR tube containing the powder or colloid of interest by immersing an optical fiber inside
Our experiments show that the ex situ irradiation method does not produce enough paramagnetic species to be observed. It is probably due to charge recombination between (e -)TiO2 and Ru 3+ sites occurring when the tube is transferred from the dewar to the EPR cavity. In situ irradiation with a fiber immersed inside the tube containing TiO2/Ru II in powder form or in colloid does not improve the signal. Comparing the powder form and colloid, we found that the colloid gives better signal, possibly due to its better ability to diffuse light throughout the sample. Therefore, a colloid in MeCN and in situ irradiation from the wall of the EPR cavity have been chosen for further studies. MeCN solvent was used in order for the EPR experiment to be consistent with previously described characterizations.
Details of the experimental setup can be found in the Experimental Section. The observed EPR signal is assigned to electrons trapped at Ti 3+ sites, 34,35 while the electrons on the CB of TiO2 are EPR silent. 36 There are also no trapped holes (usually denoted as O -centers in literature) 35 detected in this study, proving that the electron transfer from Ru 2+* to TiO2 generates the signal instead of electron promotion from the VB to the CB of TiO2. The signal of Ru 3+ is, however, not detectable. This can be explained by the fact that its EPR spectrum is around 70 times broader than the spectrum of Ti 3+ (~3500 G for Ru 3+ compared with ~50 G for Ti 3+ ), so the EPR signal corresponding to Ru 3+ should appear roughly 70 × 70 = 4900 times smaller than the EPR signal corresponding to Ti 3+ . Therefore, the Ru 3+ signal is not detectable. The signal reproducibility upon changing temperature proves that the electron transfer process is reversible.
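The factor-of-4900 estimate used above follows from the fact that, for a fixed number of spins (fixed double integral), the peak-to-peak amplitude of a first-derivative EPR line scales roughly as the inverse square of its width. A two-line check, using the approximate widths quoted above:

```python
width_Ti, width_Ru = 50.0, 3500.0                 # G, approximate spectral widths quoted above
relative_amplitude = (width_Ti / width_Ru) ** 2   # ~2.0e-4
print(relative_amplitude, 1 / relative_amplitude) # i.e. the Ru3+ line is ~4900 times smaller
```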
In radicals. 37 In our process it may come from the photocatalytic degradation of ethanol 38,39 as it may not be completely removed by heating in an oven at 80 0 C.
II.4. Conclusion
In summary, the results indicate a nanosecond injection rate from Ru II* to TiO2 with ~ 90 % yield, followed by a millisecond charge recombination rate, thus proving an efficient, long-lived charge separated state. The addition of lateral spacers to separate neighboring Ru II species shows a marked increase in the kinetics and the yield of electron injection to TiO2, which is attributed to the reduced energy transfer between the photosensitizers. The FRET study also reveals an average distance of 2.5 nm (SiO2/Ru II ) and 1.8 nm (TiO2/Ru II ) between neighboring Ru II species on the surface. Finally, the photo-induced charge separated state (e -)TiO2/Ru 3+ has been directly observed in situ by EPR spectroscopy. The mechanistic studies in this Chapter serve as a fundamental basis for the following chapters, where additional redox-active components are anchored on the TiO2 NPs.
CHAPTER III CHROMIUM(III) BIS-TOLYLTERPYRIDINE COMPLEXES AS ELECTRON ACCEPTORS
Abstract
In this chapter, the rich electrochemical and photophysical properties of [Cr(ttpy)2] 3+ (ttpy = 4'-(p-tolyl)-2,2':6',2''-terpyridine) complex will be discussed. Different redox states of [Cr(ttpy)2] n+ (n = 3, 2, 1, 0) complex have been studied with UV-vis absorption spectroscopy and Electron Paramagnetic Resonance (EPR) spectroscopy. DFT calculations show that all the reduction steps occur on the ttpy ligands. The [Cr(ttpy)2] 3+ complex can be doubly reduced to [Cr(ttpy)2] + by [Ru(bpy)3] 2+ photosensitizer under visible irradiation in presence of triethanolamine (TEOA), showing its ability to store two reduction equivalents on the Cr complex. This multi-electron storage ability can also be achieved with a mixture of TiO2/Ru II nanoparticles (NPs), [Cr(ttpy)2] 3+ and TEOA in solution. Furthermore, the doubly reduced [Cr(ttpy)2] + complex shows some activity as a catalyst for the proton reduction reaction in a non-aqueous solution. Afterwards, the [Cr(ttpy)2] 3+ alone or in association with the [Ru(bpy)3] 2+ complexes are grafted on TiO2 NPs via phosphonic acid as anchoring group to form TiO2/[Cr2P] 3+ dyad and Ru II /TiO2/[Cr2P] 3+ triad structures. In contrast to the free [Cr(ttpy)2] 3+ complex in solution, the Ru II /TiO2/[Cr2P] 3+ triad does not show the ability to store electrons on the [Cr2P] 3+ units under continuous irradiation in the presence of TEOA. It may be due to accelerated kinetics of back electron transfer between grafted transient [Cr2P] 2+ and Ru 3+ species in close proximity at the surface of the NPs.
Résumé
III.1. Introduction
III.1.1. Chromium(III) polypyridyl complexes
Among the polypyridyl complexes of first-row transition metals, Cr(III) complexes still remain underexplored in comparison with notably Co(III) and Fe(II) complexes. 1 Herein, the photophysics and electrochemistry of [Cr(tpy)2] 3+ (tpy = 2,2':6',2''-terpyridine) and [Cr(bpy)3] 3+ (bpy = 2,2'-bipyridine) complexes will be briefly summarized to emphasize their rich properties. For more detail the reader is kindly referred to recent reviews by Kane-Maguire 2 and Ford 3 .
The hexacoordinated Cr(III) complexes possesses an [Ar]3d 3 configuration of the metal ion and an octahedral symmetry. 2 A simplified Jablonski energy diagrams of relevant energy levels is shown in Scheme III-1. In the ground state, the three electron occupy three t2g orbitals rendering the spin state S = 3/2. The quartet ground state is hence denoted 4 A2g. Upon photon absorption, an electron can be promoted from a t2g orbital to one of the two unoccupied eg orbitals without changing its spin state. Depending on the orientation of the t2g and eg orbitals involved in this electron promotion, two energy levels are formed, namely 4 T2g and 4 T1g. The former has a lower energy level. As the electron is relaxed to the lowest energy level 4 T2g, it can return to the ground state by fluorescence emission or non-radiative decays. complexes, also exhibit a similar trend. The authors proposed that due to a more open structure in the tpy complexes, the metal core is more exposed to solvent molecules which can enhance the non-radiative relaxation of its excited state. Consequently, the emission lifetime is lower than in the bpy complexes. In addition, quenching of the 2 Eg states of [Cr(bpy)3] 3+* by I -, Fe 2+ or O2 has also been shown to be efficient. The bimolecular quenching rate is ranging from 10 7 M -1 .s -1 (for O2) to 10 9 M -1 .s -1 (for I -).
Furthermore, homoleptic Cr(III) bipyridyl and terpyridyl complexes also show rich electrochemical properties. For example, Wieghardt and co-workers 5 reported that [Cr(tpy)2] 3+
shows four reversible one-electron reduction processes. Based on density functional theory (DFT) calculations supported by a wide range of spectroscopic techniques, they attributed all four added electrons to ligand-centered reduction processes, while the Cr oxidation state remains +3. The study suggests the possibility to accumulate multiple electrons on the complex, which is a crucial step towards multi-electron catalysis such as proton or CO2 reduction. The complex has also been proposed as a potential candidate for water reduction because water molecules, in the solid state, are found to stay close to the Cr(III) core of the [Cr(tpy)2] 3+ complex encompassed by two orthogonal tpy ligands. 6 However, this proposed catalytic reaction has not been published yet.
An important and recent progress in this field is the successful synthesis of heteroleptic Cr(III) polypyridyl complexes. 7,8 It opens a new route to functionalize the tpy ligands with suitable anchoring groups to graft the complex onto surfaces or to synthesize dyads in order to explore its capability of storing multiple charges at a single Cr(III) site.
III.1.2. Accumulation of multiple redox equivalents in molecular-semiconductor hybrid systems
In the search for sustainable energy sources to replace traditional but polluting fossil fuels, hydrogen produced by the proton reduction reaction stands out as a clean energy carrier.
Besides, the capture and conversion of the infamous greenhouse gas CO2 into more value-added energy sources has also received particular attention in recent years. However, both the water reduction to form H2 and the CO2 reduction using molecular catalysts require the accumulation of more than one electron on the catalytic sites. For instance, the proton reduction to H2 is a two-electron reduction process. We are interested in molecular systems capable of accumulating multiple redox equivalents under visible light. In this part we will highlight the challenges associated with multiple charge accumulation processes and some recent advances in the field, with a focus on a novel class of molecular-semiconductor hybrid systems. Detailed reviews on multi-charge storage and catalytic applications can be found in the literature. 9,10 According to Hammarstrom, 9 charge accumulation faces several major challenges such as: (i) an efficient PS to harvest light and to selectively inject the photo-excited electron to A, (ii) suitable energy levels of the components, (iii) a sufficiently long lifetime of the charge separated state D + -PS-A -, (iv) chemically stable A -and D + species. Therefore, it is challenging to design purely molecular systems for charge accumulation. Redox relays are sometimes required to suppress the back electron transfer steps. The competing back electron transfer processes and the difficulties to synthesize these supramolecules are among the main barriers.
Meanwhile, molecular-semiconductor (TiO2) hybrid systems have several advantages over the purely molecular systems, such as: (i) easier synthesis of the individual components, (ii) a long-lived charge separated state (e -)TiO2/Ru 3+ (see Chapter 2), (iii) TiO2 as a scaffold for grafting a wide range of complexes. This chapter is hence dedicated to this class of hybrid materials (two examples have already been described in Section I.3).
a) Accumulation of multiple oxidative equivalents
The first example of the accumulation of multiple oxidative equivalents using a regenerative PS and a multi-electron donor has been reported by Hammarstrom and coworkers 11 . The authors covalently linked a [Ru(bpy)2(CN)] 2+ PS to an oligo(triarylamine) (OTA) electron donor, and the Ru-OTA complex was grafted onto TiO2 nanoparticles (NPs) (Scheme III-3). After the first laser pulse excitation at 510 nm, the Ru 2+* state injects an electron to TiO2 to form (e -)TiO2/Ru 3+ within 5 ps. A single electron transfer from OTA regenerates the Ru 2+ dye in less than 1 ns. The charge separated state (e -)TiO2/Ru II -OTA + is then formed with ~ 100 % yield. Afterwards, a second laser pulse at 480 nm excites the Ru 2+ dye and another electron is injected to TiO2, followed by the rapid reduction of Ru 3+ by OTA + (τ << 15 ns). The resulting multiple charge separated state, (2e -)TiO2/Ru 2+ -OTA 2+ , has been shown to form with a remarkable, nearly 100 % yield, which was attributed to the efficient separation between the A sites (TiO2) and the D sites (OTA). The photo-induced electron transfer is schematically described in Scheme III-3.
Scheme III-3. Accumulation of two oxidative equivalents on OTA site of TiO2/Ru-OTA NPs which is mentioned in reference 11. Note: OTA + and OTA 2+ oxidative charges are delocalized over the whole OTA unit.
The pioneering work by Hammarstrom's group has motivated other researchers to develop this D-PS-A design for catalytic applications. For example, T. J. Meyer and coworkers reported a PS-molecular catalyst dyad anchored on a TiO2 thin film electrode for water oxidation in aqueous solution 12 (Scheme III-4a). They covalently grafted a Rua II -Rub II -OH2 dinuclear complex on a mesoporous TiO2 thin film electrode, where Rua II acts as a visible light PS and Rub II -OH2 is a known pre-catalyst for water oxidation to produce O2. 13 Steady-state and transient spectroscopic studies have been conducted to probe the formation of transient species under irradiation and their associated electron transfer kinetics. After the first laser pulse, the Rua II PS is excited and quickly injects an electron to the TiO2 film in less than 20 ns, followed by electron transfer from the Rub II site to transiently form the (e -)TiO2/Rua II -Rub III -OH species with ~ 10 % quantum yield. When a bias is applied to pre-oxidize the system to form TiO2/Rua II -Rub III -OH, irradiation excites an electron of the Rua II PS, followed by electron injection to TiO2. Subsequently, another mono-electron transfer occurs between the Rub III -OH and Rua III species to form Rua II and Rub IV =O within 20 ns and with < 15 % yield. The work has provided experimental spectroscopic evidence for sequential mono-electron transfer steps towards the accumulation of two oxidative equivalents in Ru IV =O centers, which are the catalytic centers for the water oxidation reaction 13 . Scheme III-4a schematically illustrates the electron transfer events.
As a comparison, T. J. Meyer and co-workers also applied a similar concept, but a Rua II PS and a Rub II molecular catalyst were co-loaded on a TiO2 thin film electrode (Scheme III-4b) instead of being covalently linked together. 14 The first photoexcitation and electron injection also occur in less than 20 ns. The lateral hole hopping process between Rua III and Rub II -OH2 to yield Rua II and Rub III -OH is accomplished within hundreds of microseconds, which is significantly slower than for the covalently linked Rua-Rub-OH2 complex. However, when Rub II -OH2 is pre-oxidized to Rub III -OH by an applied bias, no signal of Rub IV =O is detected when the Rua/Rub molar ratio is around 1.8:1. This is attributed to the increased recombination rate between (e -)TiO2 and the oxidized Rua III or Rub III -OH species. Increasing the Rua/Rub ratio to 6.4:1 indeed allowed the formation of the desired Rub IV =O species.
b) Accumulation of multiple reductive equivalents
On the other hand, efforts have been made to accumulate multiple reductive equivalents under visible light. From a purely molecular approach, Wasielewski and co-workers 15 made the first report using perylene bis(carboxyimide) (PBDCI) as an electron acceptor connected with two porphyrins (P2H) as PS (Scheme III-5). Under laser excitation, both P2H units can be simultaneously excited to the singlet state, then sequentially transfer an electron to the PBDCI moiety to achieve a doubly reduced species, PBDCI 2-. The resulting charge separated state, +H2P-PBDCI 2--P2H +, shows a relatively short lifetime of ~ 5 ns. Interestingly, this work emphasizes the role of the excitation light intensity for the charge accumulation on the PBDCI unit.
Scheme III-5. Molecular structure of H2P-PBDCI-P2H mentioned in reference 15
To date, most of the works rely on purely molecular systems. 9,10,16,17 In this chapter we will first discuss the electrochemical and photophysical properties of the [Cr(ttpy)2] 3+ (ttpy = 4'-(p-tolyl)-2,2':6',2''-terpyridine) complex. The ttpy ligand was chosen instead of tpy for the convenience of subsequent functionalization of the methyl group with phosphonic acid to be anchored on TiO2. Multiple electron storage induced by visible light will also be discussed, as well as the electrocatalytic activity of the homogeneous complex towards proton reduction. Afterwards, the preparation and characterization of TiO2/Cr 3+ and Ru 2+ /TiO2/Cr 3+ NPs following the PS-TiO2-A design will be thoroughly described.
III.2. Chromium(III) bis-tolylterpyridine complexes in solution III.2.1. Synthesis
The synthesis of the [Cr(ttpy)2] 3+ complex with ClO4 -counter anions is depicted in Scheme III-7. The starting material was chosen to be CrCl2 thanks to the lability of the Cr 2+ ion; indeed, Cr 3+ is known to be inert, which makes direct complexation difficult. After the complexation between the Cr 2+ salt and the ttpy ligand under Ar atmosphere, an excess amount of NaClO4 was added to precipitate the complex out of the water/ethanol solvent. The reaction mixture was then bubbled with air to yield the desired product [Cr(ttpy)2] 3+ . It has been shown that this strategy may prevent the dissociation of the ttpy ligand in the presence of water, as water can coordinate to the Cr 3+ metal core. 19 As the sample is paramagnetic, it cannot be investigated with 1 H NMR spectroscopy.
The complex shows no oxidation waves when the potential is swept up to 1.5 V vs Ag/AgNO3 0.01 M. In the cathodic scan, it shows four successive one-electron reduction processes at E1/2 = -0.47, -0.85, -1.35 and -2.25 V. All the waves show a peak-to-peak splitting ΔEp of around 60 -70 mV, indicating electrochemically reversible processes. The redox potentials are in accordance with the literature. 7 The analogous [Cr(tpy)2] 3+ complex also shows comparable redox potentials: E1/2 (V) = -0.44, -0.86, -1.36 and -2.28 vs the Ag/AgNO3 0.01 M reference electrode. Although the ttpy ligand in this study contains a tolyl substituent at the para position, which is a weak electron-donating group, its effect on the redox potentials is negligible.
It can be seen that the spin number remains around 3 in the whole series, suggesting that the oxidation state of the Cr atom is always +3. The spin number slightly increases from the +3
to -1 oxidation states, i.e. up to four-electron reduction. Furthermore, the excited states of [Cr(ttpy)2] 0 and [Cr(ttpy)2] -1 complexes have also been investigated. The result suggests that all four added electrons are mainly localized on the ttpy ligands. Therefore, the electronic structure of [Cr(ttpy)2] n+ complexes at ground states can be described as:
n = 2: [Cr III (ttpy)(ttpy -)] 2+
n = 1: [Cr III (ttpy -)2] +
n = 0: [Cr III (ttpy -)(ttpy 2-)] 0 (ground state)
n = -1: [Cr III (ttpy 2-)(ttpy 2-)] 1- (ground state)
The results for these ground states are in accordance with those of Wieghardt et al. 5 on the analogous complexes [Cr(tpy)2] n+ (tpy = 2,2':6',2"-terpyridine) obtained with the B3LYP functional.
One computed transition is assigned to the intraligand charge transfer between the tolyl and tpy groups, while the transition at 767 nm is assigned to the charge transfer between the tolyl group and the metal center. Therefore, the overall broad absorption band at 800 nm is a consequence of different charge transfer processes which finally lead to charge transfer between the ttpy ligand and the Cr metal center. The whole process is thus described as a ligand-to-metal charge transfer (LMCT). The calculation also shows a strong electronic transition at 532 nm (18800 cm -1 ) which can be assigned to the strong absorption peak at 500 nm. The lower-energy transitions also involve charge transfer between the tolyl and tpy moieties. Therefore, the whole process can again be described as a LMCT transition. Moreover, this calculation also shows strong electronic transitions at 485 nm (20600 cm -1 ) and 559 nm (17900 cm -1 ) which can be assigned to the strong absorption peaks at 524 nm and 560 nm.
b) Steady-state and time-resolved emission spectroscopy
The steady state luminescence emission of [Cr(ttpy)2] 3+ complex in MeCN solution under Ar has been studied by irradiating the sample at 370 nm. This wavelength was chosen as it is the absorption maximum of the complex. The emission spectrum is shown as dotted line in Figure III-6, in comparison with the absorption spectrum of the complex (solid line).
The emission, whose maximum is at 770 nm, is very weak. Its quantum yield at room temperature is estimated to be less than 10 -3 and is attributed to the relaxation of the 2 Eg state to the 4 A2g ground state. 22 The shoulder at around 730 nm could be due to the relaxation of another doublet state. From the diagram one can calculate the oxidizing power of Cr
III.2.4. Electron paramagnetic resonance spectroscopy
Since the [Cr(ttpy)2] n+ (n = 3, 2, 1) complexes are all paramagnetic, we employed electron paramagnetic resonance (EPR) spectroscopy to study the changes in spin state when the complex is successively reduced. Considering stepwise monoelectronic reductions of the original complex [Cr(ttpy)2] 3+ , the expected spin states of these complexes are as follows: [Cr(ttpy)2] 3+ : S = 3/2; [Cr(ttpy)2] 2+ : S = 1; [Cr(ttpy)2] + : S = 1/2 and [Cr(ttpy)2] 0 : S = 0. Therefore, all the complexes except the last one should be paramagnetic and detectable by EPR spectroscopy.
However, the [Cr(ttpy)2] 2+ complex is EPR silent when an X-band EPR spectrometer is used to record the signal. A similar study for the [Cr(tpy)2] 2+ complex also showed no EPR signal, but its spin state of S = 1 has been confirmed by SQUID (Superconducting Quantum Interference Device) measurements. 5 The [Cr(ttpy)2] 3+ complex (S = 3/2) exhibits a very broad EPR signal.
Photoreduction of [Cr(ttpy)2] 3+ by TEOA under continuous visible irradiation
The inset shows a plateau after ~ 10000 s, suggesting that the transformation is complete.
Based on the absorbance at 723 nm we estimate the second reduction step to occur with around 80 % yield.
This quenching rate is close to the limit of the diffusion rate of species in solution, which is usually around 10 10 M -1 .s -1 . In literature, a similar result was obtained for the quenching of [Cr(ttpy)2] 3+* by a mononucleotide base. 26 The authors recorded KSV constants ranging from 1.7 × 10 4 with guanine to 10
I0/I = τ0/τ = 1 + KSV[TEOA] = 1 + kq·τ0·[TEOA] (Eq III-2)
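As an illustration of how KSV and kq can be extracted in practice, a minimal numerical sketch of such a Stern-Volmer analysis is given below; the concentrations, intensities and the assumed unquenched lifetime τ0 are illustrative placeholders, not the measured data of this work.

```python
# Minimal Stern-Volmer analysis sketch (illustrative data, not the measured values).
# I0/I = 1 + K_SV [Q] = 1 + k_q * tau_0 * [Q]
import numpy as np

# Quencher (TEOA) concentrations in mol/L and normalized emission intensities (placeholders)
conc = np.array([0.0, 0.01, 0.02, 0.04, 0.08])   # [TEOA] / M
I = np.array([1.00, 0.83, 0.71, 0.55, 0.38])     # relative emission intensity

ratio = I[0] / I                                 # I0/I for each concentration

# Linear regression of I0/I versus [Q]; the slope is K_SV and the intercept should be ~1
slope, intercept = np.polyfit(conc, ratio, 1)
K_SV = slope                                     # Stern-Volmer constant, M^-1

tau_0 = 1.0e-6                                   # assumed unquenched lifetime, s (placeholder)
k_q = K_SV / tau_0                               # bimolecular quenching constant, M^-1 s^-1

print(f"K_SV = {K_SV:.1f} M^-1, intercept = {intercept:.2f}, k_q = {k_q:.2e} M^-1 s^-1")
```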
Photoreduction of [Cr(ttpy)2] 3+ by [Ru(bpy)3] 2+ and TEOA under continuous visible irradiation
The previous section shows that the [Cr(ttpy)2] 3+ complex can be excited by visible light and then be photoreduced. However, it is not a good PS due to its low absorbance in the visible region. Therefore, the quest for a better PS is necessary. A recent publication has proposed to substitute the ttpy ligand with a strong electron donor group to shift the absorption of the Cr(III) complex to the visible domain. 22 In our study we use the [Ru(bpy)3] 2+ complex as the PS.
Afterwards, the band at 800 nm gradually disappears while a peak at ~ 720 nm and a shoulder at ~ 560 nm are formed and increase in intensity. These changes correspond well to the [Cr(ttpy)2] + species. During the experiment, the peak at 450 nm is assigned to the MLCT absorption of the [Ru(bpy)3] 2+ PS. It is clear that in both cases, the products are the same. Reaction (1) is more energetically favorable. The [Ru(bpy)3] 2+ complex also absorbs more than the [Cr(ttpy)2] 3+ complex in the visible region due to its higher spectral coverage and extinction coefficient.
Therefore, we assume that reaction (1) is the main quenching mechanism. However, to verify whether reaction (2) can occur or not, we need to find a non-photoactive, redox-active reagent to replace [Ru(bpy)3] 2+ . This chemical should also have a redox potential similar to that of the Ru 3+ /Ru 2+ couple (E1/2 = 0.94 V). Triphenylphosphine (PPh3), with Epa (POPh3/PPh3) = 0.91 V 28 in MeCN solution, has been chosen since it does not absorb in the visible region.
Photoreduction of [Cr(ttpy)2] 3+* by PPh3 under continuous visible irradiation
The monoelectronic photoreduction of the [Cr(ttpy)2] 3+ complex by PPh3 under visible irradiation has therefore been investigated. In our study, when the [Ru(bpy)3] 2+ complex is used instead of PPh3 (reaction 2 mentioned above), the quenching of [Cr(ttpy)2] 3+* by [Ru(bpy)3] 2+ can then be possible.
Quenching of [Ru(bpy)3] 2+* excited state by [Cr(ttpy)2] 3+
In order to study the kinetics of the quenching reaction between [Ru(bpy)3] 2+* and [Cr(ttpy)2] 3+ (reaction 1 above), a Stern-Volmer experiment has been carried out. 19 A linear plot of τ0/τ vs the concentration of [Cr(ttpy)2] 3+ was obtained upon excitation at 460 nm and the quenching constant kq was found to be ~ 3 × 10 9 M -1 .s -1 . As there is no overlap between the emission spectrum of [Ru(bpy)3] 2+* and the absorption spectrum of [Cr(ttpy)2] 3+ , the quenching reaction cannot occur via an energy transfer mechanism. Therefore, we attribute this quenching process to an electron transfer mechanism. When [Cr(bpy)3] 3+ was used instead of [Cr(ttpy)2] 3+ , a very similar kq value was found (kq = 3.3 × 10 9 M -1 .s -1 ). 29 This quenching constant is one order of magnitude higher than the constant reported for a related quenching process.
It can be seen that the monoexponential function is a better fitting model. The kinetics of back electron transfer is estimated to be (3.2 ± 0.1) × 10 6 s -1 .
Mechanism of the photoreduction of [Cr(ttpy)2] 3+ in the presence of [Ru(bpy)3] 2+
and TEOA
Since TEOA, a sacrificial electron donor, is present in the exhaustive photoreduction experiment mentioned above, there are two possible photoreduction pathways for the quantitative photogeneration of the [Cr(ttpy)2] + complex.
The reduction of [Ru(bpy)3] 3+ by TEOA has a reported second-order rate constant (in M -1 .s -1 ) in deaerated aqueous solution (CTEOA ~ 0.1 M). 34 Converted to pseudo first-order kinetics, this rate is comparable to the back electron transfer rate (3.2 × 10 6 s -1 ). As the amount of TEOA in the exhaustive photoreduction experiment is about 1000 times higher than that of [Cr(ttpy)2] 3+ , it is statistically more likely that the [Ru(bpy)3] 3+ complex is reduced by TEOA than by [Cr(ttpy)2] 2+ , suggesting that step 4 is more likely to occur than step 3.
Therefore, the back electron transfer is short-circuited by TEOA, allowing the [Cr(ttpy)2] 2+ complex to be reduced again.
After the singly reduced [Cr(ttpy)2] 2+ complex is formed, it can quench the [Ru(bpy)3] 2+* state via either a reductive quenching process (step 5) or an oxidative quenching process (step 6). Although the reductive quenching reaction is more energetically favourable (ΔG5 = -0.84 eV compared with ΔG6 = -0.25 eV), as we observed the gradual formation of the [Cr(ttpy)2] + species, the oxidative quenching reaction should be kinetically faster.
Furthermore, the TEOA sacrificial agent is known to efficiently reduce [Ru(bpy)3] 3+ PS (step 4) to sustain the step 6.
Conclusion for the photoreduction mechanism of [Cr(ttpy)2] 3+ by [Ru(bpy)3] 2+
and TEOA:
To summarize, the [Cr(ttpy)2] 3+ complex can be reduced twice to form [Cr(ttpy)2] + .
After 3 hours under the -1.2 V bias, the CV recorded with a clean carbon disk electrode was very similar to that obtained before the experiment, except for a decline in intensity of the redox waves. The CV recorded with the carbon plate WE produced unstable responses. After 3 hours, the surface of the carbon plate was covered with a visible layer that was not soluble in MeCN. In the absence of TFA, there is no such layer covering the WE when an exhaustive electrolysis of the [Cr(ttpy)2] 3+ complex is conducted at a similar potential. It is hence suspected that, after a long time under the -1.2 V bias and in the presence of H + , the [Cr(ttpy)2] + complex is partly decomposed, releasing a ttpy ligand that is then deposited onto the WE. This layer blocks the conductivity of the electrode so that the catalytic reaction is stopped. In fact, this electrocatalytic experiment is not reproducible, as sometimes the WE was blocked quickly, within 1 or 2 hours, at the -1.20 V bias. The suspected ttpy layer is always observed.
In an attempt to gain insight into the catalytic mechanism, we first generated the [Cr(ttpy)2] + complex. We also attempted to use the pulsed EPR technique to study the samples mentioned above.
The experiment is based on the following assumptions: (i) if another paramagnetic complex formed during the catalytic process has a continuous-wave EPR spectrum similar to that of [Cr(ttpy)2] + , it may show changes in its spin relaxation time; (ii) if decoordination of the ttpy ligands occurs, the interaction between the S = 1/2 system and the 14 N atoms should change as well, which can be detected by the HYSCORE method; (iii) if a hydride complex is an intermediate for H2
production, the interaction between the S = 1/2 system and protons should be enhanced in the HYSCORE spectrum. Therefore, we recorded the spin relaxation times and HYSCORE spectra (Figure III-19) for 2 samples: the [Cr(ttpy)2] + complex alone and the complex with 1 equivalent of TFA. However, the two samples show essentially identical spin relaxation and HYSCORE spectra, except for the reduced intensities already seen in the continuous X-band EPR spectra. Therefore, the formation of a hydride intermediate where the hydrogen atom is linked to the Cr core is not proven. Moreover, the released ttpy ligand is neutral in charge at the -1.2 V bias, thus it is electrochemically inactive and cannot be involved in the proton reduction reaction. Therefore, we tend to attribute the catalytic process to a ligand-based reduction process where a protonated terpyridine complex, [Cr III (ttpyH)(ttpy -)] 2+ , can be an intermediate.
III.3. [Cr(ttpy)2] 3+ complex immobilized on TiO2 nanoparticles
After studying the [Cr(ttpy)2] 3+ complex in a homogeneous environment, we immobilized it on TiO2 NPs to study photo-induced charge transfer processes. Using TiO2 NPs as a support can be more convenient for photocatalytic studies than using an electrode owing to the higher loading of the catalyst. Another advantage of the TiO2 NP support over an electrode surface is that it is also easier to study the evolution of photo-induced paramagnetic species by EPR spectroscopy. As in Chapter 2, a phosphonic acid group was used to anchor the complex to the TiO2 surface.
III.3.1. Synthesis
The synthesis route to the ttpy-PO3Et2 and ttpy-PO3H2 ligands and their complexation with Cr 2+ are detailed in the Experimental Section. From the UV-vis absorbance of the supernatant solution after each centrifugation step, we estimate the loading of the [Cr2P] 3+ complex to be about 0.20 mmol.g -1 , which corresponds to about 3800
[Cr2P] 3+ molecules per particle or 2 molecules per nm 2 of surface.
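As a sanity check on these figures, the conversion from a loading in mmol.g -1 to molecules per particle and per nm 2 can be sketched as follows; the particle diameter (~25 nm) and the anatase density (3.9 g.cm -3 ) used here are assumptions introduced only for illustration, chosen to be consistent with the quoted numbers.

```python
# Sketch: convert a complex loading (mol per gram of TiO2) into molecules per particle
# and molecules per nm^2, assuming spherical anatase particles.
import math

N_A = 6.022e23            # Avogadro's number, mol^-1
loading = 0.20e-3         # mol of complex per gram of TiO2 (0.20 mmol/g, from the text)
d_nm = 25.0               # assumed particle diameter in nm (assumption, not from this section)
rho = 3.9                 # assumed anatase density, g/cm^3

r_cm = (d_nm / 2) * 1e-7                          # radius in cm
particle_mass = rho * (4 / 3) * math.pi * r_cm**3 # g per particle
particles_per_gram = 1.0 / particle_mass

molecules_per_gram = loading * N_A
molecules_per_particle = molecules_per_gram / particles_per_gram

surface_nm2 = math.pi * d_nm**2                   # sphere surface area in nm^2
molecules_per_nm2 = molecules_per_particle / surface_nm2

# With these assumed values the output is ~3800 molecules per particle and ~2 per nm^2,
# consistent with the loading quoted in the text.
print(f"{molecules_per_particle:.0f} molecules per particle, "
      f"{molecules_per_nm2:.1f} molecules per nm^2")
```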
III.3.2. Electrochemical properties
III.3.2.1. ttpy-PO3Et2 ligand and [Cr(ttpy-PO3Et2)2] 3+ complex in solution
The electrochemical behaviors of the ttpy-PO3Et2 ligand and corresponding
[Cr(ttpy-PO3Et2)2] 3+ complex have been characterized by cyclic voltammetry in order to study the effect of the phosphonate group. Comparing ttpy and ttpy-PO3Et2, the phosphonate derivatization shifts the reduction potential of the ttpy ligand to a less negative value: from -2.35 V for ttpy to -2.13 V for ttpy-PO3Et2. The phosphonate substituent induces an electron-withdrawing effect on the π-conjugated system, thus making its reduction easier. The ttpy-PO3Et2 ligand also shows a more reversible behavior than the ttpy ligand.
The [Cr(ttpy-PO3Et2)2] 3+ complex exhibits a CV very similar to that of the [Cr(ttpy)2] 3+ complex. The first ligand-based reduction appears at a potential similar to that of the analogous [Cr(ttpy)2] 3+ complex rather than that of the free ttpy-PO3Et2 ligand. Besides, no signal of the free ttpy-PO3Et2 ligand at -2.13 V is detected.
III.3.2.2. Grafting [Cr2P] 3+ complex on ITO
We first attempted to study the unprotected [Cr2P] 3+ complex directly with cyclic voltammetry to compare with the results mentioned above. However, the [Cr2P] 3+ complex (with ClO4 -as counteranion) is not soluble in a variety of non-aqueous solvents like MeCN, DMF and CH2Cl2. We therefore chose to anchor the complex onto an ITO electrode to study its electrochemical properties. The surface coverage Γ (mol.cm -2 ) of the complex can be estimated from the voltammetric response. For the first reduction step at -0.42 V, a linear plot between the anodic peak current Ipa and the scan rate v is obtained (Figure
III-21b
). This linear relationship suggests the electronic communication between anchored
[Cr2P] 3+ and ITO electrode is kinetically controlled, thus proving the successful grafting of
[Cr2P] 3+ onto the ITO support.
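For illustration, a minimal sketch of how a surface coverage can be extracted from such an Ipa versus v plot is given below; it assumes the textbook relation for a reversible, surface-confined redox couple, Ip = n 2 F 2 vAΓ/(4RT), which is not necessarily the exact expression used in this work, and all numerical inputs are placeholders.

```python
# Sketch: estimate the surface coverage Gamma of a grafted redox couple from the slope
# of the peak current (Ip) versus scan rate (v) plot, assuming the standard relation
# for a reversible surface-confined species: Ip = n^2 F^2 v A Gamma / (4 R T).
# All numbers below are placeholders, not the measured data of this work.
import numpy as np

F = 96485.0        # C/mol
R = 8.314          # J/(mol K)
T = 298.0          # K
n = 1              # electrons exchanged in the wave considered
A = 1.0            # electrode area, cm^2 (placeholder)

v = np.array([0.02, 0.05, 0.10, 0.20, 0.50])            # scan rates, V/s (placeholders)
Ip = np.array([0.8e-6, 2.0e-6, 4.1e-6, 8.0e-6, 20.0e-6]) # peak currents, A (placeholders)

slope, intercept = np.polyfit(v, Ip, 1)        # slope = n^2 F^2 A Gamma / (4 R T)
Gamma = 4 * R * T * slope / (n**2 * F**2 * A)  # surface coverage, mol/cm^2

print(f"slope = {slope:.3e} A.s/V, Gamma = {Gamma:.2e} mol/cm^2")
```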
III.3.2.3. Grafting [Cr2P] 3+ complex on TiO2 NPs
We first attempted to characterize the electrochemical behavior of TiO2/[Cr2P] 3+ NPs in colloidal solution using a C or Pt disk electrode as WE. However, our attempts were not successful due to the low conductivity of TiO2 NPs and probably the low surface coverage. The overall amount of [Cr2P] 3+ in this sample is lower than that employed in an analogous EPR experiment for the free complex (see Section III.2.4).
Therefore, the local concentration on the NP surface is the main cause for this EPR line. This assumption is supported by the estimated loading of 2 [Cr2P] 3+ molecules per nm 2 . The high local concentration of paramagnetic [Cr2P] 3+ complex provides evidence for the successful grafting of [Cr2P] 3+ on TiO2 NPs.
We then prepared TiO2/{[Cr2P] 3+ +BPA} (BPA = benzylphosphonic acid) NPs containing 2 % Cr and 98 % BPA as lateral spacers. With this dilution, the average distance between Cr sites is expected to increase significantly. For the Ru 2+ /TiO2/[Cr2P] 3+ triad, the loadings of the [RuP] 2+ and [Cr2P] 3+ complexes are estimated to be 60 µmol.g -1 (~ 1000 molecules per NP, 0.6 molecule per nm 2 ) and 6 µmol.g -1 (~ 100 molecules per NP, 0.06 molecule per nm 2 ), respectively. Further details can be found in the Experimental Section.
III.4.2. Photophysical properties a) UV-vis absorption spectroscopy and emission spectroscopy
Similar to previous UV-vis absorption experiments, the Ru 2+ /TiO2/[Cr2P] 3+ NPs were also mixed with KBr and pressed into a pellet for measurement. As we are interested in the possibility to store multiple electrons at the Cr sites, the molar ratio of Ru to Cr species was set at 10 so that the electron supply from [RuP] 2+ PS is not a limiting factor for the [Cr2P] 3+ reduction. BPA, which is optically transparent in the visible region, was also included as lateral spacer to enhance the photo-induced electron injection rate as shown in Chapter 2.
Another reason for using BPA is to separate the paramagnetic Cr 3+ sites from each other for the EPR characterization, which will be discussed later in this section.
b) Emission spectroscopy
The emission of Ru 2+ /TiO2/[Cr2P] 3+ triad has been recorded in steady state after photoexcitation at 450 nm.
The decay traces were fitted with a stretched exponential (KWW) function, ΔA(t) = ΔA0·exp[-(t/τKWW)^βKWW] (Eq III-8)
From the τKWW and βKWW parameters, one can estimate a representative charge recombination rate kcr by applying Equation III-9. 45 The KWW fitting parameters (τKWW and βKWW) as well as the recombination rate kcr are collected in Table III-7 and compared to those obtained with the TiO2/Ru 2+ dyad.
kcr = 1/⟨τ⟩ with ⟨τ⟩ = (τKWW/βKWW)·Γ(1/βKWW) (Eq III-9)
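A minimal sketch of how such a stretched-exponential fit and Eq III-9 yield kcr is given below; the decay trace is synthetic, and the reconstruction of Eq III-9 as the first-moment expression of the KWW function is assumed from the context.

```python
# Sketch: fit a transient-absorption decay with a stretched exponential (KWW) function
# dA(t) = dA0 * exp[-(t/tau_KWW)^beta_KWW] and derive a representative recombination
# rate k_cr = 1 / [ (tau_KWW/beta_KWW) * Gamma(1/beta_KWW) ].
# The decay trace below is synthetic, for illustration only.
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import gamma

def kww(t, dA0, tau, beta):
    return dA0 * np.exp(-(t / tau) ** beta)

t = np.logspace(-6, -1, 200)                       # time axis in s
rng = np.random.default_rng(0)
data = kww(t, 1.0, 2.0e-3, 0.5) + rng.normal(0, 0.01, t.size)   # synthetic decay + noise

popt, _ = curve_fit(kww, t, data, p0=[1.0, 1e-3, 0.6])
dA0, tau_kww, beta_kww = popt

mean_tau = (tau_kww / beta_kww) * gamma(1.0 / beta_kww)   # first moment of the KWW decay
k_cr = 1.0 / mean_tau

print(f"tau_KWW = {tau_kww:.2e} s, beta_KWW = {beta_kww:.2f}, k_cr = {k_cr:.1f} s^-1")
```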
The higher kcr value obtained for the Ru 2+ /TiO2/[Cr2P] 3+ triad compared with that of the TiO2/Ru 2+ dyad was surprising to us, as the presence of an electron acceptor like [Cr2P] 3+ should scavenge the injected electron in the CB of TiO2, preventing back electron transfer between (e -)TiO2 and Ru 3+ species. However, as we have assigned the signal decay at 450 nm to the (e -)TiO2 disappearance, two assumptions should be considered in the triad case: (i) the charge recombination process between (e -)TiO2 and Ru 3+ is accelerated, or (ii) another electron transfer pathway besides the charge recombination process consumes these trapped electrons. The charge recombination rate between two species should only depend on their nature, distance and the solvent, thus the first assumption is not plausible. As the lifetime of the charge recombination is very long (τcr = (kcr) -1 = 7.1 ms),
[Cr2P] 3+ should be able to scavenge the electrons on TiO2, supporting the second assumption.
Therefore, we tend to attribute this two-fold increase in the kcr value to an additional electron transfer between (e -)TiO2 and [Cr2P] 3+ to transiently form the Ru 3+ /TiO2/[Cr2P] 2+ charge separated species. The rate of electron transfer between (e -)TiO2 and [Cr2P] 3+ should be in the same range as the recombination rate between (e -)TiO2 and Ru 3+ .
Theoretically, a transient charge separated state like Ru 3+ /TiO2/[Cr2P] 2+ can also be achieved by lateral electron transfer between Ru 2+* and Cr 3+ species. To verify whether it is formed via the "through particle" or the "on particle" mechanism, a redox-inert substrate like SiO2 NPs could be employed so that only the latter mechanism can govern the electron transfer event. However, the surface coverage of phosphonic anchoring groups on the SiO2 surface has been shown to be much lower than on TiO2. Further reducing the concentration of adsorbates in this way would make the characterization by the TA technique very challenging.
Besides that, the rate of photo-induced electron injection from Ru 2+* to TiO2 is very fast (kinj ~ 10 7 -10 9 s -1 , see Chapter 2). The quenching of [Ru(bpy)3] 2+* by [Cr(ttpy)2] 3+ in solution has been shown to be 3.0 × 10 9 M -1 .s -1 . 19 Taking into account the concentration of [Cr(ttpy)2] 3+ complex in the range of 10 -5 M, the pseudo first-order quenching constant will be ~ 3 × 10 4 s -1 , which is significantly smaller than the injection rate. Besides, as the complexes are fixed on TiO2 surface and the concentration of Cr 3+ is only 2 %, the chance for electron transfer to occur between Ru 2+* species and underlying Ti 4+ sites should be significantly higher than between Ru 2+* and Cr 3+ . Therefore, it is reasonable to assume the "through particle" charge transfer as the main mechanism for the Ru 3+ /TiO2/[Cr2P] 2+ triad.
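The order-of-magnitude comparison made in this paragraph can be written out explicitly; the short sketch below simply reproduces the arithmetic with the values quoted in the text.

```python
# Order-of-magnitude comparison: bimolecular quenching converted to a pseudo first-order
# rate versus the photo-induced electron injection rate (values taken from the text).
k_q = 3.0e9        # quenching of [Ru(bpy)3]2+* by [Cr(ttpy)2]3+, M^-1 s^-1
c_Cr = 1.0e-5      # [Cr(ttpy)2]3+ concentration, M
k_pseudo = k_q * c_Cr                    # ~3e4 s^-1

k_inj_low, k_inj_high = 1.0e7, 1.0e9     # injection rate range from Chapter 2, s^-1
print(f"pseudo first-order quenching: {k_pseudo:.1e} s^-1 "
      f"(vs injection {k_inj_low:.0e}-{k_inj_high:.0e} s^-1)")
```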
III.4.3. Electrochemical properties a) [Cr2P] 3+ and [RuP] 2+ co-immobilized on ITO electrode
In a similar fashion as in the previous section, we attempted to study the electrochemical behavior of the [Cr2P] 3+ and [RuP] 2+ complexes co-immobilized on an ITO electrode.
It is concluded that the photoreduction rate is reduced when the immobilized Ru 2+ dye is used instead of the [Ru(bpy)3] 2+ complex in solution. We assume that this rate decline is due to the poorer contact between the colloid and the [Cr(ttpy)2] 3+ homogeneous complex; therefore, anchoring both complexes on TiO2 NPs may accelerate this rate. This assumption will be checked and discussed in the next section.
III.4.5. Photoreduction of immobilized [Cr2P] 3+
b) Charge transfer reactions monitored by UV-vis spectroscopy
In order to compare with the photoreduction experiment described in the previous section, we conducted a similar experiment using Ru 2+ /TiO2/[Cr2P] 3+ triad instead of TiO2/Ru 2+ and [Cr(ttpy)2] 3+ mixture, in the presence of TEOA. However, no spectral changes are observed. The photoreduction of immobilized [Cr2P] 3+ is therefore deemed unsuccessful.
c) Charge transfer reactions monitored by EPR spectroscopy
In order to understand why the photoreduction of immobilized [Cr2P] 3+ in the triad is not accomplished, a light-coupled EPR experiment has been carried out at low temperature and in the absence of the TEOA sacrificial donor to study the formation of photo-induced paramagnetic species. Our working hypothesis is that if the doubly reduced [Cr2P] + complex is formed, it can be observed by EPR as in the case of the [Cr(ttpy)2] + complex (S = 1/2). The spectrum before irradiation seems to suggest an EPR-silent system, although the
[Cr2P] 3+ molecules are expected to absorb over a wide range of magnetic field, in a similar fashion as the free [Cr(ttpy)2] 3+ complex. However, the amount of grafted [Cr2P] 3+ in this study (~ 3.6 × 10 -5 M) is about two orders of magnitude lower than in the experiment with the free Cr complex (~ 10 -3 M). Due to its very broad spectrum, the signal of grafted [Cr2P] 3+ on TiO2 may be too small to record. With this degree of Cr complex dilution (only 2 %), the broad line centered at g ~ 2 due to a high local concentration of paramagnetic Cr sites is also not observed.
Under continuous irradiation at 455 nm, trapped electrons on TiO2 are quickly created. They are characterized by EPR lines at g⊥ = 1.990 and g// = 1.959. 46,47 No trapped holes (usually denoted as O -species in the literature) 47 are detected in this triad, indicating that the trapped electrons are due to electron injection from the Ru 2+* PS rather than direct bandgap excitation of the TiO2 semiconductor. Similar to the TiO2/Ru 2+ dyad, four small EPR lines (ratio = 1:3:3:1) separated by 23 G are also detected (marked with asterisks). They are ascribed to CH3 radicals 48 which have possibly been produced by the photo-degradation of remaining ethanol adsorbed on TiO2. 49,50 Meanwhile, the expected signal of reduced Cr species is not detected. The local concentration of the transient [Cr2P] 2+ and Ru 3+ sites on the surface is higher than in homogeneous solution, thus the back electron transfer between them may be enhanced.
Therefore, unfortunately, we are not able to observe electron accumulation on the [Cr2P] 3+ sites.
III.5. Conclusion
In the triad, the back electron transfer is suspected to regenerate the initial complexes, thus preventing the [Cr2P] 2+ complex from being further reduced.
In the next chapter we will present a study of another electron acceptor complex that undergoes a coupled chemical reaction after being reduced. This coupled reaction may slow down the back electron transfer kinetics.
CHAPTER IV
Ru(II)/TiO 2 /Mn(I) TRIAD FOR PHOTOCATALYTIC CO 2 REDUCTION
IV.1. Introduction
In recent decades, the rise of the CO2 content in the Earth's atmosphere has become a major concern. As a greenhouse gas, CO2 has been identified as a main factor contributing to global warming 1 and sea level rise 2 . Chemical conversion of CO2 into more value-added products such as CO and HCOOH provides a means to reduce its atmospheric content and to recycle CO2 emitted from coal-burning factories. However, the process is energy consuming as the O=C=O bonds are quite stable (around 800 kJ.mol -1 ).
Table IV-1 shows the standard reduction potentials for several important products. 3 The potentials represent equilibrium potentials that are required for the reaction to occur, but usually a higher potential is necessary to achieve a significant reaction rate. In this chapter we will mainly focus on the photocatalytic CO2 reduction, with an emphasis on molecular and hybrid systems to tackle the issue. The reader is kindly referred to recent reviews by Perutz et al. 1 , Ishitani et al. 5,6 , Rieger et al. 7 and Fontecave et al. 8 for more information about homogeneous and hybrid systems for the electro-and photo-catalytic CO2 reduction.
Since the pioneering works by Lehn and co-workers on the photocatalytic 9 and electrocatalytic 10 CO2 reduction using [Re(bpy)(CO)3Cl] catalyst, numerous publications on homogeneous catalysts have been made. The general strategy is to covalently link a photosensitizer (PS) and a molecular catalyst together for CO2 reduction under visible light.
This approach has been extensively summarized in the aforementioned reviews. It, however, often requires challenging multi-step synthesis to reach the desired supramolecules.
In recent years, an emergent approach is to immobilize the molecular PS and catalyst onto the surface of another material such as semiconductors, nanoparticles (NPs), electrodes and metal-organic framework (MOF) 5 to form a new hybrid system. We are interested in the systems containing NPs as they provide a good scaffold with a great surface area to accommodate the molecular PS and catalyst. Furthermore, semiconducting NPs such as TiO2 can also act as an electronic component for the electron transfer reaction between the PS and the catalyst centers. Finally, this approach usually requires significantly less complicated synthesis than the supramolecular approach. This introduction section will highlight recent developments in this class of hybrid materials.
Cowan and co-workers 11 immobilized a [Ru(bpy)3] 2+ PS and a Ni(cyclam) 2+ (cyclam = 1,4,8,11-tetraazacyclotetradecane) catalyst on ZrO2 for photocatalytic CO2 reduction to CO (Scheme IV-1). The system Ru 2+ /ZrO2/Ni 2+ (Ru:Ni = 2.6:1 % mol) produced both CO (14.0 ± 1.4 µmol.g -1 ) and H2 (58.1 ± 9.1 µmol.g -1 ) after 7 hours of visible irradiation in the presence of ascorbic acid as a sacrificial electron donor. This corresponds to a maximum turnover number (TONmax) for CO of 4.8, while the selectivity for CO2 reduction over H2 evolution was still poor.
Increasing the irradiation time led to a decrease in activity, possibly due to a deactivation pathway via the formation of Ni I (cyclam)-CO species. Under the same conditions, a mixture of the two complexes in solution exhibited negligible CO production (TONmax (CO) ~ 0.2 after 7 hours). After excitation of the Ru 2+ complex, transient absorption spectroscopy (TAS)
showed that the Ru 2+* excited state was partly reductively quenched by ascorbic acid to form the Ru + species, which then transferred an electron to the Ni 2+ sites. The rate of electron transfer between Ru + and Ni 2+ immobilized on ZrO2 was estimated as 7.7 × 10 3 s -1 , whereas the rate between the two complexes in solution was significantly slower (10 s -1 ). This enhanced electron transfer rate in the Ru 2+ /ZrO2/Ni 2+ triad was proposed to be critical for the higher photocatalytic activity of the triad compared to the complex mixture in solution. Since ZrO2 is redox-inactive, the charge transfer events in Ru 2+ /ZrO2/Ni 2+ could only occur via electron transfer between neighboring Ru 2+ and Ni 2+ species on the surface, the so-called "on particle" mechanism. This study proved that the immobilization of the PS and catalyst on a surface increases the electron transfer kinetics by eliminating complex diffusion in solution.
In order to improve the photocatalytic efficiency of the TiO2/Re I system, Kang and coworkers 14 immobilized an organic dye and a [Re(bpy)(CO)3Cl] catalyst on TiO2 NPs (Scheme IV-3). In contrast to Reisner's work on TiO2/Re I where the Re complex acted as both PS and catalyst, the Re species in this study only operated as a catalyst. Since the photocatalytic CO2 reduction in homogeneous solution using [Re(bpy)(CO)3Cl] suffers from short durability due to catalyst degradation, fixing it on a surface was expected to reveal its intrinsic photocatalytic activity. Under visible irradiation (λ > 420 nm), the hybrid system exhibited a remarkable TONmax (CO) > 435 versus the Re catalyst in DMF solution and in the presence of 1,3-dimethyl-2-phenyl-2,3-dihydro-1H-benzo[d]imidazole (BIH) as sacrificial electron donor.
The system also reached good selectivity for CO, since H2 content was less than 5 % of CO content, and no formic acid was detected with the HPLC analysis of the liquid phase. Mechanistic studies have also been discussed in the three publications mentioned above.
After the first monoelectronic reduction step on the metal center, the Br - leaves and a Mn 0 -Mn 0 dimer is formed. Reduction of the dimer generates the catalytically active species
[Mn(bpy)(CO)3] -where the pentacoordinated Mn -I center offers a vacancy for a CO2 adduct.
The reduction steps can be summarized as follows:
[Mn I (bpy)(CO)3Br] + e - → [Mn I (bpy -)(CO)3Br] -
[Mn I (bpy -)(CO)3Br] - → [Mn 0 (bpy)(CO)3] + Br -
[Mn 0 (bpy)(CO)3] → 1/2 [Mn 0 (bpy)(CO)3]2
1/2 [Mn 0 (bpy)(CO)3]2 + e - → [Mn -I (bpy)(CO)3] -
The first example concerning [Mn(bpy)(CO)3Br] complex immobilized on TiO2 thin film electrode has been reported by Reisner and co-workers. 19 They functionalized the bpy ligand with two phosphonic acid groups (called [MnP]) to anchor on the TiO2 surface. The TiO2/[MnP] modified electrode was used for the electrocatalytic CO2 reduction in MeCN/H2O
(95/5, v/v) solution. The reason to deposit the complex on a mesoporous TiO2 thin film was to achieve higher loading and better electrical conductivity between the electrode and the catalyst than the free complex in solution. Under an applied potential at -1.7 V vs Fc + /Fc (-1.6
V vs Ag/AgNO3 0.01 M) for 2 hours, the modified electrode produced CO as the main product with TONmax = 112. H2 was also detected at ~ 20 % of the CO yield, while HCOOH was not detected. The intermediate [Mn 0 (bpy)(CO)3]2 dimer was observed by UV-vis absorption spectroscopy even though the Mn complex had been fixed on the surface. The authors attributed this observation to two possible mechanisms: (i) the lability of the phosphonic acid anchoring groups, such that the complex could be temporarily desorbed, form a dimer and then be re-adsorbed, and/or (ii) a local concentration of the Mn complex high enough for dimerization to occur.
Recent works by Marc Bourrez and Matthew Stanbury 21 , former PhD students in our CIRe laboratory, have focused on a novel catalyst, [Mn(η 2 -ttpy)(CO)3(MeCN)] + (ttpy = 4'-(p-tolyl)-2,2':6',2''-terpyridine), for CO2 reduction to CO and/or HCOOH. The complex showed better electrocatalytic CO2 reduction efficiency (TONmax (CO) = 13) than the previously reported [Mn(η 2 -tpy)(CO)3(MeCN)] + (tpy = 2,2':6',2''-terpyridine) complex by Kubiak et al. 22 (TONmax (CO) ~ 4). In this chapter we will briefly present the characterization and the electro- and photo-catalytic CO2 reduction using a similar complex, [Mn(η 2 -ttpy)(CO)3Br], in homogeneous solution. This chapter then focuses on the immobilization of this Mn complex on TiO2 NPs via a phosphonic acid anchoring group to form the TiO2/Mn I dyad, and subsequently the co-immobilization of a [Ru(bpy)3] 2+ -based PS and the Mn complex onto TiO2 to yield the
Ru II /TiO2/Mn I triad. Scheme IV-5 shows the structures of the dyad and triad that will be presented in this chapter. The photocatalytic CO2 reduction using the triad will also be described.
The first cathodic peak at Epc1 = -1.44 V is assigned to the leaving of Br - and pyridyl coordination to form a tridentate linkage between the ttpy ligand and the Mn metal ion (reactions 5 and 6). It is followed by the release of a CO group and the formation of a dimer through metal-metal bonding (reaction 7).
On the reverse scan, the cathodic peak at -1.44 V is associated with an anodic peak at -0.89 V which corresponds to the oxidation of the Mn 0 -Mn 0 dimer (reaction 8). Another irreversible oxidation peak is observed at 0.27 V due to the oxidation of the metal center in the tridentate complex (reaction 9). This peak is not observed if the CV is first swept in oxidation (Figure IV-1a).
(5) [Mn I (η 2 -ttpy)(CO)3Br] + e - → [Mn I (η 2 -ttpy -)(CO)3] + Br -   Epc1 = -1.44 V
(6) [Mn I (η 2 -ttpy -)(CO)3] → [Mn 0 (ttpy)(CO)3]
(7) [Mn 0 (ttpy)(CO)3] → 1/2 [Mn 0 (ttpy)(CO)2]2 + CO
(8) 1/2 [Mn 0 (ttpy)(CO)2]2 + MeCN → [Mn I (ttpy)(CO)2(MeCN)] + + e -   E = -0.89 V
IV.2.4. Electrocatalytic CO2 reduction
The electrocatalytic CO2 reduction experiment has been conducted using the [Mn(ttpy)(CO)3Br] complex in CO2-saturated MeCN solution. In order to study the reduction products of CO2, a potential of -1.7 V was applied for comparison with previous studies. CO gas is produced as the only reduction product, with TONmax (CO) = 12 after 4 hours and 100 % Faradaic yield, while no H2 or HCOOH is formed. Reduction products like CO or HCOOH are formed via a two-electron, two-proton reduction process.
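For clarity, a minimal sketch of how the TON and the Faradaic yield are obtained for a two-electron product such as CO is given below; the amount of catalyst, the amount of CO and the charge passed are placeholders and not the measured values of this experiment.

```python
# Sketch: turnover number (TON) and Faradaic yield for electrocatalytic CO2-to-CO
# reduction. CO is a two-electron product, so each mole of CO corresponds to 2 F of charge.
# All input values below are placeholders, not the measured data of this work.
F = 96485.0                    # C/mol

n_cat = 1.0e-6                 # mol of Mn catalyst in the cell (placeholder)
n_CO = 12.0e-6                 # mol of CO detected (placeholder)
Q_passed = 2.32                # total charge passed during electrolysis, C (placeholder)

TON_CO = n_CO / n_cat                        # turnover number versus catalyst
FY_CO = 100.0 * (2 * n_CO * F) / Q_passed    # Faradaic yield in %

print(f"TON(CO) = {TON_CO:.0f}, Faradaic yield = {FY_CO:.0f} %")
```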
In our series of photocatalytic CO2 reduction experiments, the molar ratio of Mn:Ru was varied to determine the optimal value, which will be used to graft the complexes on TiO2
NPs. It is an advantage to work with NPs as the PS/catalyst ratio can be easily controlled, in comparison with supramolecules where the PS and catalyst molecules are usually linked to each other at a 1:1 ratio. It is noted that the excitation wavelength has to be carefully chosen to minimize the photo-induced decarbonylation of the [Mn(ttpy)(CO)3Br] complex, as this side reaction can occur under room light. 27 In our study, the result obtained with the ratio of Mn:Ru = 1:1 is in agreement with a previous work 21 (TONmax (HCOOH) = 11, TONmax (CO) = 8) where the sample was monochromatically irradiated at 480 nm instead of 470 ± 40 nm. We thus conclude that irradiation at 470 ± 40 nm can selectively excite the Ru 2+ PS in the same way as the monochromatic irradiation at 480 nm. Interestingly, this photocatalytic reduction process in DMF/TEOA mixed solvent produces a mixture of CO and HCOOH while the electrocatalytic reduction using [Mn(ttpy)(CO)3Br] in MeCN only produces CO. This observation is in accordance with the literature for the electrocatalytic and photocatalytic CO2 reduction using the [Mn(bpy)(CO)3Br] complex in MeCN and DMF/TEOA solvent, respectively. 17
The table also shows that decreasing the Mn:Ru ratio leads to a higher total TON, as expected from the higher chance of each Mn I molecule being reduced twice by two Ru I molecules. The best Mn:Ru ratio is found to be 1:10, where the total TON reaches the highest value of 50 in this series, although the selectivity of HCOOH over CO, at about 1.6:1, is not high.
IV.3. TiO2/Mn I dyad and Ru II /TiO2/Mn I triad nanoparticles
IV.3.1. Synthesis
Synthesis of ttpy-C3-PO3H2 ligand
To graft the [Mn(ttpy)(CO)3Br] complex on TiO2 NPs, we chose phosphonic acid as anchoring group. As presented in Chapter 3 for the [Cr(ttpy)2] 3+ complex bearing two phosphonic acid groups, the ttpy-PO3H2 ligand cannot be isolated from DMSO for characterization. Therefore, we employed a new synthesis route to ttpy-(CH2)3-PO3H2
(denoted as ttpy-C3-PO3H2) ligand that has three methylene units as a bridge between the ttpy ligand and the phosphonic acid functionality (Scheme IV-9). In the last step, after the solvents were evaporated, there was still an unknown amount of HBr side product in the sample. This new ligand shows good solubility in DMSO. The ligand was characterized by mass spectrometry and NMR ( 1 H, 31 P) spectroscopy before deposition on TiO2 NPs. Further details can be found in the Experimental Section.
Scheme IV-9. Synthesis route to ttpy-C3-PO3H2 ligand
Synthesis of TiO2/Mn I dyad NPs
We first attempted to synthesize a new complex [Mn(ttpy-C3-PO3H2)(CO)3Br] by mixing Mn(CO)5Br and the ttpy-C3-PO3H2 together in the same way we synthesized the parent complex [Mn(ttpy)(CO)3Br]. However, this approach was not successful due to the poor solubility of the metal precursor in DMSO, and the presence of HBr that may protonate the tpy ligand. Working with mixed solvent like DMSO/diethyl ether or DMSO/acetone did not solve the issue. Therefore, we employed a two-step synthesis: first, the ligand was grafted on TiO2 NPs in DMSO solution, followed by the complexation reaction between Mn(CO)5Br
and TiO2/ttpy-C3-PO3H2 NPs in acetone (Scheme IV-10). A similar "complexation on surface" strategy has been realized for the immobilization of [Re(bpy)(CO)3Cl] on SiO2 NPs via amide linkages 28 , and the immobilization of [Mn(bpy)(CO)3Br] on either a metal-organic framework (MOF) 29 or a periodic mesoporous organosilica (PMO) 30 substrate. After each step, modified TiO2 NPs were washed with copious amounts of corresponding solvents to remove the excess amount of the ligand, metal precursor and HBr, prior to being dried under vacuum. Loading of Mn I species on TiO2 is estimated to be ~ 0.20 mmol.g -1 (around 3800 molecules per particle or 2 molecules per nm 2 ) by measuring the absorbance of washing solutions after each centrifugation step. It is important to note that in the second step of our procedure, acetone was used instead of diethyl ether because the ether was so volatile that it was completely evaporated during the necessary centrifugation steps after the grafting process.
Scheme IV-10. Synthesis route to TiO2/Mn I NPs Synthesis of Ru II /TiO2/Mn I (Mn:Ru = 1:10) triad NPs
We prepared the Ru II /TiO2/Mn I triad with the ratio of Mn:Ru = 1:10 based on the optimization of the Mn:Ru molar ratio in the photocatalytic CO2 reduction described above.
The complex ratio was roughly controlled by using an amount of [RuP] 2+ that was 10 times greater than the amount of Mn(CO)5Br precursor. Further details can be found in the Experimental Section. Since there are far fewer Mn species, grafting Mn first ensures that they are better distributed over the TiO2 NPs. After the two-step synthesis of TiO2/Mn I NPs, the Ru 2+ PS was then grafted as shown in Scheme IV-11. The loadings of the Mn I and Ru II complexes are roughly estimated to be 14 µmol.g -1 and 140 µmol.g -1 , respectively, by measuring the absorbance of the washing solutions after each centrifugation step. These loadings correspond to: (i)
for Mn I : 270 molecules per NP or 0.14 molecules per nm 2 , (ii) for Ru II : 2700 molecules per NP or 1.4 molecules per nm 2 .
Scheme IV-11. Synthesis route to Ru II /TiO2/Mn I NPs
IV.3.2. Infrared spectroscopy
IR characterization has been employed to prove that the Mn complex was successfully anchored on TiO2, by focusing on the CO stretching bands. It is important to note that the mode of coordination of the ttpy ligand to the Mn I metal ion depends on the solvent used in the complexation reaction. Acetone has been shown to yield a tridentate homogeneous complex, [Mn(η 3 -ttpy)(CO)2Br], 23 which can be distinguished from the bidentate one based on the absence of the peak at ~ 2040 cm -1 . In our case, since the peak at 2022 cm -1 is identical to that of the parent bidentate complex [Mn(η 2 -ttpy)(CO)3Br], we conclude that the immobilized Mn I complex retains the bidentate linkage even though the "complexation on surface" step has been carried out in acetone.
(16) [Mn 0 (ttpy-C3-PO3H2)(CO)2(MeCN)] → [Mn I (ttpy-C3-PO3H2)(CO)2(MeCN)] + + e -   Epa = -1.16 V
The first reduction process is centered on the ligand, as for the complex in solution, and should lead to the release of Br - (reaction 14). Subsequently, an intramolecular charge transfer process induces the formation of the Mn 0 complex (reaction 15). While the Mn 0 complex in solution undergoes dimerization after releasing CO (reaction 7), we suppose that the immobilized Mn 0 complex on FTO remains in a monomeric form (reaction 15). We have no experimental evidence for the release of CO when the complex is grafted on FTO. On the reverse scan, the oxidation peak at -1.16 V is attributed to the oxidation of the Mn 0 complex (reaction 16).
Based on the first reduction peak at -1.43 V, a linear relationship between the cathodic peak current Ipc and the scan rate v has been obtained (Figure IV-5b), proving the successful grafting of the Mn I complex onto the electrode. The surface coverage of the Mn I complex can be estimated to be ΓMn = 1.9 × 10 -10 mol.cm -2 (see Experimental Section for the calculation)
or about 1.1 molecules per nm 2 . This number is in line with the loading of Mn I on TiO2 NPs
(2 molecules per nm 2 ) shown in Section IV.3.1. The value also suggests that the Mn I complex molecules stay sufficiently far from each other on the FTO surface, supporting our assumption that the dimerization should be prohibited.
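The consistency between the coverage in mol.cm -2 and the value in molecules per nm 2 can be checked with the short conversion below, using the coverage quoted above.

```python
# Sketch: convert a surface coverage from mol/cm^2 to molecules per nm^2
# (1 cm^2 = 1e14 nm^2), using the coverage quoted for the FTO/Mn(I) electrode.
N_A = 6.022e23                 # Avogadro's number, mol^-1
gamma_mol_cm2 = 1.9e-10        # mol/cm^2 (value quoted in the text)

molecules_per_cm2 = gamma_mol_cm2 * N_A
molecules_per_nm2 = molecules_per_cm2 / 1e14

print(f"{molecules_per_nm2:.1f} molecules per nm^2")   # ~1.1, as stated in the text
```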
b) Mn I complex grafted on TiO2 NPs
Afterwards, we investigated the CV of TiO2/Mn I NPs in the solid state using a microcavity electrode. The anodic peak at 0.86 V is assigned to the oxidation of the Mn I metal center to Mn II (reaction 17), in a similar way to the homogeneous complex in solution (Eox1 = 0.62 V, Section IV.2.2, reaction 1), although the potential is shifted to a considerably higher value than that of the free complex [Mn I (η 2 -ttpy)(CO)3Br] in solution.
The oxidation process is irreversible since the electron transfer reaction may lead to a ligand exchange process between Br -and MeCN solvent (reaction 18). The second oxidation peak associated with the fac to mer configuration transformation of the Mn complex and subsequent oxidation of the mer Mn I complex in solution are not observed for the FTO/Mn I electrode. A reason for this phenomenon could be the geometric constraints of the immobilized Mn species precluding the fac to mer transformation.
In the cathodic region of TiO2/Mn I (Figure IV-6b), the first reduction peak appears at -1.11 V associated with an oxidation peak on the reverse scan at -1.01 V. These two redox peaks are not observed for TiO2/ttpy-C3-PO3H2 NPs (Figure IV-7b), thus it must be due to the reduction of the Mn I complex. Similar to the FTO/Mn I modified electrode, we also assume that the formation of dimer should be prohibited once the complex has been anchored on a surface. Therefore, we attribute the two peaks at -1.11 V and -1.01 V to the formation and re-oxidation of [Mn 0 (ttpy-C3-PO3H2)(CO)2(MeCN)] species in the same way as FTO/Mn I electrode (reactions 14-15 and reaction 16, respectively).
IV.3.4. Photophysical properties a) UV-vis absorption spectroscopy
The UV-vis absorption spectra of the TiO2/Mn I dyad and Ru II /TiO2/Mn I triad (dotted line) NPs are shown in the corresponding figure. Loadings of the Mn I complex on TiO2/Mn I and Ru II /TiO2/Mn I are estimated to be ~ 0.20 mmol.g -1 and 14 µmol.g -1 , respectively.
b) Emission spectroscopy
The emission spectra of the TiO2/Mn I dyad and Ru II /TiO2/Mn I triad NPs have been recorded.
c) Transient absorption spectroscopy
After light excitation, the charge separated state Ru III /(e -)TiO2/Mn I is created. Transient absorption (TA) experiments have been carried out to study the absorption spectra of these transient species and the kinetics of charge recombination. As in Chapter 3, a low concentration of the colloidal solution is required to avoid as much light scattering as possible. As a result, the full TA spectra from 400 nm to 700 nm need to be accumulated for at least 20 minutes to obtain a good signal-to-noise ratio. However, we encountered a time-induced precipitation phenomenon, thus the result is not reliable. Therefore, we only focus on the signal decay at 450 nm to compare with the signal of TiO2/Ru II at the same wavelength (Section II.3.6c). The decay at 450 nm for the Ru II /TiO2/Mn I triad has been fitted, and it can be seen that both KWW parameters are very similar to those of TiO2/Ru II NPs.
The βKWW value is far from 1, justifying the use of the stretched exponential function. The charge recombination kinetics can be calculated as follows 32 :
kcr = 1/⟨τ⟩ with ⟨τ⟩ = (τKWW/βKWW)·Γ(1/βKWW) (Eq IV-1)
The rate of charge recombination is then estimated to be ~ 150 s -1 , which is essentially identical to that of TiO2/Ru II (140 s -1 ). This result suggests that the recombination of (e -)TiO2 and Ru III is not affected by the presence of the Mn I complex on the surface, implying that the Mn I complex is not reduced by the electrons injected into TiO2. This is in agreement with thermodynamic considerations, since E(Ti 4+ /Ti 3+ on TiO2) = -1.0 V and E(Mn I /Mn 0 ) = -1.43 V. The electron transfer between (e -)TiO2 and Mn I , if it occurred, would be endergonic by 0.43 eV.
IV.3.5. Photocatalytic CO2 reduction
The photocatalytic CO2 reduction using the Ru II /TiO2/Mn I triad NPs has been carried out under the same conditions as in Section IV.2.5. After 16 hours under irradiation at 450 nm, the triad photocatalyst produces HCOOH as the only product with a yield of 390 µmol per gram of NPs, corresponding to TONmax (HCOOH) = 27 vs the Mn I catalyst. The experiment was reproduced twice to confirm the result. Increasing the irradiation time to 60 hours results in less formic acid (TON = 25) due to its degradation. Although the total TON is lower than that of the free [Ru(bpy)3] 2+ and [Mn(ttpy)(CO)3Br] complexes in solution (TONtotal = 50), the selectivity of HCOOH over CO is significantly improved for the triad.
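The TON quoted above follows directly from the product yield per gram of NPs and the Mn I loading; a short sketch of this arithmetic is given below.

```python
# Sketch: photocatalytic TON from the product yield per gram of NPs and the catalyst
# loading (values taken from the text; the small difference from the quoted TON of 27
# reflects rounding of the loading).
yield_HCOOH = 390.0e-6     # mol of HCOOH per gram of NPs
loading_Mn = 14.0e-6       # mol of Mn(I) catalyst per gram of NPs

TON = yield_HCOOH / loading_Mn
print(f"TON(HCOOH) ~ {TON:.0f}")   # ~28
```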
A control experiment using TiO2/Ru II instead of Ru II /TiO2/Mn I does not produce CO or HCOOH, which confirms that the Mn I complex acts as the catalyst. When TiO2/Ru II NPs and the free Mn I complex are used, we obtain results similar to those for the free complexes in homogeneous solution. When SiO2/Ru II and the free Mn I complex are used, the amounts of CO and HCOOH are significantly reduced. These experiments suggest that TiO2 has an active role in the photocatalytic mechanism, and that the immobilization of the Mn I complex is related to the enhanced selectivity of HCOOH over CO. It is important to note that the Mn 0 -Mn 0 dimer, which has been proposed to be the photocatalytically active species for the [Mn(bpy)(CO)3Br] complex in solution 17 , was not able to form when the Mn complex was anchored on the UiO-67 framework. The increased activity for UiO-67/Mn I was attributed to the isolation of the Mn I active sites on the framework, which stabilized the catalyst and prevented dimerization of the singly reduced Mn 0 species.
Scheme IV-12. Proposed mechanism for the photocatalytic CO2 reduction using UiO-67/Mn I catalyst in ref 29
Our results also highlight the role of semiconducting TiO2 NPs as a substrate compared to SiO2. In the literature, Li and co-workers 28 immobilized the [Re(bpy)(CO)3Cl] catalyst on SiO2 NPs via amide linkages for photocatalytic CO2 reduction in the presence of a [Ru(bpy)3] 2+ PS. The SiO2/Re I system only showed photocatalytic activity similar to that of the free complex in solution (TONmax (CO) ~ 12).
The proposed mechanism of the photocatalytic CO2 reduction shown in Scheme IV-8 requires the Mn I complex to be reduced twice to form the catalytically active species Mn -I. For the mixture of [Ru(bpy)3] 2+ and [Mn(ttpy)(CO)3Br] complexes in solution, the reduction of the Ru 2+* excited state by BNAH occurs first to yield the Ru + species, which can reduce the Mn complex twice. In the case of the Ru II/TiO2/Mn I triad, there are two possible pathways to reduce Mn I to Mn 0: (i) electron injection from Ru II* into TiO2 followed by electron transfer to Mn I, the Ru III formed being reduced back to Ru II by BNAH (reactions 20, 21 and 22); or (ii) Ru II* is first reductively quenched by BNAH to form Ru I, and Ru I then transfers an electron to Mn I by lateral transfer on the surface and/or injects an electron into TiO2 (reactions 23, 24 and 25). Thermodynamically, the electron transfer between (e-)TiO2 and Mn I in the first pathway (reaction 21) is not favorable. The TAS results mentioned in Section IV.3.4 also show that the electrons injected into the TiO2 CB are not consumed by the Mn I species. Therefore, the second pathway is the more feasible route to reduce the Mn I complex. In this pathway, TiO2 seems to be inactive, in accordance with a previous publication 12: TiO2 only acts as a substrate on which the Ru II PS and Mn I catalyst molecules are arranged sufficiently close to each other to enhance the charge transfer kinetics. Since the molar ratio of Mn:Ru is equal to 1:10, the probability that a Mn I molecule is surrounded by several Ru II molecules is expected to be high.
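For completeness, the unfavorable driving force of reaction 21 can be written out explicitly using the potentials quoted in Section IV.3.4; this is only a restatement of the ~0.43 eV value given there:

$$\Delta G^{0}_{21} = -e\,\big[E(\mathrm{Mn^{I}/Mn^{0}}) - E(\mathrm{Ti^{4+}/Ti^{3+}})\big] = -e\,\big[(-1.43) - (-1.0)\big]\ \mathrm{V} \approx +0.43\ \mathrm{eV} > 0$$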
However, our experiments using the (TiO2/Ru II + free Mn I) and (SiO2/Ru II + free Mn I) photocatalytic systems resulted in significantly fewer products for the latter system (Table IV-4), suggesting that TiO2 may nevertheless play an active role, although this role is not yet well understood.
In contrast with the electrocatalytic CO2 reduction experiment, the second reduction step generating the Mn -I complex has not been observed by cyclic voltammetry for the Mn complex grafted on FTO. However, the photocatalytically active species can be the singly reduced Mn 0 species.
b) Ru II /TiO2/Mn I triad and BNAH as electron donor
In a previous study 21 on the homogeneous Mn I complex, the first reduction step leads to a Mn 0-Mn 0 dimer, which is EPR-silent due to the antiferromagnetic coupling of the two Mn 0 (S = 1/2) centers. In the present study, since the Mn I complex is anchored on the TiO2 surface, dimerization cannot occur; hence the expected singly reduced Mn 0 complex (S = 1/2) may be detectable by EPR.
We carried out the EPR investigation of the Ru II/TiO2/Mn I triad in MeCN solution containing BNAH, under visible irradiation, and compared it with the TiO2/Ru II dyad. The two samples exhibit two EPR lines at g⊥ = 1.990 and g// = 1.960 corresponding to trapped electrons in Ti 3+ sites. 33,34 As shown in Chapter 2, they are attributed to the electrons injected from Ru 2+* into TiO2. Based on the line at g// = 1.960, the amount of trapped electrons in both samples is about equal, consistent with the same amount of Ru II species and the same signal accumulation time.
Besides the signal of Ti 3+, an intense signal at 3441 G has been detected in both samples. As it is similar in shape and position to the BNAH radical obtained under O2, we also attribute this signal to the BNAH radical. However, since there is no O2 in this case, the BNAH radical must result from a photo-induced electron transfer from BNAH to the Ru II* excited state, forming BNAH + and Ru I species. If Mn I were redox-inactive, the signal of the BNAH radical should be identical in both cases, given the same amounts of Ru II and TiO2. In fact, the signal intensity for the Ru II/TiO2/Mn I triad is significantly greater than for the TiO2/Ru II dyad. Meanwhile, there is no reason for the reductive quenching of Ru 2+* by BNAH to be accelerated in the triad. Therefore, the increased intensity of the line at 3441 G may imply a superimposition of two paramagnetic species.
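For reference, the field position of this line can be converted into an effective g-value through the usual resonance condition. The microwave frequency assumed below (~9.65 GHz, X-band) is an assumption based on the spectrometer described earlier in this work, not a value reported in this paragraph.

```python
# Resonance condition h*nu = g * muB * B, used to convert the 3441 G line
# position into an effective g-value.
H_PLANCK = 6.62607015e-34    # J s
MU_B = 9.2740100783e-24      # J T^-1

nu = 9.65e9                  # Hz, assumed X-band microwave frequency
B = 3441e-4                  # T (3441 G)

g = H_PLANCK * nu / (MU_B * B)
print(f"g = {g:.4f}")        # ~2.004, typical of an organic radical
```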
To further study the signal centered at 3441 G, the pulsed EPR (HYSCORE) technique was employed. The 55Mn isotope (I = 5/2) shows a quadrupolar interaction at 55 MHz, which spreads the HYSCORE signal of Mn in the Mn 0 complex over a very broad range, making it undetectable. Therefore, the presence of the monomeric Mn 0 complex could not be confirmed by the EPR technique. It is also important to note that the doubly reduced species [Mn -I(ttpy)(CO)2]- in homogeneous solution has been shown to exhibit a broad EPR line spread over ~3000 G 21, which is characteristic of an unpaired electron on the metal center. This doubly reduced species is not observed in our study for the Ru II/TiO2/Mn I triad. From the electrochemistry and EPR results, we propose that the grafted Mn I in the triad undergoes the one-electron reduction pathway, as has been demonstrated for the analogous [Mn(bpy)(CO)3Br] complex anchored on the MOF UiO-67 (Scheme IV-12). 29 In this scenario the catalytically active species is thought to be [Mn 0(ttpy-C3-PO3H2)(CO)2] (Scheme IV-13)
where a coordination site is ready for the formation of a CO2 adduct 5 .
Scheme IV-13. Proposed catalytically active species [Mn 0 (ttpy-C3-PO3H2)(CO)2] for the Ru II /TiO2/Mn I triad
IV.4. Conclusion
In this chapter, firstly, the electrochemical properties of the [Mn I(κ²-ttpy)(CO)3Br] complex were investigated. The complex showed promising electrocatalytic activity for CO2 reduction with TONmax (CO) = 12, which could proceed via a one- or two-electron pathway. In the presence of [Ru(bpy)3] 2+ as a PS and BNAH as a sacrificial electron donor, the Mn complex could be reduced, and the mixture showed photocatalytic activity for CO2 reduction, producing both CO and HCOOH under irradiation at 470 nm. These results are in accordance with those of the analogous [Mn I(κ²-ttpy)(CO)3(MeCN)]+ complex, which had already been studied by former PhD students in the CIRe group.
Afterwards, a terpyridine-based ligand bearing a phosphonic acid group was synthesized and adsorbed on TiO2 NPs. The [Mn I(κ²-ttpy-C3-PO3H2)(CO)3Br] complex on TiO2 (called TiO2/Mn I) was obtained by coordination chemistry between the Mn(CO)5Br precursor and the ttpy-modified TiO2 NPs. The grafting process was confirmed by IR, UV-vis absorption and cyclic voltammetry. Subsequently, [Ru(bpy)3] 2+ was also immobilized on the TiO2/Mn I NPs to form a Ru II/TiO2/Mn I triad. The Mn:Ru ratio was chosen to be 1:10, as it was the optimal condition for the photocatalytic CO2 reduction using the two homogeneous complexes in solution. The Ru II/TiO2/Mn I triad NPs showed good photocatalytic activity for CO2 reduction: TONmax (HCOOH) = 27 over 16 hours of irradiation, with 100 % selectivity for HCOOH. Although the total TON was lower than that of the two homogeneous complexes in solution (TONtotal = 50), the selectivity for HCOOH was greatly improved.
A reason for this improved photocatalytic behavior of the triad is thought to be related to the immobilization of the Mn I catalyst. In homogeneous solution, the reduction of the Mn I complex may follow the two-electron pathway: first, it undergoes monoelectronic reduction and dimerization to form Mn 0-Mn 0, which is subsequently reduced again to yield the catalytically active species Mn -I. When the Mn I complex is immobilized, the dimerization step is assumed to be prohibited, and the monomeric paramagnetic Mn 0 species (S = 1/2) is thus expected. Unfortunately, its presence was not confirmed by EPR, possibly because the Mn 0 quadrupolar interaction spreads the HYSCORE signal over a very broad range. The Mn -I species was not observed by the X-band EPR technique either. Based on the electrochemistry and EPR results, we propose that the reduction of the immobilized Mn I complex in the Ru II/TiO2/Mn I triad may follow the one-electron pathway without formation of the Mn 0-Mn 0 dimer; hence the catalytically active species is assumed to be [Mn 0(ttpy-C3-PO3H2)(CO)2]. Under visible light and in the presence of BNAH as an electron donor, the photoexcited Ru II* is reductively quenched by BNAH to form Ru I, followed by electron transfer from Ru I to Mn I to form the Mn 0 species. This electron transfer probably occurs between neighboring Ru I and Mn I species on the surface of TiO2, and not through the TiO2 conduction band. The specific role of the TiO2 scaffold compared to SiO2 is not yet fully understood.
V.1. Introduction
Photoactive hybrid nanocomposites are promising materials for a wide range of applications, from photocatalysis to photoelectrodes. Engineering the interface between the organic and inorganic components has an important impact on the material properties and is thus critical for every application. Notably, coating TiO2 nanomaterials with an organic polymer may change their hydrophobic/hydrophilic properties, stability or electrical conductivity. In the latter case, the use of pyrrole, vinyl or styrene monomers offers the advantage of an easily controlled polymerization process, which can be initiated by TiO2 band-gap excitation. The resulting material consists of TiO2 nanoparticles (NPs) embedded in a conducting polymer network. This section focuses on the preparation methods of these materials, their characterization, and their application in photo-to-current energy conversion after deposition on transparent electrodes.
Strandwitz et al. 1 studied the in situ photopolymerization of pyrrole in a mesoporous TiO2 thin film. First, a layer of TiO2 (thickness ~0.5 µm) was dip-coated onto a fluorine-doped tin oxide (FTO) electrode. The electrode was then immersed into an aqueous solution of pyrrole and methyl viologen (MV 2+) under 365 nm irradiation. This TiO2 band-gap excitation promotes electrons to the conduction band (CB), leaving holes in the valence band (VB).
The electron is then transferred to MV 2+ , while pyrrole is subsequently oxidized by the holes.
In turn, the oxidized pyrrole initiates the oxidative polymerization of pyrrole to form polypyrrole. The photopolymerization process and the mechanism of pyrrole oxidative polymerization are summarized in Scheme V-1. The pore size between the TiO2 NPs coated on FTO was monitored by N2 adsorption isotherms. After 24 hours of irradiation the average pore diameter slightly decreases from 4.7 nm to 4.3 nm, which corresponds to only ~20 % pore filling by the polypyrrole film. This poorly efficient coverage is probably due to several reasons, such as: (i) the competitive UV absorption of polypyrrole, which blocks the light from reaching TiO2; (ii) polypyrrole acting as a barrier preventing other pyrrole monomers in solution from contacting the electrode and being oxidized; and (iii) decomposition of polypyrrole by the photocatalytically active TiO2 NPs. The aforementioned works require UV light to initiate the photopolymerization process, since TiO2 does not absorb visible light. However, such conditions may eventually lead to destruction of the polymer, as TiO2 can photodegrade organic materials under UV light. 3 Incorporating pyrrole and a [Ru(bpy)3] 2+ photosensitizer (PS) allows the polymerization to occur under visible irradiation. One of the pioneering works in this field was done by Deronzier et al. 4 The authors covalently linked the PS and pyrrole units via alkyl chains in a complex named
[Ru(bpy-pyr)3] 2+ (Scheme V-3). The complex can be electropolymerized and deposited on a Pt electrode in either MeCN or H2O, although the polymerization kinetics is much slower in H2O. In air-saturated aqueous solution, the complex can be photopolymerized under visible irradiation (λ > 405 nm), forming a film in less than 30 s. The film is electroactive in the anodic region in both MeCN and H2O (with suitable electrolytes); however, it is not electroactive in the cathodic region in aqueous solution due to its hydrophobicity. The authors later proposed a mechanism 5 for the photopolymerization process (Scheme V-3). The critical steps are the oxidative quenching of the [Ru II(bpy-pyr)3] 2+* excited state by O2 to generate a Ru III species (step 2), followed by an electron transfer from the pyrrole moiety to Ru III, leading to a radical cation on the pyrrole moiety (step 3). This radical then initiates the polymerization (step 4).
The reduction potentials at -1.73, -1.93 and -2.16 V are assigned to reductions localized on the three bpy ligands of the complex, since they are in accordance with those of the [Ru(bpy)3] 2+ complex. 8 The weakly electron-donating -CH2- group on a bpy ligand shifts all the potentials to more negative values than those of [Ru(bpy)3] 2+ (Table V-). The visible absorption maximum of the complex (ε = 13900 M-1.cm-1) is similar to that of [Ru(bpy)3] 2+ at 450 nm (ε = 13000 M-1.cm-1). 8
After excitation at 450 nm, the [Ru-pyr] 2+ complex exhibits a broad emission spectrum with a maximum at 621 nm, characteristic of the Ru 2+* 3MLCT state. Both the visible absorption and emission maxima are only slightly red-shifted compared to those of [Ru(bpy)3] 2+. These shifts are in accordance with literature data for a series of [Ru(bpy)3] 2+ complexes linked to two or three pyrrole groups via alkyl chains 10 and show that the substitution has only a minor influence on the electronic structure of the ground and excited states of the complex. Using the simplified Rehm-Weller equation (Equation V-2), the redox potential of the grafted Ru III/Ru II* couple in TiO2/[Ru-pyr] 2+ is estimated at -1.16 V vs Ag/Ag+ 0.01 M, which is equal to that of the free complex [Ru-pyr] 2+ in solution. This result confirms that the reduction power of the complex is retained on the NPs.

The rate of electron injection from the grafted Ru II* excited state into TiO2 is estimated at 2.4 × 10 7 s-1, which is comparable to that of TiO2/Ru II NPs (3.0 × 10 7 s-1). All the lifetimes and kinetic rates of TiO2/[Ru-pyr] 2+ are in line with those of TiO2/Ru II NPs (Table V-3), since the pyrrole moieties do not interact with the [Ru(bpy)3] 2+* centers, as previously observed. In the meantime, the rate of the energy transfer process is only increased by a factor of 3, making it two orders of magnitude lower than the injection rate in the nanocomposite.

No direct EPR signature of Ru 3+ could be obtained, due to its broad EPR signal (see Section II.3.7). However, a radical is detected for pyrrole, which suggests an efficient reduction of Ru 3+ by the pyrrole moieties even at 20 K. The EPR line width ΔHp-p has been shown to decline with temperature for an electrochemically synthesized polypyrrole film, staying below 0.15 G at 100 K. 14 In our case the large line broadening (ΔHp-p = 9 G) suggests a decrease in polaron mobility due to the immobilization of the polypyrrole. 15

In order to get more insight into the isotropic g-value of 2.0037, DFT calculations were conducted by Dr. Jean-Marie Mouesca (INAC/SyMMES) using the SAOP (Statistical Average of Orbital Potentials) exchange-correlation potential. 16 First, since in our system a long alkyl chain is attached to the nitrogen atom of pyrrole, we performed g-value calculations for pyrrole and pyrrole-CH3 radical cations (Table V-5) to study the effect of the electron-donating methyl group. To simplify the task, we only examined the monomer, dimer and trimer radical cations. The calculated g-values are presented in Table V-6. In both the pyrrole and pyrrole-CH3 cases, the g-values of the cations slightly increase as the polymer chain propagates. In the literature, an electrochemically synthesized polypyrrole film also exhibits a slight increase in g-value when the synthesis time is prolonged. 17 Nevertheless, the differences between the pyrrole and pyrrole-CH3 cations are insignificant. It can be concluded that the introduction of the methyl group has a negligible effect on the isotropic g-values of the cations. Since the g-values of the pyrrole and pyrrole-CH3 cations cannot explain the experimental g = 2.0037, we then investigated the pyrrole=O species (Table V-5). This species is a side product formed between oxygen and the corresponding radical cations 17 (Scheme V-10). As the methyl group has almost no effect on the g-values of the cations, we chose to study the pyrrole=O molecule without the methyl group in order to reduce the computational complexity. Interestingly, the g-value of the dimer cation of this species (2.00399, Table V-6) is very close to the experimental value.
The result suggests that the EPR line centered at g = 2.0037 may be due to the formation of the pyrrole=O dimer as a side product.

The hydrodynamic diameter dH of the particles is related to their diffusion coefficient D by the Stokes-Einstein relation, dH = kT/(3πηD), where k is the Boltzmann constant, T the absolute temperature and η the solvent viscosity (0.341 mPa.s for MeCN at RT). 18 The diffusion coefficient D can be calculated from the correlation function g(τ). For a monodisperse colloid this correlation function decays as a single exponential whose rate is determined by D and by the scattering vector.

V.4.5. X-ray photoelectron spectroscopy

X-ray photoelectron spectroscopy (XPS) is a surface-sensitive technique which allows qualitative and quantitative determination of the elemental composition of a surface. In principle it can detect all elements with Z = 3 (Li) and above. The chemical state of an element, which depends on its oxidation state and on its bonding with the surrounding atoms, can be distinguished by a change in binding energy.
In collaboration with Dr. Anass Benayad (LITEN, CEA-Grenoble), we have conducted a comparative XPS study for TiO2/Ru II , TiO2/[Ru-pyr] 2+ and TiO2/poly(Ru-pyr) NPs. The main purpose was to record any changes in N, Ru and P peaks before and after the photopolymerization and to quantitatively determine the atomic concentration of the elements of interest. TiO2/Ru II was used as reference to monitor spectral changes associated with the incorporation of the pyrrole function. Spectral calibration was based on the Ti2p3/2 peak at 457.9 eV, as this peak of TiO2 support remains unchanged for all the samples. Firstly, XPS survey spectra have been recorded for the three samples (Figure V-13).
The presence of Ti, O, N, C and P is clearly observed. The Ru3d signal appears at a binding energy very close to that of the C1s signal. No other elements are detected.

In an XPS study of polypyrrole, Joo et al. 14 have also shown that the C1s core-level signal could be decomposed into three Gaussian lines. The line due to C-N linkages appeared at a higher binding energy than the line due to C-C linkages. The line at the highest binding energy was characterized by a much greater FWHM than the two lines due to C-C and C-N linkages; it was attributed to disorder in the polypyrrole chains, such as interchain connections and chain terminations. In our study, the line at ~287.5 eV remains almost unchanged in all three samples and its FWHM is comparable to those of the other peaks; it is therefore not expected to be a consequence of polymer chain disorder, since the amount of disorder should increase with the irradiation time. We thus attribute this line to adventitious -C=O/COO-containing species on the particle surface.

The main O1s component is assigned to the oxygen atoms of the TiO2 lattice, while the one at 530.3 eV can be attributed to the oxygen atoms of the phosphonate groups. The line at the highest binding energy makes a very small contribution to the O1s signal and could be ascribed to hydroxyl groups on the TiO2 particle surface. 21 The TiO2/[Ru-pyr] 2+ spectrum displays the same Gaussian lines after deconvolution. However, after photopolymerization only the two lines at lower binding energies remain. The disappearance of the sub-oxide/hydroxyl component is explained by the polypyrrole network coating the particle surface.

The result for the relevant elements is shown in Figure V-19. Firstly, it is noted that the Ru and P contents remain the same after photopolymerization (0.2 % and 0.9 %, respectively), indicating that the PS is stable under the experimental conditions. The Ti content is also unchanged at ~19 %. Slight decreases in the contents of N (bpy) from 1.6 % to 1.2 % and of C (C-N) from 8.0 % to 6.4 % may suggest a small degree of decoordination of the bpy ligands and/or a photosensitizer buried inside the polypyrrole. The decoordination of bpy ligands from [Ru(bpy)3] 2+ is known to occur under prolonged irradiation. 22 However, taking into account the small growth in N (pyrrole) from 1.4 % to 1.6 % while the shape of the Ru3d spectra is unchanged, the burial of the PS inside the polymer chains is probably the main reason for the declines in the N (bpy) and C (C-N) contents during the photopolymerization.

Electrophoretic deposition (EPD) is achieved by applying a bias between a cathode and an anode immersed in a solution of charged particles in a polar solvent. Under the bias, the positively charged particles migrate to the cathode while negatively charged ones drift to the anode. The method has found various applications in depositing charged particles onto a conducting substrate 23 or metal organic framework (MOF) thin films 24 for electronic component fabrication or surface coating. The deposition process can be conveniently controlled and reproduced by adjusting the electric field between the electrodes, the colloid concentration and the deposition time. 23 In this study we found this method suitable for our positively charged nanoparticles / nanocomposites, which could be coated on the cathode. Acetonitrile was chosen as the solvent due to its rather large electrochemical window, which makes it chemically inert in the presence of strong oxidants or reductants such as [Ru(bpy)3] 2+*, ppyr n+ and (e-)TiO2 species.
Moreover, the solvent should not have too high or too low a dielectric constant, in order to favor the EPD process. 23 The solvent should also be sufficiently polar to suspend the NPs well. The deposition was performed under Ar to avoid any undesired side reactions.
To prepare the TiO2/poly(Ru-pyr) nanocomposite-coated FTO electrode, we employed two strategies, named Photo-EPD and EPD-Photo after the order in which the photopolymerization and electrophoretic deposition steps were carried out, for a comparative study.

After deposition, the Ru and N contents remain almost the same while those of Ti and P markedly decrease. Since the XPS technique is only sensitive to a depth of a few nm while the primary TiO2 NPs are ~25 nm in diameter, the result suggests that TiO2, together with the phosphonic anchoring group, may be buried inside the polypyrrole network. The increase in the C (C-C bonds) content, these bonds being present in the C11 chain linking the [Ru(bpy)3] 2+ PS and the pyrrole moiety, supports this explanation. Moreover, the EPD technique does not lead to degradation of the complex, despite the high electric field applied (2.4 × 10 3 V.m-1) during the 1 hour deposition process.

Both nanocomposite electrodes generate larger photocurrents (Photo-EPD: 10.5 µA.cm-2, EPD-Photo: 7.2 µA.cm-2) compared with the FTO/TiO2/Ru II electrode (5.5 µA.cm-2). This result proves the importance of the conducting polypyrrole layer for this light-to-electricity energy conversion application. It is noted that the Photo-EPD strategy allows for a photocurrent 30 % greater than the EPD-Photo strategy.
Photocurrent stability has also been investigated under prolonged visible irradiation (Figure V-32b). After 16 minutes, the electrode fabricated by the Photo-EPD method shows a ~60 % decrease in current magnitude, while that fabricated by the EPD-Photo method exhibits only a 40 % decrease. The latter behavior is similar to that of the FTO/TiO2/Ru II electrode.
Therefore, the electrode fabricated by the Photo-EPD method gives a larger but less stable photocurrent than the one fabricated by the EPD-Photo method. As the Photo-EPD method produces a less homogeneous surface, the deposited aggregates may desorb more easily during irradiation than from the EPD-Photo electrode.

V. Conclusion
The complex [Ru-pyr] 2+, in which a [Ru(bpy)3] 2+ PS is covalently linked to two pyrrole units, has been systematically characterized in solution with respect to its electrochemical and photophysical properties. The complex can be electropolymerized onto an ITO electrode.
Immobilization of the complex on TiO2 NPs via a phosphonic anchoring group has been achieved through a stepwise approach. First, the bpy-PO3H2 ligand was chemically adsorbed on TiO2, before surface complexation between the TiO2/bpy NPs and [Ru(bpy-pyr)2Cl2] was conducted. The resulting hybrid system, called TiO2/[Ru-pyr] 2+, has also been characterized by electrochemical and photophysical methods. Under visible irradiation the [Ru(bpy)3] 2+ PS is excited and injects electrons into the CB of TiO2 at a rate of 2.4 × 10 7 s-1, which is comparable to that of the system without the pyrrole function, i.e. TiO2/Ru II.
In air-saturated MeCN solution, the TiO2/[Ru-pyr] 2+ hybrid NPs show their ability to photopolymerize the pyrrole units under visible irradiation. The nanocomposite exhibits faster electron injection into TiO2 and faster energy transfer between grafted neighboring Ru II* species than the TiO2/[Ru-pyr] 2+ NPs. XPS spectroscopy reveals that the PS conserves its coordination sphere after the polymerization and that a polypyrrole film covers the particle surface. The nanocomposite was deposited on FTO electrodes by electrophoretic deposition following either the Photo-EPD or the EPD-Photo strategy, the former giving the larger photocurrent. However, the Photo-EPD electrode exhibits a less stable photocurrent, which could be related to desorption of the immobilized NPs from the FTO substrate.
GENERAL CONCLUSIONS AND PERSPECTIVES
General conclusions
In this thesis, we used TiO2 nanoparticles (NPs) as a redox-active substrate on which molecular complexes can be immobilized and interact through photoinduced electron transfer. Under visible irradiation, the TiO2/Ru II colloid is able to reduce [Cr(ttpy)2] 3+ in solution to [Cr(ttpy)2]+ in the presence of TEOA. The photoinduced electron transfer occurs through the CB of TiO2. TiO2 therefore acts as an electron relay between Ru 2+* as the electron donor and Cr 3+ as the electron acceptor.
We also aimed to utilize the positive charge on the oxidized Ru 3+ in the photoinduced charge separated state (e-)TiO2/Ru 3+ for the oxidative polymerization of pyrrole. We synthesized a hybrid system containing TiO2 NPs as a scaffold and an immobilized [Ru-pyr] 2+ complex bearing pyrrole units. Under visible light, the pyrrole units are polymerized to form a TiO2/poly(Ru-pyr) nanocomposite which, once deposited on an electrode, generates an anodic photocurrent in the presence of TEOA as a sacrificial electron donor. The enhanced photocurrent for the nanocomposite is attributed to the better conductivity and nanostructuration of the polypyrrole network.
In the case of [Mn(ttpy)(CO)3Br] anchored on TiO2/Ru II NPs, the electrons injected into TiO2 by Ru 2+* cannot be transferred to the Mn I sites because the reduction power of (e-)TiO2 is not sufficient to reduce Mn I. In the presence of a sacrificial electron donor such as 1-benzyl-1,4-dihydronicotinamide (BNAH) and under visible light, the Ru 2+* excited state is first reductively quenched by BNAH to form Ru +, which has a stronger reduction power than Ru 2+*. The Ru + can therefore reduce Mn I to Mn 0 by a lateral electron transfer on the surface of the TiO2 NPs, and not through the CB of TiO2. As a result, TiO2 may not participate in the electron transfer process leading to Mn 0. We propose that the singly reduced Mn 0 complex is the catalytically active species of the Ru II/TiO2/Mn I triad for the photoreduction of CO2.
Photocatalytic CO2 reduction under visible light using the triad and BNAH as a sacrificial electron donor produces HCOOH as the only product, with TONmax = 27 after 16 hours. The role of the TiO2 scaffold in this photoinduced electron transfer process is not yet fully understood; however, it seems to play a more active role than a simply redox-inert scaffold such as SiO2.
Perspectives
Different publications have emphasized the interest of using polypyridyl complexes of Cr(III) in photoredox catalysis. For instance, recent works have shown that [Cr(bpy)3] 3+-based complexes can accelerate the Diels-Alder cycloaddition under visible irradiation and in the presence of O2, owing to their high photooxidation power. 1,2 In this context, [Cr(ttpy)2] 3+ may be beneficial since the redox potential of its excited state (1.14 V vs Ag/AgNO3 0.01 M) is higher than that of [Cr(bpy)3] 3+ (0.89 V 3). Also, our work showed that the lifetime of the [Cr(ttpy)2] 3+* excited state is long enough for electron transfer reactions. Moreover, with the phosphonic acid derivatization we can immobilize the complex on NPs for easier separation of the catalyst after the reactions.
An older publication also reported the possible reduction of CO2 to HCHO using a polyvinyl film bearing [Cr(tpy)2] 3+ (tpy = 2,2':6',2"-terpyridine) units as an electrocatalyst. 4 In this thesis, we studied the electroreduction of H+ to H2 from trifluoroacetic acid (TFA) in MeCN catalyzed by [Cr(ttpy)2] 3+. It would be interesting to test the Ru 2+/TiO2/Cr 3+ triad in photocatalytic proton reduction as well. In order to regenerate the Ru 2+ PS, an electron donor like TEOA or ethylenediaminetetraacetic acid (EDTA) is usually used in non-aqueous and aqueous media, respectively. However, these donors only work in basic solutions, whereas TFA is a relatively strong acid (pKa = 12.8 in MeCN 5). Therefore, another electron donor and/or proton source should be employed to drive the photocatalytic proton reduction using this triad.
In the photocatalytic CO2 reduction experiments using the Ru II/TiO2/Mn I triad, the use of BNAH as an electron donor has several disadvantages, as recently described. 6 Firstly, the reductive quenching of the Ru II* excited state by BNAH is not complete. Secondly, after formation of Ru I and BNAH +, a fast back electron transfer occurs between them, which prevents utilization of the charge on the reduced Ru I species. Thirdly, (BNA)2, the oxidation product of BNAH, quenches the Ru II* excited state more efficiently than BNAH does and can terminate the catalytic cycle.
Therefore, the photocatalytic result may be enhanced if 1,3-dimethyl-2-phenyl-2,3-dihydro-1H-benzo[d]imidazole (BIH) or 1,3-dimethyl-2-(o-hydroxyphenyl)-2,3-dihydro-1H-benzo[d]imidazole (BI(OH)H) (Scheme …) is used as the sacrificial electron donor instead of BNAH.

The catalysts could also be incorporated into a nanocomposite and deposited on an electrode for proton reduction. In this way, the loading of these catalysts per geometric electrode area should be higher than when simply grafting them on a flat electrode surface.
Chemicals
Solvents: Acetonitrile (CH3CN), dichloromethane (CH2Cl2) and ethanol (C2H5OH, all purchased from Fisher, HPLC grade), dimethyl sulfoxide (DMSO, Acros, anhydrous 99.7%), N,N-dimethylformamide (DMF, Acros, anhydrous 99.8%), ethylene glycol (Prolab), chloroform (CHCl3, Carlo Erba, HPLC grade), diethyl ether (Aldrich, 99.8%), hexane (95%), pentane (Carlo Erba), methanol (SDS anhydrous, analytical grade) and acetone (Aldrich, 99.5%) were used as purchased without any further purifications. Dry solvents were obtained by distillation under Argon. Distilled water was prepared with a Milli-Q system.
Reagents: all reagents have been used without further purifications, unless otherwise stated.
Nanoparticles (NPs): commercially available TiO2 (anatase, d < 25 nm, Aldrich), TiO2
(Degussa P25) and SiO2 (d < 20 nm, Aldrich) were used as received.
Others: Column chromatography was carried out on silica gel 60 (Merck, 70-230 mesh). Thin layer chromatography (TLC) was performed on plates coated with silica gel 60 F254.
Apparatus and techniques
Nuclear magnetic resonance (NMR): 1H NMR, 13C NMR and 31P NMR spectra were recorded with a 400 or 300 MHz Bruker spectrometer at room temperature (RT). Chemical shifts in the 1H NMR spectra were referenced to residual solvent peaks. Coupling constants (J) and chemical shifts (δ) are given in Hz and ppm, respectively. The abbreviations used for the description of the peaks are as follows: s = singlet, d = doublet, t = triplet, q = quartet, m = multiplet, dd = doublet of doublets, dt = doublet of triplets.
UV-visible absorption spectroscopy (UV-vis):
UV-vis spectra of a homogeneous solution or a colloid in a conventional quartz cell were recorded with a Varian Cary 300 or a MCS 500 UV-vis-NIR Zeiss spectrophotometer. To record spectra of a powder or a thin film on electrode, a Perkin Elmer Lambda 650 spectrometer and an integration sphere were used.
The NPs of interest were mixed with KBr, then a hydraulic press was used to make a pellet for the experiments.
Luminescence: Emission spectra were recorded with a Fluoromax-4 spectrometer (Horiba Scientific) in a quartz cuvette with four transparent faces. Samples were purged with Ar for 15 minutes prior to the experiment. The luminescence quantum yield was determined by the relative method against a reference emitter.

Time-resolved emission spectroscopy: Spectra were recorded by time-correlated single photon counting (TCSPC) after the samples were excited at 400 nm with a picosecond Nd:YAG laser. The decay of the [Ru(bpy)3] 2+* state at 610 nm was measured with a PicoHarp 300 TCSPC detector (PicoQuant). The fitting of the luminescence decays was performed with the FluoFit software (PicoQuant). In the case of a colloid, light scattering was taken into account by subtracting the decay recorded at 400 nm from the decay recorded at 610 nm prior to the fitting.
Transient absorption spectroscopy: Spectra were acquired using a LP920K system (Edinburgh Instruments) equipped with a nanosecond Nd:YAG laser (Brilliant -Quantel).
Pump excitation of the sample was achieved by a third harmonic (355 nm) or a second harmonic (532 nm) module. A Xe900 pulsed Xenon lamp was used to generate a probe light.
The photons were dispersed using a monochromator, converted into an electrical signal by an R928 photomultiplier (Hamamatsu), and recorded on a TDS 3012C oscilloscope (Tektronix). All samples were purged with Ar for 15 minutes prior to the measurement.
Electrochemistry: All electrochemical measurements were performed in a standard three-electrode cell. Half-wave potentials were taken as E1/2 = (Epa + Epc)/2, where Epa and Epc are the anodic and cathodic peak potentials, respectively.
The surface coverage Γ (mol.cm-2) of a redox complex grafted on an ITO or FTO electrode is calculated as follows:

$$\Gamma = \frac{Q}{nFA}$$

where Q (C) is the charge obtained by integrating the oxidation or reduction peak, n is the number of electrons transferred, F = 96500 C.mol-1 is the Faraday constant, and A (cm2) is the apparent electrode area.
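A minimal numerical sketch of this calculation is given below; the integrated charge and electrode area are illustrative values only, not data from this work.

```python
# Surface coverage Gamma = Q / (n F A) with illustrative inputs.
FARADAY = 96500.0      # C mol^-1

Q = 2.0e-5             # C, charge under the Ru(III)/Ru(II) peak (assumed)
n = 1                  # one-electron process
A = 1.0                # cm^2, apparent electrode area (assumed)

gamma = Q / (n * FARADAY * A)                 # mol cm^-2
print(f"Gamma = {gamma:.2e} mol cm^-2")       # ~2.1e-10 mol cm^-2, a typical monolayer range
```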
Electrocatalysis: Experiments were carried out at RT under Ar (for proton reduction) or CO2/CH4 (95/5 v/v) mixture (for CO2 reduction) in a sealed conventional three-electrode cell.
The solvent was MeCN + TBAPF6 (0.1 M), to which water may be added in some CO2 reduction experiments. Reference and counter electrodes were Ag/AgNO3 0.01 M and Pt plate, respectively. Working electrode was a carbon plate (8 cm 2 ) which had been cleaned by diamond paste and washed with ethanol prior to experiment. In a normal setting with 3 electrodes and 10 mL solution, the headspace volume was measured to be 170 mL. During the experiments, at each time interval a 100 µL sample was taken from the headspace gas using a gas tight injection syringe. Gas products were analyzed with the gas chromatographs described below. At the end of the experiments, a liquid sample was taken for HCOOH analysis.
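For illustration, the conversion of a measured headspace gas fraction into an amount of product can be sketched as follows, assuming ideal-gas behaviour. Only the 170 mL headspace volume is taken from the description above; the gas fraction, temperature and pressure are assumptions.

```python
# Headspace quantification sketch (ideal-gas assumption).
R = 8.314            # J mol^-1 K^-1
T = 298.0            # K, room temperature (assumed)
P = 1.013e5          # Pa, ~1 atm (assumed)
V_headspace = 170e-6 # m^3 (170 mL, from the text)

x_co = 5.0e-4        # measured CO mole fraction in the headspace (assumed, 0.05 %)

n_total = P * V_headspace / (R * T)   # total moles of gas in the headspace
n_co = x_co * n_total
print(f"n(CO) = {n_co * 1e6:.2f} umol")
```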
Quartz crystal microbalance with energy dissipation (QCM-D): All QCM-D experiments were run at a flow rate of 10 L/min, using SiO2-or Ti-coated quartz crystals (Q-Sense). All solutions were kept in a thermomixer (Ependorf) for a constant temperature of 21 0 C during the experiment. A pretreatment of the surface before measurement is required to improve the surface cleanliness. The SiO2-coated sensor was sonicated in a 2 % SDS aqueous solution for 20 minutes, washed with water, then exposed to UV-O3 treatment for 30 minutes.
The Ti-coated sensor was rinsed in Hellmanex 1 % solution for 30 minutes, washed with water, dried and rinsed in pure ethanol for 10 minutes, then exposed to UV-O3 treatment for 2 hours to oxidize the surface to TiO2. Finally, both sensors were rinsed again in ethanol for 20 minutes, dried under N2 before a QCM experiment.
Photocatalytic CO2 reduction: A glass tube (12.5 mL) was charged with a [Ru(bpy)3] 2+ photosensitizer complex and a catalyst in DMF/TEOA mixture (5 mL, CTEOA = 1 M). The PS and catalyst can be homogeneous or anchored on TiO2 NPs. The mixture was sonicated for a few minutes prior to be purged with Ar (20 mins) and then CO2/CH4 (95/5 v/v) mixture (20 mins). The dead volume was estimated at 7.5 mL. Afterwards, the tube was irradiated with a Xenon lamp (4 cm apart) in the presence of a UV filter and a 450 nm bandpass filter in order to selectively excite the PS. The light power is estimated to be 0.3 mW.cm -2 . During the experiment, at each time interval a 100 µL sample was taken from the headspace gas using a gas tight injection syringe. Gas contents were analyzed by GC which is described below. At the end of the experiment, a liquid sample was taken to analyze HCOOH content by HPLC.
Gas chromatography: Gas samples (100 µL) were taken from the headspace of the tube containing the reaction mixture. If the gas phase only contained H2, the analysis was run with a Perkin Elmer Autosystem XL gas chromatograph equipped with a 5 Å molecular sieve column (oven temperature = 32 °C) and a thermal conductivity detector (TCD), using argon as the carrier gas. If the gas phase contained both CO and H2, the analysis was conducted with a Perkin Elmer Clarus 500 gas chromatograph equipped with a PDID detector, a 30 m Carboplt 1010 column (Antelia) and a 560S mass spectrometer operated with the TurboMass 5.4.2 program; helium was used as the carrier gas. The gas contents were calculated against 5 % CH4 as an internal reference in the gas phase, and the results were compared to a standard gas mixture of CO, H2 and CO2 (Air Liquide).

Transmission electron microscopy (TEM): Experiments were conducted in collaboration with Dr. Jean-Luc PUTAUX (CERMAV) using a Philips CM200 cryomicroscope operating at 80 kV, at RT and under high vacuum. A small volume of colloid was deposited onto a glow-discharged copper grid and the solvent was evaporated.
X-ray photoelectron spectroscopy (XPS): Experiments were conducted in collaboration with Dr. Anass BENAYAD (LITEN/CEA Grenoble). The XPS analyses were performed with a ULVAC PHI 5000 VersaProbe II spectrometer using AlKα X-ray radiation (1486.6 eV).
The residual pressure inside the analysis chamber was 7 × 10-8 Pa. A fixed analyzer pass energy of 23 eV was used for core-level scans, leading to an overall energy resolution of 0.6 eV. Survey spectra were captured at a pass energy of 117 eV. All spectra were referenced against an internal signal, typically by adjusting the Ti2p3/2 peak to a binding energy of 457.9 eV. The XPS spectra were fitted using the MultiPak V9.1 software, in which a Shirley background is assumed and the fitting peaks of the experimental spectra are defined by a combination of Gaussian (80 %) and Lorentzian (20 %) distributions.
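A minimal sketch of the fixed 80/20 Gaussian/Lorentzian line shape used for such peak fitting is given below (the Shirley background subtraction is not reproduced); the peak position and width in the example are illustrative, not fitted values from this work.

```python
import numpy as np

def pseudo_voigt(E, E0, fwhm, area=1.0, gauss_fraction=0.8):
    """Fixed-ratio sum of a normalized Gaussian (80 %) and Lorentzian (20 %)
    centered at binding energy E0 (eV), of the kind used for core-level fits."""
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    gauss = np.exp(-0.5 * ((E - E0) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))
    lorentz = (fwhm / (2.0 * np.pi)) / ((E - E0) ** 2 + (fwhm / 2.0) ** 2)
    return area * (gauss_fraction * gauss + (1.0 - gauss_fraction) * lorentz)

# Example: a Ti2p3/2-like component at 457.9 eV with an assumed 1.2 eV FWHM.
E = np.linspace(455.0, 461.0, 601)
y = pseudo_voigt(E, E0=457.9, fwhm=1.2)
```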
Synthesis
General conditions:
In case of air-sensitive syntheses, a Schlenk technique or a glovebox was used. The flasks and other glassware were dried in oven overnight prior to the syntheses. Loadings of metal complexes on NPs were roughly estimated by measuring the UV-vis absorbance of supernatant solutions after centrifugation which indicates the amount of unadsorbed complexes.
Ligands and complexes
[Ru(bpy)2(dmbpy)](PF6)2. A mixture of [Ru(bpy)2Cl2] (52 mg, 0.10 mmol) and 4,4'dimethyl-2,2'-bipyridine (dmbpy, 20.3 mg, 0.11 mmol, 1.1 equiv) in ethylene glycol (anhydrous, 5 mL) was refluxed at 120 0 C in 2 hours under Ar. The solution color gradually changed from purple to dark red. After the mixture was allowed to cool to RT, an excess amount of saturated solution of KPF6 in water was added, resulting in immediate red precipitates. The powder was filtered, then solubilized in acetone to remove the excess KPF6 salt. Subsequently, the solution was evaporated to give red crude product. Small amounts of acetone and diethyl ether were used to solubilize the crude product and then evaporated several times to completely eliminate water. Finally a red fine powder was obtained (70 mg, 80 % yield). [Cr(ttpy-PO3H2)2](ClO4)3 ([Cr2P] 3+ ). In a glove box, ttpy-PO3H2 (30 mg, 0.077 mmol) was dissolved in 5 mL DMSO. To this was added CrCl2 (4.5 mg, 0.037 mmol) dissolved in 2 mL H2O. Upon addition, the solution turned dark purple in color. A further 40 mL of H2O was added and the solution stirred for 1 h at 25 0 C. Three spatula-fulls of LiClO4 were added but no precipitation was observed. The purple solution was removed from the glove box and stirred for 12 h at RT whilst exposed to air. The resulting dark yellow solution was filtered to afford the product [Cr2P] 3+ as a pale yellow solution. The solvent was evaporated under vacuum to afford a grey/yellow solid (25 mg, 60 %). [Cr(ttpy)Cl3]. The complex has been synthesized following reported procedures. 2,3 To 10 mL EtOH was added anhydrous CrCl3 (0.19 g, 1.17 mmol) then 4'-tolyl-2,2':6',2"-terpyridine (ttpy, 0.48 g, 1.49 mmol). The mixture was heated to reflux then granulated zinc (42 mg, 0.64 mmol) was added. The mixture color turned to khaki green. It was stirred vigorously for 12 h at 80 °C. The khaki green precipitate was filtered, washed twice with EtOH and dried under high vacuum (0.45 g, 79 %).1 H NMR cannot be done because the product is paramagnetic.
[Cr(ttpy)(OTf)3] (OTf = CF3SO3 -). The complex has been synthesized following reported procedures. 2,3 To [Cr(ttpy)Cl3] (0.20 g, 0.42 mmol) was added CF3SO3H (3.68 g, 24.5 mmol) giving a yellow/brown solution which was stirred for 12 h at RT. The reaction was cooled to 0 °C and 30 mL Et2O was added. The solid produced was separated by filtration, washed with Et2O and the resulting brown solid dried under high vacuum (0.21 g, 59 %). 1 H NMR cannot be done because the product is paramagnetic.
[Cr(ttpy)(ttpy-PO3H2)](PF6)3 ([Cr1P] 3+ ). The complex has been synthesized following reported procedures. 2,3 Precursors [Cr(ttpy)(OTf)3] (25 mg, 0.029 mmol) and ttpy-PO3H2 (10.5 mg, 0.027 mmol) were dissolved in 8 mL degassed MeCN plus 8 mL degassed DMSO and the brown solution was stirred at 80 0 C under an Ar atmosphere for 48 h. The solution was cooled to RT and an excess amount of KPF6 in water added. No obvious precipitate was formed, so the solution was filtered to remove any insoluble impurities prior to be used for grafting [Cr1P] 3+ onto ITO electrodes or TiO2 NPs.
ttpy-(CH2)3-PO3H2.
The synthesis requires three consecutive steps as follows:
(i) Step 1: Synthesis of [3-(4-formyl-phenoxy)-propyl]-phosphonic acid diethyl ester: 4-hydroxybenzaldehyde (200 mg, 1.637 mmol) was dissolved in 5 mL of acetonitrile before addition of K2CO3 (340 mg, 2.457 mmol, 1.5 equiv) and diethyl (3-bromopropyl)phosphonate (0.35 mL, 1.802 mmol, 1.1 equiv). The reaction was heated at 50 °C overnight. After cooling to RT, 50 mL of water was added and the solution was extracted with 50 mL of CH2Cl2 (3 times). The organic layer was washed with brine and dried over MgSO4. After filtration and evaporation, the crude product was purified by chromatography on silica gel with ethyl acetate as eluent to give 389 mg (1.295 mmol) of the pure compound as a white solid (79 % yield). 13C NMR (δ, ppm): 190.8, 163.9, 132.1, 130.1, 114.8, 67.9, 67.7, 61.8, 61.7, 23.3, 22.6, 21.4, 16.6, 16.5.
(ii) Step 2: Synthesis of [3-(4-[2,2';6',2'']-terpyridin-4'-yl-phenoxy)-propyl]-phosphonic acid diethyl ester
The aldehyde (558 mg, 1.858 mmol) was dissolved in 12 mL of methanol before addition of 2-acetylpyridine (0.42 mL, 3.1716 mmol, 2 equiv), KOH (208 mg, 3.1716 mmol, 2 equiv) and 12 mL of NH4OH (30%). The reaction was heated at 50 °C during 3 days. After cooling to RT, the solvent was evaporated and the crude was purified by chromatography on silica gel. The elution started with AcOEt and went on with MeOH 2%, 5% before finishing with a mixture of AcOEt/MeOH/Et3N (94/5/1). The crude compound was dissolved in EtOH and precipitated with addition of water. After filtration, the compound was recrystallized twice with a mixture of AcOEt/Cyclohexane to give 209 mg (0.415 mmol) of a white powder with a 22 % yield. 9, 156.3, 155.7, 150.0, 149.0, 137.4, 130.9, 128.8, 124.0, 121.7, 118.7, 115.1, 67.7, 67.6, 61.8, 61.7, 23.2, 22.9, 21.8, 16.7, 16.6. 31 (iii) Step 3: Synthesis of [3-(4-[2,2';6',2'']-terpyridin-4'-yl-phenoxy)-propyl]-phosphonic
To a solution of bromotrimethylsilane (TMSBr, 0.315 mL, 2.383 mmol, 10 equiv) in 5 mL of anhydrous CH2Cl2 under inert atmosphere, 120 mg of the phosphonic ester dissolved in 5 mL of CH2Cl2 was added dropwise in 5 minutes at RT. After two hours of reaction, 1 mL of a mixture acetone/water (1/1) was added. After one more hour, the solvent was evaporated and the crude was triturated in 20 mL of hot ethanol. After cooling to RT, the compound was filtrated and dried. A yellow powder was obtained (126 mg). The yield cannot be determined because of the unknown amount of HBr in the molecule. 2, 152.1, 151.2, 150.1, 146.7, 141.6, 128.6, 128.2, 126.0, 123.0, 119.2, 115.2, 67.8, 67.6, 24.6, 23.2, 22.8. 31
Nanoparticles and their surface modifications
SiO2 nanoparticles. The ultrasound-assisted synthesis of SiO2 NPs follows the Stober's method 5 with some modifications 6 . To a mixture of 50 mL ethanol and water was added 4 mL tetraethyl orthosilicate (TEOS), then sonicated for 2 h. Ammonia was added to the mixture at different feed rates to achieve >100 nm NPs (dropwise) or <10 nm NPs (0.05 mL.min -1 ). A milky gel was formed after 5 hours of sonication. It was separated from the solution by centrifugation, washed with ethanol, dried in an oven at 80 0 C overnight and calcined at 250 0 C for 2 hours to yield a white powder.
TiO2 nanoparticles. The synthesis of anatase TiO2 NPs follows a modified sol-gel process. 7 To a flask containing 50 mL pure ethanol was dropwise added 6 mL (0.02 mol) titanium isopropoxide (TTIP), then the mixture was vigorously stirred for 30 minutes to obtain a milky solution. A solution of 2 mL H2O/HCl (3/1, v/v) was added dropwise to the TTIP mixture.
After 2 hours the solution became clear and was allowed to peptize by refluxing at 70 °C for 24 hours, followed by aging at RT overnight. The nanoparticles were then separated.

Abstract: This thesis aims to investigate the possibility of using TiO2 nanoparticles (NPs) as a platform to immobilize proximal coordination complexes that can interact with each other by photoinduced electron transfer. We have studied hybrid nanomaterials combining [Ru(bpy)3] 2+ (bpy = 2,2'-bipyridine) as a photosensitizer and [Cr(ttpy)2] 3+ or [Mn(ttpy)(CO)3Br] (ttpy = 4'-(p-tolyl)-2,2':6',2''-terpyridine) as electron acceptors. To immobilize the various complexes on the surface of TiO2, a phosphonic acid functional group was introduced on one of the bipyridines of the [Ru(bpy)3] 2+ center and on the terpyridines of the [Cr(ttpy)2] 3+ complex. Under visible light, the TiO2/Ru II colloid undergoes a photo-induced charge transfer process leading to a long-lived charge-separated state (e-)TiO2/Ru III, which can then be engaged in successive oxidation or reduction reactions. In particular, visible irradiation of the TiO2/Ru II colloid in the presence of [Cr(ttpy)2] 3+ and triethanolamine (TEOA) as a sacrificial electron donor allows the two-electron reduction of [Cr(ttpy)2] 3+. Subsequently, the [Cr(ttpy)2] 3+ complex has been immobilized on the TiO2/Ru II NPs to form a Ru II/TiO2/Cr III assembly in which the photoinduced electron transfer processes were investigated. In order to propose a system for the photocatalytic reduction of CO2, the [Mn(ttpy)(CO)3Br] and [Ru(bpy)3] 2+ complexes were co-immobilized on TiO2 NPs following a chemistry-on-surface approach to form a Ru II/TiO2/Mn I triad. Under irradiation at 470 nm, this system exhibits excellent selectivity towards HCOOH as the only product of CO2 photoreduction in a DMF/TEOA solvent mixture, in the presence of 1-benzyl-1,4-dihydronicotinamide (BNAH) as a sacrificial electron donor. Another hybrid system, linking a [Ru(bpy)3] 2+ unit to two pyrrole functions and immobilized on TiO2, has also been synthesized and studied. Under visible light, the transient (e-)TiO2/[Ru-pyr] 3+ species induce the polymerization of pyrrole to form a TiO2/poly(Ru-pyr) nanocomposite. The nanocomposite deposited on an electrode generates, in the presence of TEOA, a stable anodic photocurrent of more than 10 μA.cm-2. All the results show that TiO2 NPs can be used to associate different complexes in a close environment, limiting their interactions in the ground state while allowing photoinduced electron transfer processes between them. Depending on the redox potentials of the different components, the electron transfer takes place either through the semiconducting NPs or on their surface.
Acknowledgement: This thesis was funded by the Grenoble LabEx Arcane (ANR-11-Labx-003-01).
Figure I- 1 .
1 Figure I-1. Representation of TiO2 crystal phases: (a) anatase: tetragonal, a = 3.785 Å, c = 9.513 Å; (b) rutile: tetragonal, a = 4.593 Å, c = 2.959 Å); (c) brookite: orthorhombic, a = 9.181 Å, b = 5.455 Å, c = 5.142 Å. Adapted from reference 2
Figure
Figure I-2.
Figure I- 2 .
2 Figure I-2. Positions of the upper edge of the valence band (green) and the lower edge of the conduction band (red) of several semiconductors in contact with aqueous electrolyte at pH 1. Adapted from reference 7
2 Figure I- 3 .
23 Figure I-3. Representation of the photon absorption and emission of TiO2 in anatase and rutile phases. Straight and wavy lines indicate radiative and non-radiative decays, respectively. Adapted from reference 2
Figure I- 4 .
4 Figure I-4. Molecular structure of [Ru(bpy)3] 2+ photosensitizer
Figure I- 5 .
5 Figure I-5. Cyclic voltammogram (CV) of [Ru(bpy)3] 2+ in MeCN + 0.2 M TEAP (tetraethylammonium perchlorate) solution. Adapted from references 16,17
Figure I- 6 .
6 Figure I-6. (a) Simplified molecular orbital diagram of octahedral Ru(II) polypyridine complexes with arrows indicating electronic transitions occuring in the UV-vis region. 18 (b) Simplified Jablonski diagram indicating radiative and non-radiative decays of the excited states of Ru(II) polypyridine complexes. 20 MLCT = metal-toligand charge transfer, MC = metal-centered transition, LC = ligand-centered transition, GS = ground state.
kr and knr are the rate constants of radiative and non-radiative decays, respectively. The UV-vis absorption and emission spectra of [Ru(bpy)3] 2+ are shown in Figure I-7 together with proposed assignments for the electronic transitions. 17 In the absorption spectrum (Figure I-7a), the strong band at 285 nm is assigned to spin allowed LC * transition. The two broad bands at 240 nm and 450 nm are attributed to the MLCT d * transitions. The shoulders at 322 and 344 nm are assigned to the MC d d transitions. Excitation of [Ru(bpy)3] 2+ in any of its absorption bands leads to the emitting 3 MLCT state. This emission is centered at around 600 nm where its energy, intensity and lifetime are dependent on the temperature and the solvent (Figure I-7b). In deaerated acetonitrile and at room temperature the lifetime of this luminescence is estimated to be around 1 µs with a quantum yield of em= 0.06.
Figure I- 7 .I. 2 . 3 .I. 2 . 3 . 1 .
723231 Figure I-7. (a) Electronic absorption spectrum of [Ru(bpy)3] 2+ in alcoholic solution. (b) Emission spectrum of [Ru(bpy)3] 2+ in alcoholic solution at 77 K (solid line) and room temperature (dashed line). Adapted from reference 18 I.2.3. Quenching of the [Ru(bpy)3] 2+* excited state In the presence of suitable electron donors (D) and acceptors (A), the 3 MLCT excited state of [Ru(bpy)3] 2+ can take part in bimolecular reactions resulting in the quenching of its luminescence. The quenching reaction is governed by the nature of the quenchers. It can occur via an energy transfer reaction, oxidative or reductive quenching reaction as follows: [Ru(bpy)3] 2+* + Q [Ru(bpy)3] 2+ + Q * Energy transfer [Ru(bpy)3] 2+* + Q [Ru(bpy)3] 3+ + Q - Oxidative quenching [Ru(bpy)3] 2+* + Q [Ru(bpy)3] + + Q + Reductive quenching
Figure I- 8 .
8 Figure I-8. Illustration of the Forster and Dexter energy transfer mechanisms 23
Figure I- 9 .
9 Figure I-9. Electronic configuration of the [Ru(bpy)3] 2+ complex in ground state and excited stateAfter calculation of the potentials of all the redox couples which could be involved, a first approximation of the exergonicity of the photo-induced electron transfer process can be evaluated using the derived equation by Weller:25
Figure I-10 shows some structures mentioned in this publication. Table I-3 summarizes important adsorption parameters such as monolayer grafting density , adsorption constant Kads and free binding energy G.
Figure I- 10 .
10 Figure I-10. Structures of some derivatives of phosphonic anc carboxylic acids anchored on TiO2 NPs 27Table I-3. Calculated adsorption parameters: monolayer grafting density , adsorption constant Kads and free binding energy G mentioned in reference 27
using X-ray Absorption Spectroscopy. A [Ru(dcb)2(NCS)2] 2+ dye (Dye N3, Scheme I-2d) was anchored on 20-nm anatase TiO2 colloidal solution. The trapping sites, located on the surface, were shown to be completely populated within 70 ps. The lifetime of the trapped electrons was found to be slightly higher for dye-sensitized TiO2 NPs than bare TiO2, lying on a nanosecond time scale. Two types of surface trapping sites were distinguished as sixfold-and fivefold-coordinated Ti 3+ sites located at energies of 1.6 eV and 1.2 eV below the CB respectively.
photoexcitation of the dye-sensitized film, the electron injection which occurred directly from the1 MLCT delocalized excited state was too fast to be observed, suggesting a sub-hundred femtosecond process. The injection from the 3 MLCT state of the attached bpy was also not detectable. The slower component of the injection occurred from the thermalized triplet excited state in a multiexponential fashion with time constants ranging from 1 to 50 ps. This slower process results from an excited state localized on a bpy ligand of the Ru(II) dye that is not attached to the TiO2 surface. This electron needs to be first transferred to the attached bpy ligand by interligand charge transfer (ILCT) and then injected to TiO2. ILCT efficiency is claimed to control the triplet channel of electron transfer from the dye. It can be changed by chemical modification of the bpy ligand and by the solvent. Scheme I-4 illustrates the discussed charge injection pathways from the N3 dye to TiO2 film. Therefore, localization of the initially photoexcited electron from the MLCT state and its orientation with respect to the semiconductor surface play an important role in the kinetics of electron injection.
Scheme I-5. (a) Structure of the Ru II /MOx/CODH hybrid system: MOx is either TiO2 (P25), anatase TiO2, rutile TiO2, ZnO or SrTiO3 NPs; D is a sacrificial electron donor (2-(N-morpholino)ethanesulfonic acid. (b) CO production under visible irradiation using various MOx substrates. From a molecular approach, Durrant, Reisner et al. 49 have proposed to graft [Ru(bpy)3] 2+ as a PS and cobaloxime as a catalyst on TiO2 NPs (Scheme I-6) for proton reduction reaction. This reaction requires two electrons to produce H2. Using transient absorption spectroscopy and time-resolved emission spectroscopy, the authors showed that the electron transfer from the Ru II* excited state and Co III site occurs through the conduction band (CB) of TiO2. The long-lived charge separated state Ru III /(e -)TiO2/Co III (1/2 ~ 400 ms)allowed the Co III sites to be reduced by the CB electrons of TiO2. However, there were no experimental evidences for the accumulation of two electrons on the Co III site, although it
Scheme II-1. Synthesis procedure for [Ru(bpy)2(bpy-PO3Et2)] 2+ ([Ru-PO3Et2] 2+ ) and [Ru(bpy)2(bpy-PO3H2)] 2+ ([RuP] 2+ )
Scheme II-2. Synthesis of [Ru(bpy)2(dmbpy)](PF6)2
Figure II- 1 .
1 Figure II-1. (a) UV-vis spectra and (b) emission spectra of [Ru(bpy)3] 2+ (black), [Ru(bpy)2(dmbpy)] 2+ (red) and [Ru-PO3Et2] 2+ (blue) in MeCN under Ar. Emission spectra were recorded in a fluorescent cuvette after 450 nm light excitation.
Figure II- 2 .
2 Figure II-2. Time-resolved emission spectra of (a) [Ru(bpy)3] 2+* , (b) [Ru(bpy)2(dmbpy)] 2+* and (c) [Ru-PO3Et2] 2+* in MeCN under Ar. The samples were excited at 400 nm using a picosecond pulsed laser, while the emitted photons were collected at 610 nm using a TCSPC photometer. The red curves represent the monoexponential fitting of the decays
c)
Transient absorption spectroscopy TAS was employed to study the UV-vis absorption of photoexcited [Ru-PO3Et2] 2+* species. Following the excitation by a nanosecond pulsed laser at 532 nm, the complex shows a bleaching at 450 nm which recovers to the initial value after ~2000 ns (Figure II-3a). The bleaching is associated with an increased absorbance at 370 nm. The TA spectra are characteristic of the photoexcited state [Ru III (bpy -)(bpy)(bpy-PO3Et2)] 2+* . The bleaching at 450 nm is fit with monoexponential function to yield a lifetime of 931 ± 5 ns (Figure II-3b).
Figure II- 3 .
3 Figure II-3. (a) TA spectra of [Ru-PO3Et2] 2+ complex in Ar-saturated MeCN solution at various time intervals after the sample was photoexcited by a 532 nm nanosecond pulsed laser. The data points at 530 nm have been removed to avoid light scattering from the 532 laser. (b) Signal bleaching at 450 nm (black) fit with a monoexponential function (red) and the fitting residual.
Figure II- 4 ,
4 Figure II-4, while the redox potentials are collected inTable II-2. Both [Ru(bpy)2(dmbpy)] 2+
Figure II- 4 .
4 Figure II-4. Cyclic voltammograms of (a) [Ru(bpy)2(dmbpy)] 2+ and (b) [Ru-PO3Et2] 2+ (0.5 mM) in MeCN + 0.1 M TBAPF6. WE = C disk (d = 3 mm), CE = Pt, RE = Ag/AgNO3 0.01 M, v = 100 mV.s -1
Figure II-5. EPR signals and simulations of (a) [Ru(bpy)3] 3+ and (b) [Ru(bpy)3] + produced by exhaustive electrolysis of 1 mM [Ru(bpy)3](PF6)2 in MeCN + 0.1 M TBAPF6. The signals were recorded with an X-band EPR spectrometer (f = 9.65 GHz, 2 mW) at 10 K (signal a) and 60 K (signal b). They were accumulated in 80 s.
Scheme II-3. Synthesis procedures of SiO2 NPs with various sizes
Figure II-8. (a) TEM and (b) FE-SEM micrographs of SiO2 NPs with diameter < 10 nm
Scheme II-4. Synthesis procedures of TiO2 NPs with anatase and/or rutile phases
In collaboration with Prof. Isabelle Gautier-Luneau (Institut Néel - Grenoble, CNRS), the morphology of the two TiO2 NPs was studied with X-ray diffraction (XRD) and compared with the standard values recorded in JCPDS No. 21-1272 (anatase), JCPDS No. 21-1276 (rutile) and JCPDS No. 29-1360 (brookite). XRD patterns of the two TiO2 samples are presented in Figure II-9. The sample annealed at 450 °C shows XRD peaks at 2θ = 25.3°,
Figure II-9. XRD patterns of TiO2 NPs annealed at (a) 450 °C and (b) 700 °C. The black and blue numbers indicate anatase and rutile lattices, respectively. The asterisks mark the peaks of the rutile phase.
Figure II-10 shows the FE-SEM micrographs of the synthesized anatase and anatase/rutile TiO2 NPs. The primary particle size is estimated to be <10 nm for anatase TiO2 and 30-40 nm for anatase/rutile TiO2. The increased size of the TiO2 NPs due to the thermal treatment is in accordance with the literature. 17 The aggregation of the NPs offers larger crystallite domains than the individual particle size. The aggregation precludes statistical studies of the size distribution.
Figure II-10. FE-SEM micrographs of TiO2 NPs annealed at (a) 450 °C and (b) 700 °C
II.3.2. In situ observation of [RuP] 2+ dye adsorption on surface
Before grafting the [RuP] 2+ photosensitizer on SiO2 and TiO2 NPs, we studied the dye adsorption on their analogous flat surfaces to determine the dye concentration, temperature and time required, as well as the surface coverage. The adsorption is conveniently monitored by the Quartz Crystal Microbalance with Energy Dissipation (QCM-D) technique, which is based on the change in vibrational frequency of the quartz crystal upon the adsorption of an analyte.
Figure II-11. QCM-D responses for the adsorption of [RuP] 2+ complex on (a) SiO2 and (b) TiO2 coated sensors. Red and blue lines represent changes in frequency and dissipated energy, respectively, at various overtones. All experiments were run at 21 °C.
Figure II-12. (a) FT-IR spectra of: (1) [RuP] 2+ , (2) anatase TiO2, (3) TiO2/Ru II , (4) the subtraction of signal (2) from (3). (b) IR spectra of: (1) [RuP] 2+ , (2) SiO2, (3) SiO2/Ru II , (4) the subtraction of signal (2) from (3).
anchoring group. For instance, the CV of unprotected [RuP] 2+ in MeCN:H2O (95:5 v/v) mixture shows an irreversible reduction peak at -1.20 V (Figure II-13c) that is not present in the CV of the protected [Ru-PO3Et2] 2+ complex (see Figure II-4b). In the literature, when monolayers of n-alkane phosphonic acids are anchored on gold, a desorption peak was observed at -1.26 V vs SCE. 23 For [RuP] 2+ grafted on FTO, the reduction of the phosphonate, accompanied by an oxidation peak at -0.65 V, shows a better stability of the phosphonate group on the metal oxide than on gold.
Figure II-13. (a) CV of FTO/Ru II electrode, recorded in MeCN + 0.1 M TBAPF6 under Ar, v = 100 mV.s -1 . (b) Ipa-v plot of the Ru III /Ru II oxidation peak shows a linear relationship, R 2 = 0.999. (c) CV of [Ru-PO3H2](PF6)2 in
Figure II-14. (a) CVs of SiO2/Ru II (solid line) and bare SiO2 (dotted line). (b) CVs of TiO2/Ru II (solid line) and bare TiO2 (dotted line) in the cathodic region. (c) CVs of TiO2/Ru II (solid line) and bare TiO2 (dotted line) in the anodic region. The CVs were recorded with a microcavity Pt electrode (d = 50 µm) in MeCN + 0.1 M TBAPF6 under Ar, v = 100 mV.s -1 .
Figure II-15. CVs of a bare graphite electrode (a) and a TiO2 film deposited on a graphite electrode (b) at pH 6. The CVs are taken from reference 25
Figure II-16. (a) Solid state UV-vis and (b) emission spectra of SiO2/Ru II and TiO2/Ru II NPs.
Figure II-17. Time-resolved emission spectra of (a) SiO2/Ru II , (b) TiO2/Ru II and (c) Ru II /TiO2/BPA (10% Ru II and 90% BPA as spacer) in MeCN under Ar. The samples were excited by a 400 nm picosecond pulsed laser, and the emitted photons were recorded at 610 nm using a TCSPC photometer. The red curves represent the monoexponential fitting of the decays
Figure II-18. Overlap between the absorption (blue solid line) and emission (red broken line) spectra of
Section II.2.2c. The transient spectrum of SiO2/Ru II* 20 ns after excitation at 532 nm is shown in Figure II-19a. Signal bleaching at 450 nm is observed, which is similar to the TA spectrum of the [Ru-PO3Et2] 2+ free complex (see Section II.2.2). The bleaching is assigned to the absorption of the ground state of the [Ru(bpy)3] 2+ core, which has been depopulated upon absorption of 532 nm photons. Attempts to record the full TA spectra after the laser excitation were not successful due to precipitation of the NPs over a time period longer than 15 minutes.
Figure II-19b shows the time profile of the absorbance change of SiO2/Ru II 20 ns after the laser excitation. The signal has been averaged over 40 measurements due to the weak absorption of the NPs. The signal recovers to the initial value within 10 µs. It was fit with a monoexponential function to yield a lifetime of 3630 ± 480 ns, which is surprisingly four times longer than the lifetime of the SiO2/Ru II* state recorded with time-resolved emission spectroscopy (838 ns). However, this experiment is reproducible and gives similar lifetimes regardless of the colloid concentration.
Figure II-19. (a) TA spectrum of SiO2/Ru II NPs in MeCN under Ar 20 ns after excitation by a nanosecond pulsed laser at 532 nm. The data point at 530 nm has been removed to eliminate the scattering of excitation light by the NPs. (b) Signal bleaching at 450 nm (black) fit with a monoexponential function (red) and the fitting residual. The signal has been averaged from 40 measurements.
Figure II-20b presents the -ln(ΔOD/ΔODmax) vs t plot and the fitting curve.
Figure II-20. (a) TA spectrum of TiO2/Ru II NPs in MeCN under Ar 20 ns after excitation by a nanosecond pulsed laser at 532 nm. Inset shows the decay at 450 nm. (b) -ln(ΔOD/ΔODmax) versus t plot (black line) obtained at 450 nm and fit with a power function (red line).
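The power-function fit of the -ln(ΔOD/ΔODmax) versus t plot corresponds to the stretched-exponential (Kohlrausch-Williams-Watts, KWW) description of the charge recombination used in this work. A minimal sketch of this analysis is shown below, assuming the decay is available as a two-column trace; the file name and initial guesses are illustrative, and the mean-lifetime expression is the standard KWW result rather than a value taken from this work.

```python
# Sketch of the KWW (stretched-exponential) analysis of a charge-recombination decay:
# DeltaOD(t)/DeltaOD_max = exp(-(t/tau)**beta), i.e. -ln(DeltaOD/DeltaOD_max) = (t/tau)**beta,
# which is fitted with a power function. File name and initial guesses are placeholders.
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import gamma

t, dOD = np.loadtxt("ta_450nm_recombination.dat", unpack=True)   # time (ns), delta-OD
dOD = np.abs(dOD)                          # work with magnitudes whatever the sign convention
y = -np.log(dOD / dOD.max())               # linearized KWW variable

def power_law(t, tau, beta):
    return (t / tau) ** beta

(tau_kww, beta_kww), _ = curve_fit(power_law, t, y, p0=[500.0, 0.5], maxfev=10000)
print(f"tau_KWW = {tau_kww:.0f} ns, beta_KWW = {beta_kww:.2f}")

# Standard KWW mean lifetime: <tau> = (tau/beta) * Gamma(1/beta)
tau_mean = (tau_kww / beta_kww) * gamma(1.0 / beta_kww)
print(f"<tau> = {tau_mean:.0f} ns")
```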
Figure II-21 presents its EPR signal, as well as the signal simulation of the trapped electrons on Ti 3+ anatase sites. The signal remains stable for a few minutes after the light is switched off. It is also reproducible when the sample is raised to room temperature, then frozen again and irradiated with the laser. The electron traps are characterized by two g values: g⊥ = 1.990 and g// = 1.959, as shown by the simulation in Figure II-21. The g values have been corrected with the 2,2-diphenyl-1-picrylhydrazyl (DPPH) reference (g = 2.0036). As a comparison, the EPR signal of naked anatase TiO2 NPs remains unchanged under the 455 nm irradiation. Therefore, the recorded signal is attributed to the trapped electrons on Ti 3+
In Figure II-21 one can observe the presence of four additional EPR lines (1:3:3:1 ratio) separated by 23 G, which are marked by asterisks. The lines are assigned to CH3 radicals.
Figure II-21. EPR signal of photo-induced charge separation on TiO2/Ru II colloid (10 mg/mL) in MeCN glass (red line) and the simulation of trapped electrons on Ti 3+ sites of anatase TiO2 (black line). Asterisks show the signal of CH3 radicals. The sample was irradiated in situ by a 455 nm LED. The spectrum was taken at 20 K with an X-band EPR spectrometer (f = 9.65208 GHz, 2 mW, 5 G modulation), accumulated for 30 minutes.
In Chapter 2 we have reported the study of a [Ru(bpy)3] 2+ -based photosensitizer bearing a phosphonic group to be grafted onto the surface of TiO2 and SiO2 NPs. The phosphonate-derivatized precursor [Ru-PO3Et2] 2+ has been extensively characterized in MeCN solution using electrochemistry, UV-vis spectroscopy, luminescence emission spectroscopy and time-resolved emission spectroscopy. The results show that it retains the excellent properties of the prototype [Ru(bpy)3] 2+ photosensitizer after the ligand modifications. Functionalization of the NPs with the [RuP] 2+ dye has proved to be more efficient on TiO2 than on SiO2, reaching a [RuP] 2+ loading of 0.21 mmol/g TiO2 and 0.076 mmol/g SiO2 NPs. The successful grafting of [RuP] 2+ on the SiO2 and TiO2 surfaces has been proved by the QCM-D and FT-IR experiments. After being anchored, the [RuP] 2+ complex shows an oxidation potential similar to that of the [Ru-PO3Et2] 2+ free complex in solution. The wavelength of the emission maximum is almost unchanged. The kinetics of visible light-induced electron transfer from Ru II* units to TiO2 and of energy transfer from Ru II* to Ru II immobilized on the surfaces of TiO2 and SiO2 have been studied in colloids with time-resolved emission spectroscopy and TAS.
In this chapter, the electrochemical and photophysical properties of the [Cr(ttpy)2] 3+ complex (ttpy = 4'-(p-tolyl)-2,2':6',2''-terpyridine) are described. The different redox states of the [Cr(ttpy)2] n+ complex (n = 3, 2, 1, 0) have been studied by UV-vis absorption spectroscopy and electron paramagnetic resonance (EPR) spectroscopy. DFT calculations confirm that all the reduction steps occur on the ttpy ligands. The [Cr(ttpy)2] 3+ complex can be doubly reduced to [Cr(ttpy)2] + under visible irradiation in the presence of triethanolamine (TEOA) as a sacrificial electron donor. The reaction is accelerated by adding [Ru(bpy)3] 2+ to the medium. The photoreduction reaction also takes place between the hybrid TiO2/Ru II nanoparticles and the free [Cr(ttpy)2] 3+ complex in solution in the presence of TEOA. In addition, the doubly reduced [Cr(ttpy)2] + complex exhibits catalytic activity towards proton reduction in organic solution. The [Cr(ttpy)2] 3+ complexes, alone or together with [Ru(bpy)3] 2+ , have also been immobilized on the surface of TiO2 nanoparticles to form hybrid dyad TiO2/[Cr2P] 3+ and triad Ru II /TiO2/[Cr2P] 3+ systems. Unlike the previous systems, the Ru II /TiO2/[Cr2P] 3+ triad does not allow electrons to be stored on the [Cr2P] 3+ units under continuous irradiation in the presence of TEOA. This may be due to fast back electron transfer kinetics between the transient [Cr2P] 2+ species and the grafted Ru III species in close proximity on the nanoparticle surface.
Scheme III-1. Simplified Jablonski energy diagram for octahedral Cr(III) polypyridyl complexes. IC = internal conversion, ISC = intersystem crossing, BISC = back intersystem crossing, F = fluorescence, P = phosphorescence. Solid arrows indicate processes involving photon absorption or emission, while dotted arrows show processes that do not involve photon absorption or emission. Adapted from reference 2
Scheme III-2. Generic scheme for a photo-induced, sequential charge transfer process leading to multiple charge accumulation in D-PS-A triad. Solid blue and black arrows indicate transformations involving a photon or an electron, respectively. Dashed arrows indicate possible charge recombination pathways. Adapted from reference 9
Scheme III-4. Multiple oxidative equivalents accumulated on a single molecular site for photocatalytic water oxidation: (a) [Rua-Rub-OH2] 4+ PS-catalyst assembly immobilized on a mesoporous TiO2 thin film electrode mentioned in reference 12; (b) Rua/TiO2/Rub-OH2 PS/TiO2/catalyst assembly (Rua:Rub = 6.4:1 % mol) mentioned in reference 14.
Scheme III-6. Structure of Ru II /TiO2/Co III (Ru:Co = 3:1 % mol) mentioned in reference 18
Scheme III-7. Synthesis procedure for the [Cr(ttpy)2] 3+ complex
Figure III-1. CVs of [Cr(ttpy)2] 3+ (1 mM, black line) and ttpy ligand (1 mM, green line) recorded in MeCN + 0.1 M TBAPF6. WE = C disk (d = 3 mm), v = 100 mV.s -1
Figure III-2 shows the calculated spin density distributions and spin numbers of these complexes. In all cases the spin density on the Cr centre is ~3, while the added electrons are sequentially localized on the two tpy ligands. Therefore, the authors concluded that the oxidation state of Cr remains +3. For the neutral, triply reduced complex [Cr(tpy)2] 0 (Figure III-2d), since the Cr III is not reduced, the third electron is thought to be delocalized over the two reduced (tpy -) ligands. This could be a reason for the relatively facile reduction step from [Cr(tpy)2] + to [Cr(tpy)2] 0 (E1/2 = -1.36 V). When the fourth electron is introduced (Figure III-2e), all the electrons are localized on the tpy ligands. However, the two ligands differ in spin state: one singlet (tpy 2-) and one triplet (tpy 2-).
Figure III-2. Calculated Mulliken spin density distribution in [Cr(tpy)2] n+ complexes: (a) n = 3, (b) n = 2, (c) n = 1, (d) n = 0 and (e) n = -1. Adapted from reference 5
Figure III-3. UV-vis absorption spectra of various oxidation states of [Cr(ttpy)2] n+ : (a) n = 3, (b) n = 2, (c) n = 1, (d) n = 0. Spectra were recorded by exhaustive electrolysis of [Cr(ttpy)2] 3+ in (a, b and c) MeCN + 0.1 M TBABF4 or (d) MeCN + 0.1 M TBAClO4 under Ar at suitable potentials
Figure III-4. Calculated electronic transitions for the [Cr(ttpy)2] 2+ singly reduced complex. GS = ground state
For the [Cr(ttpy)2] + doubly reduced complex, [Cr III (ttpy -)2] + , calculations have been focused on the transition at 767 nm, which can be assigned to the experimental absorption peak at 723 nm. Electronic transitions at this wavelength are shown in Figure III-5. The calculated transition is a combination of processes (a) and (b) with equal contributions. In the [Cr(ttpy)2] + ground state, the two electrons are localized on the two tpy ligands and equally distributed between the tpy (a) and tolyl (b) parts. Process (a) corresponds to a fully LMCT transition: (tpy) → d. Process (b) corresponds to the intraligand CT
Figure III-5. Calculated electronic transitions for the [Cr(ttpy)2] + doubly reduced complex.
Figure III-6. UV-vis absorption spectrum (solid line) and emission spectrum (dotted line) of the [Cr(ttpy)2] 3+ complex in MeCN under Ar
The redox potential of the excited state, E(Cr III* /Cr II ), can be estimated by assuming an energy diagram similar to that of the [Cr(diamine)3] 3+ complexes as shown in the literature. 2 The diagram is shown in Scheme III-8.
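As an illustration of this estimate, the excited-state potential can be approximated by adding the zero-zero excitation energy of the emitting state to the ground-state potential, E(Cr III* /Cr II ) ≈ E(Cr III /Cr II ) + E00. The sketch below uses the ground-state potential measured above (-0.47 V) and, as an assumption, takes E00 from the 770 nm emission maximum; the actual estimate in this work relies on the energy diagram of Scheme III-8 and may use a slightly different E00 value.

```python
# Back-of-the-envelope estimate: E(CrIII*/CrII) ≈ E(CrIII/CrII) + E00.
# Assumption: E00 is approximated from the 770 nm phosphorescence maximum, which is only
# indicative; the thesis derives the excited-state energetics from Scheme III-8.
hc_eV_nm = 1239.84            # photon energy conversion factor (eV nm)
E_ground = -0.47              # E(CrIII/CrII) in V vs Ag/AgNO3 0.01 M, from the CV above
E00 = hc_eV_nm / 770.0        # ≈ 1.61 eV (assumption: emission maximum used as E00)
E_excited = E_ground + E00
print(f"E(CrIII*/CrII) ≈ {E_excited:+.2f} V vs Ag/AgNO3 0.01 M")   # ≈ +1.14 V
```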
Figure III-7. Decay of the [Cr(ttpy)2] 3+* excited state recorded at 770 nm in MeCN under Ar after excitation at 400 nm
Figure III-8. (a) TA spectra of the [Cr(ttpy)2] 3+ complex in MeCN under Ar at different time intervals after laser excitation at 355 nm. Arrows indicate temporal spectral changes. (b) TA decay at 550 nm. Excitation was done by nanosecond laser pulses at 355 nm. The red line represents a monoexponential fitting with a lifetime of (295 ± 4) ns.
original complex [Cr(ttpy)2] 3+ is reduced. All samples have been prepared by exhaustive electrolysis of [Cr(ttpy)2] 3+ (1 mM) in MeCN + 0.1 M TBAPF6 solution under Ar at suitable potentials: -0.7 V for n = 2, -1.1 V for n = 1, -1.6 V for n = 0. The spectra of [Cr(ttpy)2] 3+ and [Cr(ttpy)2] + detected with an X-band EPR spectrometer are shown as black lines in Figure III-9.
Figure III-9. EPR spectra of [Cr(ttpy)2] n+ at various oxidation states: (a) n = 3, (b) n = 1, which were produced by exhaustive electrolysis of 1 mM [Cr(ttpy)2] 3+ in MeCN + 0.1 M TBAPF6. They were recorded with an X-band EPR spectrometer (f = 9.6553 GHz) with the following parameters: (a) 20 mW, 10 K, modulation: 10 G, accumulation time: 50 mins; (b) 0.02 mW, 20 K, modulation: 2 G, accumulation time: 70 mins. The red line represents the simulation of the [Cr(ttpy)2] + complex.
In order to study the reactivity of the [Cr(ttpy)2] 3+* excited state, we have carried out an exhaustive photoreduction experiment using an excess amount of TEOA (300 equivalents) under continuous visible irradiation. Triethanolamine (TEOA) is known as a sacrificial electron-donating quencher. Changes in the UV-vis absorption spectra during the irradiation period are shown in Figure III-10.
Figure III-10. UV-vis absorption spectra of the [Cr(ttpy)2] 3+ complex in Ar-saturated MeCN solution containing 300 equivalents of TEOA during 400 s (a) and 200 min (b). They were taken at different time intervals under continuous visible irradiation (400 nm < λ < 750 nm). Arrows indicate temporal spectral changes during the irradiation. Inset in graph (a) shows temporal changes in the absorbance at 800 nm, which represents the formation of the [Cr(ttpy)2] 2+ species. Inset in graph (b) shows temporal changes in the absorbance at 570 nm, which represents the formation of the [Cr(ttpy)2] + species.
During the first 400 s (Figure III-10a), the peaks at 800 nm and 500 nm are gradually formed and increase, while the band at ~370 nm decreases. These changes correspond to the [Cr(ttpy)2] 3+ → [Cr(ttpy)2] 2+ reduction. The inset shows the temporal change in absorbance at 800 nm. A plateau is observed after ~250 s, indicating a complete chemical transformation. Taking the absorbance of the newly formed [Cr(ttpy)2] 2+ at 800 nm as its characteristic peak, we estimate the transformation to occur with a quantitative yield. Meanwhile, a blank experiment has also been carried out in the dark, which shows no changes in the absorption spectrum of [Cr(ttpy)2] 3+ .
Upon the TEOA addition, the emission maximum of [Cr(ttpy)2] 3+* is gradually reduced (Figure III-11a) under continuous light excitation at 360 nm. The peak position and shape remain unchanged. When a picosecond pulsed laser was used to excite the sample, the lifetime of [Cr(ttpy)2] 3+* was found to decline. A plot of τ0/τ vs [TEOA] is shown in Figure III-11b. The experimental data are fit with the Stern-Volmer equation 25 :
τ0/τ = 1 + KSV[TEOA] = 1 + kq τ0 [TEOA]
where τ0 and τ are the lifetimes of [Cr(ttpy)2] 3+* in the absence and presence of the TEOA quencher, respectively (τ0 = 295 ns from the TAS result), KSV is the Stern-Volmer constant and kq the quenching constant. The linear fitting is shown as the red line in Figure III-11b.
Figure III-11. (a) Quenching of the emission of the [Cr(ttpy)2] 3+ (1.3 × 10 -5 M) excited state upon increasing concentration of TEOA in Ar-purged MeCN solution. The sample was excited at 360 nm. (b) Stern-Volmer plot for the quenching of [Cr(ttpy)2] 3+* by TEOA. The sample was excited by a 400 nm picosecond pulsed laser and emitted photons were recorded at 770 nm.
As predicted by thermodynamics, TEOA reductively quenches the [Cr(ttpy)2] 3+* excited state. Since the UV-vis absorption spectrum of TEOA and the emission spectrum of [Cr(ttpy)2] 3+* do not overlap, the quenching pathway should only be via electron transfer. From the Stern-Volmer plot one can extract KSV = 850 M -1 and hence kq = 3 × 10 9 M -1 .s -1 . The high quenching constant indicates that TEOA is a good quencher for the [Cr(ttpy)2] 3+* state.
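As a consistency check of these numbers, the Stern-Volmer analysis can be sketched as follows. Only τ0 = 295 ns, KSV ≈ 850 M -1 and kq ≈ 3 × 10 9 M -1 .s -1 are taken from the text; the lists of quencher concentrations and lifetimes below are hypothetical and only meant to illustrate the fitting procedure.

```python
# Sketch of the Stern-Volmer analysis: tau0/tau = 1 + K_SV [TEOA], with k_q = K_SV / tau0.
# The concentration/lifetime lists are hypothetical illustrations consistent with K_SV ~ 850 M^-1.
import numpy as np

tau0 = 295e-9                                            # lifetime without quencher (s)
c_teoa = np.array([0.0, 0.5e-3, 1.0e-3, 2.0e-3])         # hypothetical [TEOA] (M)
tau = np.array([295e-9, 207e-9, 160e-9, 109e-9])         # hypothetical measured lifetimes (s)

slope, intercept = np.polyfit(c_teoa, tau0 / tau, 1)     # slope of the Stern-Volmer plot
K_SV = slope                                             # M^-1
k_q = K_SV / tau0                                        # M^-1 s^-1
print(f"K_SV = {K_SV:.0f} M^-1, k_q = {k_q:.1e} M^-1 s^-1")
# With K_SV = 850 M^-1 and tau0 = 295 ns: k_q = 850 / 295e-9 ≈ 2.9e9 M^-1 s^-1,
# consistent with the ~3e9 M^-1 s^-1 quoted above.
```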
[Ru(bpy)3] 2+ PS as it possesses the great properties of a PS: high absorbance in the visible region (ε = 13000 M -1 .cm -1 at 450 nm), strong emission and a long lifetime (τ = 608 ns in MeCN under Ar, see Chapter 2). Moreover, its photochemical and redox properties have been well established. 27 The photoreduction experiment of [Cr(ttpy)2] 3+ has been performed in MeCN solution under Ar in the presence of [Ru(bpy)3] 2+ as additional PS and TEOA as sacrificial electron donor. Temporal UV-vis spectral changes during the irradiation period are shown in Figure III-12a. First, the 800 nm band is formed (black arrow), associated with a peak at 500 nm, which is characteristic of [Cr(ttpy)2] 2+ formation. This step seems to finish within 10 s, when the next absorption spectrum is taken, showing a very fast rate. Without the [Ru(bpy)3] 2+ PS, the [Cr(ttpy)2] 2+ formation is accomplished only after 250 s under the same irradiation conditions (Figure III-10a).
1 MLCT absorption maximum of the [Ru(bpy)3] 2+ PS, remains almost unaffected. The experiment clearly demonstrates that the [Cr(ttpy)2] 3+ complex can be doubly reduced to [Cr(ttpy)2] + under the reaction conditions, and that the [Ru(bpy)3] 2+ PS has been well regenerated by the sacrificial agent TEOA. Prolonged irradiation does not lead to the formation of the [Cr(ttpy)2] 0 species.
Figure III-12b presents the temporal absorbance change at 720 nm (red line), which is the UV-vis signature of the [Cr(ttpy)2] + species. A plateau has been reached after about 4000 s, signaling that the double reduction reaction has been completed with 100 % conversion for the [Cr(ttpy)2] 3+ to [Cr(ttpy)2] + transformation. Without the [Ru(bpy)3] 2+ PS, the reaction is completed after ~10000 s with 80 % conversion. Therefore, the use of this PS accelerates the photoreduction reaction as expected.
Figure III-12. (a) UV-vis absorption spectra of a mixture of [Ru(bpy)3] 2+ (4 × 10 -5 M), [Cr(ttpy)2] 3+ (2 × 10 -5 M) and TEOA (1.5 × 10 -2 M) in Ar-saturated MeCN solution. The spectra were taken every 100 s under continuous irradiation at 400 nm < λ < 750 nm. Arrows indicate the formation of the [Cr(ttpy)2] + species. (b) Comparison of temporal changes in the absorbance at 720 nm in the presence (red line) and absence (black line) of the [Ru(bpy)3] 2+ complex.
The irradiation light lies between 400 and 750 nm, where both the [Ru(bpy)3] 2+ and [Cr(ttpy)2] 3+ complexes absorb, hence both are excited. The thermodynamically possible electron transfer reactions between the two complexes are as follows:
irradiation has been monitored by spectrophotometry. Changes in UV-vis absorption spectra are shown in Figure III-13.
Figure III-13. (a) UV-vis absorption spectra of a mixture of [Cr(ttpy)2] 3+ and PPh3 (1000 equivalents) in Ar-saturated MeCN solution. The spectra were taken every 60 s under irradiation at 400 nm < λ < 750 nm. Arrow shows the disappearance of the [Cr(ttpy)2] 3+ complex. (b) Zoom-in from 400 nm to 900 nm to show the formation of the [Cr(ttpy)2] 2+ complex indicated by arrows.
During the irradiation, the absorption maximum of the [Cr(ttpy)2] 3+ complex at 370 nm gradually decreases, accompanied by an increase of a peak at ~500 nm and a broad band centered at 800 nm. These changes correspond to the formation of the [Cr(ttpy)2] 2+ species. The experiment can be explained by the following reaction:
[Cr(ttpy)2] 2+* by PPh3. Therefore, the reaction 1 between [Ru(bpy)3] 2+* and [Cr(ttpy)2] 3+ is probably the main quenching mechanism in our study, as it is more favorable than reaction 2 in terms of both thermodynamics and kinetics. As another approach to confirm reaction 1, we have conducted a flash photolysis experiment on the mixture of the [Cr(ttpy)2] 3+ and [Ru(bpy)3] 2+ complexes. Transient absorption (TA) spectral changes after the sample has been excited at 532 nm, where only the [Ru(bpy)3] 2+ species absorbs, are shown in Figure III-14a. A negative ΔOD value is observed at ~450 nm, which corresponds to the absorption maximum of the [Ru(bpy)3] 2+ ground state. A concomitant positive ΔOD band centered at ~520 nm is recorded, which can be due to the absorption of the photogenerated [Cr(ttpy)2] 2+ complex (see Figure III-3). After 1700 ns all the ΔOD values return to zero. Therefore, the spectral changes are attributed to the back electron transfer reaction between the transient [Cr(ttpy)2] 2+ and [Ru(bpy)3] 3+ species, leading back to [Cr(ttpy)2] 3+ and [Ru(bpy)3] 2+ .
Figure III-14. (a) TA spectra of the [Cr(ttpy)2] 3+ and [Ru(bpy)3] 2+ mixture (molar ratio = 1:1) in MeCN solution under Ar at different time intervals after the sample was excited by a 532 nm nanosecond pulsed laser. (b) Plot of 1/ΔOD vs time at 520 nm (red), fit with a linear function (black).
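The linear 1/ΔOD-versus-time behaviour in Figure III-14b is the usual signature of equal-concentration second-order recombination between the two photogenerated transients. A minimal sketch of how a bimolecular rate constant can be extracted from such a plot is given below; the file name, the molar absorption coefficient of the transient and the path length are placeholder assumptions, not values from this work.

```python
# Sketch of a second-order analysis of the back electron transfer: for two transients formed in
# equal concentration, 1/DeltaOD = 1/DeltaOD_0 + (k / (eps * l)) * t, so the slope of the linear
# 1/DeltaOD-vs-t plot gives k once eps and l are chosen. eps, l and the file name are placeholders.
import numpy as np

t_s, dOD = np.loadtxt("ta_520nm_decay.dat", unpack=True)   # time (s), delta-OD at 520 nm
mask = dOD > 0.05 * dOD.max()                              # keep points well above the noise
slope, intercept = np.polyfit(t_s[mask], 1.0 / dOD[mask], 1)

eps = 4.0e3     # M^-1 cm^-1, placeholder molar absorption coefficient of the transient
path = 1.0      # cm, optical path length
k_bet = slope * eps * path                                  # M^-1 s^-1
print(f"k(back electron transfer) ≈ {k_bet:.1e} M^-1 s^-1")
```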
- Pathway 1: the [Ru(bpy)3] 2+* state is first quenched by [Cr(ttpy)2] 3+ (oxidative quenching)
- Pathway 2: the [Ru(bpy)3] 2+* state is first quenched by TEOA (reductive quenching)
(Note: E(Ru 3+ /Ru 2+ ) = 0.94 V, E(Ru 2+ /Ru + ) = -1.67 V, E(Ru 2+* /Ru + ) = 0.37 V, E(Ru 3+ /Ru 2+* ) = -1.10 V, E(Cr 3+ /Cr 2+ ) = -0.47 V, E(Cr 2+ /Cr + ) = -0.85 V, Epa(TEOA + /TEOA) = 0.42 V vs Ag/AgNO3 0.01 M).
For the formation of [Cr(ttpy)2] 2+ , the two proposed pathways differ in whether the first quenching step of the [Ru(bpy)3] 2+* excited state is by [Cr(ttpy)2] 3+ (pathway 1) or by TEOA (pathway 2). Comparing step 2 of these pathways, it is clear that the quenching reaction by [Cr(ttpy)2] 3+ is thermodynamically more favourable, as the quenching reaction by TEOA is slightly endergonic. The quenching rate of [Ru(bpy)3] 2+* by [Cr(ttpy)2] 3+ is very high (kq = 3 × 10 9 M -1 .s -1 ) as mentioned above. Meanwhile, the reductive quenching of [Ru(bpy)3] 2+* by TEOA is not detected. 32 Together with our calculated ΔG values, we conclude that pathway 1 is more favorable than pathway 2 in terms of both thermodynamics and kinetics.
In pathway 1, after the formation of [Cr(ttpy)2] 2+ , the back electron transfer reaction between [Cr(ttpy)2] 2+ and [Ru(bpy)3] 3+ (step 3, ΔG3 = -1.41 eV) is even more favorable than the quenching reaction (ΔG2 = -0.63 eV). This back electron transfer step has been proved to be very fast by the flash photolysis experiment mentioned above. Since we observe the net formation of [Cr(ttpy)2] 2+ and then [Cr(ttpy)2] + , we assume that TEOA efficiently scavenges the [Ru(bpy)3] 3+ molecules (step 4) to prevent this back electron transfer reaction. In the literature, the reduction rate of [Ru(bpy)3] 3+ by TEOA is between 6 × 10 6 M -1 .s -1 , 33 or 2 × 10 7
The photoreduction of [Cr(ttpy)2] 3+ thus occurs via pathway 1, where the first reduction step is the oxidative quenching of [Ru(bpy)3] 2+* by [Cr(ttpy)2] 3+ . The back electron transfer reaction between the photogenerated species [Ru(bpy)3] 3+ and [Cr(ttpy)2] 2+ is short-circuited by the excess amount of TEOA, which efficiently reduces [Ru(bpy)3] 3+ to regenerate the [Ru(bpy)3] 2+ PS.
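The driving forces quoted above (ΔG2 = -0.63 eV, ΔG3 = -1.41 eV, and the slightly endergonic quenching by TEOA) follow from the simple relation ΔG ≈ E(D + /D) - E(A/A - ) for a one-electron transfer, neglecting work terms. A minimal sketch reproducing these values from the potentials listed in the note is given below; the helper function itself is only illustrative.

```python
# Sketch of the driving-force estimates: Delta_G (eV) ≈ E(donor ox. couple) - E(acceptor red. couple),
# work terms neglected. Potentials (V vs Ag/AgNO3 0.01 M) are those listed in the note above.
E = {
    "Ru3+/Ru2+":   0.94,
    "Ru2+*/Ru+":   0.37,
    "Ru3+/Ru2+*": -1.10,
    "Cr3+/Cr2+":  -0.47,
    "TEOA+/TEOA":  0.42,
}

def delta_g(donor_couple, acceptor_couple):
    return E[donor_couple] - E[acceptor_couple]

print(delta_g("Ru3+/Ru2+*", "Cr3+/Cr2+"))   # -0.63 eV: oxidative quenching of Ru2+* by Cr3+
print(delta_g("Cr3+/Cr2+", "Ru3+/Ru2+"))    # -1.41 eV: back electron transfer Cr2+ -> Ru3+
print(delta_g("TEOA+/TEOA", "Ru2+*/Ru+"))   # +0.05 eV: reductive quenching of Ru2+* by TEOA
```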
Figure III-16. (a) CV of 1 mM [Cr(ttpy)2] 3+ in the presence of TFA (40 equivalents, 40 mM) (red line) and CV of 40 mM TFA alone (grey line). (b) CVs of 1 mM [Cr(ttpy)2] 3+ upon the addition of TFA. Experiments were carried out in MeCN + 0.1 M TBAPF6 solution under Ar. WE = C disk (d = 3 mm), v = 100 mV.s -1
In order to study the change in the electrochemical behavior of the [Cr(ttpy)2] 3+ complex in more detail, we recorded its CVs in the presence of different amounts of TFA (Figure III-16b). The first and second redox waves remain unchanged, suggesting completely reversible reduction down to the [Cr(ttpy)2] + complex. After that, two new irreversible reduction waves are formed around -1.0 V and -1.2 V, which may indicate the formation of an intermediate in the electrocatalytic process. The new peak is associated with a complete disappearance of the third redox wave. When the amount of TFA is more than 20 equivalents, the new reduction peak at -1.0 V seems to merge with the reduction peak of the Cr 2+ /Cr + couple.
Figure III-17. Electrocatalytic proton reduction by [Cr(ttpy)2] 3+ (0.5 mM) in the presence of TFA (50 eq) as proton source at a bias of -1.20 V. Solution: 0.1 M TBAPF6 in MeCN, WE = C plate (8 cm 2 ).
We first prepared the [Cr(ttpy)2] + complex by exhaustive electrolysis of the [Cr(ttpy)2] 3+ complex in MeCN, then added different amounts of TFA. The samples were characterized by EPR spectroscopy and UV-vis spectroscopy (Figure III-18). Upon the addition of 1 equivalent of TFA, the EPR signal of [Cr(ttpy)2] + is gradually reduced by half. Up to 50 equivalents of TFA the [Cr(ttpy)2] + EPR signal completely disappears. No other signals have been found in other regions of the X-band spectrum. The result simply suggests the consumption of the [Cr(ttpy)2] + species to form EPR-silent species. It is important to note that [Cr(ttpy)2] 2+ in solution is EPR silent although its spin number is S = 1. The changes in the UV-vis spectra of the [Cr(ttpy)2] + species upon the TFA addition correspond to the formation of [Cr(ttpy)2] 2+ . The EPR and UV-vis spectra suggest that during the reduction of H + to H2, the [Cr(ttpy)2] + species are oxidized to [Cr(ttpy)2] 2+ rather than [Cr(ttpy)2] 3+ .
Figure III-18. (a) Changes in the EPR spectrum of [Cr(ttpy)2] + upon addition of TFA: no TFA (black), 0.5 eq TFA (red), 1 eq TFA (blue) and 50 eq TFA (dashed line). (b) Changes in the UV-vis spectrum of [Cr(ttpy)2] + upon addition of TFA.
Figure III-19. HYSCORE spectra of the [Cr(ttpy)2] + complex (1 mM): (a) in the absence of TFA, (b) in the presence of 1 equivalent of TFA, recorded with delay d1 = 136 ns
Based on the results mentioned above, the mechanism of this electrocatalytic proton reduction process is still not clear. As the HYSCORE spectra of the [Cr(ttpy)2] + complex in the presence and absence of TFA are identical, formation of a hydride complex [Cr III (H)(ttpy -)] 2+
Scheme III-10. Synthesis route for [Cr(ttpy-PO3Et2)2] 3+ and [Cr2P] 3+ complexes
Figure III-20 presents their CVs in comparison with the CV of the ttpy ligand alone. Their redox potentials are collected in Table III-5 in comparison with the redox potentials of ttpy ligand and [Cr(ttpy)2] 3+ complex.
Figure III-20. CVs of ttpy (green), ttpy-PO3Et2 (blue) and [Cr(ttpy-PO3Et2)2] 3+ (red) in MeCN + 0.1 M TBAPF6 under Ar. WE = C disk (d = 3 mm), v = 100 mV.s -1
Figure III-21a presents the CV of ITO/[Cr2P] 3+ modified surface. The half-wave potentials are collected in Table III-6 in comparison with the redox potentials of the [Cr(ttpy-PO3Et2)2] 3+ precursor.
Figure III-21. (a) CV of the ITO/[Cr2P] 3+ modified electrode (area = 1.2 cm 2 ) recorded in MeCN + 0.1 M TBAPF6, v = 100 mV.s -1 . (b) Ipa-v plot fit with a linear function (R 2 = 0.9999).
Therefore, we employed a microcavity Pt electrode (d = 50 µm) to study the electrochemical properties of TiO2/[Cr2P] 3+ in powder form. Figure III-22 shows the CVs of the TiO2/[Cr2P] 3+ NPs and of bare TiO2 NPs, where the potential was swept from 0 to -2 V in anhydrous MeCN solution. The first reduction step to form [Cr2P] 2+ at around -0.4 V is not observed. This could be due to the fact that the [Cr2P] 3+ molecules, bearing two phosphonic acid groups, are anchored on two TiO2 NPs and are thus buried inside the TiO2 matrix, making it difficult for them to be in contact with the Pt electrode. As the redox potentials of [Cr2P] 3+ grafted on TiO2 NPs are difficult to measure, the potentials of ITO/[Cr2P] 3+ are taken for further studies.
Figure III-22. CVs of TiO2/[Cr2P] 3+ NPs (solid line) and TiO2 (dashed line) recorded in solid state using a microcavity Pt electrode in MeCN + 0.1 M TBAPF6 under Ar, v = 100 mV.s -1
Figure III-23. Solid state UV-vis absorption spectra of TiO2/[Cr2P] 3+ (black line) and TiO2 NPs (dashed line)
The modified TiO2/[Cr2P] 3+ NPs show an absorption maximum at 580 nm, whereas the bare TiO2 NPs do not absorb in the visible range. The precursor [Cr2P] 3+ complex in the solid state also exhibits this absorption maximum, which is not present in the spectra of the [Cr(ttpy)2] 3+ complex or the ttpy-PO3H2 ligand (Figure III-24). Therefore we attribute the 580 nm absorption peak to the immobilized [Cr2P] 3+ complex on TiO2.
Figure III-24. Normalized UV-vis absorption spectra of the [Cr2P] 3+ complex (black), [Cr(ttpy)2] 3+ complex (blue) and ttpy-PO3H2 ligand (red) recorded in the solid state by mixing them with KBr
The TiO2/[Cr2P] 3+ NPs do not emit light in the 600-900 nm range after photoexcitation at 580 nm or 450 nm, where the anchored [Cr2P] 3+ complex shows some degree of absorption but not the TiO2 scaffold. The weak luminescence of the [Cr(ttpy)2] 3+* excited state seems to be completely quenched after grafting onto TiO2 NPs.
Figure III-25b shows its EPR response. No signals are observed in other regions of this X-band spectrum. It can be seen that, although the broad EPR line is still observed, sub-structures of individual [Cr2P] 3+ sites start to be detectable. A sharp line at 3440 G and two maxima at ~3200 G and 2960 G are similar to the signal of the [Cr(ttpy)2] 3+ free complex. However, the highest absorption peak of the free complex at ~1400 G is not observed in the NPs.
Figure III-25. EPR spectra of TiO2/[Cr2P] 3+ colloid in MeCN (C = 6 g.L -1 ) with (a) 100 % [Cr2P] 3+ and (b) 2 % [Cr2P] 3+ + 98 % BPA spacers. They were taken with an X-band EPR spectrometer (f = 9.6538 GHz, 10 G modulation, 20 mW) at 10 K during 84 s.
Figure III-26 presents the absorption spectrum of Ru 2+ /TiO2/[Cr2P] 3+ triad, compared with those of bare TiO2 NPs and TiO2/[Cr2P] 3+ dyad.
Figure III-26. Solid state UV-vis spectra of TiO2/[Cr2P] 3+ (black line), Ru 2+ /TiO2/[Cr2P] 3+ (Ru:Cr:BPA = 20:2:78, % mol) (red line) and TiO2 NPs (dashed line)
The double peak at around 450 nm is attributed to the 1 MLCT absorption of the [Ru(bpy)3] 2+ -type PS as seen in Chapter 2. The small absorption maximum at 580 nm is ascribed to the grafted [Cr2P] 3+ species as shown in Section III.3. Only changes in the intensities of the two peaks are observed when the molar ratio of Ru:Cr is changed. The presence of both complexes on the NPs after thorough washing is evidence of the successful co-grafting process.
photoexcitation at 450 nm. Figure III-27 shows its spectrum in comparison with the spectrum of the TiO2/Ru 2+ dyad. Both spectra have the same broad, featureless shape, and are thus attributed to the 3 MLCT emission of the [RuP] 2+ PS. The peak position of the triad (624 nm) is slightly blue-shifted relative to that of the dyad (630 nm). The blue-shift has been shown to be due to the separation of the Ru 2+ PS by the BPA lateral spacers. No emission at 770 nm from the [Cr2P] 3+* excited state has been observed.
Figure III-27. Normalized emission spectra of the Ru 2+ /TiO2/[Cr2P] 3+ triad (Ru:Cr:BPA = 20:2:78, % mol) (red line) and TiO2/Ru 2+ dyad (black line) recorded in MeCN under Ar
c) Transient absorption spectroscopy
We initially aimed to use the TA technique to record TA spectra of Ru 2+ /TiO2/[Cr2P] 3+ at different time intervals after laser excitation. In order to avoid as much light scattering as possible and maintain a sufficiently stable colloid, the triad concentration was kept very low, at 0.04 g.L -1 in MeCN solution. Consequently, the full TA spectra need to be accumulated over at least 20 mins to obtain a good signal-to-noise ratio. During this time, however, the NPs begin to precipitate. Therefore, we only focus on the signal decay at 450 nm, which has been ascribed to the charge recombination between (e -)TiO2 and Ru 3+ species (see Chapter 2). The decay (Figure III-28a) is very similar in shape to the same decay of the TiO2/Ru 2+ dyad (Figure II-20), although in this triad it seems to recover to zero faster than in the dyad.
Figure III-28. TA measurement for the Ru 2+ /TiO2/[Cr2P] 3+ triad (Ru:Cr:BPA = 20:2:78, % mol, C = 0.04 g.L -1 ) recorded in MeCN under Ar 20 ns after excitation at 532 nm: (a) signal decay at 450 nm, (b) -ln(ΔOD/ΔODmax) versus t plot (black line) obtained at 450 nm and fit with a power function (red line).
Similar to Chapter 2, as the charge recombination process should take into account the diffusion of both trapped electrons and the oxidized PS, its kinetics should be fit with the Kohlrausch-Williams-Watts (KWW) stretched exponential equations (Equations III-7 and III-8). 44 Figure III-28b shows the fitting (red line) according to the KWW model. The KWW fitting parameter β is close to that obtained with the TiO2/Ru 2+ dyad and far from 1, which justifies the stretched-exponential model.
We first studied the electrochemical behavior of the [Cr2P] 3+ and [RuP] 2+ complexes co-grafted on an ITO electrode prior to studying the Ru 2+ /TiO2/[Cr2P] 3+ triad. An ITO surface was immersed into a solution containing both complexes at the same concentration of 1 mM for 20 hours in the dark, then washed thoroughly before being measured by cyclic voltammetry. Its CV is shown in Figure III-29a. In the anodic part, a reversible wave at E1/2 = 0.90 V is attributed to the Ru 3+ /Ru 2+ couple. In the cathodic part, three successive reversible waves are detected, corresponding to the reduction of Cr 3+ to Cr 2+ , Cr + and Cr 0 , respectively. This CV can be considered as the superimposition of the ITO/[RuP] 2+ and ITO/[Cr2P] 3+ CVs. Based on the Ru 3+ /Ru 2+ wave at ~0.9 V and the Cr 3+ /Cr 2+ wave at ~-0.48 V, the surface coverage of the complexes is calculated to be (2.0 ± 0.2) × 10 -12 mol.cm -2 for [RuP] 2+ and (5.0 ± 0.5) × 10 -12 mol.cm -2 for [Cr2P] 3+ . The higher loading of [Cr2P] 3+ compared with [RuP] 2+ is probably due to the presence of two phosphonic anchoring groups on [Cr2P] 3+ while [RuP] 2+ has only one. Also based on the two aforementioned redox waves, a plot of Ipa vs v (Figure III-29b) has been obtained for both the Ru and Cr species. Linear fittings are well achieved for both species, proving the two complexes have been successfully grafted on ITO.
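Surface coverages of this kind are typically obtained from the charge under the baseline-corrected redox wave, Γ = Q/(nFA). A minimal sketch is given below; the exported file, the crude linear baseline and the integration limits are illustrative assumptions rather than the exact treatment used here.

```python
# Sketch of a surface-coverage estimate from a CV wave: Gamma = Q / (n F A), with Q the charge
# under the baseline-corrected peak. File name and baseline handling are placeholders.
import numpy as np

F = 96485.0          # Faraday constant (C mol^-1)
n = 1                # electrons per redox event (Ru3+/Ru2+ or Cr3+/Cr2+)
area_cm2 = 1.2       # geometric area of the ITO electrode (cm^2)
scan_rate = 0.1      # V s^-1

E_V, i_A = np.loadtxt("ito_ru_cr_anodic_wave.dat", unpack=True)   # potential (V), current (A)
baseline = np.linspace(i_A[0], i_A[-1], i_A.size)                 # crude linear baseline
Q = np.trapz(i_A - baseline, E_V) / scan_rate                     # charge under the peak (C)

gamma = Q / (n * F * area_cm2)                                    # mol cm^-2
print(f"Gamma = {gamma:.1e} mol cm^-2")
```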
Figure III-29. (a) CV of the ITO/{Ru 2+ +[Cr2P] 3+ } electrode recorded in MeCN + 0.1 M TBAPF6 under Ar, v = 100 mV.s -1 . (b) Ipa-v plot based on the Ru 3+ /Ru 2+ (red squares) and Cr 3+ /Cr 2+ (blue circles) peaks, together with the linear fitting (R 2 = 0.999 for Ru 3+ /Ru 2+ and 0.992 for Cr 3+ /Cr 2+ ).
Figure III-30. (a) UV-vis spectra of a mixture of the [Cr(ttpy)2] 3+ free complex (C = 57 µM) and TiO2/Ru 2+ NPs (C = 0.25 g.L -1 ) in Ar-saturated MeCN solution containing 300 equivalents of TEOA. The spectra were taken every 300 s under irradiation at 400 nm < λ < 750 nm. They have been corrected to exclude the light scattering effect of the NPs. Arrows indicate the spectral changes during the experiment. (b) Evolution of the [Cr(ttpy)2] + complex (measured at 723 nm) during the experiment.
By tracking the 723 nm peak, one can calculate the concentration of the [Cr(ttpy)2] + complex as a function of irradiation time (Figure III-30b). After 3000 s the conversion from [Cr(ttpy)2] 3+ to [Cr(ttpy)2] + is only 60 %. As a comparison, when [Ru(bpy)3] 2+ is used instead of TiO2/Ru II , the conversion reaches ~90 % after the same time. The concentration of Ru 2+ PS in both cases is comparable: ~5.3 × 10 -5 M in this dyad sample and 4 × 10 -5 M for
Scheme III-13. Energy diagram of Ru 2+ /TiO2/[Cr2P] 3+ triad and the expected photo-induced electron transfer cascade: (1) photo-excitation of Ru 2+ dye, (2) electron injection to the CB of TiO2, (3) sequential electron transfer to reduce [Cr2P] 3+ to [Cr2P] 2+ and then [Cr2P] + , (4) regeneration of Ru 2+ PS by TEOA, (5) back electron transfer between (e -)TiO2 and Ru 3+ species, and (6) back electron transfer between [Cr2P] 2+ or [Cr2P] + and Ru 3+ species. Dashed rectangle surrounds the triad components.
Figure III-31 shows the EPR signals of the triad in MeCN solution before and after irradiation.
Figure III-31. EPR spectra of Ru 2+ /TiO2/[Cr2P] 3+ NPs (20% Ru, 2% Cr and 78 % BPA) before and after in situ irradiation by a 455 nm LED, together with the simulation of trapped electrons in anatase TiO2 NPs. The spectra were recorded with an X-band EPR spectrometer (f = 9.65 GHz, 20 mW) at 10 K. The asterisks mark the peaks of CH3 radicals.
Scheme IV-1. Photocatalytic CO2 reduction using Ru 2+ /ZrO2/Ni 2+ NPs mentioned in reference 11. AA = ascorbic acid
Since the [Re(bpy)(CO)3Cl] complex can act as both PS and catalyst, Reisner and co-workers have studied the photocatalytic CO2 reduction using a related complex, [Re(bpy)(CO)3(3-picoline)] + , bearing two phosphonic acid groups (denoted as [ReP] + ), which was immobilized on TiO2 NPs. 12 The chemical structure of the TiO2/[ReP] + NPs and the proposed catalytic mechanism are shown in Scheme IV-2. Under irradiation the [ReP] + photocatalyst was reduced by triethanolamine (TEOA), a sacrificial electron donor, to form [ReP] which could react with CO2. The optimal loading of the Re complexes on TiO2 NPs was found to be 0.02 mmol.g -1 . In DMF/TEOA mixed solvent, the TiO2/[ReP] + NPs showed TONmax (CO) ~ 50 after 24 hours of visible irradiation. In comparison, in the absence of TiO2, the [Re(bpy)(CO)3(3-picoline)] + homogeneous complex could only produce a small amount of CO (TONmax ~ 6) during the same period. The mixture of TiO2 and [Re(bpy)(CO)3Cl] catalyst without phosphonic acid groups in solution also produced the same amount of CO as the Re catalyst alone. By transient absorption measurements, the authors attributed the enhanced photocatalytic activity of TiO2/[ReP] + to the better stabilization of the catalytically active species [ReP] by the TiO2 surface: the lifetime of TiO2/[ReP] was estimated to be more than 1 s, while that of reduced [Re(bpy)(CO)3(3-picoline)] in solution was around 60 ms. The longer lifetime of this intermediate species allowed for higher chances to react with CO2 and to undergo the required second reduction step to release CO. However, the second reduction step was not observed by TAS. In this study, TiO2 was considered inactive in the electron transfer mechanism; it only acted as a scaffold to stabilize the intermediate.
Scheme IV-3. Photocatalytic CO2 reduction using Dye/TiO2/Re I NPs mentioned in references 14 and 15
The search for other complexes that can replace the prototypical but expensive [Re(bpy)(CO)3Cl] complex is of great interest. Analogous complexes, [Mn(bpy)(CO)3Br] and [Mn(dmbpy)(CO)3Br] (dmbpy = 4,4'-dimethyl-2,2'-bipyridine) (Scheme IV-4a and b), were first reported by Chardon-Noblat, Deronzier and co-workers in our CIRe group for electrocatalytic CO2 reduction. 16 In MeCN solution with 5 % H2O as proton source, the complexes produced CO as the only reduction product, with TONmax = 13 for [Mn(bpy)(CO)3Br] and TONmax = 34 for [Mn(dmbpy)(CO)3Br]. Recently, Ishitani et al. 17 reported the photocatalytic CO2 reduction using [Mn(bpy)(CO)3Br] as catalyst, [Ru(dmbpy)3] 2+ as PS and 1-benzyl-1,4-dihydronicotinamide (BNAH) as sacrificial electron donor in DMF/TEOA (4/1, v/v) mixed solvent. Under continuous irradiation at 480 nm to selectively excite the PS, HCOOH was produced with a high TONmax (149) and selectivity (85 %) over CO and H2. The activation pathways of the catalyst in the electrocatalytic or photocatalytic studies seem to have a great influence on the selectivity of the CO2 reduction reaction.
Scheme IV-5. Structure of (a) TiO2/Mn I dyad and (b) Ru II /TiO2/Mn I triad
Figure IV-1. CVs of [Mn(ttpy)(CO)3Br] complex (C = 1 mM) recorded with a C disk electrode (d = 3 mm) in MeCN + 0.1 M TBAPF6 under Ar, v = 100 mV.s -1 : (a) reductive scanning first, (b) oxidative scanning first
Figure IV-2. UV-vis absorption spectra of [Mn(ttpy)(CO)3Br] complex in MeCN solution in air.
[Mn(ttpy)(CO)3Br] catalyst and H2O as proton source. First, the CVs of the complex recorded in neat MeCN + 0.1 M TBAPF6 under Ar (black line) and in MeCN + 5 % H2O under CO2 (red line) are shown in Figure IV-3. We can see that the current is improved at both reduction peaks, with the catalytic current occurring at the second reduction. This suggests that the electrocatalytic CO2 reduction may occur at both potentials, i.e. a mix of one-electron and two-electron mechanisms. A similar tendency has already been observed for the [Mn(bpy)(CO)3Br] 22 , [Mn(bpy)(CO)3(MeCN)] 21 and [Mn(phen)(CO)3Br] 18 complexes. However, when [Mn(ttpy)(CO)3(MeCN)] was used, only an increase in the current magnitude at the second reduction peak (-1.62 V) was reported. 21
Figure IV-3. CV of [Mn(ttpy)(CO)3Br] complex (C = 1 mM) in MeCN + 0.1 M TBAPF6 + H2O (5 %, v/v) under CO2 (red line), in comparison with the CV recorded under Ar in neat MeCN (black line), v = 100 mV.s -1 , WE = C disk (d = 3 mm)
IV.2.5. Photocatalytic CO2 reduction in the presence of [Ru(bpy)3] 2+
Figure IV-4 shows the IR spectra of the parent complex [Mn(ttpy)(CO)3Br] (black line), bare TiO2 NPs (blue line) and the TiO2/Mn I dyad (red line) in the solid state in the CO vibration region. It can be seen that the peak at 2022 cm -1 due to the symmetric CO vibration remains unchanged, while the two other peaks seem to merge with each other to form a broad band centered at ~1930 cm -1 . The merging of the two antisymmetric CO stretching bands after the Mn I complex has been grafted on the TiO2 surface is consistent with the literature for Re 12 - and Mn 19 -tricarbonyl bipyridine complexes immobilized on TiO2 NPs. The experiments prove that the Mn complex has been successfully grafted.
Figure IV-4. Solid state FT-IR spectra of [Mn(ttpy)(CO)3Br] complex (black line), bare TiO2 NPs (blue line) and TiO2/Mn I NPs (red line) in the CO stretching region.
Pt electrode. Due to the different current scales, the CV is split into two parts corresponding to the anodic region (Figure IV-6a) and the cathodic region (Figure IV-6b). For comparison, the CVs of TiO2/ttpy-C3-PO3H2 NPs are also shown in Figure IV-7.
Figure IV-6. CVs of TiO2/[MnP] NPs (solid lines) recorded in solid state using a microcavity Pt electrode in MeCN + 0.1 M TBAPF6 under Ar, v = 100 mV.s -1 : (a) anodic region, (b) cathodic region. The dashed line shows the CV of bare TiO2 NPs recorded under the same conditions.
Figure IV-7. CVs of TiO2/ttpy-(CH2)3-PO3H2 NPs recorded in solid state using a microcavity Pt electrode in MeCN + 0.1 M TBAPF6 under Ar, v = 100 mV.s -1 : (a) anodic region, (b) cathodic region
In the anodic region of TiO2/Mn I (Figure IV-6a), only a broad, irreversible oxidation peak centered at Eox = 0.86 V is observed. Bare TiO2 NPs are not conductive under anodic potential scanning, and TiO2/ttpy-C3-PO3H2 NPs also do not show any oxidation peaks, as expected (Figure IV-7a). Therefore, the anodic peak at 0.86 V is assigned to the oxidation of
(17) fac-[Mn I (κ 2 -ttpy-C3-PO3H2)(CO)3Br] → fac-[Mn II (κ 2 -ttpy-C3-PO3H2)(CO)3Br] + + e -
(18) fac-[Mn II (κ 2 -ttpy-C3-PO3H2)(CO)3Br] + + MeCN → fac-[Mn II (κ 2 -ttpy-C3-PO3H2)(CO)3(MeCN)] 2+ + Br -
Figure IV-8, in comparison with the TiO2 absorption spectrum. In the solid state, TiO2/Mn I NPs do not exhibit the absorption maximum at 420 nm like the free Mn I complex in solution. They only show the extension of the absorption band into the visible region. When both the Ru II and Mn I complexes are grafted onto TiO2 with a molar ratio of Mn:Ru = 1:10, the absorption maximum at around 450 nm is attributed to the 1 MLCT absorption band of the Ru II PS. The absorption band of Mn I overlaps with that of Ru II , which is much more intense in the visible region. A small peak at around 575 nm has also been detected, which is not present in the UV-vis spectrum of the TiO2/Ru II dyad (Figure II-16) or of the [Mn(κ 2 -ttpy)(CO)3Br] complex (Figure IV-2).
recorded in colloidal solution following excitation at 450 nm. The TiO2/Mn I dyad does not show any emission in the range between 500 and 850 nm. The emission spectrum of the Ru II /TiO2/Mn I triad is shown in Figure IV-9, with an emission maximum at 618 nm. It closely resembles the emission spectrum of TiO2/Ru II (Figure II-16), hence it is attributed to the 3 MLCT luminescence of the Ru II* complex.
Figure IV-9. Emission spectrum of Ru II /TiO2/Mn I colloidal solution after excitation at 450 nm.
The positive ΔOD has also been observed in the TiO2/Ru II case and ascribed to the transient absorption of (e -)TiO2. Hence, the decay corresponds to the charge recombination between Ru III and (e -)TiO2. Since the charge recombination between these species should take into account the diffusion of both the trapped electrons and the oxidized PS, the Kohlrausch-Williams-Watts (KWW) stretched exponential equation 31 is used to extract the kinetics of charge recombination, as previously described in Section II.3.6. Figure IV-10b shows the data fit by the KWW equation. The KWW parameters (τKWW and βKWW) are collected in Table IV-3.
Figure IV-10. (a) TA decay at 450 nm for the Ru II /TiO2/Mn I triad (Ru:Mn = 10:1); (b) -ln(ΔOD/ΔODmax) versus t plot (black dots) obtained at 450 nm and fit with a power function (red line). The experiment was carried out in MeCN under Ar following excitation at 532 nm by a nanosecond pulsed laser.
Note: TON is calculated against the amount of Mn I catalyst.
Our photocatalytic CO2 reduction results underline the importance of anchoring both the PS and the molecular catalyst on the TiO2 surface in terms of product selectivity. A previous study by Perutz, Reisner and co-workers 12 concerning the immobilization of the [Re(bpy)(CO)3(3-picoline)](PF6) catalyst on TiO2 NPs has also emphasized this importance in terms of the amount of the reduction product. The authors showed that the TiO2/Re I catalyst produces more CO than the free Re I complex in solution at the same concentration: TONmax (CO) = 48 for TiO2/Re I compared with TONmax (CO) = 6 for the homogeneous Re I in solution.
In a slightly different approach, another study by Kubiak, Cohen et al. 29 also showed enhanced selectivity towards HCOOH when the [Mn(dcbpy)(CO)3Br] (dcbpy = 5,5'-dicarboxylic-2,2'-bipyridine) complex was anchored on a metal-organic framework called UiO-67. In the presence of [Ru(dmb)3] 2+ (dmb = 4,4'-dimethyl-2,2'-bipyridine) as PS, BNAH as sacrificial electron donor and DMF/TEOA mixed solvent, the UiO-67/Mn I photocatalyst produced TONmax (HCOOH) = 110 and TONmax (CO) = 5 after 18 hours of irradiation at 470 nm, corresponding to a selectivity of 95 % for HCOOH. Meanwhile, under the same conditions, the free [Mn(dcbpy)(CO)3Br] catalyst mixed with the [Ru(dmb)3] 2+ PS in solution only produced TONmax (HCOOH) = 57 and TONmax (CO) = 5 after 18 hours, corresponding to a selectivity of 92 % for HCOOH. A mechanism was proposed in this work to explain the formation of HCOOH as the main product (Scheme IV-12). It was based on the formation of the singly reduced, coordinatively unsaturated [Mn 0 (bpy)(CO)3] and, subsequently, of the [(bpy)(CO)3Mn I -OC(O)H] adduct as intermediate species.
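For reference, the turnover-number bookkeeping behind these comparisons is simply TON = n(product)/n(catalyst), with the catalyst amount taken from the metal loading on the support. The sketch below illustrates this; all numerical values in it are placeholders, not measured quantities from this work.

```python
# Sketch of the TON and selectivity bookkeeping: TON = n(product) / n(catalyst).
# All numbers below are illustrative placeholders.
loading_mn = 0.02e-3      # mol of Mn catalyst per gram of NPs (hypothetical loading)
mass_nps_g = 5e-3         # mass of NPs in the photoreactor (hypothetical)
n_cat = loading_mn * mass_nps_g

n_hcooh = 1.0e-6          # mol of HCOOH quantified, e.g. by ionic chromatography (hypothetical)
n_co = 5.0e-8             # mol of CO quantified by GC (hypothetical)

ton_hcooh = n_hcooh / n_cat
ton_co = n_co / n_cat
selectivity = 100 * n_hcooh / (n_hcooh + n_co)
print(f"TON(HCOOH) = {ton_hcooh:.0f}, TON(CO) = {ton_co:.1f}, HCOOH selectivity = {selectivity:.0f} %")
```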
(Note: E(Ru III /Ru II ) = 0.92 V; E(Ru II /Ru I ) = -1.67 V; E(Ru III /Ru II* ) = -1.10 V; E(Ru II* /Ru I ) = 0.31 V; E(Ti 4+ /Ti 3+ on TiO2) = -1.0 V; E(Mn I /Mn 0 grafted on TiO2) = -1.11 V; Eox(BNAH + /BNAH) = 0.27 V vs Ag/AgNO3 0.01 M).
complex [Mn 0 (ttpy-C3-PO3H2)(CO)2] anchored on TiO2 NPs, in a similar way to what has been proposed by Kubiak, Cohen et al. 29 for the UiO-67/Mn I system (Scheme IV-
Before mimicking the conditions applied in the photocatalytic CO2 reduction, we first investigated the magnetic behavior of the sacrificial electron donor BNAH alone in MeCN solution. As expected, the sample does not show any EPR signal when the measurement is performed in the dark. Under continuous irradiation at 455 nm, it shows a thin EPR line centered at g ~ 2 if the sample has been prepared under O2 (Figure IV-11, red line). If the sample has been prepared under Ar, no EPR signals are recorded (Figure IV-11, black line). Therefore, the organic radical BNAH + is obtained by photooxidation of BNAH under visible irradiation and in the presence of O2. This experiment shows the importance of the complete removal of O2, otherwise this g ~ 2 line may interfere with the desired signals of other transient species.
Figure IV-11. EPR spectra of BNAH (0.1 M) in MeCN prepared under O2 (red line) and under Ar (black line), irradiated in situ by a 455 nm LED. The spectra were recorded with an X-band EPR spectrometer (f = 9.655 GHz, 2 mW, 5 G modulation) at 10 K, accumulated for 10 mins.
1 M) in the dark and under 455 nm irradiation. The EPR tube containing this sample was carefully prepared in the dark under Ar before being frozen in liquid nitrogen. As a comparison, we also prepared another tube containing the TiO2/Ru II dyad instead of the triad, with the same amounts of BNAH and MeCN as for the first tube. The Ru II content in both samples is approximately equal. As expected, both samples are EPR-silent when measured in the dark, since Mn I (3d 6 ) and Ru II (4d 6 ) are diamagnetic. Under 455 nm irradiation, photo-induced paramagnetic species can be detected by EPR. Their EPR signals under 455 nm irradiation (Figure IV-12) have been accumulated over the same time period.
Figure IV-12. EPR spectra of TiO2/Ru II dyad (black) and Ru II /TiO2/Mn I triad (red) colloids in MeCN (CNPs = 6 g.L -1 ) with 0.1 M BNAH, irradiated in situ by a 455 nm LED. The samples were prepared in an Ar glovebox. Both spectra were recorded with an X-band EPR spectrometer (f = 9.654 GHz, 2 mW, 5 G modulation) at 10 K, accumulated for 55 mins.
Figure IV-13a shows the field-swept spectra of the BNAH + radicals alone prepared under O2 (red) and of the radicals in the presence of the Ru II /TiO2/Mn I triad prepared under Ar (black). Both samples were irradiated in situ at 455 nm. The two spectra are very similar, indicating that most of the signal in the triad + BNAH sample is due to BNAH + radicals. In addition, the HYSCORE spectrum of the triad + BNAH sample was also recorded (Figure IV-13b). Spots on the diagonal line centered at 15 MHz are attributed to proton hyperfine couplings of the BNAH + radicals, while a weak spot at 3.6 MHz is assigned to distant Mn atoms. No Mn hyperfine couplings were observed. This is however not in contradiction with the calculations of hyperfine interactions by Dr. Jean-Marie Mouesca (CEA Grenoble / INAC / SyMMES) for
Figure IV-13. (a) Pulsed EPR field-swept spectra of BNAH + radicals alone (red) and Ru II /TiO2/Mn I triad + BNAH + (black). (b) HYSCORE spectrum of the Ru II /TiO2/Mn I triad + BNAH + mixture. The samples were irradiated in situ by a 455 nm LED.
Scheme V-1. (a) Mechanism of pyrrole oxidative polymerization. (b) In situ photopolymerization of pyrrole on a mesoporous TiO2 film. Adapted from reference 1. The LUMO level of pyrrole cannot be determined electrochemically.
Well-defined nanostructured composites of TiO2 NPs and various polymers such as polystyrene (PS), poly(methyl methacrylate) (PMMA) and poly(N-isopropylacrylamide) (PNIPAM) have been reported by Wang et al. 2 following a surface-initiated photopolymerization method. TiO2 NPs were first modified with 3-(trimethoxysilyl)propyl methacrylate (MPS), which not only formed Si-O-Ti linkages with TiO2 but also possessed terminal C=C bonds. Under UV light, electrons and holes are created in the CB and VB of TiO2, respectively. The holes are then transferred to the C=C bonds to generate radicals which can initiate the oxidative polymerization process. The covalent bonding between MPS and TiO2 ensures that the polymer is bound to the TiO2 surface. However, polymer degradation due to the photocatalytically active TiO2 under UV light has not been mentioned. Scheme V-2 summarizes this mechanism. The TiO2/polymer core/shell NPs with precise, tunable architecture are versatile building blocks for numerous potential applications in
Table V-1 collects the redox potentials of [Ru-pyr] 2+ and [Ru(bpy)3] 2+ for comparison.
Figure V-1. CV of [Ru-pyr] 2+ (1 mM) in MeCN + 0.1 M TBAP under Ar. WE = C disk (3 mm), v = 100 mV/s
Figure V-2. Electropolymerization of [Ru-pyr] 2+ (0.6 mM) on an ITO electrode (v = 100 mV/s, in MeCN + TBAP 0.1 M under Ar). (a) 15 CVs obtained during the polymerization, with vertical arrows indicating the changes in the CV. (b) CV of the ITO/poly(Ru-pyr) electrode after being transferred to a complex-free solution.
V.2.3. Photophysical properties
Figure V-3 presents the UV-vis absorption and emission spectra of the [Ru-pyr] 2+ complex compared with those of the [Ru(bpy)3] 2+ reference. The absorption maxima at 460 nm and 286 nm of [Ru-pyr] 2+ are assigned to the singlet metal-to-ligand charge transfer ( 1 MLCT) and ligand-centered (LC) transitions, respectively. The extinction coefficient of [Ru-pyr] 2+ at 460
Figure V-3. (a) UV-vis absorption spectra and (b) emission spectra of [Ru(bpy)3] 2+ (black) and [Ru-pyr] 2+ (red) complexes in MeCN under Ar. Emission spectra were recorded after 450 nm light excitation.
Similar to Section II.2.2, using [Ru(bpy)3] 2+ as reference (Φ = 0.062) 11 , the emission quantum yield of [Ru-pyr] 2+ is calculated as 0.043 (see Experimental Section for the calculation), which is about 70 % of that of [Ru(bpy)3] 2+ .
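The relative quantum-yield determination referred to here follows the standard comparison Φs = Φref × (Is/Iref) × (Aref/As) × (ns/nref) 2 , where I is the integrated emission, A the absorbance at the excitation wavelength and n the solvent refractive index (identical solvent here, so the last factor is 1). In the sketch below, only Φref = 0.062 and the resulting ~0.043 come from the text; the absorbances and integrated intensities are placeholders.

```python
# Sketch of the relative emission quantum-yield estimate against the [Ru(bpy)3]2+ reference.
# The matched absorbances and integrated intensities are illustrative placeholders.
phi_ref = 0.062                          # [Ru(bpy)3]2+ reference in MeCN
A_ref, A_sample = 0.100, 0.100           # hypothetical absorbances at 450 nm (matched)
I_ref, I_sample = 1.00e6, 0.69e6         # hypothetical integrated emission intensities (a.u.)
n_ratio_sq = 1.0                         # same solvent for sample and reference

phi_sample = phi_ref * (I_sample / I_ref) * (A_ref / A_sample) * n_ratio_sq
print(f"Phi([Ru-pyr]2+) ≈ {phi_sample:.3f}")   # ≈ 0.043, about 70 % of the reference
```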
Figure V-4. Time-resolved emission decay of the [Ru-pyr] 2+* excited state in MeCN under Ar. The sample was excited at 400 nm using a picosecond pulsed laser, and the emitted photons were collected at 610 nm using a TCSPC photometer. The red curve represents the monoexponential fitting of the decay.
Table V-2. Summary of the photophysical properties of the [Ru-pyr] 2+ and [Ru(bpy)3] 2+ complexes
7 .
7 estimated that E (Ru II* /Ru I ) = 0.31 V vs Ag/Ag + 0.01 M. Taking into account the oxidation potential of the pyrrole monomers (E ~ 0.84 V), the photo-induced electron transfer reaction between [Ru(bpy)3] 2+* and pyrrole moieties would be endergonic by around 0.52 eV. TiO2/[Ru-pyr] 2+ NPs is schematically summarized in Scheme V-Similar to Chapter 3, a bpy ligand bearing a phosphonic group (denoted bpy-PO3H2) was prepared in the first step. The use of TMSBr as deprotecting agent of the -P(O)(OEt)2 group lead to the formation of HBr by-product, which could not quantitatively separated from the desired product. Our attempts to use this ligand for the complexation with [Ru(bpy-pyr)2]Cl2 (where bpy-pyr = 4-methyl-4'-[11-(1H-pyrrol-1-yl)undecyl]-2,2'-bipyridine) to form [Ru(bpy-pyr)2(bpy-PO3H2)] 2+ were not successful due to the H + in HBr initiating the oxidative polymerization of pyrrole moieties. The polypyrrole film could be easily observed in a NMR tube.Therefore, we employed a stepwise approach as we did in Chapter 3 for the synthesis of TiO2/[Ru-pyr] 2+ NPs. First, the bpy-PO3H2 ligand was chemically adsorbed onto anatase TiO2 colloidal aqueous solution at room temperature, followed by alternate centrifugationwashing steps with copious amount of water to remove unadsorbed ligands and HBr from the solid phase. In the second step, [Ru(bpy-pyr)2]Cl2 was added to a suspension of TiO2/bpy NPs in ethanol / acetone (8/2 v/v) solution under Ar, and the reaction mixture was refluxed overnight. The modified NPs were then separated and washed by alternate centrifugationwashing steps in ethanol / acetone mixture, and dried in an oven to yield a dark red powder.The complexation between TiO2/bpy and [Ru(bpy-pyr)2]Cl2 was quantitatively achieved as the [Ru(bpy-pyr)2]Cl2 complex cannot be detected by UV-vis spectroscopy in the washing solution after the complexation. The loading of [Ru-pyr] 2+ on anatase TiO2 is estimated as 0.20 mmol.g -1 (about 3800 molecules per NP or 2 molecules per nm 2 ) by measuring the UVvis absorbance of supernatant solution after each centrifugation step, which is similar to that obtained for TiO2/Ru II (see Section II.3.3).
Figure V- 5 .
5 Figure V-5. CV of TiO2/Ru II (black) and TiO2/[Ru-pyr] 2+ (red) NPs recorded in solid state with a microcavity Pt electrode (d = 50 µm) in MeCN + TBAPF6 0.1 M under Ar, v = 100 mV/s. V.3.3. Photophysical properties UV-vis-NIR spectroscopy Figure V-6a shows the UV-vis-NIR spectrum of TiO2/[Ru-pyr] 2+ NPs in solid state using an integrating sphere. A broad peak at ~ 460 nm was recorded, which is similar to the initial [Ru-pyr] 2+ complex. The peak is attributed to the 1 MLCT absorption band of [Ru(bpy)3] 2+ core. The LC transition is not clearly observed due to the strong absorption of TiO2 at wavelengths below 390 nm. In colloidal solution of MeCN under Ar, TiO2/[Ru-pyr] 2+ NPs display a broad, featureless emission spectrum (Figure V-6b) after light excitation at 450 nm. The emission spectrum is similar in shape as those obtained with [Ru-pyr] 2+ and TiO2/Ru II colloid, thus it can be reasonably assigned to the emission of the [Ru(bpy)3] 2+* excited state. The emission maximum at 625 nm is slightly red shifted compared to the initial complex [Ru-pyr] 2+ (621 nm). A similar red shift after immobilizing the Ru complex on TiO2 has been described for TiO2/Ru II (see Section II.3.6).
Figure V- 6 . 3 . 6 . 4 ,
6364 Figure V-6. (a) Solid state UV-vis-NIR spectrum of TiO2/[Ru-pyr] 2+ NPs (solid line) and TiO2 NPs (dashed line) meaured in solid state. (b) Emission spectrum of TiO2/[Ru-pyr] 2+ colloid in MeCN under Ar, excited at 450 nm.
Figure V- 7 .
7 Figure V-7. Time-resolved emission decay of TiO2/[Ru-pyr] 2+* NPs in MeCN under Ar, excited at 400 nm by a picosecond pulse laser and recorded at 610 nm. Red line shows a biexponential fitting of the decay.
Scheme V-8. Schematic illustration of the photopolymerization process
Figure V- 8 .
8 Figure V-8. Solid state UV-vis absorption spectra of TiO2/[Ru-pyr] 2+ (black line), TiO2/poly(Ru-pyr) (red line) and naked TiO2 NPs (dashed line) b) Emission spectroscopy The changes in emission spectrum of TiO2/[Ru-pyr] 2+ during the photopolymerization process are shown in Figure V-9. After light excitation at 450 nm, all the samples after 0 h, 1 h and 2 h of irradiation exhibit a similarly broad signal indicative of [Ru(bpy)3] 2+* state.Emission intensity of the nanocomposite decreases by around 50 % after 1 hour and 60 % after 2 hours of photopolymerization. The emission maximum is slightly red-shifted from 625 nm to 631 nm after the two hours.
Figure V- 9 .
9 Figure V-9. Steady-state emission spectra of TiO2/[Ru-pyr] 2+ colloid (C = 0.05 g.L -1 ) in MeCN after 0h, 1h and 2h irradiation at 450 nm (black, red and blue lines respectively). Signals were recorded under an Ar atmosphere.
Figure V -
V Figure V-10. Time-resolved emission decays of TiO2/[Ru-pyr] 2+* after (a) 0h, (b) 1h and (c) 2h irradiation at 450 nm. Red curves represent the biexponential fitting of the decays.
Figure V -
V Figure V-11. EPR signal of the photo-induced polymerization of TiO2/[Ru-pyr] 2+ colloid (6 g.L -1 ) in MeCN glass (black line), together with the simulation of trapped electrons on Ti 3+ sites of anatase TiO2 (red line) and the simulation of polypyrrole (blue line). The sample was irradiated in situ by a 455 nm LED from the cavity window. The spectrum was accumulated for 1 hour at 20 K with an X-band EPR spectrometer (f = 9.6532 GHz, 0.2 mW, 5 G modulation)
4 .
4 Scheme V-10. Formation of pyrrole=O cations
A and B are constants in the monoexponential model, n the refractive index of dispersant, 0 the laser wavelength used in the DLS measurement and the scattering angle. The DLS measurements have been conducted at 23 0 C in MeCN solution for samples before and after 1 or 2 hours of photopolymerization. Their correlograms are shown Figure V-12. All the signals are well fit with a monoexponential function indicating monodispersecolloids. The hydrodynamic diameter is thus determined to be 354 nm, 537 nm and 437 nm for the colloids after 0, 1 and 2 hours of irradiation, respectively. The result corresponds to a 50 % increase in size after 1 hour and 25 % increase in size after 2 hours of irradiation compared to the initial size. It is expected that the size of NPs also undergoes a similar change as the hydrodynamic diameter. Such large increases are expected to be significantly higher than the experimental uncertainties. It is a bit surprising that the average size of the NPs increases during the first hour of irradiation and decreases after. In any cases, the result suggests that pyrrole polymerization may mainly occur between the [Ru-pyr] 2+ species on the same particle rather than connecting the particles together, since in the latter case the diameter should be increased much more significantly.
Figure V -
V Figure V-12. DLS correlograms of TiO2/poly(Ru-pyr) colloids in MeCN after 0 h (red), 1h (green) and 2 h (blue) of photopolymerization.
Figure V -
V Figure V-13. XPS survey spectra of (a) TiO2/Ru II , (b) TiO2/[Ru-pyr] 2+ and (c) TiO2/poly(Ru-pyr) NPs
Figure V -
V Figure V-14. Ti2p XPS core level spectra of (a) TiO2/Ru II , (b) TiO2/[Ru-pyr] 2+ and (c) TiO2/poly(Ru-pyr). The red lines represent the fitting obtained by the sum of the Gaussian peaks in blue lines
Figure V -
V Figure V-15 presents the Ru3d and C1s spectra of the three samples. The Ru3d spectra are fit with two Gaussian lines which peak at 280.7 eV and 284.8 eV corresponding to the 5/2 and 3/2 states, respectively. 19 The part of the Ru3d3/2 spectrum that overlaps with the C1s spectrum has been subtracted prior to C1s peak fitting. The C1s spectra are not symmetric, which are then decomposed into three Gaussian lines at around 284.4 eV, 285.7 eV and 287.5 eV. The three component lines are assigned to C1s core signal in C-C, C-N and contaminated C=O/COO bonds, respectively. They remain almost unchanged in both peak position and FWHM during the photopolymerization.
Figure V- 15 .
15 Figure V-15. C1s and Ru3d (5/2 and 3/2) XPS core level spectra of (a) TiO2/Ru II , (b) TiO2/[Ru-pyr] 2+ and (c) TiO2/poly(Ru-pyr). The red lines represent the fitting obtained by the sum of the Gaussian peaks in blue lines
Figure V -
V Figure V-16 presents the N1s XPS spectra of the three samples. In TiO2/Ru II , the spectrum is clearly fit with one Gaussian line peaked at 399.8 eV and FWHM = 1.62 eV. It is reasonably assigned to the =N-group in bpy ligands as they are all alike. The peak is in accordance with literature concerning the N atoms in bpy ligands being coordinated with Rh(I) metal center. 20 Deconvoluting the N1s spectra of TiO2/[Ru-pyr] 2+ and TiO2/poly(Ru-
Figure V -
V Figure V-16. N1s XPS core level spectra of (a) TiO2/Ru II , (b) TiO2/[Ru-pyr] 2+ and (c) TiO2/poly(Ru-pyr). The red lines represent the fitting obtained by the sum of the Gaussian peaks in blue lines The P2p XPS spectra of the three samples are shown in Figure V-17. The spectrum of TiO2/Ru II is deconvoluted into two lines which peak at 132.4 eV and 133.3 eV, which are assigned to the P2p3/2 and P2p1/2 states, respectively. Introduction of the pyrrole moieties has minor influence on the peak positions (132.2 and 133.1 eV) while the FWHM remains the same. The peak position and FWHM of both P2p3/2 and P2p1/2 states are unchanged during the photopolymerization. It is hence concluded that the phosphonate anchoring groups are unaffected by the photopolymerization.
Figure V -
V Figure V-17. P2p XPS core level spectra of (a) TiO2/Ru II , (b) TiO2/[Ru-pyr] 2+ and (c) TiO2/poly(Ru-pyr). The red lines represent the fitting obtained by the sum of the Gaussian peaks in blue lines
Figure V -
V Figure V-18 shows the O1s XPS spectra of the three samples. The spectrum of TiO2/Ru II can be decomposed into three Gaussian lines which peak at 529.1 eV, 530.3 eV and 531.6 eV. The most intense line at the lowest binding energy is assigned to O 2-anions in TiO2
Figure V -
V Figure V-18. O1s XPS core level spectra of (a) TiO2/Ru II , (b) TiO2/[Ru-pyr] 2+ and (c) TiO2/poly(Ru-pyr). Right column: (a1), (b1) and (c1) are zoom-in spectra in the 534 -528 eV range of the spectra (a), (b) and (c), respectively. The red lines represent the fitting obtained by the sum of the Gaussian peaks in blue lines Quantification of percentage atomic concentrations was achieved by integrating the areas under corresponding XPS peaks and correcting the areas with relative sensitivity factors.
Figure V -
V Figure V-19. Atomic concentration of the elements in TiO2/Ru II , TiO2/[Ru-pyr] 2+ and TiO2/poly(Ru-pyr) NPs. Solid lines are associated with the left y-axis while dashed lines are linked to the right y-axis. Error = ± 0.1 %
(i) Photo-EPD strategy: TiO2/[Ru-pyr] 2+ precursor was suspended in air-saturated MeCN solution (C = 0.75 g.L -1 ) and irradiated at 450 nm during 2 hours. The photopolymerization lead to TiO2/poly(Ru-pyr) nanocomposite. Afterwards it was transferred to an Ar-saturated MeCN solution (C = 0.3 g.L -1 ) where a potential of 120 V was applied between two FTO electrodes placed 0.5 cm apart in 1 hour for the EPD. This EPD bias is equivalent to an electric field of 24 × 10 3 V.m -1 .(ii) EPD-Photo strategy: TiO2/[Ru-pyr] 2+ precursor was first deposited on a FTO electrode by EPD (Eapplied = 24 × 10 3 V/m) in 1 hour. All the parameters and deposition conditions were kept the same as in the Photo-EPD strategy. The electrode was then washed and immersed into an air-saturated MeCN solution. Surface photopolymerization was achieved by irradiating the electrode at 450 nm. After certain irradiation time intervals the electrode was transferred to an electrochemical cell containing MeCN + 0.1 M TBAPF6 in order to characterize the photopolymerization process by cyclic voltammetry.
Figure V- 20 .
20 Figure V-20. CVs of FTO/TiO2/[Ru-pyr] 2+ electrode recorded in MeCN + 0.1 M TBAPF6 under Ar. The FTO/TiO2/[Ru-pyr] 2+ modified electrode was therefore subjected to a photopolymerization experiment. It was irradiated at 450 nm in an air-saturated MeCN solution. After certain irradiation time, it was transferred to another electrochemical cell containing 0.1 M TBAPF6 electrolyte in MeCN to record CVs (Figure V-21). After 3 minutes of irradiation, the oxidation peak at ~ 0.9 V due to Ru III /Ru II and pyrrole monomers already decreases in intensity, suggesting the consumption of pyrrole monomers. The emergence of a new oxidative peak at 0.43 V is ascribable to polypyrrole oxidation, which is in accordance with literature. 25 Increasing the irradiation time shows almost no changes in the CVs.
Figure V -
V Figure V-21. CV of a FTO/TiO2/[Ru-pyr] 2+ electrode under 450 nm irradiation, recorded in MeCN + 0.1 M TBAPF6. Irradiation was performed with a Xe lamp + UV filter + 450-nm bandpass filter. Light power was estimated to be 0.2 W.cm -2 . The UV-vis-NIR spectrum of FTO/TiO2/poly(Ru-pyr) surface is shown in Figure V-22.The absorption of FTO substrate has been subtracted. The broad peak at 420 nm, assigned to
Figure V -
V Figure V-22. UV-vis-NIR absorption spectrum of FTO/TiO2/poly(Ru-pyr) modified electrode after photopolymerization. Absorption of FTO bare electrode has been subtracted. V.5.3 X-ray photoelectron spectroscopy of FTO/TiO2/poly(Ru-pyr) electrode
Figure V -
V Figure V-23. XPS survey spectrum of FTO/TiO2/poly(Ru-pyr) electrode fabricated with the Photo-EPD method High resolution XPS characterizations are then focused on the regions corresponding to the elements of interest.Figure V-24 presents their Ti2p spectra. They are well fit with two
Figure V-24 presents their Ti2p spectra. They are well fit with twoGaussian lines which peak at 457.9 eV and 463.7 eV. The lines are assigned to Ti2p3/2 and Ti2p1/2 states, 19 respectively, of surface Ti 4+ ions. Similar peak position and shape for the two samples shows that Ti 4+ remains intact after the EPD process.
Figure V -
V Figure V-24. Ti2p XPS core level spectra of (a) TiO2/poly(Ru-pyr) and (b) FTO/TiO2/poly(Ru-pyr). The red lines represent the fitting obtained by the sum of the Gaussian peaks in blue lines
Figure V -
V Figure V-25 presents the C1s and Ru3d XPS spectra of the nanocomposite and the film. The Ru3d peak of FTO/TiO2/poly(Ru-pyr) can be deconvoluted into two Gaussian lines which peak at 280.1 eV and 284.3 eV. They are ascribed to the Ru3d5/2 and Ru3d3/2 states, respectively. 19 Both peak positions are shifted ~ 0.5 eV to lower binding energies compared to the Ru3d5/2 and Ru3d3/2 peaks of TiO2/poly(Ru-pyr) NPs (280.7 and 284.8 eV). The line shape remains unchanged, indicating the coordination sphere of [Ru(bpy)3] 2+ is conserved after the EPD process. The part of Ru3d3/2 state at 284.3 eV that overlaps with the C1s signal has been subtracted before the deconvolution of the C1s peak. The C1s peak is then decomposed into three Gaussian lines at 284.2 eV, 285.6 eV and 287.5 eV, which are similar to the TiO2/poly(Ru-pyr) nanocomposite. The three lines are thus assigned to C-C, C-N and COO bonds. The strong increase in COO species may stem from impurities on the FTO substrate.
Figure V- 25 .
25 Figure V-25. C1s and Ru3d (5/2 and 3/2) XPS core level spectra of (a) TiO2/poly(Ru-pyr) and (b) FTO/TiO2/poly(Ru-pyr). The red lines represent the fitting obtained by the sum of the Gaussian peaks in blue lines
Figure V -
V Figure V-26. N1s XPS core level spectra of (a) TiO2/poly(Ru-pyr) and (b) FTO/TiO2/poly(Ru-pyr). The red lines represent the fitting obtained by the sum of the Gaussian peaks in blue lines
Figure V -
V Figure V-27 displays the O1s XPS spectra of the two samples. On FTO/TiO2/poly(Ru-pyr) the O1s is evidently more asymmetric on the higher binding energy side. The spectrum is decomposed into three lines at 529.4 eV, 530.4 eV and 531.7 eV. Meanwhile the spectrum of TiO2/poly(Ru-pyr) can only be deconvoluted into two lines at 529.2 eV and 530.6 eV, which are ascribed to O 2-in TiO2 and O atoms in phosphonate group, respectively. Hence for the film, the two lines at lower binding energies is probably contributed by not only O 2-(TiO2) and O (phosphonate) but also O 2-(SnO2) which cannot be distinguished from the formers.
Figure V -
V Figure V-27. O1s XPS core level spectra of (a) TiO2/poly(Ru-pyr) and (b) FTO/TiO2/poly(Ru-pyr). The red lines represent the fitting obtained by the sum of the Gaussian peaks in blue lines Quantitative determination of percentage atomic concentration of some elements in TiO2/poly(Ru-pyr) NPs and FTO/TiO2/poly(Ru-pyr) thin film is collected in Table V-7. After the EPD of the nanocomposite onto FTO, Ru, N (bpy), N (pyrrole) and C (C-N bonds)
V. 5 . 4
54 Photo method (Figure V-29) were studied for comparison. Micrographs taken from a large scale (Figures a) show aggregates with several micrometers in diameter in both surfaces. EPD-Photo electrode shows additional much smaller particles surrounding the big aggregates.It should be noted that these small particles are also present on the Photo-EPD electrode but their amount is drastically reduced. In both electrodes, only a part of the FTO electrode is covered by the nanocomposite. This observation is in agreement with the XPS survey spectrum of FTO/TiO2/poly(Ru-pyr) electrode which shows a significant signal of Sn 4+ cations.
Figure V -
V Figure V-28. SEM micrographs of FTO/TiO2/poly(Ru-pyr) electrode prepared by the Photo-EPD strategy. (a) and (b) Topography of the surface recorded by an ETD detector. (c) Chemical contrast recorded by a CBS detector.
Figure V -
V Figure V-29. SEM micrographs of FTO/TiO2/poly(Ru-pyr) electrode prepared by the EPD-Photo strategy. (a) and (b) Topography of the surface recorded by an ETD detector. (c) Chemical contrast recorded by a CBS detector.
Figure V -
V Figure V-30 shows the AFM images of the electrodes.
Figure V - 6 V. 5 . 5
V655 Figure V-30. AFM micrographs of (a) blank FTO, (b) FTO/TiO2/Ru II , (c) FTO/TiO2/poly(Ru-pyr) prepared by the Photo-EPD strategy, and (d) FTO/TiO2/poly(Ru-pyr) prepared by the EPD-Photo strategy. Surface RMS roughness are 7.4, 19, 8.8 and 5.5 nm, respectively. In the absence of polypyrrole, the FTO/TiO2/Ru II electrode shows a very rough surface with large aggregates (Figure V-30b), where its root-mean-square (RMS) roughness strikingly increases from 7.4 nm for blank FTO (Figure V-30a) to 19 nm. In the presence of polypyrrole, both FTO/TiO2/poly(Ru-pyr) electrodes made by the Photo-EPD and EPD-Photo methods exhibit RMS roughness comparable to the blank FTO (8.8 and 5.5 nm for the Photo-EPD electrode and EPD-Photo electrode, respectively). The EPD-Photo electrode also shows relatively uniform aggregates with diameter of ~ 100 nm and a more homogeneous surface than the Photo-EPD one. Therefore, the polypyrrole is proved to allow for better nanostructuration of the NPs as evidenced by smaller, more homogeneously distributed
Figure V- 31 .
31 Figure V-31. Electron transfer mechanism for the photocurrent generation using FTO/TiO2/Ru II or FTO/TiO2/poly(Ru II -pyr) electrodes under visible irradiation. CB and VB are conduction band and valence band of TiO2, respectively The photocurrent generation of the electrodes is presented in Figure V-32a. Both FTO/TiO2/poly(Ru-pyr) electrodes show significantly enhanced photocurrents (Photo-EPD:
Figure V -
V Figure V-32. (a) Photocurrent generation and (b) photocurrent stability of various electrodes. Experiments were conducted in MeCN + 0.1 M TBAPF6 solution under visible irradiation, in the presence of 15 mM TEOA as sacrificial electron donor. Eapplied = 0.12 V vs Ag/Ag + 0.01 M. Light power was estimated as 1.3 W.cm -2 . The notions (1) and (2) represent the Photo-EPD strategy and EPD-Photo strategy respectively.
Film
photopolymerization on the surface (EPD-Photo method). The presence of non-homogeneous distribution of Ru II species and micrometer-sized aggregates on the electrodes are confirmed by SEM images in both cases. AFM images show that the presence of polypyrrole allows for better nanostructuration of the NPs than without the polypyrrole. Both Photo-EPD and EPD-Photo electrodes exhibit enhanced photocurrent under visible irradiation compared with FTO/TiO2/Ru II where polypyrrole is absent. This observation proves the importance of the polypyrrole coating layer to increase the conductivity and charge injection rate to TiO2 semiconducting particles. The current magnitude produced by the Photo-EPD electrode is 30 % greater than that of the EPD-Photo electrode. It probably indicates a more efficient deposition process of the NPs already embedded in a polypyrrole network and/or faster charge injection kinetics of the electrode made by the Photo-EPD method. Meanwhile, the electrode made by the EPD-Photo may only have the polypyrrole occurring at the outermost layer.
1 )Scheme 1 :
11 Scheme 1: Structures of BIH and BI(OH)H electron donorsAnother perspective from this thesis is the incorporation of the [Mn(ttpy)(CO)3Br] catalyst to the TiO2/poly(Ru-pyr) nanocomposite, then this hybrid system can be deposited on a FTO electrode. This electrode can be employed for the photoelectrocatalytic reduction of CO2. Similarly, [Cr(ttpy)2] 3+ can also be incorporated in the TiO2/poly(Ru-pyr)
where I is the integrated area under the emission peak, Abs is the absorbance at the excitation wavelength, n is the refractive index of the solvent, and the subscript ref indicates a reference with known luminescent quantum yield. The reference is chosen to be [Ru(bpy)3](PF6)2 with ref = 0.062 in MeCN under Ar.1
photocatalysis was analyzed using a Shimadzu liquid chromatograph LC-10AS equipped with an Alltech Select degassing system, a Perkin Elmer Series 200 UV-vis detector and a Perkin Elmer polypore H column (220 mm, 10 µm pore size, 20 µL injection volume), with the column eluent containing 10 mM H2SO4 in H2O. Both chromatographs were checked by injecting an aqueous solution of HCOONa (0.01 M) prior to the experiments.
1 H NMR (400 MHz, CD3CN): ppm) = 8.49 (d, J = 7.9 Hz, 4H), 8.35 (s, 2H), 8.06 -8.01 (m, 4H), 7.73 (t, J = 5.0 Hz, 4H), 7.53 (d, J = 5.8 Hz , 2H), 7.41 -7.35 (m, 4H), 7.23 (d, J = 5.1 Hz , 2H), 2.53 (s, 6H). Electrochemistry (WE = C disk, RE = Ag/AgNO3 0.01 M, electrolyte = MeCN + 0.1 M TBAPF6, v = 100 mV.s -1 ): E1/2 (V) = 0.91, -1.66, -1.86, -2.12. bpy-PO3Et2. The synthesis of this ligand requires two steps. In the first step, lithium diisopropylamine (LDA) was prepared by dissolving diisopropylamine (DIPA, 630 l, 1.29 mmol) in 3 mL THF under Ar in an oven-dried flask. The solution was cooled to -40 0 C, stirred for 30 minutes before slowly adding n-butyllithium solution (2.5 M in hexane, 0.58 mL, 1.29 mmol). The LDA solution was kept stirring for one hour while the temperature was raised to 0 0 C, forming a pale green solution. In another oven-dried flask, a solution of 4,4'-the glove box and compressed air was bubbled through for 30 mins upon which a color change from purple to dark yellow was observed. The solution exposed to air was left stirring for 12 h. The bright yellow suspension was filtered and the resulting yellow solid recrystallized from a solution of H2O and EtOH and dried under high vacuum to give the product as a dark yellow solid (41 mg, 38 %). Electrochemistry (WE = C disk, RE = Ag/AgNO3 0.01 M, electrolyte = MeCN + 0.1 M TBAPF6, v = 100 mV.s -1 ): E1/2 (V) = -0.34, -0.77, -1.30, -2.21. 1 H NMR cannot be done because the product is paramagnetic.
1 H
1 NMR cannot be done because the product is paramagnetic. UV-vis: λmax (MeCN) = 370 nm (ε = 36500 M -1 .cm -1 ). Electrochemistry: CVs cannot be done directly due to its poor solubility in any solvent except for DMSO. Instead, its redox potentials were studied by grafting [Cr2P] 3+ on a ITO electrode and used it as WE: E1/2 (V) = -0.42, -0.79, -1.32. (WE: area = 1.2 cm 2 , RE = Ag/AgNO3 0.01 M, electrolyte = MeCN + 0.1 M TBAPF6, v = 100 mV.s -1 ).
potentials of [Cr1P] 3+ were studied by grafting it on a ITO electrode and used it as WE: E1/2 (V) = -0.44, -0.81, -1.34. (WE: area = 1.2 cm 2 , RE = Ag/AgNO3 0.01 M, electrolyte = MeCN + 0.1 M TBAPF6, v = 100 mV.s -1 ).
(300 MHz, CDCl3): ppm) = 9.85 (s, 1H),7.80 (d, J = 8.7 Hz, 2H), 6.96 (d, J = 8.7 Hz, 2H), 4.10 (m, 6H), 2.18-1.86 (m, 4H), 1.30 (t, J = 7.0 Hz, 6H). 13 C NMR (75 MHz, CDCl3): ppm) =
P NMR (121 MHz, CDCl3): ppm) = 31.2. MS-APCI: m/z = 301 [M+H] +
P NMR (162 MHz, CDCl3): ppm) = 31.6. MS-APCI: m/z = 504 [M+H] +
1 H
1 NMR (400 MHz, [D6]DMSO): ppm) = 9.02 (d, J = 8.0 Hz, 2H), 8.94 (d, J = 4.8 Hz, 2H), 8.86 (s, 2H), 8.44-8.40 (m, 2H), 8.05 (d, J = 8.7 Hz, 2H), 7.87-7.84 (m, 2H), 7.19 (d, J = 8.8 Hz, 2H), 4.16 (t, J = 6.3 Hz, 2H), 2.02-1.92 (m, 2H), 1.76-1.68 (m, 2H). 13 C NMR (100 MHz, [D6]DMSO): ppm) = 160.
P NMR (162 MHz, [D6]DMSO): ppm) = 25.9. MS-MALDITOF: m/z = 448 [M+H] + dmbpy-pyr. The synthesis of this ligand consists of two steps. In the first step, a lithium diisopropylamine (LDA) was prepared by dissolving diisopropylamine (DIPA, 630 l, 4.07 mmol) in 5 mL THF under Ar. The solution was cooled to -20 0 C, stirred for 30 minutes before slowly adding n-butyllithium solution (2.5 M in hexane, 1.85 mL, 4.07 mmol). The LDA solution was kept stirring for one hour. In another oven-dried flask, a solution of 4,4'dimethyl-2,2'-dipyridine (dmbpy, 500 mg, 2.714 mmol) in 20 mL dried THF was prepared under Ar and cooled to -70 0 C. The LDA solution was added dropwise to the dmbpy solution at -70 0 C, and the mixture was stirred for 1 hour. In the second step, 1-(10bromodecyl)pyrrole (Br-pyr, 776 mg, 2.714 mmol) was added to the mixture, then the temperature was slowly raised to RT overnight. The reaction was stopped by adding water, then extracting with CH2Cl2 (x3), dried under vacuum, and the product was separated by chromatography (CH2Cl2/Ethyl acetate, 98/2 v/v) to yield a white powder (270 mg, 26 %).
J = 8.4 Hz, 2H),2.50 (s, 3H). FT-IR (KBr pellet, C=O stretching): (cm -1 ) =2022, 1948, 1916.
washed in pure ethanol by centrifugation(1 hour, 5000 rpm). They were subsequently dried in an oven at 80 0 C overnight and calcined at 450 0 C for 2 hours to yield a white powder (276 mg). Titania NPs with a mixed anataserutile (80% -20%) phase were achieved by calcination of the anatase TiO2 NPs at 700 0 C instead of 450 0 C for 2 hours. TiO2/Ru II and SiO2/Ru II . In a typical experiment, a suspension of the TiO2 NPs (anatase, 50 mg) or SiO2 (d < 20 nm, 50 mg) in ethanol/acetone (8/2, v/v) was sonicated for 20 minutes. Subsequently, the [RuP] 2+ complex (14.2 mg, 14 mol) was added to the suspension, and the only [Cr2P] 3+ . In a typical experiment for 10% spacer + 90% [Cr2P] 3+ , a mixture of BPA(1.35 mol) and [Cr2P] 3+ (1.5 mol) in 25 mL DMF/DMSO (8/2 v/v) solution was used to graft onto anatase TiO2 NPs (50 mg), yielding 54.3 mg product. The loading of [Cr2P] 3+ on TiO2 is estimated to be 0.03 mmol.g -1 (about 570 Cr III molecules per particle or 0.3 Cr III molecules per nm 2 ). Ru 2+ /TiO2/[Cr2P] 3+ (Ru:Cr = 1:1). A suspension of TiO2 (anatase, 50 mg) in DMF/DMSO (8/2, v/v) solvent and a mixture of [Cr2P] 3+ (5.7 mg, 5 mol) and [RuP] 2+ (5.1 mg, 5 mol) in the same solvent were sonicated for 20 minutes. They were then mixed together and stirred vigorously for 60 hours in dark. Afterwards, the NPs were separated by centrifugation, washed in DMF/DMSO, DMF and ethanol, respectively, and dried in an oven overnight to yield an orange powder (54.7 mg). Both complexes are expected to be grafted quantitatively. The loadings of [Cr2P] 3+ and [Ru-PO3H2] 2+ are estimated to be 0.10 mmol.g -1 (about 1600 molecules per particle or 1 molecule per nm 2 ). Ru 2+ /TiO2/[Cr2P] 3+ and BPA spacers. Similar to the Ru 2+ /TiO2/[Cr2P] 3+ triad but the [Cr2P] 3+ content was reduced. In a typical experiment for 20 % Ru, 2 % Cr and 78 % spacer, we used a mixture as follows: TiO2 (50 mg), [Cr2P] 3+ (0.3 mg, 0.3 µmol), [Ru-PO3H2] 2+ (3It has been synthesized following a two-step procedure. First, the ttpy-(CH2)3-PO3H2 ligand (4.5 mg, 10 µmol) and TiO2 anatase NPs (50 mg) were solubilized in 25 mL DMSO. The mixture was stirred at RT for 20 h in dark. Afterwards, the modified NPs were separated by centrifugation and washed multipletimes with DMSO and acetone to remove unadsorbed ligands. In the second step, the NPs were mixed with Mn(CO)5Br (2.8 mg, 10 µmol) in 25 mL acetone and refluxed at 45 0 C for 3 h in dark. Finally, the TiO2/Mn I NPs were centrifuged and washed thoroughly with acetone prior to be dried under vacuum to yield a pale yellow powder (40.7 mg). Loading of Mn I on TiO2 is estimated to be ~0.20 mmol.g -1 (about 3800 Mn I molecules per particle or 2 Mn I molecules per nm 2 ). FT-IR (KBr pellet, C=O stretching): (cm -1 ) = 2022, 1900It has been synthesized by stepwise grafting the Mn and Ru complexes on TiO2 anatase NPs (50 mg), respectively. The grafting procedure for Mn I is similar to that described for TiO2/Mn I NPs in the above paragraph, except that the contents of ttpy-(CH2)3-PO3H2 and Mn(CO)5Br have been reduced to 0.7 µmol (ie. 0.31 mg for ttpy-(CH2)3-PO3H2 and 0.19 mg for Mn(CO)5Br). After being washed thoroughly with acetone and dried under vacuum, the TiO2/Mn I NPs were mixed with [RuP] 2+(7.1 mg, 7 µmol) in 25 mL H2O. The mixture was stirred at RT for 20 h in dark. Afterwards, the modified NPs were separated by centrifugation and washed thoroughly with water, then acetone to remove water from the particles. 
Finally, the acetone solvent was evaporated to yield a pale orange powder (23.6 mg).TiO2/[Ru-pyr] 2+ . This modification of anatase TiO2 NPs requires two steps. In the first step, a suspension of the NPs (50 mg) in distilled water was sonicated for 20 minutes, before addition of bpy-PO3H2 (38 mol, 11.7 mg) and water to make a total volume of 25 mL. The mixture was vigorously stirred at RT overnight to functionalize the surface with bpy ligands. The NPs were then excessively washed with water and separated by centrifugation (4000 rpm, 45 minutes) until the solution turns from acidic to neutral pH, and no bpy-PO3H2 was spectroscopically detected from the supernatant solution. The particles were then washed two times with ethanol prior to their complexation. In the second step, a suspension of TiO2/bpy NPs in ethanol/acetone (8/2, v/v) was sonicated for 20 minutes, before the addition of [Ru(dmbpy-pyr)2]Cl2 (15.3 mol, 14.6 mg) and the mixed solvent to a total volume of 25 mL. The mixture was refluxed for 60 hours. The TiO2/[Ru-pyr] 2+ NPs were washed in ethanol/acetone and ethanol excessively, separated by centrifugation (4000 rpm, 45 minutes) and dried in an oven overnight to yield a dark red powder (44.4 mg). Loading of [Ru-pyr] 2+ is estimated to be ~0.20 mmol.g -1 (about 3800 molecules per particle or 2 molecules per nm 2 ). Electrochemistry (in MeCN + 0.1 M TBAPF6, WE = Pt microcavity electrode, v = 100 mV.s -1 ): E1/2 = 0.82 V (irreversible).
Ce mémoire vise à montrer l'intérêt de nanoparticules (NPs) de TiO2 comme plateforme pour immobiliser dans un environnement proche des complexes de coordination pouvant interagir par transfert d'électron photoinduit. Nous nous sommes intéressés à l'étude de nanomatériaux hybrides associant le complexe [Ru(bpy)3] 2+ (bpy = 2,2'-bipyridine) comme photosensibilisateur aux complexes [Cr(ttpy)2] 3+ ou [Mn(ttpy)(CO)3Br] (ttpy = 4'-(p-tolyl)-2,2':6',2''-terpyridine) comme accepteurs d'électrons. Pour immobiliser les différents complexes à la surface du TiO2, une fonction acide phosphonique a été introduite sur une des bipyridines du centre [Ru(bpy)3] 2+ et sur la terpyridine des complexes [Cr(ttpy)2] 3+ . L'étude des processus de transferts de charges photo-induits sous irradiation en lumière visible sur le colloïde TiO2/Ru II montre que l'état à charges séparées (e -)TiO2/Ru III possède une longue durée de vie, ce qui rend possible l'utilisation des charges dans des réactions successives d'oxydation ou de réduction. Notamment l'irradiation du colloïde TiO2/Ru II en présence de [Cr(ttpy)2] 3+ et de triéthanolamine (TEOA) comme donneur d'électron sacrificiel permet la réduction à deux électrons du [Cr(ttpy)2] 3+ . Par la suite, le complexe [Cr(ttpy)2] 3+ est immobilisé sur les NPs de TiO2/Ru II pour former un assemblage Ru II /TiO2/Cr III au sein duquel les processus de transfert d'électrons photo-induits sont étudiés. De manière à proposer un système pour la réduction photocatalytique du CO2, le complexe [Mn(ttpy)(CO)3Br] a été co-immobilisé avec le [Ru(bpy)3] 2+ suivant une approche de chimie sur surface pour former le colloïde Ru II /TiO2/Mn I . Ce système présente une excellente sélectivité vis-à-vis du HCOOH comme seul produit de la photoréduction du CO2 en présence de 1-benzyl-1,4dihydronicotinamide (BNAH) comme donneur d'électron sacrificiel. Un système hybride associant le [Ru(bpy)3] 2+ portant des fonctions pyrroles et immobilisé sur TiO2 a également été synthétisé et étudié. Sous irradiation lumineuse, le transfert de charge (e -)TiO2/[Ru-pyr] 3+ permet d'induire la polymérisation du pyrrole. Le nanocomposite TiO2/poly(Ru-pyr) obtenu et déposé sur une électrode génère, en présence de TEOA, un photocourant anodique stable de plus de 10 μA.cm -2 . L'ensemble des résultats montre que les NPs de TiO2 peuvent être un moyen d'assembler des complexes dans un environnement proche en limitant les interactions à l'état fondamental, mais permettant des transferts d'électron photoinduits entre eux. Suivant les potentiels redox des différents composants, les transferts d'électron ont lieu soit via la nanoparticule soit en surface de celle-ci.
Table I - 1 .
I1 Summary of the photocatalytic reduction of CO2 using TiO2 NPs mentioned in reference 14
Catalyst * Crystal Particle Surface CO2 Bandgap Amount of CH4
phase size area adsorbed (eV) (µmol.h -1 .g -1 )
(nm) (m 2 .g -1 ) (µmol.g -1 )
JRC-TIO-2 Anatase 400 16 1 3.47 0.03
JRC-TIO-3 Rutile 30-50 51 17 3.32 0.02
JRC-TIO-4 Anatase 21 49 10 3.50 0.17
JRC-TIO-5 Rutile 640 3 0.4 3.09 0.04
* Japan TiO2 reference catalyst
3] 2+* excited state. The Rehm-Weller equations can be re-written as
follows:
E III (Ru /Ru ) II* E III (Ru /Ru ) II III (Ru /Ru ) II ,77 K em em 1240 ,RT () hc E e nm (Eq I-7)
E II* (Ru /Ru ) I II (Ru /Ru )+ I ,77 K em hc E e II I (Ru /Ru )+ em 1240 ,RT () E nm (Eq I-8)
4. Anchoring functional groups to graft a metal complex onto a surface
G * ( ) nF E D 1/2 ox 1/2 ( ) red E A or G * ( ) nF E D E A 1/2 1/2 ( ) ox red (Eq I-10)
where * 1/2 ()
ox ED or I.2.
Table I - 2 .
I2 Common anchoring groups for various surfaces
Surface Anchoring groups References
TiO2 Phosphonic acid R-PO3H2 26
Carboxylic acid R-COOH 27
Catechol Ph-(OH)2 27
Au Thiol R-SH 28
SiO2 Silane R3-SiH 29
Phosphonic acid R-PO3H2 30
ZrO2 Phosphonic acid R-PO3H2 26
Silane R-SiH3 31
ITO Phosphonic acid R-PO3H2 32
Carboxylic acid R-COOH 33
2+* was efficiently quenched by electron injection to TiO2 colloids with kinj > 5 × 10 7 s -1 and > 80% efficiency. Back electron transfer occurred in a microsecond time scale,
Argazzi et al. 39
reported a long-lived charge separated state when they irradiated TiO2/[Ru(bpy)2(dcb)] 2+ (Scheme I-2c) NPs in aqueous solution at pH 2. The emission of [Ru(bpy)2(dcb)]
The resulting TiO2/[Ru-pyr] 2+ NPs can be photopolymerized under visible light to form a nanocomposite TiO2/poly(Ru-pyr) where a polypyrrole network is formed around the TiO2 NPs. In this photopolymerization process, TiO2 acts as an electron acceptor for [Ru(bpy)3] 2+* excited state, and the electrons injected on TiO2 are scavenged by O2. Afterwards, the
Abstract
In this chapter we will present the electrochemical and photophysical properties of a
[Ru(bpy)3] 2+ photosensitizer (bpy = 2,2'-bipyridine) bearing a phosphonate group.
Afterwards, the phosphonate group is converted to phosphonic acid group to anchor on TiO2
nanocomposite is deposited onto an electrode for light-to-electricity energy conversion and SiO2 nanoparticles. Photo-induced charge and energy transfer processes in TiO2/Ru II and
application. SiO2/Ru II colloids under visible light have been studied by time-resolved emission
spectroscopy and transient absorption spectroscopy. The electron injection from [Ru(bpy)3] 2+*
excited state to the conduction band of TiO2 occurs in nanosecond time scale to form the
charge separated state (e -)TiO2/Ru III , whereas charge recombination is in millisecond time
scale. The trapped electrons on the Ti 3+ sites of TiO2 are evidenced by EPR spectroscopy
under continuous irradiation. The efficient, long-lived charge separated state (e -)TiO2/Ru III
CHAPTER II
PHOTO-INDUCED CHARGE TRANSFER
PROCESSES ON Ru(II) TRIS-BIPYRIDINE
SENSITIZED TiO 2 NANOPARTICLES
provides good opportunities to harvest the charges for subsequent redox reactions, which will be discussed in the remaining chapters.
Résumé
Dans ce chapitre, nous présentons les propriétés électrochimiques et photophysiques d'un nouveau complexe du [Ru(bpy)3] 2+ (bpy = 2,2'-bipyridine) portant une fonction d'ancrage de type phosphonate sur une des bipyridines. Par la suite, la fonction phosphonate est hydrolysée en acide phosphonique pour permettre l'immobilisation des complexes du ruthénium sur des nanoparticules de TiO2 et SiO2. Les processus de transfert de charge et d'énergie induits par irradiation lumineuse dans les systèmes hydrides colloïdes TiO2/Ru II et SiO2/Ru II sous lumière visible ont été étudiés par spectroscopie d'émission résolus en temps et spectroscopie d'absorption transitoire en régime nanoseconde. L'injection d'électrons de l'état
Table II -
II
em, , ns (%) b
nm a
[Ru(bpy)3] 2+ 286 [75000] 451 [13000] 608 757 ± 9 0.062
[Ru(bpy)2(dmbpy)] 2+ 286 [82000] 454 [14000] 614 986 ± 12 0.070
[Ru-PO3Et2] 2+ 286 [80000] 454 [14100] 615 895 ± 10 0.068
1. Summary of photophysical properties of [Ru(bpy)3] 2+ , [Ru(bpy)2(dmbpy)] 2+ and [Ru-PO3Et2] 2+ : MLCT absorption peak abs, peak of luminescent emission em, lifetime of the photoexcited Ru 2+* species and the emission quantum yield . LCCT abs, nm [, M -1 .cm -1 ] 1 MLCT abs, nm [, M -1 .cm -1 ] a Excitation at 450 nm b Excitation by a picosecond pulsed laser at 400 nm All measurements were conducted in MeCN solution under Ar at room temperature.
Table II -
II is the Planck constant, c the speed of light, em,77 K and em,RT the wavelength at emission maximum at 77 K and room temperature, respectively. The redox potential of Ru III /Ru II* is determined to be -1.10 V vs Ag/AgNO3 0.01 M for both [Ru-PO3Et2] 2+ and [Ru(bpy)3] 2+ complexes. It can be concluded that [Ru-PO3Et2] 2+ complex retains beneficial electrochemical and photoredox properties of the [Ru(bpy)3] 2+ complex.
where h II.2.4. Electron paramagnetic resonance spectroscopy
The electron paramagnetic resonance (EPR) spectroscopy is a convenient technique for
qualitative and quantitative studies of paramagnetic species, notably free radicals and
transition metal complexes. [Ru(bpy)3] n+ (n = 3, 2, 1) complexes were chosen as a reference
for subsequent studies of the [Ru-PO3Et2] 2+ grafted on SiO2 or TiO2 NPs. The complexes
were electrochemically synthesized by exhaustive electrolysis of [Ru(bpy)3] 2+ in MeCN + 0.1
featured at 2590 G corresponding to g = 2.695 In literature the
EPR signal of [Ru(bpy)3](PF6)3 at 77 K is characterized with g = 2.65 and g// = 1.14. 9 The
latter g component is not observed in our experiment as it is broad and weak in intensity. The
signal is typical for a change in the oxidation state of a transition metal. It is then ascribed to a
Ru II Ru III oxidation. Eox Ered (1) Ered (2) Ered (3)
[Ru(bpy)] 2+,5 When singly reduced to [Ru(bpy)3] + , the complex shows a much narrower signal 0.94 -1.67 -1.83 -2.07
[Ru(bpy)2(dmbpy)] 2+ (Figure II-5b) featured by g = 2.002 and g// = 1.984, which is ascribed to an electron 0.91 (0.06) -1.66 (0.06) -1.86 (0.06) -2.12 (0.08)
[Ru-PO3Et2] 2+ localized on a bpy ligand, [Ru II (bpy -)(bpy)2] + . The signals are also in line with previous 0.92 (0.06) -1.67 (0.06) -1.86 (0.06) -2.12 (0.08)
studies. 8-10
In this study, [Ru-PO3Et2] 2+ is used as a photosensitizer to inject an electron to the
conduction band (CB) of TiO2 NPs under visible irradiation, forming (e -)TiO2 and [RuP] 3+ .
Therefore, the redox potential of [RuP] III/II* is of great interest to study the photo-induced
electron transfer process. From the redox potentials of the ground state and the emission
spectra of the complexes, one can estimate the redox potential of Ru III/II* by using the
E III (Ru /Ru ) II* E III (Ru /Ru ) II III (Ru /Ru ) II ,77 K 1240 ,RT em em hc E e (Eq II-1)
2. Summary of electrochemical properties of [Ru(bpy)3] 2+ , [Ru(bpy)2(dmbpy)] 2+ and [Ru-PO3Et2] 2+ complexes E1/2 (Ep), V simplified Rehm-Weller equation (Equation II-1): M TBAPF6 solution under Ar. The applied potentials were 1.1 V to form [Ru(bpy)3] 3+ and -1.65 V to form [Ru(bpy)3] + . EPR signals of [Ru(bpy)3] n+ (n = 3, 1) complexes are presented in Figure II-5. The starting complex [Ru(bpy)3] 2+ possesses a low spin d 6 electronic configuration, 5,8 making it EPR silent. When oxidized to [Ru(bpy)3] 3+ , it is transformed to a d 5 configuration and showing a broad signal over ~ 2000 G. The signal of [Ru(bpy)3] 3+ (Figure II-5a) is
Table II -3 summarizes
II the [RuP] 2+ dye loading values on commercially available SiO2 and anatase TiO2 NPs. The loadings per particle and per nm 2 are roughly estimated by assuming the diameter of TiO2 and SiO2 as 25 nm and 20 nm, respectively. For more detail the reader is kindly referred to the Experimental Section. The loadings per nm 2 for both kinds of NPs are in agreement with the loadings obtained with flat surfaces of SiO2 and TiO2
described in Section II.3.2. Table II-3. Loading
of [RuP] 2+ dye on commercially available SiO2 and TiO2 (anatase) NPs.
Loading (mmol.g -1 ) Molecules per particle Molecules per nm 2
SiO2/Ru II NPs 0.076 460 0.5
TiO2/Ru II NPs 0.21 4000 2.0
II.3.4. Infrared spectroscopy
Figure II-12 presents the FT-IR spectra of SiO2/Ru II (a) and TiO2/Ru II NPs (b), in
comparison with the phosphonic-derivatized dye, [RuP] 2+ , and naked SiO2 and TiO2 NPs. The
complex spectrum shows the P=O stretch at 1250 cm -1 , which disappears in the spectrum of
TiO2/Ru II . A shoulder at ~1150 cm -1 for TiO2/Ru II (shown as a peak in the subtracted
spectrum 4) is attributed to the bidentate surface linkage of -P(O)(O -)2.
Figure II-13a presents the
III /Ru II and Ru II /Ru I potentials.
[Ru-PO3Et2] 2+ complex in MeCN solution (
CV of the FTO/Ru II electrode. Reversible waves at E1/2 = 0.93 V and -1.76 V (E = 60 mV)
are attributed to Ru III /Ru II and Ru II /Ru I couples, which is in accordance with the protected
see Section II.2.3). A linear relationship between the
anodic peak current Ipa vs scan rate v (Figure II-13b) for the Ru III /Ru II peak proves that the complex has been successfully grafted onto the electrode.
on TiO2 and SiO2 NPs
Our attempts to characterize the electrochemical properties of SiO2/Ru II and TiO2/Ru II
in colloidal solution with C, Pt disk electrodes or C sponge electrode were unsuccessful due to
low conductivity of TiO2 and SiO2 and low coverage of the redox active species on the
electrode surface. Therefore, SiO2/Ru II and TiO2/Ru II NPs were examined in powder form by
cyclic voltammetry using a microcavity Pt working electrode (d = 50 µm). In the anodic
portion, the SiO2/Ru II exhibits a reversible redox wave at E1/2 = 0.92 V (Ep = 60 mV)
(Figure II-14a), whereas the TiO2/Ru II shows a quasi-reversible wave at E1/2 = 0.98 V (Ep =
130 mV) (Figure II-14c). The waves can be attributed to the Ru III /Ru II couple. The redox
potential of Ru II complex anchored on SiO2 is similar to that of [Ru-PO3Et2] 2+ free complex in
MeCN and FTO/Ru II electrode, while there is a shift to a more positive oxidation potential
when the complex is anchored on TiO2. Positive shifts of the redox potential of Ru III /Ru II
complexes grafted on a nanocrystalline TiO2 thin film versus free complexes in solution have
also been reported for phosphonate 3 -and carboxylate 24 -derivatized complexes.
In the cathodic portion of SiO2/Ru II NPs (Figure II-14a), the Ru II /Ru I peak at ca. -1.7 V
of the precursor [Ru-PO3Et2] 2+ is not observed. In the case of TiO2/Ru II (Figure II-14b), the
direct charge injection to the CB of TiO2 starts at about -1.0 V vs Ag/Ag + 0.01 M. A similar
CV for TiO2 film deposited onto a graphite electrode (Figure II-15)
Figure II-16a presents
their absorption after subtracting the
absorbance of KBr + NPs. The peaks at ~450 nm of both functionalized NPs can be attributed
to the 1 MLCT of [Ru(bpy)3] 2+ , as the [Ru-PO3Et2] 2+ shows a similar peak. The LCCT peak at
286 nm is evidenced for the SiO2/Ru II NPs. For TiO2/Ru II NPs, the absorption overlap
between Ru II dye and TiO2 in the UV region makes it impossible to observe the LCCT at 286
nm for the dye. In fact, the absorption of TiO2 NPs starts at ~392 nm (dotted line). This
absorption of TiO2 allows for the bandgap calculation: E g 1240 1240 3.2 (eV) 392 g . The
value is in accordance with the bandgap of bulk anatase TiO2, since quantum confinement
effects which cause larger bandgaps only take place where the diameter of the particles is
below 1 nm. 26
The emission of grafted Ru II* excited state was normalized and shown
in Figure II-16b.
b) Time-resolved emission spectroscopy
In order to gain more insight into this emission shift, a mixture of [RuP] 2+ (10%) and
benzylphosphonic acid (BPA, 90%) was immobilized onto TiO2 NPs. The BPA acts as a
lateral spacer to separate the Ru II species on the particles. The resulting NPs, denoted as
[TiO2/Ru II + spacers], display an emission peak at 615 nm (Figure II-16b) similar to that of
SiO2/Ru II and [Ru-PO3Et2] 2+ . Hence, the red-shift emission of TiO2/Ru II NPs could be
attributed to the enhanced energy transfer between neighboring Ru II* and Ru II species.
Our attempts to measure the luminescence quantum yield of SiO2/Ru II and TiO2/Ru II
colloids in MeCN were not successful. In order to measure it correctly, the absorbance of Ru II
species grafted on the NPs at 450 nm needs to be approximately equal to that of
[Ru(bpy)3](PF6)2 reference. However, the light scattering, which gradually raises the
absorbance as the probe light is swept from long to short wavelengths, strongly affects the
Ru II absorbance at 450 nm. Mathematical attempts to subtract this light scattering effect from
the real absorption of Ru II species are not efficient. The effect makes it impossible to correctly
determine the amount of Ru II anchored on SiO2 and TiO2 NPs, thus the quantum yields are
not measured.
The time-resolved emission spectra of SiO2/Ru II* and TiO2/Ru II* were recorded after
excitation by a 400 nm pulsed laser. The decays of the Ru II* excited states are presented in
Figure II-17. The signal of SiO2/Ru II NPs (black line) can be fit with a monoexponential
decay to give a lifetime of 838 ns, which is comparable to that of the [Ru-PO3Et2] 2+ complex
in Ar-saturated MeCN solution (907 ns). The signal of TiO2/Ru II (red line), however, can only
be fit with a biexponential function to give two lifetime values: 366 ns and 32 ns. The
fractional amplitudes of these lifetimes are 12% and 88%, respectively. The [TiO2/Ru II +
spacers] NPs also exhibit a biexponential decay and two lifetimes with corresponding
fractional amplitudes: 488 ns (2%) and 8 ns (98%). Hence, the short lifetime component of
TiO2/Ru II* NPs is attributed to the electron injection from the Ru II* to the CB of TiO2, which
is not observed for SiO2/Ru II NPs. The photo-induced electron transfer process from the Ru II*
state to TiO2 is not total, as evidenced by the presence of a second component in the decay.
This longer component still remains shorter than the excited state lifetime of [Ru-PO3Et2] 2+
in Table II-4.
Using SiO2/Ru II lifetime as a reference where there are no electron injections, the
kinetics of electron injection from Ru II* to TiO2 substrate, kinj, can be calculated by Equation
II-5:
k inj 2 1/ 11 SiO Ru (Eq II-5)
where 1 and 2 / SiO Ru (s) are the short lifetime of TiO2/Ru II* and the lifetime of SiO2/Ru II* ,
respectively. Similarly, the kinetics of energy quenching of Ru II* by neighboring Ru II , kq,
referenced to [Ru-PO3Et2] 2+ free complex in solution, can be estimated by Equation II-6:
k q 2 [] 32 11 2 Ru PO Et (Eq II-6)
where 2 and 2 [] Ru PO Et 32 (s) are the long lifetime of TiO2/Ru II* and the lifetime of [Ru-
PO3Et2] 2+ , respectively. The kinetics values, together with the photophysical properties of the
NPs, are collected
Table II -4. Summary
II of photophysical properties of SiO2/Ru II and TiO2/Ru II colloids in MeCN
1 MLCT abs, nm em, nm , ns (%) a kinj, s -1, b kq, s -1, c
[Ru-PO3Et2] 2+ 450 615 895 ± 10 - -
SiO2/Ru II NPs 450 612 838 ± 43 - 7.6 × 10 4
TiO2/Ru II NPs 450 627 366 ± 7 (12%) 3.0 × 10 7 1.6 × 10 6
32 ± 0.6 (88%)
Ru II /TiO2/BPA 450 615 488 ± 33 (2%) 1.2 × 10 8 9.5 × 10 5
(Ru:BPA = 1:9) 8 ± 0.4 (98%)
a Emission lifetime recorded at 610 nm after excitation by 400 nm nanosecond pulsed laser
b Kinetics of electron injection from Ru II* to TiO2
c Kinetics of energy transfer from Ru II* to a neighboring Ru II
As seen in Table II-4, the rate of energy transfer on SiO2/Ru II NPs is significantly
slower than on TiO2/Ru II due to lower loading of the [RuP] 2+ dye on SiO2 (0.076 mmol.g -1 )
Table II-5 summarizes the
kinetics of electron injection and kinetics of energy transfer for similar systems in the
aforementioned works.
Table II -5. Summary
II of kinj and kq values obtained in literature for similar systems
Table II -6. Forster
II resonance energy transfer: lifetime, efficiency and the distance between neighboring Ru II
species on TiO2 and SiO2 surfaces
, ns Efficiency r (Ru-Ru), nm
[Ru-PO3Et2] 2+ 895
SiO2/Ru II NPs 838 0.06 2.5
TiO2/Ru II NPs 366 0.60 1.8
Ru II /TiO2/BPA 488 0.45 2.0
It is noted that the energy quenching of SiO2/Ru II* is drastically lower than TiO2/Ru II* ,
which is explained by a greater distance between the Ru II sites. As expected, the addition of
BPA spacers reduces the FRET efficiency. It is however still substantially higher than that of
SiO2/Ru II . The Ru II loading on the surfaces is estimated to be 0.076 mmol.g -1 for SiO2/Ru II
and 0.02 mmol.g -1 for [TiO2/Ru II + spacers]. The higher Ru II loading but lower FRET
efficiency and greater Ru-Ru distance of SiO2/Ru II suggest then a more uniform distribution
of [RuP] 2+ on SiO2 than on TiO2.
It should also be mentioned that taking into account the calculated average distances,
the Dexter quenching mechanism cannot be totally excluded. Statistically speaking, some
pairs of complex molecules can be in contact and their orbitals may overlap to exchange
electrons. However,
the distances calculated by the FRET method are in line with the surface loading of Ru II species (see Section II.3.3): 0.5 molecules/nm 2 for SiO2/Ru II and 2 molecules/nm 2 for TiO2/Ru II . c) Transient absorption spectroscopy
TA measurements were performed in order to study the absorption spectrum of photo-
excited state of SiO2/Ru II* and TiO2/Ru II* and the kinetics of charge recombination. For that
reason we chose to work with MeCN solvent under Ar to avoid any photocatalytic reactions
involving the injected electrons on the CB of TiO2 or the oxidized Ru 3+ species. Diluted
suspensions of NPs in MeCN were required to reduce light scattering effect by the NPs.
Table III - 1 .
III1 Redox potentials of [Cr(ttpy)2] 2+ complex and ttpy ligand In order to get insight into the charge distribution during subsequent reduction steps, DFT calculations have been performed by Dr. Jean-Marie Mouesca (CEA Grenoble/ INAC/ SyMMES/ CAMPES). The GGA (Generalized Gradient Approximation) exchangecorrelation potential was used to obtain the Mulliken charge and spin population distribution over the [Cr(ttpy)2] n+ (n = 3, 2, 1, 0, -1) complexes. The spin populations on Cr atom compose of and spins in its d orbitals which are aligned with or against an external magnetic field, respectively. The spin number partly quantifies how much the electron(s) localized on the ttpy ligand will spread onto the Cr empty d orbitals. The Mulliken charge and spin populations on Cr of different oxidation states are summarized in TableIII-2.
E1/2 (Ep), V Ered (1) Ered (2) Ered (3) Ered (4)
[Cr(ttpy)2] 3+ -0.47 (0.07) -0.85 (0.07) -1.35 (0.07) -2.25 (0.07)
ttpy -2.35 (0.09) - - -
Table III - 2 .
III2 Mulliken charge and spin populations on Cr atoms of [Cr(ttpy)2] n+ complexes. GS = ground state,
ES = excited state
Complex oxidation state Spin state Mulliken (Cr) Charge population Mulliken (Cr) Spin population spin (Cr) population spin (Cr) population
+3 3/2 1.22 3.06 3.62 0.79
+2 1 1.17 2.50 3.41 1.08
+1 1/2 1.14 2.11 3.25 1.26
0 0 (GS) 1.12 1.10 2.80 1.77
1 (ES) 1.17 2.26 3.31 1.19
-1 1/2 (GS) 1.14 1.55 3.00 1.55
3/2 (ES) 1.18 2.48 3.41 1.08
Table III -
III The complex oxidation states have been converged from GGA solutions mentioned above before using the SAOP method to ensure the continuity between the GGA and SAOP calculations. The results are summarized in TableIII-4.
[Cr(ttpy)2] 3+ 370 [36500], 441 [3500], 475 [1500]
[Cr(ttpy)2] 2+ 500 [7700], 584 [2600], 800 [2800]
[Cr(ttpy)2] + 524 [20200], 560 [19500], 723 [5000]
[Cr(ttpy)2] 0 520 [37500], 571 [33600], 670 [19900], 776 [10500]
Time-Dependent (TD) DFT calculations have also been carried out by Dr. Jean-Marie
3. UV-vis extinction coefficients of [Cr(ttpy)2] n+ : n = 3, 2, 1, 0 Complex abs, nm [, M -1 .cm -1 ] Mouesca for the assignment of the UV-vis absorption peaks of the [Cr(ttpy)2] n+ complexes to corresponding electronic transitions. The SAOP (Statistical Average of Orbital Potentials) exchange-correlation potential 21 , which is especially suited for the TD-DFT calculations, has been used.
Table III - 4 .
III4 Experimental and calculated electronic transition energies for [Cr(ttpy)2] n+ (n = 2, 1) complexes, together with the assignment of their electronic transitions
Complex Experimental Experimental Calculated Assignment
abs (nm) energies (cm -1 ) energies (cm -1 )
[Cr(ttpy)2] 2+ 500 20000 18800
584 17120
800 (broad) 12500 13000 and 12100 LMCT
[Cr(ttpy)2] + 524 19100 20600
560 17900 17900
723 13800 13000 LMCT
For the [Cr(ttpy)2]
2+
singly reduced complex, [Cr III (ttpy)(ttpy -)] 2+ , calculations have been performed for transitions at 828 nm and 767 nm, which are in accordance with the broad absorption band centered at 800 nm. Electronic transitions at these wavelengths are shown in Figure III-4. It is surprising that in the ground state of [Cr(ttpy)2] 2+ , the electron is localized on the tolyl substituting group rather than the tpy part. The transition at 828 nm corresponds
is the Planck constant, c the speed of light, e the charge of an electron, em,77 K and em,RT (nm) the wavelength at emission maximum at 77 K and room temperature, respectively.
E III* (Cr /Cr ) II III (Cr /Cr ) II ,77 K em em 1240 ,RT () hc E e nm E III (Cr /Cr ) II (Eq III-1)
where h Taking into account E(Cr III /Cr II ) = -0.47 V, the redox potential E(Cr III* /Cr II ) is thus estimated
as 1.14 V, making the excited state Cr III* a strong oxidant. The result is in line with literature
for the same complex (E(Cr III* /Cr II ) = 1.06 V) 22 . A previous work 2 on [Cr(bpy)3] 3+ in aqueous
solution showed E(Cr III* /Cr II ) = 1.44 V vs NHE (about 0.89 V vs Ag/AgNO3 0.01 M), hence
the [Cr(bpy)3] 3+* is a weaker oxidant than [Cr(ttpy)2] 3+* .
Time-resolved emission spectroscopy was also carried out to study the emission decay
E em E E
III* state, E(Cr III* /Cr II ), as follows: III* II III II (Cr /Cr ) (Cr /Cr ) in nanosecond time scale. The decay recorded at 770 nm is shown in Figure III-7. It is well fit with a biexponential function (red curve), giving a short lifetime of 3 ns and a long lifetime
5. Photoreduction of [Cr(ttpy)2] 3+ in the presence of TEOA
Its EPR signal shows anisotropy of the g-values: g = (2.043, 2.043, 1.955). No clear evidence
for the hyperfine coupling to the 53 Cr (I = 3/2) naturally occurred isotope is observed. The g-
values are slightly smaller than that of [Cr(tpy)2] + . 5 It is important to note that the anisotropy
of the g-values of [Cr(tpy)2] + are more pronounced: g = (2.0537, 2.0471, 1.9603) than
[Cr(ttpy)2] + .
spectrum showing multiple signals spread over more than 6000 G. The spectrum of [Cr(ttpy)2] 3+ is in accordance with literature for an analogous [Cr(tpy)2] 3+ complex.
23
Amongst the redox states, the doubly reduced [Cr(ttpy)2] + complex is of great interest.
The EPR experiments clearly show that different oxidized states of [Cr(ttpy)2] n+ (n = 3, 2, 1) possess unique EPR signature. This method can be very helpful to identify the oxidation state of the Cr complex when it is photoreduced by TiO2 or TiO2/Ru II NPs, which will be discussed in Sections III.3 and III.4.
III.2.
Quenching of [Cr(ttpy)2] 3+* state by TEOA studied by Stern-Volmer experiment
To get insight into the quenching mechanism, we conducted a Stern-Volmer
experiment. In terms of thermodynamics, the oxidizing power of [Cr(ttpy)2] 3+* excited state
(E(Cr 3+* /Cr 2+ ) = 1.14 V, see Section III.2.3b) is strong enough to oxidize TEOA
(E(TEOA + /TEOA) = 0.42 V vs Ag/AgNO3 0.01 M) 24 as follows:
[Cr(ttpy)2]3+ + hν → [Cr(ttpy)2]3+*
[Cr(ttpy)2]3+* + TEOA → [Cr(ttpy)2]2+ + TEOA•+
TEOA•+ → degraded products
The photoinduced electron transfer reaction between Cr3+* and TEOA is largely exergonic (ΔG = -0.72 eV). Without light this reaction is not favourable since the oxidizing power of the [Cr(ttpy)2]3+ ground state (E(Cr3+/Cr2+) = -0.47 V) is much lower than the redox potential of TEOA.
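A one-line bookkeeping of the driving forces quoted above (a sketch, not from the thesis; all potentials in V vs Ag/AgNO3 0.01 M):

```python
# Driving force (eV) for one-electron transfer from TEOA to the Cr acceptor,
# Delta_G = E(TEOA+/TEOA) - E(acceptor couple); negative = exergonic.
def delta_g_ev(e_donor_v, e_acceptor_v):
    return e_donor_v - e_acceptor_v

print(round(delta_g_ev(0.42, 1.14), 2))    # excited state [Cr(ttpy)2]3+*: -0.72 eV (exergonic)
print(round(delta_g_ev(0.42, -0.47), 2))   # ground state [Cr(ttpy)2]3+:   +0.89 eV (unfavourable)
```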
6. Photoreduction of [Cr(ttpy)2] 3+ in the presence of [Ru(bpy)3] 2+ and TEOA
with thymine base. Moreover, at high nucleotide concentrations, the Stern-Volmer plot shows an upward curvature indicating an additional deactivation pathway for the [Cr(ttpy)2]3+* excited state involving static quenching due to a nucleotide-bound Cr(III) complex. In our study, the quenching of [Cr(ttpy)2]3+* by TEOA seems purely dynamic.
To summarize, the photoreduction of [Cr(ttpy)2]3+ by TEOA leads to the formation of [Cr(ttpy)2]+ after two electron transfer steps under continuous irradiation. The reaction occurs through the formation of the transient [Cr(ttpy)2]2+ species, suggesting that this species can also be photoactive. From a thermodynamic point of view, [Cr(ttpy)2]+ can reduce any species with a potential higher than -0.85 V vs Ag/AgNO3 0.01 M (around -0.30 V vs NHE).
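For reference, the Stern-Volmer treatment mentioned above reduces to a linear fit of I0/I (or τ0/τ) versus the quencher concentration; the sketch below uses synthetic data and an arbitrary KSV, not the measured TEOA series.

```python
import numpy as np

# Stern-Volmer: I0/I = 1 + K_SV * [Q]; for purely dynamic quenching the intensity
# and lifetime ratios give the same slope. Synthetic data, arbitrary K_SV = 12 M-1.
conc = np.array([0.0, 0.05, 0.10, 0.20, 0.40])            # quencher concentration, M
ratio = 1.0 + 12.0 * conc + np.random.normal(0, 0.02, conc.size)

k_sv, intercept = np.polyfit(conc, ratio, 1)
print(f"K_SV ~ {k_sv:.1f} M-1, intercept ~ {intercept:.2f}")
```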
III.2.
Table III-5. Redox potentials of ttpy and ttpy-PO3Et2 ligands, [Cr(ttpy)2]3+ and [Cr(ttpy-PO3Et2)2]3+ complexes
E1/2 (ΔEp), V | Ered(1) | Ered(2) | Ered(3) | Ered(4)
ttpy | -2.35 (0.09) | - | - | -
ttpy-PO3Et2 | -2.13 (0.07) | - | - | -
[Cr(ttpy)2]3+ | -0.47 (0.07) | -0.85 (0.07) | -1.35 (0.07) | -2.25 (0.07)
[Cr(ttpy-PO3Et2)2]3+ | -0.34 (0.05) | -0.77 (0.06) | -1.30 (0.08) | -2.21 (0.08)
complex, showing four successive one-electron reduction processes centered at E1/2 = -0.34, -0.77, -1.30 and -2.21 V. Based on the electrochemical properties of [Cr(ttpy)2] 3+ complex, these half-wave potentials are thus attributed to [Cr(ttpy)2] 3+ /[Cr(ttpy)2] 2+ ,
[Cr(ttpy)2] 2+ /[Cr(ttpy)2] + , [Cr(ttpy)2] + /[Cr(ttpy)2] 0 and [Cr(ttpy)2] 0 /[Cr(ttpy)2] -redox couples,
respectively. The added electrons are localized on the ligands while the oxidation state of Cr
remains +3. These potentials are slightly positively shifted compared with those obtained for
[Cr(ttpy)2] 3+ complex. This shift is a consequence of the electron withdrawing phosphonate
group making it easier to reduce. The fourth reduction process of [Cr(ttpy-PO3Et2)2] 3+
Table III-6. Redox potentials of the ITO/[Cr2P]3+ electrode and of the [Cr(ttpy-PO3Et2)2]3+ complex in MeCN + 0.1 M TBAPF6 under Ar
E1/2 (ΔEp), V | Ered(1) | Ered(2) | Ered(3)
ITO/[Cr2P]3+ | -0.42 (0.02) | -0.79 (0.03) | -1.32 (0.03)
[Cr(ttpy-PO3Et2)2]3+ | -0.34 (0.05) | -0.77 (0.06) | -1.30 (0.08)
The CV of ITO/[Cr2P] 3+ electrode shows three successive one-electron reduction steps,
which are centered at E1/2 = -0.42, -0.79 and -1.32 V. These values are similar to those
obtained for the [Cr(ttpy-PO3Et2)2] 3+ complex, therefore the three redox waves are attributed
to [Cr2P] 3+ /[Cr2P] 2+ , [Cr2P] 2+ /[Cr2P] + and [Cr2P] + /[Cr2P] 0 couples, respectively. No signs of
phosphonic desorption are detected. The surface coverage is estimated at (6.0 ± 0.1) × 10 -11
mol.cm-2 (see Experimental Section for the calculation). This relatively low value is in the same range as previously reported values for complexes bearing ttpy phosphonic ligands anchored on ITO surfaces.[39][40][41][42]
For a kinetically controlled electron transfer process between a grafted redox species and the electrode, a linear relationship between the peak current intensity Ip and the scan rate v has been proved as follows:43
Ip = n²F²vAΓ/(4RT)    (Eq III-6)
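For illustration only, Eq III-6 can be rearranged to extract the surface coverage Γ from the peak current of a grafted redox couple; the numbers below are placeholders, not the measured values for ITO/[Cr2P]3+.

```python
# Gamma = 4*Ip*R*T / (n^2 * F^2 * v * A), rearranged from Eq III-6 (surface-confined couple).
R, F, T = 8.314, 96485.0, 298.0   # J.mol-1.K-1, C.mol-1, K

def surface_coverage(ip_a, n, v_volt_per_s, area_cm2):
    return 4.0 * ip_a * R * T / (n**2 * F**2 * v_volt_per_s * area_cm2)

# Hypothetical one-electron wave: Ip = 1 uA, v = 0.1 V/s, A = 0.07 cm2
print(surface_coverage(1e-6, 1, 0.1, 0.07))   # mol.cm-2, ~1.5e-10
```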
Table III-7. KWW fitting parameters (τKWW and βKWW) and charge recombination rate kcr between (e-)TiO2 and Ru3+ species in the Ru2+/TiO2/[Cr2P]3+ triad (Ru:Cr:BPA = 20:2:78, % mol), in comparison with the TiO2/Ru2+ dyad
 | τKWW (µs) | βKWW | kcr (s-1) | τcr = (kcr)-1 (ms)
Ru2+/TiO2/[Cr2P]3+ | 1400 ± 150 | 0.46 ± 0.01 | 300 ± 30 | 3.3
TiO2/Ru2+ | 3770 ± 800 | 0.51 ± 0.01 | 140 ± 30 | 7.1
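The τKWW and βKWW entries above come from fitting transient decays with a stretched-exponential (Kohlrausch-Williams-Watts) function. The sketch below fits synthetic data with scipy and converts the parameters to a mean relaxation time; taking kcr as the inverse of ⟨τ⟩ = (τKWW/βKWW)·Γ(1/βKWW) is consistent (within rounding) with the kcr and τcr values listed in the table, but the actual traces and fitting code of the thesis are not shown here.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import gamma

def kww(t, s0, tau, beta):
    """Stretched exponential: S(t) = S0 * exp(-(t/tau)^beta)."""
    return s0 * np.exp(-(t / tau) ** beta)

t = np.linspace(1.0, 20000.0, 2000)                                  # time axis, us
y = kww(t, 1.0, 1400.0, 0.46) + np.random.normal(0, 0.01, t.size)    # synthetic trace

(s0, tau_kww, beta_kww), _ = curve_fit(kww, t, y, p0=(1.0, 1000.0, 0.5))
tau_mean_us = (tau_kww / beta_kww) * gamma(1.0 / beta_kww)           # KWW mean relaxation time
print(f"tau_KWW ~ {tau_kww:.0f} us, beta ~ {beta_kww:.2f}, "
      f"<tau> ~ {tau_mean_us/1000:.1f} ms, k_cr ~ {1e6/tau_mean_us:.0f} s-1")
```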
At first, the two-fold increase in charge recombination kinetics of Ru 2+ /TiO2/[Cr2P] 3+
Cr2P] 3+ and [RuP] 2+ co-immobilized on TiO2 NPs
Our attempts to record the CVs of Ru 2+ /TiO2/[Cr2P] 3+ NPs with a microcavity Pt
electrode have not been successful as for the TiO2/[Cr2P] 3+ dyad. Two kinds of triads have
been measured: (i) Ru:Cr = 1:1 (% mol) without BPA spacers, and (ii) Ru:Cr:BPA = 20:2:78
(% mol). The very low surface coverage as shown with ITO electrode (about 10 times lower
for the Ru-Cr mixture compared with Cr or Ru alone) makes it difficult to record the CV.
Therefore, the redox potential of grafted [Cr2P] 3+ /[Cr2P] 2+ couple is assumed to be
similar to that of ITO/[Cr2P] 3+ electrode (-0.42 V) for subsequent studies in this chapter.
III.4.4. Photoreduction of [Cr(ttpy)2] 3+ free complex by TiO2/Ru 2+ NPs
In a systematic study, we first focus on the photoreduction of [Cr(ttpy)2] 3+ free complex
by TiO2/Ru 2+ dyad before studying the Ru 2+ /TiO2/[Cr2P] 3+ triad. We conducted UV-vis
absorption measurements with similar experimental conditions as in Section III.2.6 where
[Ru(bpy)3]2+ PS was used instead of TiO2/Ru2+ NPs. The absorption spectra during the irradiation time are shown in Figure III-30a. It is important to note that the TiO2/Ru2+ colloid alone does not show the 1MLCT absorption band of [Ru(bpy)3]2+ with a maximum at 450 nm as in the solid state. Only a shoulder emerges at the foot of the TiO2 absorption band. The first reduction step to form the [Cr(ttpy)2]2+ complex is usually characterized by a broad band centered at 800 nm; however, it is not observed in this case, as for the photoreduction experiment of [Cr(ttpy)2]3+ by [Ru(bpy)3]2+ in solution (see Section III.2.6). The second reduction step to form the [Cr(ttpy)2]+ complex is clearly evidenced by the spectral changes indicated by black arrows: the evolution of two absorption maxima at 500 nm and 580 nm, accompanied by the characteristic peak at 723 nm. The experiment proves that the two-electron reduction of [Cr(ttpy)2]3+ to [Cr(ttpy)2]+ is achievable with the Ru2+ PS immobilized on TiO2.
Cr3+ complex in the Ru2+/TiO2/[Cr2P]3+ triad
a) Energy diagram
We first examine the energy levels associated with the electronic components in
Ru 2+ /TiO2/[Cr2P] 3+ triad. Scheme III-13 describes the redox potentials of Ru 3+ /Ru 2+ ,
Ru 3+ /Ru 2+* , Cr 3+ /Cr 2+ , Cr 2+ /Cr + and the CB of TiO2, which are relevant to the photo-induced
electron transfer cascade in this study. Under visible irradiation, Ru 2+ PS is excited (step 1)
and can inject electrons to the CB of TiO2 (step 2). The electron injection can occur from
either the 1MLCT state ("hot injection") or the 3MLCT relaxed state of the Ru2+ PS. This injection step has been described in detail in Chapter 2. Afterwards, in terms of thermodynamics, the electrons on the CB of TiO2 can reduce the [Cr2P]3+ complex twice to form [Cr2P]+ (step 3).
Regeneration of the Ru2+ PS is achieved by the sacrificial reductant TEOA (step 4). The use of TEOA as a sacrificial electron donor to compete with the possible back electron transfer pathways is vital for multi-electron accumulation on the [Cr2P]3+ sites.16 Nevertheless, it may pose problems in this EPR experiment because its oxidized species, TEOA•+, may cause an intense radical signal at g ~ 2 as well. The signal of [Cr2P]+ (S = 1/2) at g ~ 2 is not detected. Even if the singly reduced [Cr2P]2+ species is formed, it is not expected to be EPR-active at this X-band frequency, as the analogous [Cr(ttpy)2]2+ is EPR-silent despite its S = 1 spin state. Therefore, the EPR signal may correspond to either the Ru3+/(e-)TiO2/[Cr2P]3+ or the Ru3+/(e-)TiO2/[Cr2P]2+ transient species. Low temperature could inhibit the reduction of [Cr2P]3+; however, it allows for the detection of transient species. At room temperature the trapped electrons on TiO2 would not be observable.
In summary, despite the possible formation of the [Cr(ttpy)2]+ complex in solution by TiO2/RuII NPs under visible irradiation and in the presence of TEOA, the quantitative photoreduction of grafted [Cr2P]3+ in the Ru2+/TiO2/[Cr2P]3+ triad is not observed. It could be due to the difficulty of short-circuiting, by means of TEOA, the back electron transfer processes between photogenerated Ru3+ and [Cr2P]2+ complexes on the TiO2 surface. Thermodynamic considerations have already shown that the back electron transfer between [Cr(ttpy)2]2+ and [Ru(bpy)3]3+ in solution is remarkably more favorable than the reduction of [Ru(bpy)3]3+ by TEOA (-1.41 eV compared with -0.52 eV), which is a necessary step to further reduce the [Cr(ttpy)2]2+ species to [Cr(ttpy)2]+. When both [RuP]2+ and [Cr2P]3+ complexes are grafted on TiO2 with a molar ratio of 10:1, the chance that a [Cr2P]3+ site is surrounded by [RuP]2+
In Chapter 3, the electrochemical and photophysical behaviors of the [Cr(ttpy)2]3+ complex have been extensively studied using electrochemistry, steady-state and transient absorption and emission spectroscopies. The complex shows rich electrochemical properties with five
reversible redox states, [Cr(ttpy)2] n+ (n = 3, 2, 1, 0, -1). DFT calculations demonstrate that all
the added electrons are localized on the ttpy ligands, thus the oxidation state of Cr always
remains +3. The four complexes corresponding to n = 3, 2, 1, 0 exhibit different fingerprints
in UV-vis absorption and EPR spectroscopy, which is convenient to identify them in a
reaction mixture.
The [Cr(ttpy)2] 3+ complex has found applications in photo-induced multiple electron
storage and electrocatalysis in homogeneous environments. In the presence of another PS like
[Ru(bpy)3] 2+ and a sacrificial electron donor like TEOA, the [Cr(ttpy)2] 3+ can be doubly
reduced with 100 % conversion. In addition, the [Cr(ttpy)2] 3+ complex also shows
electrocatalytic activity towards H2 production in non-aqueous solution. It is, however, not a
very good electrocatalyst due to possible decoordination of ttpy ligands under a prolonged
bias.
Immobilization of [Cr(ttpy)2] 3+ on ITO electrode and TiO2 NPs was achieved by
functionalizing the complex with two phosphonic anchoring groups; the resulting complex is denoted as
[Cr2P] 3+ . The ITO/[Cr2P] 3+ electrode exhibits three successive reduction waves at potentials
comparable to those of [Cr(ttpy)2] 3+ , proving that the Cr complex retains rich electrochemical
properties after being grafted. In contrast to the [Cr(ttpy)2] 3+ complex in solution,
TiO2/[Cr2P] 3+ NPs show no emission after being photo-excited. The evidence for successful
grafting on TiO2 is a broad EPR line centered at g ~ 2, which is characteristic of closely
packed paramagnetic species. The broad EPR line can still be observed at lower [Cr2P] 3+
concentrations, suggesting that the [Cr2P] 3+ sites likely stay close to each other on the TiO2
surface.
Finally, both [RuP] 2+ PS and [Cr2P] 3+ complex have been immobilized on ITO
electrode and TiO2 NPs to study photo-induced charge transfer events and the possibility of
accumulating multiple electrons on [Cr2P] 3+ sites. The electrochemical behavior of
ITO/{[RuP] 2+ +[Cr2P] 3+ } electrode is a superimposition of ITO/[RuP] 2+ and ITO/[Cr2P] 3+
signals, thus proving the successful grafting process. A similar superimposition is also observed in the UV-vis absorption spectrum of the Ru2+/TiO2/[Cr2P]3+ triad, which comprises the TiO2/Ru2+ and TiO2/[Cr2P]3+ spectra. However, the multiple charge storage on [Cr2P]3+ sites is not achieved. It may be due to accelerated kinetics of the back electron transfer reaction between grafted [RuP]3+ and [Cr2P]2+ transient species to regenerate [RuP]2+ and [Cr2P]3+.
Table IV-1. CO2 reduction potentials vs NHE in aqueous solution, 1 atm gas pressure and 1 M solutes3
Reactions | E0 (V) vs SHE | E0 (V) vs Ag/AgNO3 0.01 M*
CO2 + e- → CO2•- | -1.90 | -2.44
CO2 + 2H+ + 2e- → HCOOH | -0.61 | -1.15
CO2 + 2H+ + 2e- → CO + H2O | -0.53 | -1.07
2CO2 + 2H+ + 2e- → H2C2O4 | -0.49 | -1.03
* Calculated by subtracting 0.54 V from E0 vs SHE
Table IV-2 presents the results obtained with different Mn:Ru ratios. For the experiments to be comparable, we only changed the catalyst amount while keeping the Ru PS amount fixed, so that light absorbance by the PS remains the same in all the series. The results were obtained after 16 hours of irradiation at 470 ± 40 nm and correspond to the maximum of production.
Table IV-2. Results of photocatalytic CO2 reduction using a mixture of [Mn(ttpy)(CO)3Br] catalyst and [Ru(bpy)3](PF6)2 PS in 5 ml DMF solution containing TEOA (1 M) and BNAH (0.1 M) under CO2. Irradiation was achieved by using a Xe lamp (0.3 mW.cm-2, 5 cm apart), a UV-hot filter and a 470 ± 40 nm bandpass filter. The results were obtained after 16 hours of irradiation.
Ratio of Mn:Rua | HCOOH n (µmol) | HCOOH TONmaxb | CO n (µmol) | CO TONmaxb | TONtotalc
1:1 | 9.8 | 14 | 5.6 | 8 | 22
1:5 | 3.9 | 28 | 1.4 | 10 | 38
1:10 | 2.2 | 31 | 1.3 | 19 | 50
5:1 | 71.4 | 20 | 18.9 | 5 | 25
10:1 | 88.9 | 13 | 35.0 | 5 | 17
a Ratio of [Mn(ttpy)(CO)3Br]:[Ru(bpy)3]2+ where the Ru(II) concentration is unchanged (0.14 mM)
b Turnover number (TON) versus Mn catalyst
c TONtotal = TONmax(HCOOH) + TONmax(CO)
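The TON entries are simple mole ratios (TON = n(product)/n(Mn)); the minimal check below uses n(Mn) = 0.07 µmol for the 1:10 entry, the value given in Table IV-4 for the homogeneous system.

```python
# Turnover numbers for the 1:10 Mn:Ru entry of Table IV-2 (amounts in umol).
def ton(n_product, n_catalyst):
    return n_product / n_catalyst

n_mn = 0.07   # umol of [Mn(ttpy)(CO)3Br], from Table IV-4 (homogeneous system)
ton_hcooh = round(ton(2.2, n_mn))   # ~31
ton_co = round(ton(1.3, n_mn))      # ~19
print(ton_hcooh, ton_co, ton_hcooh + ton_co)   # 31 19 50
```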
Table IV-3. KWW fitting parameters (τKWW and βKWW) and charge recombination rate kcr between (e-)TiO2 and Ru3+ species in the RuII/TiO2/MnI triad (Ru:Mn = 10:1 % mol), in comparison with the TiO2/RuII dyad
 | τKWW (µs) | βKWW | kcr (s-1)
RuII/TiO2/MnI | 3870 ± 460 | 0.55 ± 0.01 | 150 ± 15
TiO2/RuII | 3770 ± 800 | 0.51 ± 0.01 | 140 ± 30
Table IV-4 summarizes the aforementioned results.
Table IV-4. Results of photocatalytic CO2 reduction using the RuII/TiO2/MnI triad, the TiO2/RuII dyad + free MnI in solution, and the SiO2/RuII dyad + free MnI complex in solution. The DMF solution contained TEOA (1 M) and BNAH (0.1 M). Irradiation was achieved by using a Xe lamp (0.3 mW.cm-2, 5 cm apart), a UV-hot filter and a 470 ± 40 nm bandpass filter. The results were obtained after 16 hours of irradiation.
System | nRu (µmol) | nMn (µmol) | HCOOH n (µmol) | HCOOH TONmax | CO n (µmol) | CO TONmax | Total TON
RuII/TiO2/MnI | 0.3 | 0.03 | 0.8 | 27 | 0 | 0 | 27
TiO2/RuII + free MnI | 0.3 | 0.03 | 1.1 | 38 | 0.2 | 6 | 44
SiO2/RuII + free MnI | 0.3 | 0.06 | 0.5 | 9 | 0.2 | 3 | 12
RuII + MnI in solution | 0.7 | 0.07 | 2.2 | 31 | 1.3 | 19 | 50
Photophysical properties of [Ru-pyr]2+ compared with [Ru(bpy)3]2+ (all measurements conducted in MeCN solution under Ar at room temperature):
Complex | λabs, nm [ε, M-1.cm-1] | λem, nm | Φem | τ, nsa
[Ru-pyr]2+ | 460 [13900], 286 [80000] | 621 | 0.043 | 670 ± 32
[Ru(bpy)3]2+ | 450 [13000], 285 [87000] | 608 | 0.062 (ref. 11) | 757 ± 9
a Excitation by a picosecond pulsed laser at 400 nm
As mentioned in the previous chapter, the redox potential of RuII*/RuI can be calculated from the redox potential of RuII/RuI and the emission maximum of RuII* by the simplified Rehm-Weller equation (Equation V-1):
E(RuII*/RuI) = E(RuII/RuI) + hc/(e·λem,77K) ≈ E(RuII/RuI) + 1240/λem,RT(nm)    (Eq V-1)
Table V-3. Summary of the photophysical properties of TiO2/[Ru-pyr]2+ compared with [Ru-pyr]2+ and TiO2/RuII
 | 1MLCT λabs (nm) | λem (nm) | τ (ns) | kinj, s-1 | kq, s-1
TiO2/[Ru-pyr]2+ | 460a | 625 | 365 ± 8 (14%), 40 ± 1 (86%) | 2.4 × 10^7 | 1.2 × 10^6
[Ru-pyr]2+ | 460 | 621 | 670 | - | -
TiO2/RuII | 450 | 627 | 366 ± 7 (12%), 32 ± 0.6 (88%) | 3.0 × 10^7 | 1.6 × 10^6
(Table V-4). As the irradiation time is increased, both rates are enhanced, but the injection rate increases faster than the energy transfer rate (6 times compared with 3 times after 2 hours). By assuming a Förster resonance energy transfer (FRET) model and applying Equations V-5, 6 and 7 (see Section II.3.6 for more detail), the FRET efficiency and the average distance between Ru2+ species on the surface, r(Ru-Ru), were estimated. In this calculation the orientational factor κ2 was assumed to be 2/3, which is generally accepted for a species in solution able to rotate freely. Before irradiation, the distance is about 2.0 nm, which is in accordance with the estimated loading of ~2 [Ru-pyr]2+ molecules per nm2 of TiO2 surface mentioned above. During the irradiation, the distance is gradually reduced. All the relevant photophysical properties of the nanocomposite are summarized in Table V-4.
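A small sketch of the point-dipole FRET relations used for Table V-4 (κ2 = 2/3 assumed). The Förster radius below is not the thesis value: it is simply back-calculated (≈1.9 nm) so that the listed efficiencies reproduce the tabulated distances, whereas the thesis derives R0 from the donor-emission/acceptor-absorption overlap through Equations V-5 to V-7.

```python
# FRET efficiency from donor lifetimes would be E = 1 - tau_DA/tau_D; the inter-site
# distance follows from E = 1/(1 + (r/R0)^6). R0 is an illustrative value (see note above).
def distance_from_efficiency(e_fret, r0_nm):
    return r0_nm * ((1.0 - e_fret) / e_fret) ** (1.0 / 6.0)

R0_NM = 1.93   # illustrative Forster radius, nm
for e_fret in (0.46, 0.63, 0.73):
    print(e_fret, round(distance_from_efficiency(e_fret, R0_NM), 1))   # ~2.0, 1.8, 1.6 nm
```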
Table V-4. Summary of the photophysical properties of TiO2/poly(Ru-pyr) after 0 h, 1 h and 2 h irradiation at 450 nm
 | λem, nm | τ, ns (%) | kinj, s-1 | kq, s-1 | FRET efficiency | r(Ru-Ru), nm
0 h | 625 | 365 ± 8 (14%), 40 ± 1 (86%) | 2.4 × 10^7 | 1.2 × 10^6 | 0.46 | 2.0
1 h | 625 | 251 ± 6 (9%), 21 ± 0.4 (91%) | 4.6 × 10^7 | 2.5 × 10^6 | 0.63 | 1.8
2 h | 631 | 178 ± 6 (2%), 7 ± 0.1 (98%) | 1.4 × 10^8 | 4.1 × 10^6 | 0.73 | 1.6
From the Table V-4, one can notice that the electron injection from Ru 2+* to TiO2 is
enhanced in both kinetics and efficiency as the irradiation time is prolonged. After 2 hours of
photopolymerization, the rate is increased by 6 times and efficiency reaches 98 %. Since the
distance between [Ru-pyr] 2+ and TiO2 surface is fixed during the irradiation time and the
electron injection follows a tunneling mechanism, this rate increase suggests a more
conducting environment around the nanocomposite. Therefore the polypyrrole proves to be
useful to ameliorate the conductivity.
Table V-5. Chemical structures of the monomer, dimer and trimer radical cations of pyrrole, pyrrole-CH3 and pyrrole=O species (the structures are shown as drawings in the original table).
Table V-6. g-values of monomer, dimer and trimer radical cations of pyrrole calculated with DFT
 | g-values
Monomer cation | 2.00242, 2.00239, 2.01124
Dimer cation | 2.00254, 2.00251, 2.00399
Trimer cation | 2.00257, 2.00251, 2.00294
Table V-7. Atomic concentration of some elements in TiO2/poly(Ru-pyr) NPs and the FTO/TiO2/poly(Ru-pyr) electrode. Error = ± 0.1 %
Atomic concentration (%) | N(bpy) | N(pyr) | P | Ru | C(C-C) | C(C-N) | Ti
TiO2/poly(Ru-pyr) | 1.2 | 1.6 | 0.9 | 0.2 | 17.5 | 6.4 | 19.6
FTO/TiO2/poly(Ru-pyr) | 1.1 | 1.7 | 0.5 | 0.2 | 23.9 | 6.2 | 4.9
In this work, TiO2 NPs have been used to immobilize various complexes including [Ru(bpy)3]2+ (bpy = 2,2'-bipyridine) as a photosensitizer (PS) and [Cr(ttpy)2]3+ or [Mn(ttpy)(CO)3Br] (ttpy = 4'-(p-tolyl)-2,2':6',2"-terpyridine) as an electron acceptor. The phosphonic acid group was used to graft the complexes on TiO2 NPs, thanks to the stability of the Ti-O-P linkages. The grafting process was notably followed by quartz crystal microbalance with energy dissipation. Under visible irradiation, [Ru(bpy)3]2+ is excited and injects an electron into the conduction band (CB) of TiO2 to form a long-lived charge separated state (e-)TiO2/Ru3+, which allows for the utilization of the charges in subsequent charge transfer reactions. For example, adding the [Cr(ttpy)2]3+ complex to a solution containing the TiO2/Ru2+ NPs and triethanolamine (TEOA) as a sacrificial electron donor generates the doubly reduced [Cr(ttpy)2]+ species under visible light. The reaction proceeds stepwise through the formation of (e-)TiO2/Ru3+. However, when both [Cr(ttpy)2]3+ and [Ru(bpy)3]2+ complexes are grafted on TiO2, the back electron transfer reaction between transient Ru3+ and Cr2+ to regenerate the initial species Ru2+ and Cr3+ may occur very fast, so that a further reduction to [Cr(ttpy)2]+ cannot be obtained, even in the presence of TEOA. In contrast, (e-)TiO2 can doubly reduce [Cr(ttpy)2]3+ free in solution.
The last part of this work dealt with a [Ru(bpy)3]2+-based PS bearing two pyrrole moieties. Under visible irradiation and in the presence of O2, the [Ru(bpy)3]2+ PS in the TiO2/[Ru-pyr]2+ hybrid NPs is excited and injects an electron to TiO2 to form a charge separated state (e-)TiO2/[RuIII-pyr]3+. The RuIII centers are then reduced by pyrrole to generate RuII and a pyrrole radical cation. This radical then initiates a photopolymerization process on the TiO2 surface to form a nanocomposite, TiO2/poly(Ru-pyr), where a polypyrrole network surrounds the NPs. Meanwhile, the injected electrons on TiO2 are efficiently scavenged by O2. This polymerization process cannot occur by exhaustive electrolysis since TiO2 is not conductive enough. Therefore, in this photopolymerization process TiO2 acts as an electron acceptor for Ru2+* to form the transient Ru3+, which has a lifetime long enough for the oxidation of pyrrole. After two hours of irradiation, the [Ru(bpy)3]2+ PS does not show any degradation. As an application, the nanocomposite deposited on a FTO electrode can generate an anodic photocurrent two times higher than the electrode covered with TiO2/RuII NPs under visible light and in the presence of TEOA as a sacrificial electron donor.
One should subtract 0.090 V from them to convert them against the Fc+/Fc reference. Half-wave potentials (E1/2) and peak-to-peak splittings (ΔEp) are calculated as follows:
E1/2 = (Epa + Epc)/2,  ΔEp = Epa - Epc
Electrochemical measurements were carried out in a three-electrode cell at RT under a continuous Ar flow or in a glovebox. Electrolytes were used
without further purifications: tetrabutylammonium perchlorate (TBAP, Fluka),
tetrabutylammonium hexafluorophosphate (TBAPF6, Aldrich). A silver/silver nitrate in
CH3CN (Ag/AgNO3 0.01 M) and a platinum (Pt) coil were used as reference electrode and
counter electrode, respectively. A carbon disk electrode (CHI Instrument) or a home-made Pt
microcavity electrode was used as working electrode. Before experiment, the C disk electrode
was polished using diamond paste and cleaned with ethanol. Cyclic voltammograms (CV)
were recorded with a CHI 630 potentiostat (CH Instrument), or a Biologic SP300 potentiostat (Science Instruments). All reported potentials are against the Ag/AgNO3 0.01 M reference electrode, unless otherwise stated.
1H NMR (400 MHz, CDCl3): δ (ppm) = 8.75-8.73 (m, 4H), 8.68 (d, J = 8.0 Hz, 2H), 7.92-7.88 (m, 4H), 7.38-7.35 (m, 2H), 7.02 (d, J = 8.5 Hz, 2H), 4.17-4.09 (m, 6H), 2.19-2.09 (m, 2H), 2.02-1.94 (m, 2H), 1.34 (t, J = 7.0 Hz, 6H). 13C NMR (100 MHz, CDCl3): δ (ppm) = 159.
1H NMR cannot be done because the product is paramagnetic. Electrochemistry: CVs cannot be done directly due to the poor background signal of DMSO. Instead, the redox
1H NMR (400 MHz, CDCl3): δ (ppm) = 8.57 (t, J = 4.0 Hz, 2H), 8.28 (s, 2H), 7.17 (d, J = 8.0
Acknowledgements
Abstract
Herein we present the electrocatalytic and photocatalytic reduction of CO2 to CO and HCOOH using a Mn(I) tricarbonyl complex, [Mn(ttpy)(CO)3Br] (ttpy = 4'-(p-tolyl)-2,2':6',2''-terpyridine), as pre-catalyst and [Ru(bpy)3] 2+ (bpy = 2,2'-bipyridine) as photosensitizer. The photocatalytic CO2 reduction experiments were performed under irradiation at 470 40 nm in DMF/TEOA (TEOA: triethanolamine, CTEOA = 1 M) solution mixture containing 1-benzyl-1,4-dihydronicotinamide (BNAH, 0.1 M) as sacrificial electron donor and different ratios of [Mn(ttpy)(CO)3Br]:[Ru(bpy)3] 2+ (Mn:Ru). The optimization of the Mn:Ru ratio shows that the 1:10 ratio offers the best performance in terms of total turnover number (TONmax (CO) = 19, TONmax (HCOOH) = 31) after 16 hours of irradiation. Afterwards, both complexes are grafted on TiO2 nanoparticles via phosphonic acid anchoring groups to form a triad structure (denoted as Ru II /TiO2/Mn I ) with a Mn:Ru ratio estimated around 1:10. Under similar experimental conditions, the Ru II /TiO2/Mn I nanoparticles produce HCOOH as the only product with TONmax (HCOOH) = 27. The enhanced selectivity towards HCOOH when using grafted complexes compared with the mixture of homogeneous complexes in solution could be related to the mechanism of the first reduction process of [Mn(ttpy)(CO)3Br] and hence the catalytically active species. We propose that a doubly reduced Mn -I complex is the active species in homogeneous solution, whereas a singly reduced Mn 0 complex is the active species in Ru II /TiO2/Mn I nanoparticles.
Résumé
Dans ce chapitre, nous présentons les résultats de la réduction électrocatalytique et photocatalytique du CO2 en CO et HCOOH en utilisant un complexe du Mn(I) tricarbonyl [Mn(ttpy)(CO)3Br] (ttpy = 4'-(p-tolyl)-2,2':6',2''-terpyridine) comme pré-catalyseur et le complexe [Ru(bpy)3] 2+ (bpy = 2,2'-bipyridine) comme photosensibilisateur. La réduction photocatalytique du CO2 sous irradiation à 470 40 nm dans le DMF/TEOA (TEOA: triethanolamine, CTEOA = 1 M) et contenant le 1-benzyl-1,4-dihydronicotinamide (BNAH, 0,1 M) en tant que donneur d'électrons sacrificiel a été menée en variant le rapport de concentration [Mn(ttpy)(CO)3Br] : [Ru(bpy)3] 2+ (Mn:Ru). Les résultats montrent qu'un rapport Mn:Ru de 1:10 offre les meilleures performances en termes de TON total: TONmax (CO) = 19, TONmax (HCOOH) = 31 après 16 heures d'irradiation. Les deux complexes [Mn(ttpy)(CO)3Br] et [Ru(bpy)3] 2+ ont été greffés sur des nanoparticules de TiO2 via des groupements d'ancrage du type acide phosphonique portés par les ligands polypyridiniques pour former une structure triade (Ru II /TiO2/Mn I ). Le ratio Mn:Ru a été fixé sur les nanoparticules, proche de 1:10. Dans des conditions similaires à celles de la réduction photocatalytique du CO2 en utilisant des complexes homogènes [Ru(bpy)3] 2+ et [Mn(ttpy)(CO)3Br] en solution, les nanoparticules hybrides Ru II /TiO2/Mn I produisent HCOOH comme seul produit avec un TONmax (HCOOH) = 27. La sélectivité accrue vers HCOOH lors de l'utilisation de Ru II /TiO2/Mn I par rapport au mélange des complexes en solution pourraient être liés au mécanisme du premier processus de réduction de [Mn(ttpy)(CO)3Br] et donc à l'espèce catalytiquement active. Nous proposons que le complexe Mn -I doublement réduit soit l'espèce active en solution homogène, alors qu'un complexe Mn 0 mono réduit pourrait être l'espèce active sur les nanoparticules Ru II /TiO2/Mn I .
IV.2. Mn(I) tolylterpyridine tricarbonyl complex in solution
IV.2.1. Synthesis
The synthesis of [Mn( 2 -ttpy)(CO)3Br] complex was performed following a previous publication by our CIRe group. 23 A simple one-step synthesis was carried out between Mn(CO)5Br and ttpy ligand in diethyl ether to yield [Mn( 2 -ttpy)(CO)3Br] as the only product (Scheme IV-6). The bidentate coordination was confirmed in the aforementioned study. The presence of three CO groups is proved by infrared (IR) spectroscopy in solid state showing three intense peaks at 2022, 1948, 1916 cm -1 . They correspond to symmetric (2022 cm -1 ) and antisymmetric (1948 and 1916 cm -1 ) CO stretching bands. The peaks are slightly blueshifted compared with the analogous complex [Mn( 2 -ttpy)(CO)3(MeCN)] + ( (CO) = 2043, 1962, 1939 cm -1 ) 23 where Br -is replaced by MeCN solvent molecule. The peak positions and the blue-shift tendency when Br -is substituted by MeCN have also been reported for the analogous [Mn( 2 -tpy)(CO)3Br] 22 and [Mn(phen-dione)(CO)3Br] 24 (phen-dione = 1,10phenanthroline-5,6-dione) complexes.
Scheme IV-6. Synthesis route to [Mn( 2 -ttpy)(CO)3Br] complex
IV.2.2. Electrochemical properties
The electrochemical properties of [Mn( 2 -ttpy)(CO)3Br] complex has been studied by cyclic voltammetry in MeCN + 0.1 M TBAPF6 solution under Ar, using a carbon disk electrode as working electrode (WE). Since the CVs are similar to those of [Mn( 2 -ttpy)(CO)3(MeCN)] [START_REF]mmol.g -1 (about 3800 Cr III molecules per particle or 2 Cr III molecules per nm 2 ). C6H5-CH2-PO3H2/TiO2/[Cr2P] 3+ (C6H5-CH2-PO3H2 = benzylphosphonic acid, BPA). Similar to the case of TiO2/[Cr2P] 3+ , but a mixture of [Cr2P] 3+ and BPA was used instead of References[END_REF]21 , [Mn(bpy)(CO)3Br] [START_REF]mmol.g -1 (about 3800 Cr III molecules per particle or 2 Cr III molecules per nm 2 ). C6H5-CH2-PO3H2/TiO2/[Cr2P] 3+ (C6H5-CH2-PO3H2 = benzylphosphonic acid, BPA). Similar to the case of TiO2/[Cr2P] 3+ , but a mixture of [Cr2P] 3+ and BPA was used instead of References[END_REF] and [Mn(phen)(CO)3Br] 18 complexes, our CV interpretation is based on the explanations in these previous works without further investigation.
In the anodic scanning (Figure IV-1a), two irreversible oxidation peaks are detected at 0.62 V and 0.78 V, which is in accordance with Marc Bourrez's study on the same complex. [START_REF]mmol.g -1 (about 3800 Cr III molecules per particle or 2 Cr III molecules per nm 2 ). C6H5-CH2-PO3H2/TiO2/[Cr2P] 3+ (C6H5-CH2-PO3H2 = benzylphosphonic acid, BPA). Similar to the case of TiO2/[Cr2P] 3+ , but a mixture of [Cr2P] 3+ and BPA was used instead of References[END_REF] The author attributed the two peaks to two oxidation steps as follows:
(1) fac-[MnI(κ2-ttpy)(CO)3Br] → fac-[MnII(κ2-ttpy)(CO)3Br]+ + e-    Epa1 = 0.62 V
Scheme IV-8. Proposed mechanism for the photocatalytic CO2 reduction using the [Mn(ttpy)(CO)3Br] catalyst. Adapted from reference 21
Upon the excitation at 450 nm, [Ru(bpy)3] 2+ PS is selectively excited and singly reduced by the sacrificial electron donor BNAH. Since the reduction power of BNAH, Eox = 0.27 V vs
Ag/AgNO3 0.01 M (converted from Eox = 0.57 V vs SCE in MeCN 5 ) is stronger than that of TEOA (Eox = 0.42 V 26 ), BNAH can reductively quench the Ru 2+* excited state (E(Ru II* /Ru I ) = 0.31 V). The [Ru(bpy)3] + , in turn, can reduce [Mn(ttpy)(CO)3Br] twice to form the catalytically active species [Mn -I (ttpy)(CO)2] -. In terms of energy, since the reduction power of Ru + is E(Ru II /Ru I ) = -1.67 V while the two reduction steps of the Mn complex occur at -1.44 V and -1.67 V, the Ru + complex can reduce the Mn complex twice. The doubly reduced species [Mn -I (ttpy)(CO)2] -has a free coordination site to form an adduct with CO2. It has been proposed that TEOA also participated to the formation of the Re-CO2-TEOA intermediate complex when a similar catalyst, [Re(bpy)(CO)3Cl], was used. 5 Finally,
IV.3.3. Electrochemical properties
a) MnI complex grafted on FTO electrode
Before studying the electrochemical properties of TiO2/Mn I NPs, we first examined the electrical communication between the grafted Mn I complex and a metal oxide electrode. FTO was chosen for this study because ITO showed significant degradation when the potential was swept to -2 V. The Mn I complex was deposited onto the FTO electrode following the same procedure as for TiO2/Mn I NPs synthesis. The CV of FTO/Mn I modified electrode is presented in Figure IV-5a. The first cathodic peak appears at Epc = -1.43 V, which is at the same potential as the first cathodic peak of the free complex in solution. However, on the reverse scan the associated anodic peak appears at a very different potential (Epa = -1.16 V) than the peak recorded for the homogeneous complex in solution (-0.89 V, Section IV.2.2), which has been assigned to the oxidation of the Mn 0 -Mn 0 dimer formed after the first reduction step. Since the Mn I complex is immobilized onto the FTO surface, the formation of Mn 0 -Mn 0 dimer should be prohibited. This assumption is supported by the lack of a second reduction process which is assigned to the reduction of the Mn 0 -Mn 0 dimer in solution at -1.67 V. Scanning to potentials more negative than -2 V leads to the degradation of the FTO surface itself.
Therefore, we tend to attribute the redox waves for FTO/Mn I electrode to the following processes:
Abstract
In this chapter, [Ru(bpy)3] 2+ photosensitizer has been functionalized with a pyrrole unit and a phosphonic acid group to anchor on TiO2 NPs to form a hybrid system called TiO2/[Ru-pyr] 2+ . The electrochemical and photophysical properties of this system have been studied. Photo-induced electron injection from [Ru(bpy)3] 2+* unit to TiO2 to yield (e -)TiO2/[Ru III -pyr] 3+ occurs in nanosecond time scale, which is followed by the reduction of Ru III by pyrrole moiety. A transient charge separated state (e -)TiO2/[Ru II -pyr + ] 3+ is then formed. In the presence of O2 as an electron scavenger for the injected electrons in TiO2 NPs, the positively charged pyrrole moieties induce oxidative polymerization to yield TiO2/poly(Ru-pyr) nanocomposite. Afterwards, the nanocomposite has been deposited onto a FTO electrode using an electrophoretic deposition process. These electrodes behave as efficient photoanode under visible irradiation in the presence of triethanolamine as sacrificial electron donor. The nanostructuration of the polypyrrole network enhances both the homogeneity of surface deposition and photocurrent intensity compared to the results obtained with a FTO/TiO2/Ru II electrode. In this chapter we will present the incorporation of a pyrrole-containing [Ru(bpy)3] 2+ complex covalently grafted on TiO2 NPs. The aim of this design is to prepare a photoactive hybrid nanocomposite capable of polymerization under visible light, thus avoiding possible detrimental effects of TiO2 on organic components under UV light. It is hence expected that the formation of polypyrrole will increase the conductivity of the resulting nanocomposite material after deposition on an electrode. The resulting hybrid system, called TiO2/[Ru-pyr] 2+ (Scheme V-5) has been designed following the Electron Acceptor -PS -Electron Donor architecture, in which the [Ru(bpy)3] 2+ PS is anchored on TiO2 surface to facilitate the photoinduced electron transfer process as presented in Chapter 2. In this structure TiO2 acts as the electron acceptor for [Ru(bpy)3] 2+* and pyrrole as electron donor to regenerate the PS by reducing the Ru III species. First, the synthesis, electrochemical and photophysical properties of the complex in solution will be discussed. Afterwards, grafting of the complex onto TiO2
NPs together with its properties will be presented. The photopolymerization process will be addressed in detail via various characterization techniques. Finally, deposition of the nanocomposite onto a FTO surface and its photo-to-current energy conversion will be shown, highlighting the role of polypyrrole. The synthesis of bpy-pyr ligand 4 and [Ru(bpy-pyr)2Cl2] complex 7 (bpy-pyr = 4-methyl-4'-[11-(1H-pyrrol-1-yl)undecyl]-2,2'-bipyridine) have been done in a similar manner reported in literature with some modifications (see Experimental Section for more details).
Incorporating another bpy ligand was achieved by refluxing [Ru(bpy-pyr)2Cl2] and 4,4'dimethyl-2,2'-bipyridine (dmbpy, 1 equiv) in H2O/ethanol mixture (2/8 v/v) overnight under Ar. The as-synthesized product was allowed for anion exchange with excess amount of KPF6 to yield [Ru(dmbpy)(bpy-pyr)2](PF6)2 complex which is hereafter denoted as [Ru-pyr] 2+ (Scheme V-6). We chose the dmbpy ligand because in the next step a phosphonic group will be incorporated into this ligand as anchoring group to TiO2 NPs. Therefore the study on [Ru-pyr] 2+ complex can be compared with that grafted on TiO2 NPs. V, which is shifted by 100 mV compared with [Ru(bpy)3] 2+ , 8 is attributed to the oxidation of pyrrole and Ru III /Ru II couple, since their oxidation potentials are known to be indistinguisable. 4,6 The bpy-pyr ligand oxidation has been reported to be irreversible at potentials around 1 V. 9 In our case, the Ipa/Ipc ratio is larger than unity, showing that this oxidation is also not reversible. In the cathodic part, three reversible waves centered at E1/2 Scheme V-7. Synthesis procedure of TiO2/[Ru-pyr] 2+ NPs.
V.3.2. Electrochemical properties
The electrochemical properties of TiO2/[Ru-pyr] 2+ NPs were studied in solid state using a microcavity Pt electrode (d = 50 µm). to be consistent with the photopolymerization study mentioned above, although at 20 K it is not expected to react with other species. For g-value calculations, 2,2-diphenyl-1picrylhydrazyl (DPPH) (g = 2.0036) has been used as reference. Simulations of (e -)TiO2 and conducting species in polypyrrole are also shown in red and blue lines respectively. The simulation of trapped electrons in anatase TiO2 has been done according to published gvalues: g = 1.990 and g// = 1.960. 12 It should be noted that only the trapped electrons are observable by EPR spectroscopy, while CB electrons are EPR silent. 13 Similarly, the simulation of conducting species in polypyrrole chain (polarons) has been performed by g = 2.0037 (line width Hp-p = 9 G). It should be noted that this g-value is higher than those recorded by electrochemical or chemical synthesized polypyrrole films (g ~ 2.0025). 14,15 Comparing the signal and the simulations, we suggest the formation of a charge separated state (e -)TiO2/Ru II -pyr + . Under He atmosphere and at such a low temperature, there is no oxidative quencher for the electrons in TiO2, thus they are trapped at defect sites on TiO2 surface and detectable by EPR. The resulting [Ru(bpy)3] 3+ species is probably undetectable The two strategies are schematically depicted in Scheme V-11. The characterization of the photopolymerization of TiO2/poly(Ru-pyr) nanocomposite has been thoroughly discussed in Section V.4. This section focuses on the photopolymerization on FTO/TiO2/[Ru-pyr] 2+ surface following the EPD-Photo strategy. Comparative studies for the two strategies will also be mentioned in terms of the film characterization and their application as photoanode under visible irradiation. ). The wave at ~ 0.9 V, which is assigned to the oxidation of Ru II and pyrrole, is not completely reversible (Ipa/Ipc >> 1). It is similar to the [Ru-pyr] 2+ complex without phosphonic groups in solution (see Section V.2.2). If the electropolymerization occurred on the FTO/TiO2/[Ru-pyr] 2+ modified electrode, we would expect the emergence of a new oxidative peak around 0.5 V corresponding to the polypyrrole moieties, 25 and an increase in the peak intensity due to higher conductivity. The CVs in Photocurrent measurement: A standard electrochemical cell with modified surfaces as working electrode were used. A 250 W Xenon lamp was employed to irradiate the electrode, with a fixed distance of 4 cm between the surface and the head of the optical fiber. A CHI 600 potentiostat was run in chronoamperometry mode to record the photocurrents. During the experiment, the light was alternatively switched on and off by placing a metallic plate over the source outlet. The energy of incident light is estimated to be 1.0 W.cm -2 (without filter), or 1.0 mW.cm -2 (UV hot filter). removed from the glove box and bubbled through with air for 2 h which produced an orange suspension. The solid was separated by filtration and dried under high vacuum to afford desired product as an orange solid (0.66 g, 61 %).
Elemental Analysis (EA): Calc. for C44H28N6Cr: C, 52.99; H, 2.83; N, 8.43. Found: C, 52.17; H, 3.73; N, 8.64. UV-vis: λmax (MeCN) = 368 nm (ε = 36500 M -1 .cm -1 ). Electrochemistry (ii) Step 2: Synthesis of diethyl [4-(2,2':6',2''-terpyridin-4'-yl)
phenyl]phosphonate
To a degassed mixture of 4′-(4-bromophenyl)-2,2′:6′,2″-terpyridine (0.97 g, 2.50 mmol) and [1,1-bis-(diphenylphosphino)ferrocene]dichloropalladium(II) (0.10 g, 0.13 mmol) was added 25 mL anhydrous toluene then Et3N (0.28 g, 2.75 mmol) and diethylphosphite (0.38 g, 2.76 mmol). The orange solution was stirred for 48 h at 90 °C under an Ar atmosphere. The solvent was removed in vacuo and the residue recrystallized from MeCN three times and dried under high vacuum to give a purple solid (0.63 g, 56%) EA: Calc. for C25H24N3O3P: C, 67.40; H, 5.44; N, 9.43. Found: C, 67.71; H, 5.37; N, 9.74 [Ru(dmbpy-pyr)2]Cl2. To an oven-dried round-bottom flask was added RuCl3.3H2O (56 mg, 214 mol) and dried DMF (5 mL). The mixture was stirred and heated to 110 0 C, then the bpy-pyr ligand (167 mg, 429 mol, 2 equiv) was added. It was refluxed for 3 hours. After that, it was cooled to RT, then added 30 mL diethyl ether leading to the precipitation of the product. The solid was subsequently filtered, rinsed with 50 mL acetone/diethyl ether (1/4, v/v) to yield a dark purple solid (120 mg, yield: 59 %). (ttpy, 120 mg, 0.37 mmol) were dissolved in 30 mL diethyl ether. The resulting solution was refluxed for 3 h, and allowed to cool down to RT before the orange solid was filtered. Excess [Mn(CO)5Br] precursor was eliminated by stirring the solid in 50 mL diethyl ether for 30 mins, then the remaining solid was filtered and washed with diethyl ether. The solid product was dried under vacuum for 2 h, yielding a pale yellow powder (170 mg, 85 %). 7.96 (m, 1H), 7.75 (m, 1H), 7.65 (m, 1H), 7.42 |
01758038 | en | ["spi.meca.geme", "spi.auto"] | 2024/03/05 22:32:10 | 2018 | https://hal.science/hal-01758038/file/ARK2016_Gagliardini_Gouttefarde_Caro_DynamicFeasibleWorkspace.pdf
email: lorenzo.gagliardini@irt-jules-verne.fr
Marc Gouttefarde
email: marc.gouttefarde@lirmm.fr
S Caro
Determination of a Dynamic Feasible Workspace for Cable-Driven Parallel Robots
Keywords: Cable-Driven Parallel Robots, Workspace Analysis, Dynamic Feasible Workspace
Introduction
Several industries, e.g. the naval and renewable energy industries, are facing the necessity to manufacture novel products of large dimensions and complex shapes. In order to ease the manufacturing of such products, the IRT Jules Verne promoted the investigation of new technologies. In this context, the CAROCA project aims at investigating the performance of Cable Driven Parallel Robots (CDPRs) to manufacture large products in cluttered industrial environments [START_REF] Gagliardini | Discrete reconfiguration planning for cable-driven parallel robots[END_REF][START_REF] Gagliardini | A reconfigurable cable-driven parallel robot for sandblasting and painting of large structures[END_REF]. CDPRs are a particular class of parallel robots whose moving platform is connected to the robot fixed base frame by a number of cables as illustrated in Fig. 1. CDPRs have several advantages such as a high payload-to-weight ratio, a potentially very large workspace, and possibly reconfiguration capabilities.
The equilibrium of the moving platform of a CDPR is classically investigated by analyzing the CDPR workspace. In serial and rigid-link parallel robots, the workspace is commonly defined as the set of end-effector poses where a number of kinematic constraints are satisfied. In CDPRs, the workspace is usually defined as the set of poses where the CDPR satisfies one or more conditions including the static or the dynamic equilibrium of the moving platform, with the additional constraint of non-negative cable tensions. Several workspaces and equilibrium conditions have been studied in the literature.
The first investigations focused on the static equilibrium and the Wrench Closure Workspace (WCW) of the moving platform, e.g. [START_REF] Fattah | Workspace and design analysis of cable-suspended planar parallel robots[END_REF][START_REF] Gouttefarde | Analysis of the wrench-closure workspace of planar parallel cable-driven mechanisms[END_REF][START_REF] Roberts | On the inverse kinematics, statics, and fault tolerance of cable-suspended robots[END_REF][START_REF] Stump | Workspaces of cable-actuated parallel manipulators[END_REF][START_REF] Verhoeven | Advances in Robot Kinematics, chap. Estimating the controllable workspace of tendon-based Stewart platforms[END_REF]. Since cables can only pull on the moving platform, a pose belongs to the WCW if and only if any wrench can be applied by means of non-negative cable tensions. Feasible equilibria of the moving platform can also be analyzed using the Wrench Feasible Workspace (WFW) [START_REF] Bosscher | Wrench-feasible workspace generation for cabledriven robots[END_REF][START_REF] Bouchard | On the ability of a cable-driven robot to generate a prescribed set of wrenches[END_REF][START_REF] Gouttefarde | Interval-analysis-based determination of the wrenchfeasible workspaceof parallel cable-driven robots[END_REF]. By definition, the WFW is the set of wrench feasible platform poses where a pose is wrench feasible when the cables can balance a given set of external moving platform wrenches while maintaining the cable tensions in between given lower and upper bounds. The Static Feasible Workspace (SFW) is a special case of the WFW, where the sole wrench induced by the moving platform weight has to be balanced [START_REF] Pusey | Design and workspace analysis of a 6-6 cablesuspended parallel robot[END_REF]. The lower cable tension bound, τ min , is defined in order to prevent the cables from becoming slack. The upper cable tension bound, τ max , is defined in order to prevent the CDPR from being damaged.
The dynamic equilibrium of the moving platform can be investigated by means of the Dynamic Feasible Workspace (DFW). By definition, the DFW is the set of dynamic feasible moving platform poses. A pose is said to be dynamic feasible if a prescribed set of moving platform accelerations is feasible, with cable tensions lying in between given lower and upper bounds. The concept of dynamic workspace has already been investigated in [START_REF] Barrette | Determination of the dynamic workspace of cable-driven planar parallel mechanisms[END_REF] for planar CDPRs. Barrette et al. solved the dynamic equations of a planar CDPR analytically, providing the possibility to compute the boundary of the DFW. This strategy cannot be directly applied to spatial CDPRs due to the complexity of their dynamic model. In 2014, Kozlov studied in [START_REF] Kozlov | A graphical user interface for the design of cable-driven parallel robots[END_REF] the possibility to investigate the DFW by using a tool developed by Guay et al. for the analysis of the WFW [START_REF] Guay | Measuring how well a structure supports varying external wrenches[END_REF]. However, the dynamic model proposed by Kozlov considers the moving platform as a point mass, neglecting centrifugal and Coriolis forces.
This paper deals with a more general definition of the DFW. With respect to the definitions proposed in [START_REF] Barrette | Determination of the dynamic workspace of cable-driven planar parallel mechanisms[END_REF][START_REF] Kozlov | A graphical user interface for the design of cable-driven parallel robots[END_REF], the DFW considered in the present paper takes into account: (i) The inertia of the moving platform; (ii) The external wrenches applied on the moving platform; (iii) The centrifugal and the Coriolis forces corresponding to a given moving platform twist. The Required Wrench Set (RWS), defined here as the set of wrenches that the cables have to apply on the moving platform in order to satisfy its dynamic equilibrium, is calculated as the sum of these three contributions to the dynamic equilibrium. Then, the corresponding DFW is computed by means of the algorithm presented in [START_REF] Gouttefarde | Advances in Robot Kinematics, chap[END_REF] to analyze the WFW.
Dynamic Model
The CDPR dynamic model considered in this paper consists of the dynamics of the moving platform. A dynamic model taking into account the dynamics of the winches could also be considered but is not used here due to space limitations. Additionally, assuming that the diameters of the cables and the pulleys are small, the dynamics of the pulleys and the cables is neglected.
The dynamic equilibrium of the moving platform is described by the following equation:
Wτ - Ip p̈ - C ṗ + we + wg = 0    (1)
where W is the wrench matrix that maps the cable tension vector τ into a platform wrench, and
ṗ = [ṫT, ωT]T ,   p̈ = [ẗT, αT]T    (2)
where ṫ = [ṫ x , ṫy , ṫz ] T and ẗ = [ẗ x , ẗy , ẗz ] T are the vectors of the moving platform linear velocity and acceleration, respectively, while ω = [ω x , ω y , ω z ] T and α = [α x , α y , α z ] T are the vectors of the moving platform angular velocity and acceleration, respectively.
The external wrench w e is a 6-dimensional vector expressed in the fixed reference frame F b and takes the form
we = [feT, meT]T = [fx, fy, fz, mx, my, mz]T    (3)
f x , f y and f z are the x, y and z components of the external force vector f e . m x , m y and m z are the x, y and z components of the external moment vector m e , respectively. The components of the external wrench w e are assumed to be bounded as follows
fmin ≤ fx, fy, fz ≤ fmax    (4)
mmin ≤ mx, my, mz ≤ mmax    (5)
According to ( 4) and ( 5), the set [w e ] r , called the Required External Wrench Set (REWS), that the cables have to balance is a hyper-rectangle. The Center of Mass (CoM) of the moving platform, G, may not coincide with the origin of the frame F p attached to the platform. The mass of the platform being denoted by M, the wrench w g due to the gravity acceleration g is defined as follows
wg = [M I3 ; M Ŝp] g    (6)
where I 3 is the 3 × 3 identity matrix, MS p = R [Mx p , My p , Mz p ] T is the first momentum of the moving platform defined with respect to frame F b . The vector S p = [x p , y p , z p ] T defines the position of G in frame F p . M Ŝp is the skew-symmetric matrix associated to MS p . The matrix I p represents the spatial inertia of the platform
Ip = [ M I3   -M Ŝp ;  M Ŝp   Îp ]    (7)
where Îp is the inertia tensor matrix of the moving platform, which can be computed by the Huygens-Steiner theorem from the moving platform inertia tensor, Ig, defined with respect to the platform CoM:
Îp = R Ig RT - (M Ŝp)(M Ŝp)/M    (8)
R is the rotation matrix defining the moving platform orientation and C is the matrix of the centrifugal and Coriolis wrenches, defined as
C ṗ = [ ω̂ ω̂ M Sp ;  ω̂ Îp ω ]    (9)
where ω̂ is the skew-symmetric matrix associated with ω.
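To make the structure of Eqs (6)-(9) concrete, here is a small numpy sketch that assembles wg, Ip and the Cṗ term; the mass, CoM offset and inertia values are arbitrary placeholders (not those of the CAROCA/CoGiRo prototype), and this is not the authors' code.

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix such that skew(v) @ u = v x u."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def platform_terms(M, S_p, I_g, R, omega):
    """Gravity wrench wg (Eq 6), spatial inertia Ip (Eqs 7-8) and the C*p_dot term (Eq 9)."""
    g = np.array([0.0, 0.0, -9.81])
    MS = M * (R @ S_p)                                   # first momentum in the base frame
    MS_hat = skew(MS)
    w_g = np.hstack((M * g, MS_hat @ g))                 # Eq (6)
    I_hat = R @ I_g @ R.T - MS_hat @ MS_hat / M          # Eq (8), Huygens-Steiner
    I_p = np.block([[M * np.eye(3), -MS_hat],
                    [MS_hat,        I_hat]])             # Eq (7)
    C_pdot = np.hstack((skew(omega) @ skew(omega) @ MS,
                        skew(omega) @ (I_hat @ omega)))  # Eq (9)
    return w_g, I_p, C_pdot

# Placeholder data: 100 kg platform, CoM offset of 5 cm along z, modest angular velocity
w_g, I_p, C_pdot = platform_terms(100.0, np.array([0.0, 0.0, 0.05]),
                                  np.diag([10.0, 10.0, 12.0]), np.eye(3),
                                  np.array([0.0, 0.0, 0.2]))
print(w_g.round(2))
```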
3 Dynamic Feasible Workspace
Standard Dynamic Feasible Workspace
Studies on the DFW have been realised by Barrette et al. in [START_REF] Barrette | Determination of the dynamic workspace of cable-driven planar parallel mechanisms[END_REF]. The boundaries of the DFW have been computed for a generic planar CDPR developing the equations of its dynamic model. Since this method cannot be easily extended to spatial CDPRs, Kozlov proposed to use the method described in [START_REF] Guay | Measuring how well a structure supports varying external wrenches[END_REF] in order to compute the DFW of a fully constrained CDPR [START_REF] Kozlov | A graphical user interface for the design of cable-driven parallel robots[END_REF]. The proposed method takes into account the cable tension limits τ min and τ max in checking the feasibility of the dynamic equilibrium of the moving platform for the following bounded sets of accelerations ẗmin ≤ ẗ ≤ ẗmax (10)
α min ≤ α ≤ α max (11)
where ẗmin , ẗmax , α min , α max are the bounds on the moving platform linear and rotational accelerations. These required platform accelerations define the so-called Required Acceleration Set (RAS), [ p] r . The RAS can be projected into the wrench space by means of matrix I p , defined in [START_REF] Gouttefarde | Geometry selection of a redundantly actuated cable-suspended parallel robot[END_REF]. The set of wrenches [w d ] r generated by this linear mapping is defined as the Required Dynamic Wrench Set (RDWS). No external wrench is applied to the moving platform. Accordingly, the DFW is defined as follows Definition 1. A moving platform pose is said to be dynamic feasible when the moving platform of the CDPR can reach any acceleration included in [ p] r according to cable tension limits expressed by [τ] a . The Dynamic Feasible Workspace is then the set of dynamic feasible poses, [p] DFW .
[p]DFW = { (t, R) ∈ R3 × SO(3) : ∀ p̈ ∈ [p̈]r, ∃ τ ∈ [τ]a s.t. Wτ - A p̈ = 0 }    (12)
In the definition above, the set of Admissible Cable Tensions (ACT) is defined as
[τ] a = {τ | τ min ≤ τ i ≤ τ max , i = 1, . . . , m} (13)
Improved Dynamic Feasible Workspace
The DFW described in the previous section has several limitations. The main drawback is associated to the fact that the proposed DFW takes into account neither the external wrenches applied to the moving platform nor its weight. Furthermore, the model used to verify the dynamic equilibrium of the moving platform neglects the Coriolis and the centrifugal wrenches associated to the CDPR dynamic model.
At a given moving platform pose, the cable tensions should compensate both the contribution associated to the REWS, [w e ] r , and the RDWS, [w d ] r . The components of the REWS are bounded according to (4) and ( 5) while the components of the RDWS are bounded according to [START_REF] Gouttefarde | Advances in Robot Kinematics, chap[END_REF] and [START_REF] Guay | Measuring how well a structure supports varying external wrenches[END_REF].
The dynamic equilibrium of the moving platform is described by [START_REF] Barrette | Determination of the dynamic workspace of cable-driven planar parallel mechanisms[END_REF], where C is related to the Coriolis and centrifugal forces of the moving platform and w g to its weight. These terms depend only on the pose and the twist of the moving platform. For given moving-platform pose and twist, these terms are constant.
Therefore, the DFW definition can be modified as follows.
Definition 2. A moving platform pose is said to be dynamic feasible when, for a given twist ṗ, the CDPR can balance any external wrench w e included in [w e ] r , while the moving platform can assume any acceleration p included in [ p] r . The Dynamic Feasible Workspace is the set of dynamic feasible poses, [p] DFW .
[p]DFW : ∀ we ∈ [we]r, ∀ p̈ ∈ [p̈]r, ∃ τ ∈ [τ]a s.t. Wτ - Ip p̈ - C ṗ + we + wg = 0    (14)
In this definition, we may note that the feasibility conditions are expressed according to three wrench space sets. The first set, [w d ] r , can be computed by projecting the vertices of [ p] r into the wrench space. For a 3-dimensional case study (6 DoF case), [ p] r consists of 64 vertices. The second component, [w e ] r , consists of 64 vertices as well. Considering a constant moving platform twist, the last component of the dynamic equilibrium, w c = {C ṗ + w g }, is a constant wrench. The composition of these sets generates a polytope, [w] r , defined as the Required Wrench Set (RWS).
[w] r can be computed as the convex hull of the Minkowski sum over [w e ] r , [w d ] r and w c , as illustrated in Fig. 2:
[w] r = [w e ] r ⊕ [w d ] r ⊕ w c (15)
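A minimal sketch of Eq (15): every vertex of [we]r is summed with every image, through Ip, of a vertex of the acceleration box, and the result is shifted by the constant wrench wc; the convex hull of these points is [w]r. The spatial inertia and wc below are placeholders, the hull is only computed on the force components to keep qhull light, and the final inclusion test of Eq (17) would still rely on the hyperplane shifting method rather than on this sketch.

```python
import numpy as np
from itertools import product
from scipy.spatial import ConvexHull

def box_vertices(lower, upper):
    """All 2^n corner points of the axis-aligned box [lower, upper]."""
    return np.array(list(product(*zip(lower, upper))), dtype=float)

we_verts = box_vertices([-100, -100, -100, -1, -1, -1],
                        [100, 100, 100, 1, 1, 1])             # REWS vertices (Eqs 19-20)
acc_verts = box_vertices([-2, -2, -2, -0.1, -0.1, -0.1],
                         [2, 2, 2, 0.1, 0.1, 0.1])            # RAS vertices (Eqs 21-22)

I_p = np.diag([100.0, 100.0, 100.0, 20.0, 20.0, 25.0])        # placeholder spatial inertia
w_c = np.zeros(6)                                             # placeholder C*p_dot + w_g

wd_verts = acc_verts @ I_p.T                                   # RDWS vertices
rws_pts = (we_verts[:, None, :] + wd_verts[None, :, :]).reshape(-1, 6) + w_c   # Eq (15)
hull_f = ConvexHull(rws_pts[:, :3])                            # force part, for illustration
print(rws_pts.shape[0], "candidate points;", len(hull_f.vertices), "hull vertices (forces)")
```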
Thus, Def. 2 can be rewritten as a function of [w] r .
Definition 3. A moving platform pose is said to be dynamic feasible when the CDPR can balance any wrench w included in [w] r . The Dynamic Feasible Workspace is the set of dynamic feasible poses, [p] DFW .
[p]DFW : ∀ w ∈ [w]r, ∃ τ ∈ [τ]a s.t. Wτ - Ip p̈ + we + wc = 0    (16)
The mathematical representation in ( 16) is similar to the one describing the WFW. As a matter of fact, from a geometrical point of view, a moving platform pose will be dynamic feasible if
[w]r is fully included in [w]a:
[w]r ⊆ [w]a    (17)
Consequently, the dynamic feasibility of a pose can be verified by means of the hyperplane shifting method [START_REF] Bouchard | On the ability of a cable-driven robot to generate a prescribed set of wrenches[END_REF][START_REF] Gouttefarde | Advances in Robot Kinematics, chap[END_REF][START_REF] Guay | Measuring how well a structure supports varying external wrenches[END_REF]. The distances between the facets of the available wrench set
-100 N ≤ fx, fy, fz ≤ 100 N    (19)
-1 Nm ≤ mx, my, mz ≤ 1 Nm    (20)
Similarly, the range of accelerations of the moving platform is limited according to the following inequalities:
-2 m/s2 ≤ ẗx, ẗy, ẗz ≤ 2 m/s2    (21)
-0.1 rad/s2 ≤ αx, αy, αz ≤ 0.1 rad/s2    (22)
For the foregoing conditions, the improved DFW of the CDPR covers 47.96% of its volume. Figure 4(a) illustrates the improved DFW of the CDPR under study.
The results have been compared with the dynamic feasibility conditions described by Def. 1. By considering only the weight and the inertia of the moving platform, the DFW covers 63.27% of the CDPR volume, as shown in Fig. 4(b). Neglecting the effects of the external wrenches and the Coriolis forces, the volume of the DFW is 32% larger than the volume of the improved DFW.
Similarly, by neglecting the inertia of the CDPR and taking into account only the external wrenches we, the WFW occupies 79.25% of the CDPR volume. By taking into account only the weight of the moving platform, the SFW covers 99.32% of the CDPR volume. These results are summarized in Tab. 1.
Conclusion
This paper introduced an improved dynamic feasible workspace for cable-driven parallel robots. This novel workspace takes into account: (i) The inertia of the moving platform; (ii) The external wrenches applied on the moving platform and (iii) The centrifugal and the Coriolis forces induced by a constant moving platform twist. As an illustrative example, the static, wrench-feasible, dynamic and improved dynamic workspaces of a spatial suspended cable-driven parallel robot, with the dimensions of a prototype developed in the framework of the IRT JV CAROCA project, are traced. It turns out that the IDFW of the CDPR under study is respectively 1.32 times, 1.65 times and 2.07 times smaller than its DFW, WFW and SFW.
Fig. 1 Example of a CDPR design created in the framework of the IRT JV CAROCA project.
Fig. 2 Computation of the RWS [w]r. Example of a planar CDPR with 3 actuators and 2 translational DoF.
Fig. 3 Layout of the CoGiRo cable-suspended parallel robot [START_REF] Gouttefarde | Geometry selection of a redundantly actuated cable-suspended parallel robot[END_REF] with the size of the IRT JV CAROCA prototype.
Fig. 4 (a) Improved DFW and (b) DFW of the CDPR under study covering 47.96% and 63.27% of its volume, respectively.
Table 1 Comparison of SFW, WFW, DFW and IDFW of the CDPR under study.

Workspace type | SFW | WFW | DFW | IDFW
Covered volume of the CDPR | 99.32% | 79.25% | 63.27% | 47.95%
Acknowledgements
This research work is part of the CAROCA project managed by IRT Jules Verne (French Institute in Research and Technology in Advanced Manufacturing Technologies for Composite, Metallic and Hybrid Structures). The authors wish to associate the industrial and academic partners of this project, namely, STX, DCNS, AIRBUS and CNRS. |
00175806 | en | [
"chim.anal",
"sde.mcg.cpe",
"sdu.stu.gc"
] | 2024/03/05 22:32:10 | 1994 | https://hal.science/hal-00175806/file/1994OGPossiblerevised.pdf | Éric Lichtfouse
Sylvie Derenne
André Mariotti
Claude Largeau
Possible algal origin of long chain odd n-alkanes in immature sediments as revealed by distributions and carbon isotope ratios
Keywords: carbon-13, n-alkane, n-alkene, Pula oil shale, Botryococcus braunii, alga, plant, waxes, sediment
A Pliocene oil shale (Pula, Hungary), a C 3 plant Triticum aestivum and a C 4 plant Zea mays were compared using the isotopic composition of bulk organic matter, along with the distributions and individual carbon isotope ratios of n-alkanes from organic extracts. The microalga Botryococcus braunii (A race) was thus shown to be the main source of the predominant C 27 , C 29 and C 31 n-alkanes of Pula sediment. Therefore, the dominance of odd carbon-numbered n-alkanes in the range C 25 to C 35 in extracts from immature sediments should not be systematically assigned to a higher plant contribution; algal input should also be tested via molecular isotope ratios. In fact, the long chain n-alkanes with an odd predominance observed in extracts of various immature sediments are likely to be derived, at least partially, from algae.
INTRODUCTION
n-Alkanes occur almost ubiquitously in soils, sediments, petroleums and coals [START_REF] Eglinton | Organic Geochemistry. The Organic Chemist's Approach[END_REF][START_REF] Morrisson | Soil Lipids. In Organic Geochemistry, Methods and Results[END_REF][START_REF] Albrecht | Biogenic substances in sediments and fossils[END_REF][START_REF] Tissot | Petroleum Formation and Occurrence[END_REF] and their distribution has been widely used for source organism identification and as a maturation parameter. Markedly different pathways can be implicated, however, in n-alkane formation. Indeed, such compounds are likely to represent a direct contribution of natural waxes in some immature sediments, whereas they appear to be mainly formed by thermal breakdown of kerogen in mature sediments and petroleums. In this connection, kerogen fractions derived from the selective preservation of resistant, highly aliphatic, biomacromolecules have been recently shown to be especially prolific materials for the catagenic production of n-alkanes (Largeau et al., 1986;[START_REF] Tegelaar | Possible origin of n-alkanes in high-wax crude oils[END_REF][START_REF] Leeuw | A review of macromolecular organic compounds that comprise living organisms and their role in kerogen, coal and petroleum formation[END_REF]. n-Alkanes can also arise by degradation of biological aliphatic precursors such as n-alcohols and n-carboxylic acids [START_REF] Tissot | Petroleum Formation and Occurrence[END_REF][START_REF] Lichtfouse | Tracing biogenic links of natural organic substances at the molecular level with stable carbon isotopes: n-alkanes and n-alkanoic acids from sediments[END_REF]and refs. therein).
n-Alkanes exhibiting a pronounced odd carbon-number predominance in the range C 25 to C 35 have often been used to indicate a terrestrial contribution in sediments because similar distributions are observed in leaf waxes of higher plants. However, n-alkenes also showing an odd predominance in the same range occur in various microalgae (Kolattukudy, 1976). Through diagenetic reduction, these algal constituents might thus represent an important source of sedimentary n-alkanes. Isotopic analysis at the molecular level is actually the sole method which could allow to discriminate between these two origins and thus provide direct information on contributing organisms. In the present study, this method was applied to the hydrocarbons isolated from the extract of an important, organic-rich, Pliocene deposit from Pula (Hungary). The bulk carbon isotope ratio of this oil shale was also determined. Parallel measurements were carried out, for comparative purposes, on Triticum aestivum, a C 3 plant, and Zea mays, a C 4 plant. The major aim of these isotopic studies was to specify the origin of the n-alkanes occurring in Pula extract, to derive information on the relative contributions of terrestrial and algal inputs to this sediment and, in a more general way, to test the suitability of odd, long chain, n-alkanes as markers of higher plant contribution.
EXPERIMENTAL
Detailed procedures are described elsewhere [START_REF] Metzger | An n-alkatriene and some n-alkadienes from the A Race of the green alga Botryococcus braunii[END_REF][START_REF] Lichtfouse | A molecular and isotopic study of the organic matter from the Paris Basin, France[END_REF]. Leaves from two plants of different mode of CO 2 fixation, T. aestivum and Z. mays, were collected in 1992 in an experimental field at Boigneville, France. The tested Pula sample contained about 45% of TOC; this oil shale, comprising immature type I kerogen, is a typical maar lake deposit [START_REF] Brukner-Wein | Organic Geochemistry of alginite deposited in a volcanic crater lake[END_REF]. n-Alkanes were isolated from Pula sediment extract and leaf extracts, then purified by successive column and thin layer chromatography. They were identified by gas chromatography-mass spectrometry and co-elution with standards. Bulk isotopic compositions were measured on a Carlo Erba NA 1500 elemental N and C analyser coupled to a VG Sira 10 mass spectrometer. Such measurements were carried out on Z. mays and T. aestivum leaves, on HCl-decarbonated Pula oil shale and on the kerogen isolated from this shale by a classical HCl-HF treatment. Isotopic analyses of individual n-alkanes were carried out under a continuous helium flow using an HP 5890 gas chromatograph coupled with a CuO furnace (850°C) and a cryogenic trap (-100°C) coupled with a VG Optima mass spectrometer, monitoring continuously ion currents at m/z = 44, 45 and 46. Carbon isotopic compositions are expressed in per mil. relative to the Pee Dee Belemnite standard: δ 13 C = [( 13 C/ 12 C sample -13 C/ 12 C std)/( 13 C/ 12 C std)] x 10 3 , where 13 C/ 12 C std = 0.0112372. Significant measurements of isotope ratios cannot be achieved when some overlapping of GC peaks occurs [START_REF] Lichtfouse | Enhanced resolution of organic compounds from sediments by isotopic gas chromatography-combustion-mass spectrometry[END_REF]. Accordingly, the isotopic composition of n-heptacosane from Pula sediment is not reported due to the presence of a nearly co-eluting C 27 monounsaturated hydrocarbon.
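As a worked illustration of the definition above, the δ13C value follows directly from the measured and standard 13C/12C ratios. The short script below uses the PDB ratio quoted in the text; the sample ratio is a toy number of ours, not a measured value.

```python
R_PDB = 0.0112372  # 13C/12C of the Pee Dee Belemnite standard

def delta13C(r_sample, r_std=R_PDB):
    """delta 13C in per mil: [(R_sample - R_std) / R_std] * 1000."""
    return (r_sample - r_std) / r_std * 1e3

def ratio_from_delta(delta, r_std=R_PDB):
    """Inverse relation: the 13C/12C ratio corresponding to a delta 13C value."""
    return r_std * (1.0 + delta / 1e3)

print(round(delta13C(0.0110232), 2))   # about -19.0 per mil for this toy ratio
print(ratio_from_delta(-19.03))        # ratio of a sample with delta = -19.03 per mil
```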
RESULTS AND DISCUSSION
Isotopic composition of bulk organic matter
Common higher plants can be classified into two isotopic categories according to their mode of CO 2 fixation (Smith and Epstein, 1971 [START_REF] Smith | Two categories of 13 C/ 12 C ratios for higher plants[END_REF]; O'Leary, 1981). C 3 plants incorporate CO 2 from the atmosphere by ribulose bisphosphate carboxylation (Calvin cycle) and show isotopic values ranging from -34‰ to -24‰. C 4 plants fix CO 2 by phosphoenol pyruvate carboxylation (Hatch-Slack cycle) and show isotopic values ranging from -19‰ to -6‰. Algae have intermediate values of -23‰ to -12‰. To evaluate the contribution of terrestrial plants in Pula sediment, the bulk isotopic compositions of this oil shale, of a modern C 3 plant (wheat) and a modern C 4 plant (maize) were compared. However, such a comparison must take into account the difference of isotopic composition of atmospheric CO 2 between Pliocene and present. The present isotopic composition of CO 2 is -7.8‰ while the pre-industrial value was about 1-1.5‰ heavier and did not undergo significant changes during geological times, except sharp variations at era boundaries [START_REF] Shackleton | The carbon isotope record of the Cenozoic: history of organic carbon burial and of oxygen in the ocean and atmosphere[END_REF][START_REF] Schlanger | The Cenomanian-Turonian Oceanic Anoxic Event, I. Stratigraphy and distribution of organic carbon-rich beds and the marine δ 13 C excursion[END_REF][START_REF] Marino | Isotopic composition of atmospheric CO 2 inferred from carbon in C4 plant cellulose[END_REF][START_REF] Raymo | Response of deep ocean circulation to initiation of northern hemisphere glaciation (3-2 MA)[END_REF]. Accordingly, a corrective term of +1.5‰ was applied to modern isotopic values in order to allow for a significant comparison with Pula sediment (Table 1).
The pronounced isotopic differences between the sediment and the two plants indicate that a large input of terrestrial material is unlikely (Table 1). Furthermore, the isotopic value of the Pliocene sediment (-19.03 ‰) falls into the corrected isotopic range of algae (-21.5‰ to -10.5‰). The major algal contribution thus suggested is consistent with the abundant presence of fossil remains of the colonial microalga Botryococcus braunii observed by scanning electron microscopy in Pula oil shale [START_REF] Solti | Prospection and utilization of alginite and oil shale in Hungary[END_REF].
n-Alkane distributions and isotopic compositions
The n-alkanes isolated from the extract of Pula sediment range from C 16 to C 37 with a strong predominance of C 27 , C 29 and C 31 homologues (Figure 1). A similar predominance is typically observed in higher plant waxes (Kolattukudy, 1976), as illustrated in Figure 1 in the case of wheat and maize leaves. It shall be noted, however, that one of the three races known to occur in B. braunii [START_REF] Metzger | Lipids and macromolecular lipids of the hydrocarbonrich microalga Botryococcus braunii. Chemical structure and biosynthesis -Geochemical and Biotechnological importance[END_REF] is characterized by an abundant production of odd carbon-numbered n-alkadienes with a very strong predominance of the C 27 , C 29 and C 31 homologues (Fig. 1). These dienes were identified as the heptacosa-1,18 Z-diene, the nonacosa-1,20 Z-diene and the hentriaconta-1,22 Z-diene, respectively [START_REF] Knights | Hydrocarbons from the green form of the freshwater alga Botryococcus braunii[END_REF][START_REF] Metzger | An n-alkatriene and some n-alkadienes from the A Race of the green alga Botryococcus braunii[END_REF]. Based on their distribution, the n-alkanes of Pula sediment extract could be therefore derived from higher plant waxes or from the reduction of B. braunii n-alkadienes. It shall be noted, however, that the relative abundance of C 27 , C 29 and C 31 n-alkanes from the sediment are close to the relative abundance of B. braunii n-alkadienes. Moreover, both distributions maximize at C 27 instead of C 29 for maize and wheat. The corrected isotopic compositions observed for the C 27 , C 29 , C 31 and C 33 n-alkanes from maize, averaging at -18.6‰, and wheat, averaging at -35.0‰ (Table 1 and Fig. 2) are similar to those previously observed for n-alkanes from various C 4 and C 3 higher plants, respectively [START_REF] Rieley | Sources of sedimentary lipids deduced from stable carbon-isotope analyses of individual compounds[END_REF][START_REF] Rieley | Gas chromatography/Isotope Ratio Mass Spectrometry of leaf wax n-alkanes from plants of differing carbon dioxide metabolisms[END_REF]. Markedly different ratios, averaging at -30.5‰, are obtained for the odd-carbon numbered n-alkanes of Pula sediment extract. Accordingly, based on these isotope ratios, the predominant occurrence of C 27 , C 29 and C 31 n-alkanes in Pula extract is unlikely to reflect a large input of terrestrial plant waxes. On the contrary, based both on their isotopic composition and distribution, such n-alkanes should indicate a major contribution of the alkadiene-producing race of B. braunii (A race). It can be noted that n-alkanes in the range C 25 to C 35 exhibiting an odd predominance and an isotopic composition in the range -30 to -27‰ have been shown to occur in extracts of various low maturity sediments [START_REF] Rieley | Sources of sedimentary lipids deduced from stable carbon-isotope analyses of individual compounds[END_REF][START_REF] Collister | An isotopic biogeochemical study of the Green River oil shale[END_REF][START_REF] Lichtfouse | Tracing biogenic links of natural organic substances at the molecular level with stable carbon isotopes: n-alkanes and n-alkanoic acids from sediments[END_REF][START_REF] Lichtfouse | A molecular and isotopic study of the organic matter from the Paris Basin, France[END_REF][START_REF] Collister | Partial resolution of sources of nalkanes in the saline portion of the Parachute Creek Member, Green River Formation (Piceance Creek Basin, Colorado)[END_REF]. 
Based on their isotope ratios and distributions, these n-alkanes should not be chiefly derived from terrestrial waxes but more likely from algal lipids.
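The averages quoted in the preceding paragraph can be recomputed directly from the values reported in Table 1; the small check below applies the +1.5‰ correction to the modern plants only, as done in the text.

```python
import numpy as np

correction = 1.5  # per mil, ancient vs. modern atmospheric CO2

maize = np.array([-19.1, -18.4, -20.6, -22.2])   # C27, C29, C31, C33
wheat = np.array([-35.9, -36.5, -36.7, -36.7])   # C27, C29, C31, C33
pula_odd = np.array([-30.7, -30.9, -30.0])       # C29, C31, C33 (C27 not resolved)

print(round(float(np.mean(maize + correction)), 2))   # about -18.6
print(round(float(np.mean(wheat + correction)), 2))   # about -35.0
print(round(float(np.mean(pula_odd)), 2))             # about -30.5
```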
CONCLUSION
A strong predominance of odd carbon-numbered n-alkanes in the range C 25 to C 35 in sediment extracts shall not be systematically assigned to a contribution of terrestrial plant waxes. In fact, as shown here in the case of Pula oil shale via isotopic and molecular studies, such a feature can be also associated with a major input of microalgae. In addition, the occurrence of long chain, odd n-alkanes, with isotope ratios from -30‰ to -27‰, previously observed in extracts of various immature sediments is likely to reflect an important algal contribution.
Fig. 1. Distribution of linear hydrocarbons in wheat and maize leaf waxes, extant B. braunii (A race) and Pula sediment extract. Relative concentrations were calculated by normalization of area obtained by gas chromatography using a flame ionization detector.

Fig. 2. Carbon isotope composition of n-alkanes from maize and wheat leaf waxes and Pula sediment extract. Plant values are corrected to take into account the isotopic difference between ancient and modern atmospheric CO 2 (+1.5‰).
Table 1. Carbon isotope composition of bulk organic matter and n-alkanes from maize leaves, a C 4 plant, wheat leaves, a C 3 plant, and Pula sediment. Values in parentheses take into account the isotopic difference between ancient and modern atmospheric CO 2 (+1.5‰).

Sample | Bulk organic matter | C 27 | C 28 | C 29 | C 30 | C 31 | C 32 | C 33 | C 34
Zea mays (maize) | -13.18 (-11.68) | -19.1 (-17.6) | | -18.4 (-16.9) | | -20.6 (-19.1) | | -22.2 (-20.7) |
Triticum aestivum (wheat) | -29.61 (-28.11) | -35.9 (-34.4) | | -36.5 (-35.0) | | -36.7 (-35.2) | | -36.7 (-35.2) |
Pula sediment | -19.03* | | -28.7 | -30.7 | -28.1 | -30.9 | -28.8 | -30.0 | -28.8

* It is well documented that B. braunii fossilization takes place via the selective preservation of the non-hydrolysable aliphatic biopolymer building up the outer walls. In the living algae, this material is formed by polymerization of high molecular weight lipids comprising long alkyl chains. In the A race, the biosynthetic pathways implicated in the production of the resistant biopolymer and of n-alkadienes are related. Accordingly, relatively close isotopic ratios are expected for the above compounds. The large difference noted between the bulk isotope ratio of the HCl-decarbonated sediment, on one hand, and the n-alkanes of sediment extract, on the other hand, shall reflect the somewhat heterogenous nature of the sediment. Indeed, even in Torbanites, which are sedimentary rocks chiefly composed of fossil Botryococcus accumulation, micro-FTIR observations revealed the presence of an interstitial organo-mineral matrix which organic constituents are sharply different from those of Botryococcus colonies [START_REF] Landais | Chemical caracterization of torbanites by transmission micro-FTIR spectroscopy: origin and extent of compositional heterogeneities[END_REF]. In fact, further treatment of the decarbonated Pula sediment with HF-HCl results in a marked shift of the bulk isotope ratio to a lighter value: the kerogen isolated after elimination of the chemically labile constituents exhibits a bulk isotope ratio of -23.68‰. The difference still observed between this value and the average isotope ratios of the extracted n-alkanes should mainly reflect the occurrence, along with the predominant Botryococcus colonies observed by SEM, of non-hydrolysable materials derived from other sources.
Acknowledgements-Drs. M. Hetényi and A. Brukner-Wein are gratefully acknowledged for providing a sample of Pula oil shale and general information on this material. We thank Dr. Hervé Bocherens for helpful discussions. We are indebted to G. Bardoux, M. Grably and C. Girardin for MS assistance. An anonymous reviewer is also acknowledged for helpful comments. |
01758077 | en | [
"spi.meca.geme",
"spi.auto"
] | 2024/03/05 22:32:10 | 2017 | https://hal.science/hal-01758077/file/ARK2016_Platis_Rasheed_Cardou_Caro.pdf | Angelos Platis
Tahir Rasheed
Philippe Cardou
Stéphane Caro
Isotropic Design of the Spherical Wrist of a Cable-Driven Parallel Robot
Keywords: Parallel mechanism, cable-driven parallel robot, parallel spherical wrist, wrenches, dexterity
Because of their mechanical properties, parallel mechanisms are most appropriate for large payload to weight ratio or high-speed tasks. Cable driven parallel robots (CDPRs) are designed to offer a large translation workspace, and can retain the other advantages of parallel mechanisms. One of the main drawbacks of CD-PRs is their inability to reach wide ranges of end-effector orientations. In order to overcome this problem, we introduce a parallel spherical wrist (PSW) end-effector actuated by cable-driven omni-wheels. In this paper we mainly focus on the description of the proposed design and on the appropriate placement of the omni-wheels on the wrist to maximize the robot dexterity.
Introduction
Several applications could benefit from CDPRs endowed with large orientation workspaces, such as entertainment and manipulation and storage of large and heavy parts. This component of the workspace is relatively small in existing CDPR designs. To resolve this problem, a parallel spherical wrist (PSW) end-effector is introduced and connected in series with the translational 3-DOF CDPR to provide an unbounded singularity-free orientation workspace.
IRCCyN, École Centrale de Nantes, 1 rue de la Noë, 44321, Nantes, France, e-mail: {Angelos.Platis, Tahir.Rasheed}@eleves.ec-nantes.fr
Laboratoire de robotique, Département de génie mécanique, Université Laval, Quebec City, QC, Canada. e-mail: pcardou@gmc.ulaval.ca
CNRS-IRCCyN, 1 rue de la Noë, 44321, Nantes, France, e-mail: stephane.caro@irccyn.ecnantes.fr

This paper focuses on the kinematic design and analysis of a PSW actuated by the cables of a CDPR, providing the robot with independent translation and orientation workspaces. CDPRs are generally capable of providing a large 3-DOF translation workspace, normally using four cables, which enable the user to control the point where all of them are concentrated [START_REF] Bahrami | Optimal design of a spatial four cable driven parallel manipulator[END_REF], [START_REF] Hadian | Kinematic isotropic configuration of spatial cable-driven parallel robots[END_REF].
Robots that can provide a large orientation workspace have been developed in the past few years using spherical wrists that allow the end-effector to rotate with unlimited rolling, in addition to limited pitch and yaw movements [START_REF] Bai | Modelling of a spherical robotic wrist with euler parameters[END_REF], [START_REF] Wu | Dynamic modeling and design optimization of a 3-dof spherical parallel manipulator[END_REF]. Eclipse II [START_REF] Kim | Eclipse-ii: a new parallel mechanism enabling continuous 360-degree spinning plus three-axis translational motions[END_REF] is an interesting robot that can provide unbounded 3-dof translational motions; however, its orientation workspace is constrained by structural interference and rotation limits of the spherical joints.
Several robots have been developed in the past having decoupled translation and rotational motions. One interesting concept of such a robot is that of the Atlas Motion Platform [START_REF] Hayes | Atlas motion platform: Full-scale prototype[END_REF] developed for simulation applications. Another robot with translation motions decoupled from orientation motions can be found in [START_REF] Yime | A novel 6-dof parallel robot with decoupled translation and rotation[END_REF]. The decoupled kinematics are obtained using a triple spherical joint in conjunction with a 3-UPS parallel robot.
In order to design a CDPR with a large orientation workspace, we introduce a parallel spherical wrist (PSW) end-effector actuated by cable-driven omni-wheels. In this paper we mainly focus on the description of the proposed design and on the appropriate placement of the omni-wheels on the wrist to maximize the robot dexterity.
Manipulator Architecture
The end-effector is a sphere supported by actuated omni-wheels as shown in Fig. 1. The wrist contains three passive ball joints at the bottom and three active omni-wheels driven through drums. Each cable makes several loops around each drum. Both ends are connected to two servo-actuated winches, which are fixed to the base. When two servo-actuated winches connected to the same cable turn in the same direction, the cable circulates and drives the drum and its associated omni-wheel. When both servo-actuated winches turn in opposite directions, the length of the cable loop changes, and the sphere centre moves. To increase the translation workspace of the CDPR, another cable is attached, which does not participate in the rotation of the omni-wheels. The overall design of the manipulator is shown in Fig. 2.
We have in total three frames. First, the CDPR base frame (F 0 ), defined by its origin O 0 and its axes x 0 , y 0 , z 0 . Second, the PSW base frame (F 1 ), which has its origin O 1 at the geometric center of the sphere and axes x 1 , y 1 , z 1 . Third, the spherical end-effector frame (F 2 ), attached to the end-effector; its centre O 2 coincides with that of the PSW base frame (O 2 ≡ O 1 ) and its axes are x 2 , y 2 , z 2 .
Exit points A i are the cable attachment points that link the cables to the base. All exit points are fixed and expressed in the CDPR reference frame F 0 . Anchor points B i are the platform attachment points. These points are not fixed as they depend on the vector P, which contains the pose of the moving platform expressed in the CDPR reference frame F 0 . The remaining part of the paper aims at finding the appropriate placement of the omni-wheels on the wrist to maximise the robot dexterity.

Fig. 1: Isotropic design of the parallel spherical wrist (figure labels: to winch #1, to winch #2, actuated omni-wheel, passive ball joint, drum, to winch #7)
Kinematic Analysis of the Parallel Spherical Wrist
Parameterization
To simplify the parameterization of the parallel spherical wrist, some assumptions are made. First, all the omni-wheels are supposed to be normal the sphere. Second, the contact points of the omni-wheels with the sphere lie in the base of an inverted cone where its end is the geometrical center of the sphere parametrized by angle α.
Third, the three contact points form an equilateral triangle as shown in [START_REF] Hayes | Atlas motion platform: Full-scale prototype[END_REF][START_REF] Hayes | Atlas motion platform generalized kinematic model[END_REF]. Fourth, the angle between the tangent to the sphere and the actuation force produced by the ith actuated omni-wheel is named β i , i = 1, 2, 3, and β 1 = β 2 = β 3 = β. Figure 3 illustrates the sphere, one actuated omni-wheel and the main design variables of the parallel spherical wrist. Π i is the plane tangent to the sphere and passing through the contact point G i between the actuated omni-wheel and the sphere. ω i denotes the angular velocity vector of the ith actuated omni-wheel. s i is a unit vector along the tangent line T that is tangent to the base of the cone and coplanar to plane Π i . w i is a unit vector normal to s i . f ai depicts the transmission force lying in plane Π i due to the actuated omni-wheel. α is the angle defining the altitude of contact points G i (α ∈ [0, π]). β is the angle between the unit vectors s i and v i (β ∈ [-π/2, π/2]). As the contact points G i are the corners of an equilateral triangle, the angle between the contact point G 1 and the contact points G 2 and G 3 is equal to γ. R is the radius of the sphere. r i is the radius of the ith actuated omni-wheel. φ i is the angular velocity of the omni-wheel. u i , v i , n i are unit vectors at point G i and i, j, k are unit vectors along x 2 , y 2 , z 2 , respectively.

Fig. 2: Concept idea of the manipulator
In order to analyze the kinematic performance of the parallel spherical wrist, an equivalent parallel robot (Fig. 4) having six virtual legs is presented, each leg having a spherical, a prismatic and another spherical joints connected in series. Three legs have an actuated prismatic joint (green), whereas the other three legs have a locked prismatic joints (red). Here, the kinematics of the spherical wrist is analyzed with screw theory and an equivalent parallel robot represented in Fig. 4.
Kinematic Modeling
Fig. 4(a) represents the three actuation forces f ai , i = 1, 2, 3 and the three constraint forces f ci , i = 1, 2, 3 exerted by the actuated omni-wheels on the sphere. The three constraint forces intersect at the geometric center of the sphere and prevent the latter from translating. The three actuation forces generated by the three actuated omniwheels allow us to control the three-dof rotational motions of the sphere. Fig. 4(b) depicts a virtual leg corresponding to the effect of the ith actuated omni-wheel on the sphere. The kinematic model of the PSW is obtained by using the theory of reciprocal screws [START_REF] Ball | A treatise on the theory of screws[END_REF][START_REF] Hunt | Kinematic geometry of mechanisms[END_REF] as follows:
A t = B φ   (1)

where t is the sphere twist and φ = [φ 1 , φ 2 , φ 3 ] T is the actuated omni-wheel angular velocity vector. A and B are respectively the forward and inverse kinematic Jacobian matrices of the PSW and take the form:
A = [ A rω   A rp ; 0 3×3   I 3 ]   (2)

B = [ I 3 ; 0 3×3 ]   (3)

A rω = R [ (n 1 × v 1 ) T ; (n 2 × v 2 ) T ; (n 3 × v 3 ) T ]   and   A rp = [ v 1 T ; v 2 T ; v 3 T ]   (4)

where the semicolons separate block rows.
As the contact points on the sphere form an equilateral triangle, γ = 2π/3. As a consequence, matrices A rω and A rp are expressed as functions of the design parameters α and β :
A rω = R 2 -2CαCβ -2Sβ 2SαCβ
CαCβ + √ 3Sβ Sβ - √ 3CαCβ 2SαCβ
CαCβ - √ 3Sβ Sβ + √ 3CαCβ 2SαCβ (5)
A rp = 1 2 -2CαSβ 2Cβ 2SαSβ CαSβ - √ 3Cβ -( √ 3CαSβ +Cβ ) 2SαSβ CαSβ + √ 3Cβ √ 3CαSβ -Cβ 2SαSβ (6)
where C and S denote the cosine and sine functions, respectively.
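Equations (2), (5) and (6) translate directly into a few lines of NumPy. The sketch below is our own transcription (function names and the example values R = 1 and α = 35.26°, β = 0° are arbitrary choices) and is reused in the checks that follow.

```python
import numpy as np

def A_r_omega(alpha, beta, R=1.0):
    """Eq. (5): 3x3 block acting on the sphere angular velocity (alpha, beta in radians)."""
    Ca, Sa, Cb, Sb = np.cos(alpha), np.sin(alpha), np.cos(beta), np.sin(beta)
    s3 = np.sqrt(3.0)
    return (R / 2.0) * np.array([
        [-2 * Ca * Cb,        -2 * Sb,             2 * Sa * Cb],
        [Ca * Cb + s3 * Sb,   Sb - s3 * Ca * Cb,   2 * Sa * Cb],
        [Ca * Cb - s3 * Sb,   Sb + s3 * Ca * Cb,   2 * Sa * Cb]])

def A_rp(alpha, beta):
    """Eq. (6): 3x3 block associated with the translational velocity of the sphere centre."""
    Ca, Sa, Cb, Sb = np.cos(alpha), np.sin(alpha), np.cos(beta), np.sin(beta)
    s3 = np.sqrt(3.0)
    return 0.5 * np.array([
        [-2 * Ca * Sb,        2 * Cb,                 2 * Sa * Sb],
        [Ca * Sb - s3 * Cb,   -(s3 * Ca * Sb + Cb),   2 * Sa * Sb],
        [Ca * Sb + s3 * Cb,   s3 * Ca * Sb - Cb,      2 * Sa * Sb]])

def A_matrix(alpha, beta, R=1.0):
    """Eq. (2): forward kinematic Jacobian A assembled from its blocks."""
    top = np.hstack((A_r_omega(alpha, beta, R), A_rp(alpha, beta)))
    bottom = np.hstack((np.zeros((3, 3)), np.eye(3)))
    return np.vstack((top, bottom))

alpha, beta = np.deg2rad(35.26), 0.0
print(A_matrix(alpha, beta).shape)  # (6, 6)
```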
Singularity Analysis
As matrix B cannot be rank deficient, the parallel spherical wrist meets singularities if and only if (iff) matrix A is singular. From Eqs. (5) and (6), matrix A is singular iff

det(A) = (3√3/2) R³ SαCβ (1 - S²αC²β) = 0   (7)
namely, if α = 0 or π; if β = ±π/2; if α = π/2 and β = 0 or ±π.
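Equation (7) can be evaluated directly to confirm that the configurations just listed are indeed singular; the short standalone check below uses angle pairs taken from that list (the specific pairings of α and β values are our sampling choices).

```python
import numpy as np

def det_A(alpha, beta, R=1.0):
    """Closed-form determinant of A, Eq. (7), with alpha and beta in radians."""
    u = (np.sin(alpha) * np.cos(beta)) ** 2
    return 1.5 * np.sqrt(3.0) * R**3 * np.sin(alpha) * np.cos(beta) * (1.0 - u)

for alpha_deg, beta_deg in [(0, 30), (180, 30), (60, 90), (60, -90), (90, 0), (90, 180)]:
    d = det_A(np.deg2rad(alpha_deg), np.deg2rad(beta_deg))
    print(alpha_deg, beta_deg, round(d, 12))   # all values are (numerically) zero
```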
Figs. 5a and 5b represent two singular configurations of the parallel spherical wrist under study. The three actuation forces f a1 , f a2 and f a3 intersect at point I in Fig. 5a. The PSW reaches a parallel singularity and gains an infinitesimal rotation (uncontrolled motion) about an axis passing through points O and I in such a configuration. The three actuation forces f a1 , f a2 and f a3 are coplanar with plane (X 1 OY 1 ) in Fig. 5b. The PSW reaches a parallel singularity and gains two-dof infinitesimal rotations (uncontrolled motions) about axes that are coplanar with plane (X 1 OY 1 ) in such a configuration.
Kinematically Isotropic Wheel Configurations
This section aims at finding a good placement of the actuated omni-wheels on the sphere with regard to the manipulator dexterity. The latter is evaluated by the condition number of the reduced Jacobian matrix J ω = r (A rω )⁻¹, which maps the angular velocities of the omni-wheels φ to the required angular velocity of the end-effector ω. From Eqs. (5) and (6), the condition number κ F (α, β) of J ω based on the Frobenius norm [START_REF] Angeles | Fundamentals of Robotic Mechanical Systems: Theory, Methods and Algorithms[END_REF] is expressed as follows:

κ F (α, β) = (1/3) √[ (3S²αC²β + 1) / (S²αC²β (1 - S²αC²β)) ]   (8)

Figure 6 depicts the inverse condition number of matrix A based on the Frobenius norm as a function of angles α and β. κ F (α, β) is a minimum when its partial derivatives with respect to α and β vanish, namely,

κ α (α, β) = ∂κ/∂α = Cα (3S²αC²β - 1)(S²αC²β + 1) / [ 18 S³α C²β (S²αC²β - 1)² κ ] = 0   (9)

κ β (α, β) = ∂κ/∂β = -Sβ (3S²αC²β - 1)(S²αC²β + 1) / [ 18 S²α C³β (S²αC²β - 1)² κ ] = 0   (10)
and its Hessian matrix is semi-positive definite. As a result, κ F (α, β) is a minimum and equal to 1 along the hippopede curve, which is shown in Fig. 6 and defined by the following equation:

3S²αC²β - 1 = 0   (11)

This hippopede curve amounts to the isotropic loci of the parallel spherical wrist. Figure 7 illustrates some placements of the actuated omni-wheels on the sphere leading to kinematically isotropic wheel configurations in the parallel spherical wrist. It should be noted that the three singular values of matrix A rω are equal to the ratio between the sphere radius R and the actuated omni-wheel radius r along the hippopede curve, namely, the velocity amplification factors of the PSW are the same and constant along the hippopede curve.
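The isotropy claim can be verified numerically. In the sketch below the Frobenius-norm condition number is computed as ||M||_F ||M⁻¹||_F / 3, the normalisation under which it equals 1 for isotropic 3x3 matrices; the (α, β) pairs are those shown in Fig. 7, and the matrix definition repeats Eq. (5).

```python
import numpy as np

def A_r_omega(alpha, beta, R=1.0):
    """Eq. (5), with alpha and beta in radians."""
    Ca, Sa, Cb, Sb = np.cos(alpha), np.sin(alpha), np.cos(beta), np.sin(beta)
    s3 = np.sqrt(3.0)
    return (R / 2.0) * np.array([
        [-2 * Ca * Cb,      -2 * Sb,           2 * Sa * Cb],
        [Ca * Cb + s3 * Sb, Sb - s3 * Ca * Cb, 2 * Sa * Cb],
        [Ca * Cb - s3 * Sb, Sb + s3 * Ca * Cb, 2 * Sa * Cb]])

def kappa_frobenius(alpha, beta):
    """Frobenius condition number of J_omega = r * inv(A_r_omega); the scalar r cancels out."""
    M = A_r_omega(alpha, beta)
    return np.linalg.norm(M, 'fro') * np.linalg.norm(np.linalg.inv(M), 'fro') / 3.0

# Points on the hippopede 3 sin^2(alpha) cos^2(beta) = 1 (values from Fig. 7).
for alpha_deg, beta_deg in [(35.26, 0.0), (50.0, 41.1), (65.0, 50.43), (80.0, 54.11)]:
    k = kappa_frobenius(np.deg2rad(alpha_deg), np.deg2rad(beta_deg))
    print(alpha_deg, beta_deg, round(k, 4))   # approximately 1 in every case
```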
If the rotating sphere were to carry a camera, a laser or a jet of some sort, then the reachable orientations would be limited by interferences with the omni-wheels.
Fig. 7: Kinematically isotropic wheel configurations in the parallel spherical wrist, for (α, β) = (35.26°, 0°), (50°, 41.1°), (65°, 50.43°) and (80°, 54.11°)

Therefore, a designer would be interested in choosing a small value of α, so as to maximize the field of view of the PSW. As a result, the following values have been assigned to the design parameters α and β:
α = 35.26°   (12)
β = 0°   (13)
in order to come up with a kinematically isotropic wheel configuration in the parallel spherical wrist and a large field of view. The actuated omni-wheels are mounted in pairs in order to ensure a good contact between them and the sphere. A CAD modeling of the final solution is represented in Fig. 1.
Conclusion
This paper presents the novel concept of mounting a parallel spherical wrist in series with a CDPR, while preserving a fully-parallel actuation scheme. As a result, the actuators always remain fixed to the base, thus avoiding the need to carry electric power to the end-effector and minimizing its size, weight and inertia. Another original contribution of this article is the determination of the kinematically isotropic wheel configurations in the parallel spherical wrist. These configurations allow the designer to obtain a very good primary image of the design choices. To our knowledge, these isotropic configurations were never reported before, although several researchers have studied and used omni-wheel-actuated spheres. Future work includes the development of a control scheme to drive the end-effector rotations while accounting for the displacements of its centre, and also making a small scale prototype of the robot.
Fig. 3: Parameterization of the parallel spherical wrist

Fig. 4: (a) Actuation and constraint wrenches applied on the end-effector of the spherical wrist; (b) Virtual ith leg with actuated prismatic joint

Fig. 5: Singular configurations of the parallel spherical wrist: (a) β = ±π/2; (b) α = π/2 and β = 0

Fig. 6: Inverse condition number of the forward Jacobian matrix A based on the Frobenius norm as a function of design parameters α and β
01688104 | en | [
"spi"
] | 2024/03/05 22:32:10 | 2013 | https://hal.science/hal-01688104/file/tagutchou_2013.pdf | J P Tagutchou
Dr L Van De Steene
F J Escudero Sanz
S Salvador
Gasification of Wood Char in Single and Mixed Atmospheres of H 2 O and CO 2
Keywords: biomass, gasification, kinetics, mixed atmosphere, reactivity
In gasification processes, char-H 2 O and char-CO 2 are the main heterogenous reactions that are responsible for carbon conversion into H 2 and CO. These two reactions are generally looked at independently without considering interactions between them. The objective of this work was to compare kinetics of each reaction alone to kinetics of each reaction in a mixed atmosphere of H 2 O and CO 2 . A char particle was gasified in a macro thermo gravimetry reactor at 900 ı C successively in H 2 O/N 2 , CO 2 /N 2 , and H 2 O/CO 2 /N 2 atmospheres.
INTRODUCTION
The process of biomass conversion to syngas (H 2 + CO) involves a number of reactions. The first step is drying and devolatilization of the biomass, which leads to the formation of gas (noncondensable species), tar (gaseous condensable species), and a solid residue called char. Gas and tar are generally oxidized to produce H 2 O and CO 2 .
The solid residue (the subject of this work) is converted to produce syngas (H 2 + CO) thanks to the following heterogeneous reactions:

C + H 2 O → CO + H 2   (1)

C + CO 2 → 2 CO   (2)

C + O 2 → CO/CO 2   (3)
Many studies have been conducted on char gasification in reactive H 2 O, CO 2 , or O 2 atmospheres. The reactivity of char during gasification processes depends on the reaction temperature and on the concentration of the reactive gas. Additionally, these heterogeneous reactions are known to be surface reactions, involving a so-called "reactive surface." While the role of temperature and reactive gas partial pressure are relatively well understood, clearly defining and quantifying the reactive surface remains a challenge. The surface consists of active sites located at the surface of pores where the adsorption/desorption of gaseous molecules takes place. The difficulty involved in determining this surface can be explained by a number of physical and chemical phenomena that play an important role in the gasification process:
(i) The whole porous surface of the char may not be accessible to the reactive gas, and may itself not be reactive. The pore size distribution directly influences the access of reactive gas molecules to active sites [START_REF] Roberts | A kinetic analysis of coal char gasification reactions at high pressures[END_REF]. It has been a common practice to use the total specific surface area measured using the standard BET test as the reactive surface. However, it has been established that a better indicator is the surface of only pores that are larger than several nm or tens of nm [START_REF] Commandré | The high temperature reaction of carbon with nitric oxide[END_REF]. (ii) As the char is heated to high temperatures, a reorganization of the structure occurs. The concentration of available active sites of carbon decreases and this has a negative impact on the reactivity of the char. This phenomenon is called thermal deactivation. (iii) The minerals present in the char have a catalytic effect on the reaction and help increase the reactivity of the char. Throughout the gasification process, there is a marked increase in the mass fraction of catalytic elements contained in the char with a decrease in the mass of the carbon.
Due to the complexity of the phenomena and the difficulty to distinguish the influence of each phenomenon on reactivity, a surface function (referred to as SF in this article) is usually introduced in models to describe the gasification of carbon and to globally account for all of the physical phenomenon [START_REF] Sorensen | Determination of reactivity parameters of model carbons, cokes and flame-chars[END_REF][START_REF] Gobel | Dynamic modelling of char gasification in a fixed-bed[END_REF].
While single H 2 O and CO 2 atmospheres have been extensively studied, only a few authors have studied the gasification of a charcoal biomass in mixed atmospheres.
The kinetic model classically proposed for the gasification of carbon residues is as follows:

dm(t)/dt = -R(t) · m(t)   (4)
The reactivity of charcoal with a reactant j is often split into the intrinsic reactivity r j , which only depends on the temperature T and partial pressure p of the reactive gas, and the surface function F:

R(t) = F(X(t)) · r j (T, p)   (5)
As discussed above, the surface function F depends on many phenomena. In a simplifying approach, many authors express it as a function of the conversion X.
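To make Eqs. (4) and (5) concrete, the sketch below integrates the mass balance for an illustrative char particle. The constant surface function, the intrinsic reactivity value and the ash mass used here are placeholders chosen by us, not fitted parameters from this work.

```python
import numpy as np
from scipy.integrate import solve_ivp

def gasification_mass(m0, m_ash, r_j, F, t_end, n=200):
    """Integrate dm/dt = -F(X) * r_j * m, with X = (m0 - m) / (m0 - m_ash)."""
    def dmdt(t, m):
        X = (m0 - m[0]) / (m0 - m_ash)
        return [-F(min(X, 0.999)) * r_j * m[0]]
    t_eval = np.linspace(0.0, t_end, n)
    sol = solve_ivp(dmdt, (0.0, t_end), [m0], t_eval=t_eval, rtol=1e-8)
    X = (m0 - sol.y[0]) / (m0 - m_ash)
    return sol.t, sol.y[0], X

# Illustrative run: constant surface function and an arbitrary intrinsic reactivity.
t, m, X = gasification_mass(m0=1.0, m_ash=0.014, r_j=5e-4,
                            F=lambda X: 1.0, t_end=3600.0)
print(round(float(X[-1]), 3))  # conversion reached after one hour in this toy case
```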
METHODOLOGY
Using a thermogravimetry (macro-TG) apparatus, gasification of char particles was characterized in three different reactive atmospheres: single H 2 O atmosphere, single CO 2 atmosphere, and a mixed atmosphere containing both CO 2 and H 2 O.
Experimental Set-up
The macro-TG reactor used in this work is described in detail in [START_REF] Mermoud | Influence of the pyrolysis heating rate on the steam gasification rate of large wood char particles[END_REF] and is presented in Figure 1. It consists of positioning several charcoal particles in a grid basket inside the reactor at atmospheric pressure. The reactor is swept by the oxidizing agent (H 2 O or CO 2 in N 2 ) at a controlled temperature. The particles are continuously weighed to monitor conversion of the charcoal.
The particles were left in the hot furnace swept by nitrogen and maintained until their weight stabilized, attesting to the removal of possible residual volatile matter or re-adsorbed species. The atmosphere then turned into a gasifying atmosphere, marking the beginning of the experiment.
Preparation and Characterization of the Samples
The material used in this study was charcoal from maritime pine wood chips. Charcoal was produced using a pilot scale screw pyrolysis reactor. The pyrolysis operating conditions were chosen to produce a char with high fixed carbon content, i.e., a temperature of 750 °C, a 1 h residence time, and a 15 kg/h flow rate in a 200-mm internal diameter electrically heated screw. Based on previous studies, the heating rate in the reactor was estimated to be 50 °C/min [START_REF] Fassinou | Pyrolysis of Pinus pinaster in a two-stage gasifier: Influence of processing parameters and thermal cracking of tar[END_REF].
After pyrolysis, samples with a controlled particle size were prepared by sieving, and the thickness of particles was subsequently measured using an electronic calliper. Particles with a thickness of 1.5 and 5.5 mm were selected for all the experiments. Table 1 lists the results of proximate and ultimate analysis of the charcoal particles. The amount of fixed carbon was close to 90%, attesting to the high quality of the charcoal. The amount of ash, a potential catalyzer, was 1.4%.
GASIFICATION OF CHARCOAL IN SINGLE ATMOSPHERES
Operating Conditions
All experiments were carried out at a temperature of 900 °C and at atmospheric total pressure. For each gasifying atmosphere, the mole fraction was chosen to cover values encountered in industrial reactors; experiments were performed at 10, 20, and 40% mole fraction for both H 2 O and CO 2 . In order to deal with the variability of the composition of biomass chips, each experiment was carried out with three to five particles in the grid basket. Care was taken to ensure there was no interaction between the particles. Each experiment was repeated at least three times.
Results and Interpretations
From the mass m(t) at any time, the conversion progress X was calculated according to Eq. (6):

X(t) = (m 0 - m(t)) / (m 0 - m ash)   (6)

where m 0 and m ash represent, respectively, the initial mass of the char and the mass of ash at the end of the process. Figure 2 shows the conversion progress versus time for all the experiments. For char-H 2 O experiments, good repeatability was observed. Before 50% conversion, dispersion was small (<5%), while after 50% conversion, it could reach 10%. An average gasification rate was calculated for each experiment at X = 0.5 as 0.5/t (in s⁻¹). It was 2.5 times larger in 40% steam than in 10% steam.
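Equation (6) and the average-rate definition above are straightforward to apply to a recorded mass signal; the sketch below does so on a synthetic mass curve that merely stands in for a macro-TG record (the decay constant and ash mass are invented values).

```python
import numpy as np

def conversion(t, m, m_ash):
    """Eq. (6): conversion progress from a measured mass signal m(t)."""
    return (m[0] - m) / (m[0] - m_ash)

def average_rate_at_half_conversion(t, X):
    """Average gasification rate 0.5 / t(X = 0.5), in 1/s (X must be increasing)."""
    t_half = np.interp(0.5, X, t)
    return 0.5 / t_half

# Synthetic mass signal standing in for a macro-TG record (not measured data).
t = np.linspace(0.0, 3000.0, 301)
m = 0.014 + (1.0 - 0.014) * np.exp(-1.2e-3 * t)
X = conversion(t, m, m_ash=0.014)
print(f"{average_rate_at_half_conversion(t, X):.2e} 1/s")
```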
For char-CO 2 experiments, much larger dispersion was observed. It is difficult to give an explanation for this result. The gasification rate was 2.4 times higher in 40% CO 2 than in 10% CO 2 . Moreover, the results revealed a strange evolution in 20% CO 2 : the reaction was considerably slowed down after 60% conversion. This was also observed by [START_REF] Standish | Gasification of single wood charcoal particles in CO 2[END_REF] during their experiments on gasification of charcoal particles in CO 2 at a concentration of 20% CO 2 .
At a given concentration (for instance 40%) steam gasification was on average three times faster than CO 2 gasification.
Determination of Surface Functions (SF)
In practice, the SF can be derived without using a model by plotting R/R 50 (where R 50 is the reactivity at X = 50%). The reactivity R was obtained by derivation of the X curves. It was not possible to plot the values of SF when X tends towards 1 because, by the end of the experiment, the decrease in mass was very small, leading to a signal/noise ratio too small to enable correct derivation of the signal and calculation of R. At the beginning of the experiments, the derivative was also too noisy for accurate determination. Thus, for small values of X ranging from zero to 0.15, F(X) was assumed to be constant and equal to F(X = 0.15). In addition, from a theoretical point of view, F(X) should be determined using intrinsic values of R, i.e., from experiments in which no limitation by heat or mass transfer occurs. In practice, it has been shown in the literature that experiments with larger particles can be used [START_REF] Sorensen | Determination of reactivity parameters of model carbons, cokes and flame-chars[END_REF].
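The procedure just described can be sketched as follows. Since the exact reactivity definition is not restated in this extract, the sketch assumes the common convention R = (dX/dt)/(1 - X) and normalises at X = 0.5; only that normalisation matters for F(X).

```python
import numpy as np

def surface_function(t, X, X_grid=np.linspace(0.15, 0.9, 76)):
    """F(X) = R(X) / R(X = 0.5), with R taken here as (dX/dt) / (1 - X).

    The char/ash correction is neglected in this convention; the curve is
    normalised at X = 0.5 as in the text above.
    """
    R = np.gradient(X, t) / (1.0 - X)
    R_on_grid = np.interp(X_grid, X, R)   # requires X to be monotonically increasing
    R50 = np.interp(0.5, X, R)
    return X_grid, R_on_grid / R50

# Synthetic, noise-free conversion curve used only to exercise the function.
t = np.linspace(0.0, 3000.0, 601)
X = 1.0 - np.exp(-1.0e-3 * t)
Xg, F = surface_function(t, X)
print(round(float(np.interp(0.5, Xg, F)), 3))   # 1.0 by construction for this curve
```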
An average value for repeatability experiments was then determined and is plotted in Figure 5. From these results, polynomials were derived for F .X/, as shown in Table 2. It was clearly observed that the 5th order was the most suitable to fit simultaneously all the experimental results of F .X/ in the different atmospheres with the best correlation coefficients. The results show that except in 20% CO 2 , the SF are monotonically increasing functions. For this representation, where the SF are normalized to 1 at X D 0:5, the plots indicate a small increase (from 0.6 to 1) when X increases from 0.1 to 0.5, and a very strong increase (to 4 or 5) when X tends towards 0.9. In experiments with 10, 20, and 40% H 2 O, the SF appeared not to be influenced by the concentration of steam. When CO 2 was the gasifying agent, a strong influence of the concentration was observed, confirming the strange behavior observed in Figure 2 in 20% CO 2 . The function for 10% CO 2 was similar to that of H 2 O (whatever the concentration).
A decreasing SF was found with 20% CO 2 for X between 0.6 and 0.75. This evolution has never previously been reported in the literature. Referring to the discussion about the phenomena that are taken into account in the SF, it is not possible to attribute this irregular shape to a physical phenomenon.
Figure 6 plots several SF from the literature, normalized at X D 0:5 to enable comparison. Expressions, such as ˛-order of .1 X/, and polynomial forms commonly used for biomass were retained. The SF obtained in 10% H 2 O, which is similar to that obtained in 40% CO 2 , has been added in the figure. It can be observed that up to 50% conversion, most of the SF published in the literature are similar. At higher conversions, all SF follow an exponential type function, but differ significantly in their rate of increase. The results of the authors' experiments (10% H 2 O) are within the range of values reported in the literature.
GASIFICATION OF CHARCOAL IN H 2 O C CO 2 ATMOSPHERES
To investigate mixed atmospheres, experiments were conducted using 20% H 2 O with the addition of, alternatively, 10, 20, and 40% CO 2 . The results of conversion versus time are plotted in Figure 7. For each mixed atmosphere, the average results obtained in the single atmospheres are given as references. Rather good repeatability was observed. It can be seen that adding CO 2 to H 2 O accelerated steam gasification. Indeed, mixing 10, 20, and 40% of CO 2 with 20% of H 2 O increased the rate of gasification by 20, 33, and 57%, respectively, compared to the rate of gasification in 20% H 2 O alone. This is a new result, since in the literature, studies on biomass gasification concluded that steam gasification was inhibited by CO 2 [START_REF] Ollero | The CO 2 gasification kinetics of olive residue[END_REF]. In the 20% H 2 O + 10% CO 2 atmosphere, the average gasification rate was 0.745 × 10⁻³ s⁻¹, which is approximately equal to the sum of the gasification rates obtained in the two separate atmospheres: 0.740 × 10⁻³ s⁻¹. This was also the case for the mixed atmosphere 20% H 2 O + 20% CO 2 . In the 20% H 2 O + 40% CO 2 atmosphere, the average gasification rate was 1.19 × 10⁻³ s⁻¹, i.e., 20% higher than the sum of the gasification rates obtained in the two single atmospheres. In other words, cooperation between CO 2 and H 2 O led to unexpected behaviors. A number of considerations can help interpret this result.
First, the geometrical structure of the two molecules-polar and non-linear for H 2 O and linear and apolar for CO 2 -predestines them to different adsorption mechanisms on potentially different active carbon sites [START_REF] Slasli | Modelling of water adsorption by activated carbons: Effects of microporous structure and oxygen content[END_REF].
The presence of hydrophilic oxygen, such as [-O], at the surface of char leads to the formation of hydrogen bonds, which could hinder H 2 O adsorption and favor that of CO 2 [START_REF] Stoeckli | The characterization of microporosity in carbons with molecular sieve effects[END_REF]. In the same way, as it is a non-organic molecule, H 2 O can only access hydrophobic sites while CO 2 , which is an organic molecule, can access both hydrophilic and hydrophobic sites.
According to [START_REF] Stoeckli | The characterization of microporosity in carbons with molecular sieve effects[END_REF], due to constriction or molecular sieve effects, CO 2 molecules have access to micropores of materials while those of H 2 O, which are assumed to be bigger, do not.
For one of the previous reasons or for any other reason, CO 2 molecules can access internal micropores more easily than H 2 O molecules, and can therefore open certain pores, making them accessible to H 2 O molecules.
The assumption that H 2 O and CO 2 molecules reacted with different sites and that no competition occurred is not sufficient to explain the 20% increase in the gasification rate under mixed atmospheres. The last point above can be proposed as an explanation, but a more precise explanation requires further research work. [START_REF] Roberts | Char gasification in mixtures of CO 2 and H 2 O: Competition and inhibition[END_REF] recently concluded that CO 2 has an inhibitory effect on H 2 O gasification, in contradiction to the authors' results. It is believed that the conclusions of [START_REF] Roberts | Char gasification in mixtures of CO 2 and H 2 O: Competition and inhibition[END_REF] are valid in their experimental conditions only, and with a little hindsight may be called into question.
Figure 8 gives the plots of SF obtained with the three mixed atmospheres and for all repeatability tests. Again, the repeatability of experiments was excellent until X = 0.6; this attests to the good quality of experiments and confirms that the variations in SF after 60% conversion are due to specific phenomena. Figure 9 compares all the average SF obtained in mixed atmospheres. From these curves, it can be seen that the curve shape is similar when the amount of CO 2 is modified from 10 to 40%. Thus, an average 5th-order polynomial expression for mixed atmospheres is given in Eq. (7):
F(X) = 130.14 X⁵ - 264.67 X⁴ + 192.38 X³ - 57.90 X² + 7.28 X + 0.25   (7)
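Equation (7) can be evaluated directly; the quick check below confirms that the polynomial is close to 1 at X = 0.5, consistent with the normalisation of the SF, and coefficients of this kind could be re-fitted from averaged F(X) curves with np.polyfit.

```python
import numpy as np

# Coefficients of Eq. (7), highest degree first (mixed H2O/CO2 atmospheres).
coeffs = [130.14, -264.67, 192.38, -57.90, 7.28, 0.25]
F_mixed = np.poly1d(coeffs)

for X in (0.1, 0.5, 0.9):
    print(X, round(float(F_mixed(X)), 3))
# F(0.5) is close to 1; F(X) rises steeply as X approaches 0.9.
```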
CONCLUSION
The gasification of wood char particles in three atmospheres, i.e., H 2 O, CO 2 , and H 2 O/CO 2 , was experimentally investigated. The formulation adopted makes it possible to split the reactivity R(t) into kinetic parameters, r j , and all physical aspects, i.e., reactive surface evolution, thermal annealing and catalytic effects, lumped into a surface function SF, F(X), as follows:

dm(t)/dt = -R(t) · m(t)   with   R(t) = F(X(t)) · r j (T, p)
An important result of this article is that the addition of CO 2 to a H 2 O atmosphere led to an acceleration of the gasification kinetics. In a mixture of 20% H 2 O and 40% CO 2 , the gasification rate was 20% higher than the sum of the gasification rates in the two single atmospheres.
FIGURE 2 Conversion progress versus time during gasification at 900 °C in single atmospheres (10, 20, and 40% H 2 O and 10, 20, and 40% CO 2 ). (color figure available online)

FIGURE 3 SF for the two cases of 1.5 mm and 5.5 mm particles in steam atmosphere. (color figure available online)

FIGURE 4 SF for each experimental result obtained in a single atmosphere.

FIGURE 5 Average SF obtained in each single atmosphere. (color figure available online)

FIGURE 7 Experimental results obtained in mixed atmospheres (A: 10% H 2 O and 20% CO 2 ; B: 20% H 2 O and 20% CO 2 ; and C: 20% H 2 O and 40% CO 2 ). For each mixed atmosphere, the corresponding average experimental results for single atmospheres are shown in thick solid line (20% H 2 O single atmosphere) and in thick dashed lines (CO 2 single atmospheres). (color figure available online)

FIGURE 8 SF obtained in different mixed atmospheres for all experimental repeatability tests.

FIGURE 9 Average SF obtained in the different mixed atmospheres.
FIGURE 1 Macro thermogravimetry experimental apparatus. (1) Electric furnace; (2) Quartz tube; (3) Extractor; (4) Preheater; (5) Evaporator; (6) Water feeding system; (7) Water flow rate; (8) Leakage compensation; (9) Suspension basket; (10) Weighing system; (T i ) Regulation thermocouples; (M i ) Mass flow meter.
TABLE 1 Proximate and Ultimate Analysis of Charcoal from Maritime Pine Wood Chips

Proximate analysis, mass %: M 1.8 | VM (dry) 4.9 | FC (dry) 93.7 | Ash (dry) 1.4
Ultimate analysis, mass %: C (±0.3%) 89.8 | H (±0.3%) 2.2 | O (±0.3%) 6.1 | N (±0.1%) 0.1 | S (±0.005%) 0.01

M: Moisture content; VM: Volatile matter; FC: Fixed carbon.
01758141 | en | [
"spi.meca.geme",
"spi.auto"
] | 2024/03/05 22:32:10 | 2018 | https://hal.science/hal-01758141/file/DesignRCDPRs_Gagliardini_Gouttefarde_Caro_Final_HAL.pdf | Lorenzo Gagliardini
email: lorenzo.gagliardini.at.work@gmail.com
Marc Gouttefarde
email: marc.gouttefarde@lirmm.fr
Stéphane Caro
email: stephane.caro@ls2n.fr
Design of Reconfigurable Cable-Driven Parallel Robots
This chapter is dedicated to the design of Reconfigurable Cable-Driven Parallel Robots (RCDPRs) where the locations of the cable exit points on the base frame can be selected from a finite set of possible values. A task-based design strategy for discrete RCDPRs is formulated. By taking into account the working environment, the designer divides the prescribed workspace or trajectory into parts. Each part shall be covered by one configuration of the RCDPR. Placing the cable exit points on a grid of possible locations, numerous CDPR configurations can be generated. All the possible configurations are analysed with respect to a set of constraints in order to determine the parts of the prescribed workspace or trajectory that can be covered. The considered constraints account for cable interferences, cable collisions, and wrench feasibility. The configurations satisfying the constraints are then compared in order to find the combinations of configurations that accomplish the required task while optimising one or several objective function(s). A case study comprising the design of a RCDPR for sandblasting and painting of a three-dimensional tubular structure is finally presented. Cable exit points are reconfigured, switching from one side of the tubular structure to another, until three external sides of the structure are covered. The optimisation includes the minimisation of the number of cable attachment/detachment operations required to switch from one configuration to another one, minimisation of the size of the RCDPR, and the maximisation of the RCDPR stiffness.
Introduction
Cable-Driven Parallel Robots (CDPRs) form a particular class of parallel robots whose moving platform is connected to a fixed base frame by cables. Hereafter, the connection points between the cables and the base frame will be referred to as exit points. The cables are coiled on motorised winches. Passive pulleys may guide the cables from the winches to the exit points. A central control system coordinates the motors actuating the winches. Thereby, the pose and the motion of the moving platform are controlled by modifying the cable lengths. An example of CDPR is shown in Fig. 1. CDPRs have several advantages such as a relatively low mass of moving parts, a potentially very large workspace due to size scalibility, and reconfiguration capabilities. Therefore, they can be used in several applications, e.g. heavy payload handling and airplane painting [START_REF] Albus | The NIST spider, a robot crane[END_REF], cargo handling [START_REF] Holland | Cable array robot for material handling[END_REF], warehouse applications [START_REF] Hassan | Analysis of large-workspace cable-actuated manipulator for warehousing applications[END_REF], large-scale assembly and handling operations [START_REF] Pott | Large-scale assembly of solar power plants with parallel cable robots[END_REF][START_REF] Williams | Contour-crafting-cartesian-cable robot system concepts: Workspace and stiffness comparisons[END_REF], and fast pick-and-place operations [START_REF] Kamawura | High-speed manipulation by using parallel wire-driven robots[END_REF][START_REF] Maeda | On design of a redundant wire-driven parallel robot WARP manipulator[END_REF][START_REF] Pott | IPAnema: a family of cable-driven parallel robots for industrial applications[END_REF]. Other possible applications include the broadcasting of sporting events, haptic devices [START_REF] Fortin-Coté | An admittance control scheme for haptic interfaces based on cable-driven parallel mechanisms[END_REF][START_REF] Gallina | 3-DOF wire driven planar haptic interface[END_REF][START_REF] Rosati | Design, implementation and clinical test of a wire-based robot for neurorehabilitation[END_REF], support structures for giant telescopes [START_REF] Yao | Dimensional optimization design for the four-cable driven parallel manipulator in FAST[END_REF][START_REF] Yao | A modeling method of the cable driven parallel manipulator for FAST[END_REF], and search and rescue deployable platforms [START_REF] Merlet | Kinematics of the wire-driven parallel robot MARIONET using linear actuators[END_REF][START_REF] Merlet | A portable, modular parallel wire crane for rescue operations[END_REF]. Recent studies have been performed within the framework of an ANR Project CoGiRo [2] where an efficient cable layout has been proposed [START_REF] Gouttefarde | Geometry selection of a redundantly actuated cable-suspended parallel robot[END_REF] and used on a large CDPR prototype called CoGiRo.
CDPRs can be used successfully if the tasks to be fulfilled are simple and the working environment is not cluttered. When these conditions are not satisfied, Reconfigurable Cable-Driven Parallel Robots (RCDPRs) may be required to achieve the prescribed goal. In general, several parameters can be reconfigured, as described in Section 2. Moreover, these reconfiguration parameters can be selected in a discrete or a continuous set of possible values.
Preliminary studies on RCDPRs were performed in the context of the NIST RoboCrane project [START_REF] Bostelman | Cable-based reconfigurable machines for large scale manufacturing[END_REF]. Izard et al. [START_REF] Izard | A reconfigurable robot for cable-driven parallel robotic research and industrial scenario proofing[END_REF] also studied a family of RCDPRs for industrial applications. Rosati et al. [START_REF] Rosati | On the design of adaptive cable-driven systems[END_REF][START_REF] Zanotto | Sophia-3: A semiadaptive cable-driven rehabilitation device with a tilting working plane[END_REF] and Zhou et al. [START_REF] Zhou | Tension distribution shaping via reconfigurable attachment in planar mobile cable robots[END_REF][START_REF] Zhou | Stiffness modulation exploiting configuration redundancy in mobile cable robots[END_REF] focused their work on planar RCDPRs. Recently, Nguyen et al. [START_REF] Nguyen | On the analysis of large-dimension reconfigurable suspended cable-driven parallel robots[END_REF][START_REF] Nguyen | Study of reconfigurable suspended cable-driven parallel robots for airplane maintenance[END_REF] proposed reconfiguration strategies for large-dimension suspended CDPRs mounted on overhead bridge cranes. Contrary to these antecedent studies, this chapter considers discrete reconfigurations where the locations of the cable exit points are selected from a finite set (grid) of possible values. Hereafter, reconfigurations are limited to the cable exit point locations and the class of RCDPRs whose exit points can be placed on a grid of positions is defined as discrete RCDPRs. Figure 2 shows the prototype of a reconfigurable cable-driven parallel robot developed at IRT Jules Verne within the framework of CAROCA project. This prototype is reconfigurable for the purpose of being used for industrial operations in a cluttered environment. Indeed, its pulleys can be displaced onto the robot frame faces such that the collisions between the cables and the environment can be avoided during operation. The prototype has eight cables, can work in both suspended and fully constrained configurations and can carry up to 400 kg payloads. It contains eight motor-geardhead-winch sets. The nominal torque and velocity of each motor are equal to 15.34 Nm and 2200 rpm, respectively. The ratio of the twp-stage gearheads is equal to 40. The diameter of the Huchez TM industrial winches is equal to 120 mm. The CAROCA prototype is also equipped with 6 mm non-rotating steel cables and a B&R control board using Ethernet Powerlink TM communication.
To the best of our knowledge, no design strategy has been formulated in the literature for discrete RCDPRs. Hence, Section 4 presents a novel task-based design strategy for discrete RCDPRs. By taking into account the working environment, the designer divides the prescribed workspace or trajectory into n t parts. Each part will be covered by one and only one configuration of the RCDPR. Then, for each configuration, the designer selects a cable layout, parametrising the position of the cable exit points. The grid of locations where the cable exit points can be located is defined by the designer as well. Placing the exit points on the provided set of possible locations, it is possible to generate many CDPR configurations. All the possible configurations are analysed with respect to a set of constraints in order to verify which parts of the prescribed workspace or trajectory can be covered. The configurations satisfying the constraints are compared in order to find the combinations of n t configurations that accomplish the required task and optimise at the same time one or several objective function(s). A set of objective functions, dedicated to RCD-PRs, is provided in Section 4.2. These objective functions aim at maximising the productivity (production cycle time) and reducing the reconfiguration time of the cable exit points. Let us note that if the design strategy introduced in Section 4 does not produce satisfactory results, the more advanced but complex method recently introduced by the authors in [START_REF] Gagliardini | Discrete reconfiguration planning for cable-driven parallel robots[END_REF] can be considered.
In order to analyse the advantages and limitations of the proposed design strategy, a case study is presented in Section 5. It involves the design of an RCDPR for sandblasting and painting of a three-dimensional tubular structure. The tools performing these operations are embarked on the RCDPR moving platform, which follows the profile of the tubular structure. Each side of the tubular structure is associated to a single configuration. Cable exit points are reconfigured switching from one side of the tubular structure to another, until three external sides of the structure are sandblasted and painted. The cable exit point locations of the three configurations to be designed are optimised so that the number of cable attachment/detachment operations required to switch from a configuration to another is minimised. The size of the RCDPR is also minimised while its stiffness is maximised along the trajectories to be followed.
Classes of RCDPRs
CDPRs usually consist of several standard components: a fixed base, a moving platform, a set of m cables connecting the moving platform to the fixed base through a set of pulleys, a set of m winches, gearboxes and actuators, and a set of internal and external sensors. These components are usually dimensioned in such a way that the geometry of the CDPR does not vary during the task. However, by modifying the CDPR geometry, the capabilities of CDPRs can be improved. RCDPRs are then defined as CDPRs whose geometry can be adapted by reconfiguring part of their components. RCDPRs can then be classified according to the components which are reconfigured and the nature of the reconfigurations.

Fig. 3: CableBot designs with cable exit points fixed to a grid (left) and with cable exit points sliding on rails (right). Courtesy of the European FP7 Project CableBot.
Reconfigurable Elements and Technological Solutions
Part of the components of an RCDPR may be reconfigured in order to improve its performances. The geometry of the RCDPRs is mostly dependent on the locations of the cable exit points, the locations of the cable attachment points on the moving platform, and the number of cables.
The locations of the cable exit points A i , i = 1, . . . , m have to be reconfigured to avoid cable collisions when the environment is strongly cluttered. Indeed, modifying the cable exit point locations can increase the RCDPR workspace size. Furthermore, the reconfiguration of cable exit points provides the possibility to modify the layout of the cables and improve the performance of the RCDPR (such as its stiffness). From a technological point of view, the cable exit points A i are displaced by moving the pulleys orienting the cables and guiding them to the moving platform. Pulleys are connected on the base of the RCDPR. They can be displaced by sliding them on linear guides or fixing them on a grid of locations, as proposed in the concepts of Fig. 3. These concepts have been developed in the framework of the European FP7 Project CableBot [7, [START_REF] Nguyen | On the study of large-dimension reconfigurable cable-driven parallel robots[END_REF][START_REF] Blanchet | Contribution à la modélisation de robots à câbles pour leur commande et leur conception[END_REF]. Alternatively, pulleys can be connected to several terrestrial or aerial unmanned vehicles, as proposed in [START_REF] Jiang | The inverse kinematics of cooperative transport with multiple aerial robots[END_REF][START_REF] Manubens | Motion planning for 6D manipulation with aerial towed-cable systems[END_REF][START_REF] Zhou | Analysis framework for cooperating mobile cable robots[END_REF].
The geometry of the RCDPR and the cable layout can be modified as well by displacing the cable anchor points on the moving platform, B i , i = 1, . . . , m. Changing the locations of points B i allows the stiffness of the RCPDR as well as its wrench (forces and moments) capabilities to be improved. A modification of the cable anchor points may also result in an increase of the workspace dimensions. The reconfiguration of points B i can be performed by attaching and detaching the cables at different locations on the moving platform.
The number m of cables has a major influence on the performance of the RCDPR. Using more cables than DOFs can enlarge the workspace of suspended CDPRs [START_REF] Gouttefarde | Geometry selection of a redundantly actuated cable-suspended parallel robot[END_REF] or yields fully constrained CDPRs where internal forces can reduce vibrations, e.g. [START_REF] Kamawura | High-speed manipulation by using parallel wire-driven robots[END_REF]. However, the larger the number of cables, the higher the risk of collisions. In this case, the reconfiguration can be performed by attaching or detaching one or several cable(s) to/from the moving platform and possibly to/from a new set of exit points. Furthermore, by attaching and detaching one or several cable(s), the architecture of the RCDPR can be modified, permitting both suspended and fully constrained CDPR configurations.
Discrete and Continuous Reconfigurations
According to the reconfigured components and the associated technology, reconfiguration parameters can be selected over a continuous or discrete domain of values, as summarised in Table 1. Reconfigurations performed over a discrete domain consist of selecting the reconfigurable parameters within a finite set of values. Modifying the number of cables is a typical example of a discrete reconfiguration. Discrete reconfigurations also apply to cable anchor points, when the cables can be installed on the moving platform at a (discrete) number of specific locations, e.g. its corners. Another example of discrete RCDPR is represented in Fig. 3 (left). In this concept, developed in the framework of the European FP7 Project CableBot, cable exit points are installed on a predefined grid of locations on the ceiling. Discrete reconfigurations are performed off-line, interrupting the task the RCDPR is executing. For this reason, the set up time for these RCDPRs can be relative long. On the contrary, RCDPRs with discrete reconfigurations can use the typical control schemes already developed for CDPRs. Furthermore, they do not require to motorise the cable exit points, thereby avoiding a large increase of the CDPR cost. Reconfigurations performed over a continuous domain provide the possibility of selecting the geometric parameters over a continuous set of values delimited by upper and lower bounds. A typical example of continuous RCDPR is represented in Fig. 3 (right), which illustrates another concept developed in the framework of the European FP7 Project CableBot. In this example, the cable exit points slide on rails fixed on the ceiling. Reconfigurations can be performed on-line, by continuously modifying the reconfigurable parameters during the task execution. The main advantages of continuous reconfigurations are the reduced set-up time and the local optimisation of the RCDPR properties. However, modifying the locations of the exit points in real time may require the design of a complex control scheme. Furthermore, the cost of RCDPRs with continuous reconfigurations is significantly higher than the cost of discrete RCDPRs when the movable pulleys are actuated.
Nomenclature for RCDPRs
Similarly to CDPRs, an RCDPR is mainly composed of a moving platform connected to the base through a set of cables, as illustrated in Fig. 4. The moving platform is driven by m cables, which are actuated by winches fixed on the base frame of the robot. The cables are routed by means of pulleys to exit points from which they extend toward the moving platform. The main difference between this chapter and previous works on CDPRs is the possibility to displace the cable exit points on a grid of possible locations.
As illustrated in Fig. 4, F b , of origin O b and axes x b , y b , z b , denotes a fixed reference frame while F p of origin O p and axes x p , y p and z p , is fixed to the moving platform and thus called the moving platform frame. The anchor points of the ith cable on the platform are denoted as B i,c , where c represents the configuration number. For the c-th configuration, the exit point of the i-th cable is denoted as A i,c , i = 1, . . . , m. The Cartesian coordinates of each point A i,c , with respect to F b , are given by the vector a b i,c while b b i,c is the position vector of point B i,c expressed in F b . Neglecting the cable mass, the vector l b i,c directed along the i-th cable from point B i,c to point A i,c can be written as:
$\mathbf{l}^b_{i,c} = \mathbf{a}^b_{i,c} - \mathbf{t} - \mathbf{R}\,\mathbf{b}^p_{i,c}, \quad i = 1, \dots, m \quad (1)$
where t is the moving platform position, i.e. the position vector of O p in F b , and R is the rotation matrix defining the orientation of the moving platform, i.e. the orientation of F p with respect to F b . The length of the i-th cable is then defined by the 2-norm of the cable vector, namely, $l_{i,c} = \|\mathbf{l}^b_{i,c}\|_2$, i = 1, . . . , m. In order to balance an external wrench (combination of a force and a moment), each cable generates on the moving platform a wrench proportional to its tension τ i , i = 1, . . . , m. The cables balance the external wrench w e according to the following equation [START_REF] Roberts | On the inverse kinematics, statics, and fault tolerance of cable-suspended robots[END_REF]:
Wτ + w e = 0 (2)
The cable tensions are collected into the vector τ = [τ 1 , . . . , τ m ] and multiplied by the wrench matrix W whose columns are composed of the unit wrenches w i exerted by the cables on the platform:
$\mathbf{W} = \begin{bmatrix} \mathbf{d}^b_{1,c} & \mathbf{d}^b_{2,c} & \dots & \mathbf{d}^b_{m,c} \\ \mathbf{R}\mathbf{b}^p_{1,c} \times \mathbf{d}^b_{1,c} & \mathbf{R}\mathbf{b}^p_{2,c} \times \mathbf{d}^b_{2,c} & \dots & \mathbf{R}\mathbf{b}^p_{m,c} \times \mathbf{d}^b_{m,c} \end{bmatrix} \quad (3)$
where d b i,c , i = 1, . . . , m are the unit cable vectors associated with the c-th configuration:
$\mathbf{d}^b_{i,c} = \dfrac{\mathbf{l}^b_{i,c}}{l_{i,c}}, \quad i = 1, \dots, m \quad (4)$
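As an illustration of Eqs. (1)–(4), the following sketch computes the cable lengths and the wrench matrix for a given platform pose. The numerical values are hypothetical and only serve to show the structure of the computation.

```python
import numpy as np

def wrench_matrix(t, R, exit_pts, anchor_pts):
    """Cable vectors, lengths and wrench matrix W (Eqs. (1)-(4)).

    t          : (3,) moving-platform position expressed in F_b
    R          : (3,3) rotation matrix of F_p with respect to F_b
    exit_pts   : (m,3) coordinates a^b_i of the exit points in F_b
    anchor_pts : (m,3) coordinates b^p_i of the anchor points in F_p
    """
    l = exit_pts - t - anchor_pts @ R.T          # Eq. (1), one row per cable
    lengths = np.linalg.norm(l, axis=1)          # l_i = ||l^b_i||_2
    d = l / lengths[:, None]                     # Eq. (4), unit cable vectors
    moments = np.cross(anchor_pts @ R.T, d)      # R b^p_i x d^b_i
    W = np.vstack((d.T, moments.T))              # Eq. (3), 6 x m wrench matrix
    return W, lengths

# Hypothetical four-cable example (values are illustrative only)
t = np.array([0.0, 0.0, 1.0])
R = np.eye(3)
exit_pts = np.array([[2, 2, 3], [-2, 2, 3], [-2, -2, 3], [2, -2, 3]], dtype=float)
anchor_pts = 0.1 * np.array([[1, 1, 1], [-1, 1, 1], [-1, -1, 1], [1, -1, 1]], dtype=float)
W, lengths = wrench_matrix(t, R, exit_pts, anchor_pts)
print(W.shape, lengths)
```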
Design Strategy for RCDPRs
Similarly to CDPRs, the design of RCDPRs requires the dimensioning of all its components. In this chapter, the design of RCDPRs focuses on the selection of the cable exit point locations. The other components of the RCDPR are required to be chosen in advance.
Design Problem Formulation
The RCDPR design strategy proposed in this section consists of ten steps. The design can be formulated as a mono-objective or hierarchical multi-objective optimisation problem. The designer defines a prescribed workspace or moving platform trajectory and divides it into n t parts. Each part should be covered by one and only one configuration. The design variables are the locations of the cable exit points for the n t configurations covering the n t parts of the prescribed workspace or trajectory. The global objective functions investigated in this chapter (Section 4.2) aim to reduce the overall complexity of the RCDPR and the reconfiguration time. The optimisation is performed while verifying a set of user-defined constraints such as those presented in Section 4.3.
Step I. Task and Environment. The designer describes the task to be performed. He/She specifies the nature of the problem, defining if the motion of the moving platform is static, quasi-static or dynamic. According to the nature of the problem, the designer defines the external wrenches applied to the moving platform and, possibly, the required moving platform twist and accelerations. The prescribed workspace or trajectory of the moving platform is given. A description of the environment is provided as well, including the possible obstacles encountered during the task execution.
Step II. Division of the Prescribed Trajectory. Given the prescribed workspace or moving platform trajectory, the designer divides it into n t parts, assuming that each of them is accessible by one and only one configuration of the RCDPR. The division may be performed by trying to predict the possible collisions of the cables and the working environment.
Step III. Constant Design Parameters. The designer defines a set of constant design parameters and their values. The parameters are collected in the constant design parameter vector q.
Step IV. Design Variables and Layout Parametrisation. For each part of the prescribed workspace or moving platform trajectory, the designer defines the cable layout of the associated configuration. The cable layout associated with the t-th part of the prescribed workspace or trajectory defines the locations of the cable exit points, parametrised with respect to a set of n t,v design variables, u t,v , v = 1, . . . , n t,v . The design variables are defined as a discrete set of ε t,v values, [u] t,v , v = 1, . . . , n t,v .
Step V. RCDPR Configuration Set. For each part of the prescribed trajectory, the possible configurations, which can be generated by combining the values [u] t,v , v = 1, . . . , n t,v of the n t,v design variables, are computed. Therefore, $n_{t,C} = \prod_{v=1}^{n_{t,v}} \varepsilon_{t,v}$ possible configurations are generated for the t-th part of the prescribed workspace or trajectory.

Step VI. Constraint Functions. The user defines a set of n φ constraint functions, φ k , k = 1, . . . , n φ . These functions are applied to all possible configurations associated with the n t parts of the prescribed workspace or trajectory.

Step VII. Configuration Analysis. For each part of the prescribed workspace or trajectory, all the possible configurations generated at Step V are tested with respect to the n φ user-defined constraint functions. The n f,t configurations satisfying the constraints all over the t-th part of the prescribed workspace or trajectory are defined hereafter as feasible configurations.

Step VIII. Feasible Configuration Combination. The sets of n t configurations that lead to the achievement of the prescribed task are computed. Each set is composed by selecting one of the n f,t feasible configurations for each part of the prescribed workspace or trajectory. The number of feasible configuration sets generated during this step is equal to n C .

Step IX. Objective Functions. The designer defines one or more global objective function(s), V t , t = 1, . . . , n V , where n V is equal to the number of global objective functions taken into account. The global objective functions associated with RCDPRs do not focus solely on a single configuration. They analyse the properties of the combination of n t configurations comprising the RCDPR. If several global objective functions are to be solved simultaneously, the optimisation problem can classically be reduced to a mono-objective optimisation according to:

$V = \sum_{t=1}^{n_V} \mu_t V_t, \quad \mu_t \in [0, 1], \quad \sum_{t=1}^{n_V} \mu_t = 1 \quad (5)$
The weighting factors µ t , t = 1, . . . , n V , are defined according to the priority assigned to each objective function V t , each weighting factor lying between 0 and 1.
If several global optimisation functions have to be solved hierarchically, the designer will define those functions according to their order of priority, t = 1, . . . , n V , where V 1 has the highest priority and V n V the lowest one.
Step X. Discrete Optimisation Algorithm. The design problem is formulated as an optimisation problem and solved by analysing all the n C sets of feasible configurations. The analysis is performed with respect to the global objective functions defined at Step IX. The sets of n t configurations with the best global objective function value are determined. If a hierarchical multi-objective optimisation is required, the following procedure is applied:
a. The algorithm analyses the n C sets of feasible configurations with respect to the global objective function which currently has the highest priority, V t (the procedure is initialised with t = 1).
b. If only one set of configurations optimises V t , this solution is considered as the optimum. On the contrary, if n C,t multiple solutions optimise V t , the algorithm proceeds to the following step.
c. The algorithm analyses the n C,t sets of optimal solutions with respect to the global objective function with the next lower priority, V t+1 . Then, t = t + 1 and the procedure moves back to Step b.
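The hierarchical procedure of Step X can be sketched as follows; the candidate configuration sets and the objective-function values below are placeholders, and ties are resolved with a small numerical tolerance.

```python
def hierarchical_best(config_sets, objectives, tol=1e-9):
    """Step X (sketch): keep the configuration sets that successively
    minimise V_1, then V_2, ... until a single set (or a final tie) remains.

    config_sets : list of candidate sets of n_t configurations
    objectives  : list of functions V_t(config_set), ordered by priority
    """
    candidates = list(config_sets)
    for V in objectives:
        values = [V(c) for c in candidates]
        best = min(values)
        candidates = [c for c, v in zip(candidates, values) if v <= best + tol]
        if len(candidates) == 1:
            break
    return candidates

# Illustrative use with dummy objective values (placeholders)
sets = ["set_A", "set_B", "set_C"]
V1 = {"set_A": 16, "set_B": 16, "set_C": 18}.get        # e.g. number of exit points
V2 = {"set_A": 5104.0, "set_B": 6000.0, "set_C": 5104.0}.get  # e.g. size in m^3
print(hierarchical_best(sets, [V1, V2]))                 # -> ['set_A']
```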
Global Objective Functions
The design strategy proposed in the previous section aims to optimise the characteristics of the RCDPR. The optimisation may be performed with respect to one or several global objective functions. The objective functions used in this chapter are described hereafter.
RCDPR Size
The design optimisation problem may aim to minimise the size of the robot, defined as the convex hull of the cable exit points. The Cartesian coordinates of exit point A i,c are defined as a b i,c = [a x i,c , a y i,c , a z i,c ] T . The variables s x , s y and s z denote the lower bounds on the Cartesian coordinates of the cable exit points along the axes x b , y b and z b , respectively:
$s_x = \min\, a^x_{i,c}, \quad \forall i = 1, \dots, m, \; c = 1, \dots, n_t \quad (6)$
$s_y = \min\, a^y_{i,c}, \quad \forall i = 1, \dots, m, \; c = 1, \dots, n_t \quad (7)$
$s_z = \min\, a^z_{i,c}, \quad \forall i = 1, \dots, m, \; c = 1, \dots, n_t \quad (8)$

The upper bounds on the Cartesian coordinates of the RCDPR cable exit points along the axes x b , y b and z b are denoted by $\bar{s}_x$, $\bar{s}_y$ and $\bar{s}_z$, respectively:

$\bar{s}_x = \max\, a^x_{i,c}, \quad \forall i = 1, \dots, m, \; c = 1, \dots, n_t \quad (9)$
$\bar{s}_y = \max\, a^y_{i,c}, \quad \forall i = 1, \dots, m, \; c = 1, \dots, n_t \quad (10)$
$\bar{s}_z = \max\, a^z_{i,c}, \quad \forall i = 1, \dots, m, \; c = 1, \dots, n_t \quad (11)$

Hence, the objective function related to the size of the robot is expressed as follows:

$V = (\bar{s}_x - s_x)(\bar{s}_y - s_y)(\bar{s}_z - s_z) \quad (12)$
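A direct evaluation of Eqs. (6)–(12) is sketched below; the exit-point coordinates are illustrative values, not the case-study data.

```python
import numpy as np

def rcdpr_size(exit_point_sets):
    """Bounding-box objective of Eqs. (6)-(12).

    exit_point_sets : list of (m,3) arrays, one per configuration,
                      giving the exit-point coordinates a^b_{i,c} in F_b.
    """
    pts = np.vstack(exit_point_sets)   # all exit points of the n_t configurations
    s_min = pts.min(axis=0)            # [s_x, s_y, s_z]
    s_max = pts.max(axis=0)            # upper bounds
    return np.prod(s_max - s_min)      # Eq. (12)

# Illustrative values
cfg1 = np.array([[0, 0, 0], [4, 0, 0], [4, 3, 0], [0, 3, 3]], dtype=float)
cfg2 = np.array([[1, 1, 0], [5, 1, 0], [5, 2, 2], [1, 2, 2]], dtype=float)
print(rcdpr_size([cfg1, cfg2]))        # volume of the enclosing box
```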
Number of Cable Reconfigurations
According to the reconfiguration strategy proposed in this chapter, reconfiguration operations require the displacement of the cable exit points, and consequently attaching/detaching operations of the cables. These operations are time consuming.
Hence, an objective can be to minimise the number of reconfigurations, n r , defined as the number of exit point changes to be performed in order to switch from configuration C i to configuration C j . By reducing the number of cable attaching/detaching operations, the RCDPR set up time could be significantly reduced.
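Assuming the cables keep the same index in both configurations, the number of reconfigurations n_r can be evaluated as sketched below with illustrative coordinates.

```python
def number_of_reconfigurations(exit_pts_i, exit_pts_j, tol=1e-6):
    """n_r (sketch): number of exit points of configuration C_i that must be
    moved to obtain configuration C_j, cables being indexed consistently.

    exit_pts_i, exit_pts_j : lists of 3-tuples, one entry per cable.
    """
    moved = 0
    for a_i, a_j in zip(exit_pts_i, exit_pts_j):
        if max(abs(u - v) for u, v in zip(a_i, a_j)) > tol:
            moved += 1
    return moved

# Illustrative example: only the last two exit points change
C_i = [(0, 0, 3), (4, 0, 3), (4, 3, 3), (0, 3, 3)]
C_j = [(0, 0, 3), (4, 0, 3), (4, 3, 0), (0, 3, 0)]
print(number_of_reconfigurations(C_i, C_j))   # -> 2
```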
Number of Configuration Changes
During the reconfiguration of the exit points, the task executed by the RCDPR has to be interrupted. These interruptions impact the task execution time. Therefore, it may be necessary to minimise the number of interruptions, n i , in order to improve the effectiveness of the RCDPR. The objective function V = n i associated with this goal measures the number of configuration changes, n i , to be performed during a prescribed task.
RCDPR Complexity
The higher the number of configuration sets n C allowing to cover the prescribed workspace or trajectory, the more complex the RCDPR. When the RCDPR requires a large number of configurations, the base frame of the CDPR may become complex.
In order to minimise the complexity of the RCDPR, an objective can be to minimise the overall number of exit point locations, V = n e , required by the n C configuration sets. Therefore, the optimisation aims to maximise the number of exit point locations shared among two or more configurations.
Constraint Functions
Any CDPR optimisation problem has to take into account some constraints. Those constraints represent the technical limits or requirements that need to be satisfied. The constraints used in this chapter are described hereafter.
Wrench Feasibility
Since cables can only pull on the platform, the tensions in the cables must always be non-negative. Moreover, cable tensions must be lower than an upper bound, τ max , which corresponds either to the maximum tension τ max1 the cables (or other mechanical parts) can bear, or to the maximum tension τ max2 the motors can provide. The cable tension bounds can thus be written as:
0 ≤ τ i ≤ τ max , ∀i = 1, . . . , m (13)
where τ max = min {τ max1 , τ max2 }.
Due to the cable tension bounds, RCDPRs can balance only a bounded set of external wrenches. In this chapter, the set of external wrenches applied to the platform and that the cables have to balance is called the required external wrench set and is denoted [w e ] r . Moreover, the set of admissible cable tensions is defined as:
[τ] = {τ i | 0 ≤ τ i ≤ τ max , i = 1, . . . , m} (14)
A pose (position and orientation) of the moving platform is then said to be wrench feasible if the following constraint holds:
∀w e ∈ [w e ] r , ∃τ ∈ [τ] such that Wτ + w e = 0 (15)
Eq. ( 15) can be rewritten as follows:
$\mathbf{C}\,\mathbf{w}_e \leq \mathbf{d}, \quad \forall \mathbf{w}_e \in [\mathbf{w}_e]_r \quad (16)$
Methods to compute matrix C and vector d are presented in [START_REF] Bouchard | On the ability of a cable-driven robot to generate a prescribed set of wrenches[END_REF][START_REF] Gouttefarde | Characterization of parallel manipulator available wrench set facets[END_REF].
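Alternatively to building C and d explicitly, wrench feasibility at a given pose can be checked pointwise: since [w e ] r is a hyperrectangle and the set of wrenches the cables can balance is convex, it suffices to test the vertices of [w e ] r , each test being a small linear feasibility problem. The following sketch uses scipy.optimize.linprog and assumes a wrench matrix W (6 x m) is available, e.g. from the earlier sketch.

```python
import itertools
import numpy as np
from scipy.optimize import linprog

def pose_is_wrench_feasible(W, w_min, w_max, tau_max):
    """Check Eq. (15) at one pose: every wrench of the hyperrectangle
    [w_min, w_max] must be balanced with cable tensions in [0, tau_max].
    Convexity of the balanceable wrench set makes testing the 2^6 vertices
    of the hyperrectangle sufficient."""
    m = W.shape[1]
    for vertex in itertools.product(*zip(w_min, w_max)):
        w_e = np.array(vertex)
        # Feasibility LP: find tau with W tau = -w_e and 0 <= tau <= tau_max
        res = linprog(c=np.zeros(m), A_eq=W, b_eq=-w_e,
                      bounds=[(0.0, tau_max)] * m, method="highs")
        if not res.success:
            return False
    return True

# Illustrative call (W assumed available; bounds of Eqs. (25)-(26)):
# w_min = np.array([-50.0, -50.0, -50.0, -7.5, -7.5, -7.5])
# w_max = -w_min
# feasible = pose_is_wrench_feasible(W, w_min, w_max, tau_max=34950.0)
```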
Cable Lengths
Due to technological reasons, cable lengths are bounded between a minimum cable length, l min , and a maximum cable length, l max :
l min ≤ l i,c ≤ l max , ∀i = 1, . . . , m (17)
The minimum cable lengths are defined so that the RCDPR moving platform is not too close to the base frame. The maximum cable lengths depend on the properties of the winch drums that store the cables, in particular their lengths and their diameters.
Cable Interferences
A second constraint is related to the possible collisions between cables. If two or more cables collide, the geometric and static models of the CDPR are not valid anymore and the cables can be damaged or their lifetime severely reduced.
In order to verify that cables do not interfere, it is sufficient to determine the distances between them. Modeling the cables as linear segments, the distance d cc i, j between the i-th cable and the j-th cable can be computed, e.g. by means of the method presented in [START_REF] Lumelsky | On fast computation of distance between line segments[END_REF]. There is no interference if the distance is larger than the diameter of the cables, φ c :
$d^{cc}_{i,j} \geq \phi_c, \quad \forall i, j = 1, \dots, m, \; i \neq j \quad (18)$

The number of possible cable interferences to be verified is equal to $\binom{m}{2} = \frac{m!}{2!(m-2)!}$. Note that, depending on the way the cables are routed from the winches to the moving platform, possible interferences of the cable segments between the winches and the pulleys may have to be considered.
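A minimal distance test between two cables modelled as straight segments can be sketched as follows. It relies on a standard closest-point computation between segments (serving the same purpose as the method cited above) and uses illustrative coordinates.

```python
import numpy as np

def segment_distance(p1, p2, q1, q2):
    """Minimum distance between segments [p1,p2] and [q1,q2]."""
    d1, d2, r = p2 - p1, q2 - q1, p1 - q1
    a, e, f = d1 @ d1, d2 @ d2, d2 @ r
    if a <= 1e-12 and e <= 1e-12:                  # both segments degenerate
        return np.linalg.norm(r)
    if a <= 1e-12:
        s, t = 0.0, np.clip(f / e, 0.0, 1.0)
    else:
        c = d1 @ r
        if e <= 1e-12:
            t, s = 0.0, np.clip(-c / a, 0.0, 1.0)
        else:
            b = d1 @ d2
            denom = a * e - b * b
            s = np.clip((b * f - c * e) / denom, 0.0, 1.0) if denom > 1e-12 else 0.0
            t = (b * s + f) / e
            if t < 0.0:
                t, s = 0.0, np.clip(-c / a, 0.0, 1.0)
            elif t > 1.0:
                t, s = 1.0, np.clip((b - c) / a, 0.0, 1.0)
    return np.linalg.norm((p1 + s * d1) - (q1 + t * d2))

# Interference check of Eq. (18) for two cables of diameter phi_c
phi_c = 0.004
A1, B1 = np.array([0.0, 0.0, 3.0]), np.array([0.5, 0.5, 1.0])
A2, B2 = np.array([4.0, 0.0, 3.0]), np.array([0.6, 0.5, 1.0])
print(segment_distance(A1, B1, A2, B2) >= phi_c)
```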
Collisions between the Cables and the Environment
Industrial environments may be cluttered. Collisions between the environment and the cables of the CDPR should be avoided. In general, for fast collision detection, the environment objects (obstacles) are enclosed in bounding volumes such as spheres and cylinders. When more complex shapes have to be considered, their surfaces are approximated with polygonal meshes. Thus, collision analysis can be performed by computing the distances between the edges of those polygons and the cables, e.g. by using [START_REF] Lumelsky | On fast computation of distance between line segments[END_REF]. Many other methods may be used, e.g., those described in [START_REF] Blanchet | Contribution à la modélisation de robots à câbles pour leur commande et leur conception[END_REF].
In the case study presented in Section 5, a tubular structure is considered. The ith cable and the k-th structure tube will not collide if the distance between the cable and the axis (straight line segment) of the structure tube is larger than the sum of the cable radius φ c /2 and the tube radius φ s /2, i.e.:
$d^{cs}_{i,k} \geq \frac{\phi_c + \phi_s}{2}, \quad \forall i = 1, \dots, m, \; \forall k = 1, \dots, n_{st} \quad (19)$
where n st denotes the number of tubes composing the structure.
Pose Infinitesimal Displacement Due to the Cable Elasticity
Cables are not perfectly rigid bodies. Under load, they are notably subjected to elongations that may induce some moving platform displacements. In order to quantify the stiffness of the CDPR, an elasto-static model may be used:

$\delta\mathbf{w}_e = \mathbf{K}\,\delta\mathbf{p} = \mathbf{K}\begin{bmatrix}\delta\mathbf{t} \\ \delta\mathbf{r}\end{bmatrix} \quad (20)$
where δ w e is the infinitesimal change in the external wrench applied to the platform, δ p is the infinitesimal displacement screw of the moving platform and K is the stiffness matrix whose computation is explained in [START_REF] Behzadipour | Stiffness of cable-based parallel manipulators with application to stability analysis[END_REF]. δ t = [δt x , δt y , δt z ] T is the variation in the moving platform position and δ r = [δ r x , δ r y , δ r z ] T is the vector of the infinitesimal (sufficiently small) rotations of the moving platform around the axes x b , y b and z b .
The pose variation should be bounded by the positioning error threshold vector, δ t = [δt x,c , δt y,c , δt z,c ], where δt x,c , δt y,c and δt z,c are the bounds on the positioning errors along the axes x b , y b and z b , and by the orientation error threshold vector, δ φ = [δ γ c , δ β c , δ α c ], where δ γ c , δ β c and δ α c are the bounds on the platform orientation errors about the axes x b , y b and z b , i.e.:

$-[\delta t_{x,c}, \delta t_{y,c}, \delta t_{z,c}] \leq [\delta t_x, \delta t_y, \delta t_z] \leq [\delta t_{x,c}, \delta t_{y,c}, \delta t_{z,c}] \quad (21)$
$-[\delta \gamma_c, \delta \beta_c, \delta \alpha_c] \leq [\delta \gamma, \delta \beta, \delta \alpha] \leq [\delta \gamma_c, \delta \beta_c, \delta \alpha_c] \quad (22)$
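Given the stiffness matrix K at a pose, the constraint of Eqs. (20)–(22) can be checked as sketched below. The stiffness values are placeholders; in practice K would be computed with the stiffness model cited above.

```python
import numpy as np

def pose_displacement_ok(K, delta_w, t_max=0.05, r_max=0.1):
    """Eqs. (20)-(22) (sketch): infinitesimal platform displacement produced
    by a wrench variation delta_w, checked against position (m) and
    orientation (rad) thresholds. K is the 6x6 stiffness matrix, assumed
    available and invertible at the considered pose."""
    delta_p = np.linalg.solve(K, delta_w)       # [delta_t ; delta_r]
    ok_t = np.all(np.abs(delta_p[:3]) <= t_max)
    ok_r = np.all(np.abs(delta_p[3:]) <= r_max)
    return ok_t, ok_r, delta_p

# Illustrative diagonal stiffness (placeholder values, not CAROCA data)
K = np.diag([2.0e5, 2.0e5, 4.0e5, 5.0e4, 5.0e4, 5.0e4])
delta_w = np.array([50.0, 50.0, -50.0, 7.5, 7.5, 7.5])
print(pose_displacement_ok(K, delta_w))
```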
5 Case Study: Design of an RCDPR for Sandblasting and Painting of a Large Tubular Structure
Problem Description
The necessity to improve the production rate of large tubular structures has incited companies to investigate new technologies. These technologies should be able to reduce manufacturing time associated with the assembly of the structure parts or the treatment of their surfaces. Painting and sandblasting operations over wide tubular structures can be realised by means of RCDPRs, as illustrated in the present case study.
Task and Environment
The tubular structure selected for the given case study is 20 m long, with a cross section of 10 m x 10 m. The number of tubes to be painted is equal to twenty. Their diameter, φ s , is equal to 0.8 m. The sandblasting and painting operations are realised indoor. The structure lies horizontally in order to reduce the dimensions of the painting workshop. The whole system can be described with respect to a fixed reference frame, F b , of origin O b and axes x b , y b , z b , as illustrated in Fig. 6. Sandblasting and painting tools are embarked on the RCDPR moving platform. The Center of Mass (CoM) of the platform follows the profile of the structure tubes and the tools perform the required operations. The paths to be followed, P 1 , P 2 and P 3 , are represented in Fig. 6. Note that each path P i , i = 1, . . . , 3 is discretised into 38 points P j,i , j = 1, . . . , 38 i = 1, . . . , 3 and that n p denotes the corresponding total number of points. The offset between paths P i , i = 1, . . . , 3 and the structure tubes is equal to 2 m. No path will be assigned to the lower external side of the structure, since it is sandblasted and painted from the ground.
Division of the Prescribed Workspace
In order to avoid collisions between the cables and the structure, reconfigurations of the cable exit points are necessary. Each external side of the structure should be painted by only one robot configuration. Three configurations are necessary to work on the outer part of the structure, configuration C i being associated with path P i , i = 1, 2, 3, in order not to interrupt the painting and sandblasting operations during their execution. Passing from one configuration to another, one or more cables are disconnected from their exit points and connected to other exit points located elsewhere. For each configuration, the locations of the cable exit points are defined as variables of the design problem. In the present case study, the dimensions of the platform as well as the positions of the cable anchor points on the platform are fixed.
Constant Design parameters
The number of cables, m = 8, the cable properties, and the dimensions of the platform are given. Those parameters are the same for the three configurations. The moving platform of the RCDPR analysed in this case study is driven by steel cables. The maximum allowed tension in the cables, τ max , is equal to 34 950 N and we have:
0 < τ i ≤ τ max , ∀i = 1, . . . , 8 (23)
Moreover, l p , w p and h p denote the length, width and height of the platform, respectively: l p = 30 cm, w p = 30 cm and h p = 60 cm. The mass of the moving platform is m MP = 60 kg. The design (constant) parameter vector q is expressed as:
q = [m, φ c , k s , τ max , l p , w p , h p , m MP ] T (24)
Constraint Functions and Configuration Analysis
The design problem aims to identify the locations of points A i,c for the configurations C 1 , C 2 and C 3 . At first, in order to identify the set of feasible locations for the exit points A i,c , the three robot configurations are parameterised and analysed separately in the following paragraphs. A set of exit points is feasible if the design constraints are satisfied along the whole path to be followed by the moving platform CoM. The analysed constraints are: wrench feasibility, cable interferences, cable collisions with the structure, and the maximum moving platform infinitesimal displacement due to the cable elasticity. Both suspended and fully constrained eight-cable CDPR architectures are used. In the suspended architecture, gravity plays the role of an additional cable pulling the moving platform downward, thereby keeping the cables under tension. The suspended architecture considered in this work is inspired by the CoGiRo CDPR prototype [START_REF] Gouttefarde | Geometry selection of a redundantly actuated cable-suspended parallel robot[END_REF][START_REF] Lamaury | Dual-space adaptive control of redundantly actuated cable-driven parallel robots[END_REF]. For the non-suspended configuration, note that eight cables is the smallest possible even number of cables that can be used for the platform to be fully constrained by the cables.
Collisions between the cables as well as collisions between the cables and structure tubes should be avoided. Since sandblasting and painting operations are performed at low speed, the motion of the CDPR platform can be considered quasistatic. Hence, only the static equilibrium of the robot moving platform will be considered. The wrench feasibility constraints presented in Section 4.3 are considered such that the required external wrench set [w e ] r is an hyperrectangle defined as:
$-50\text{ N} \leq f_x, f_y, f_z \leq 50\text{ N} \quad (25)$
$-7.5\text{ Nm} \leq m_x, m_y, m_z \leq 7.5\text{ Nm} \quad (26)$
where $\mathbf{w}_e = [f_x, f_y, f_z, m_x, m_y, m_z]^T$, f x , f y and f z being the force components of w e and m x , m y and m z being its moment components. Besides, the moving platform infinitesimal displacements, due to the elasticity of the cables, are constrained by:

$-5\text{ cm} \leq \delta t_x, \delta t_y, \delta t_z \leq 5\text{ cm} \quad (27)$
$-0.1\text{ rad} \leq \delta r_x, \delta r_y, \delta r_z \leq 0.1\text{ rad} \quad (28)$

Configuration C 1

A fully constrained configuration has been assigned to configuration C 1 . The exit points A i,1 have been arranged in a parallelepiped layout. The edges of the parallelepiped are aligned with the axes of frame F b . This layout can be fully described by means of five variables: u 1 , u 2 and u 3 define the Cartesian coordinates of the parallelepiped center, while u 4 and u 5 denote the half-lengths of the parallelepiped along the axes x b and y b , respectively. Therefore, the Cartesian coordinates of the exit points A i,1 are expressed as functions of these five variables. The layout of the first robot configuration is described in Fig. 7. The corresponding design variables are collected into the vector x 1 :

$\mathbf{x}_1 = [u_1, u_2, u_3, u_4, u_5]^T$

The Cartesian coordinates of the anchor points B i,1 of the cables on the platform are fixed and expressed in the moving platform frame F p . A discretised set of design variables has been considered. The lower and upper bounds as well as the number of values for each variable are given in Table 2. 18225 robot configurations have been generated with those values. It turns out that 4576 configurations satisfy the design constraints along the 38 discretised points of path P 1 .

Configuration C 2

A suspended redundantly actuated eight-cable CDPR architecture has been attributed to the configuration C 2 in order to avoid collisions between the cables and the tubular structure. The selected configuration is based on CoGiRo, a suspended CDPR designed and built in the framework of the ANR CoGiRo project [START_REF] Gouttefarde | Geometry selection of a redundantly actuated cable-suspended parallel robot[END_REF][START_REF] Lamaury | Dual-space adaptive control of redundantly actuated cable-driven parallel robots[END_REF]. An advantage of this configuration is a large workspace to footprint ratio. The exit points A i,2 have been arranged in a parallelepiped layout. The Cartesian coordinates a i,2 are defined as follows:

$\mathbf{a}^b_{1,2} = \mathbf{a}^b_{2,2} = [v_1 - v_4,\; v_2 - v_5,\; v_3]^T \quad (38)$
$\mathbf{a}^b_{3,2} = \mathbf{a}^b_{4,2} = [v_1 - v_4,\; v_2 + v_5,\; v_3]^T \quad (39)$
$\mathbf{a}^b_{5,2} = \mathbf{a}^b_{6,2} = [v_1 + v_4,\; v_2 + v_5,\; v_3]^T \quad (40)$
$\mathbf{a}^b_{7,2} = \mathbf{a}^b_{8,2} = [v_1 + v_4,\; v_2 - v_5,\; v_3]^T \quad (41)$

Variables v i , i = 1, . . . , 5 are, for configuration C 2 , equivalent to the variables u i , i = 1, . . . , 5 describing configuration C 1 . The layout of this configuration is illustrated in Fig. 8. The design variables of configuration C 2 are collected into the vector x 2 :

$\mathbf{x}_2 = [v_1, v_2, v_3, v_4, v_5]^T \quad (42)$
Note that this configuration is composed of couples of exit points theoretically connected to the same locations: {A 1,2 , A 2,2 }, {A 3,2 , A 4,2 }, {A 5,2 , A 6,2 }, and {A 7,2 , A 8,2 }. From a technical point of view, in order to avoid any cable interference, the coupled exit points should be separated by a certain distance. For the design problem at hand, this distance has been fixed to v 0 = 5 mm.
$\mathbf{a}^b_{1,2} = [v_1 - v'_4,\; v_2 - v_5,\; v_3]^T \quad (43)$
$\mathbf{a}^b_{2,2} = [v_1 - v_4,\; v_2 - v'_5,\; v_3]^T \quad (44)$
$\mathbf{a}^b_{3,2} = [v_1 - v_4,\; v_2 + v'_5,\; v_3]^T \quad (45)$
$\mathbf{a}^b_{4,2} = [v_1 - v'_4,\; v_2 + v_5,\; v_3]^T \quad (46)$
$\mathbf{a}^b_{5,2} = [v_1 + v'_4,\; v_2 + v_5,\; v_3]^T \quad (47)$
$\mathbf{a}^b_{6,2} = [v_1 + v_4,\; v_2 + v'_5,\; v_3]^T \quad (48)$
$\mathbf{a}^b_{7,2} = [v_1 + v_4,\; v_2 - v'_5,\; v_3]^T \quad (49)$
$\mathbf{a}^b_{8,2} = [v_1 + v'_4,\; v_2 - v_5,\; v_3]^T \quad (50)$

where $v'_4 = v_4 - v_0$ and $v'_5 = v_5 - v_0$. The Cartesian coordinates of points B i,2 are defined as:

$\mathbf{b}^b_{1,2} = \tfrac{1}{2}[l_p, -w_p, h_p]^T, \quad \mathbf{b}^b_{2,2} = \tfrac{1}{2}[-l_p, w_p, -h_p]^T \quad (51)$
$\mathbf{b}^b_{3,2} = \tfrac{1}{2}[-l_p, -w_p, h_p]^T, \quad \mathbf{b}^b_{4,2} = \tfrac{1}{2}[l_p, w_p, -h_p]^T \quad (52)$
$\mathbf{b}^b_{5,2} = \tfrac{1}{2}[-l_p, w_p, h_p]^T, \quad \mathbf{b}^b_{6,2} = \tfrac{1}{2}[l_p, -w_p, -h_p]^T \quad (53)$
$\mathbf{b}^b_{7,2} = \tfrac{1}{2}[l_p, w_p, h_p]^T, \quad \mathbf{b}^b_{8,2} = \tfrac{1}{2}[-l_p, -w_p, -h_p]^T \quad (54)$
Table 2 describes the lower and upper bounds as well as the number of values considered for the configuration C 2 . Combining these values, 22275 configurations have been generated. Among these configurations, only 5579 configurations are feasible.
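The enumeration of the candidate configurations can be sketched as follows, assuming each design variable takes evenly spaced values between the bounds of Table 2.

```python
import itertools
import numpy as np

def design_variable_grid(bounds, counts):
    """Discretised design-variable sets (Steps IV-V sketch): each variable
    takes 'counts[v]' evenly spaced values between its lower and upper bounds."""
    axes = [np.linspace(lo, hi, n) for (lo, hi), n in zip(bounds, counts)]
    return list(itertools.product(*axes))

# Values of Table 2 for configuration C_2 (v_1 ... v_5)
bounds_C2 = [(-1.0, 1.0), (8.0, 12.0), (7.0, 11.0), (5.0, 7.5), (10.0, 14.0)]
counts_C2 = [9, 5, 9, 11, 5]
candidates = design_variable_grid(bounds_C2, counts_C2)
print(len(candidates))   # 9*5*9*11*5 = 22275 candidate configurations

# The feasible configurations are then obtained by filtering 'candidates'
# with the constraint functions (wrench feasibility, interferences, ...).
```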
Configuration C 3
The configuration C 3 follows the path P 3 . This path is symmetric to the path P 1 with respect to the plane y b O b z b . Considering the symmetry of the tubular structure, configuration C 3 is thus selected as being the same as configuration C 1 . The discretised set of design variables chosen for the configuration C 3 is described in Table 2. The design variables for the configuration C 3 are collected into the vector x 3 :
$\mathbf{x}_3 = [w_1, w_2, w_3, w_4, w_5]^T \quad (55)$
where the variables w i , i = 1, . . . , 5 amount to the variables u i , i = 1, . . . , 5, describing configuration C 1 . Therefore, the Cartesian coordinates of the exit points A i,3 are expressed as follows:
$\mathbf{a}^b_{1,3} = [w_1 + w_4,\; w_2 + w_5,\; -w_3]^T \quad \mathbf{a}^b_{2,3} = [w_1 + w_4,\; w_2 + w_5,\; w_3]^T \quad (56)$
$\mathbf{a}^b_{3,3} = [w_1 - w_4,\; w_2 + w_5,\; -w_3]^T \quad \mathbf{a}^b_{4,3} = [w_1 - w_4,\; w_2 + w_5,\; w_3]^T \quad (57)$
$\mathbf{a}^b_{5,3} = [w_1 - w_4,\; w_2 - w_5,\; -w_3]^T \quad \mathbf{a}^b_{6,3} = [w_1 - w_4,\; w_2 - w_5,\; w_3]^T \quad (58)$
$\mathbf{a}^b_{7,3} = [w_1 + w_4,\; w_2 - w_5,\; -w_3]^T \quad \mathbf{a}^b_{8,3} = [w_1 + w_4,\; w_2 - w_5,\; w_3]^T \quad (59)$
Objective Functions and Design Problem Formulation
The RCDPR should be as simple as possible so that the minimisation of the total number of cable exit point locations, V 1 = n e , is required. Consequently, the number of exit point locations shared by two or more configurations should be maximised. The size of the robot is also minimised to reduce the size of the sandblasting and painting workshop. Finally, the mean of the moving platform infinitesimal displacement due to cable deformations is minimised. The optimisations are performed hierarchically, by means of the procedure described in Section 4.1 and the objective functions collected in Section 4.2. Hence, the design problem of the CDPR is formulated as follows:
minimise
$V_1 = n_e$
$V_2 = (\bar{s}_x - s_x)(\bar{s}_y - s_y)(\bar{s}_z - s_z)$
$V_3 = \frac{1}{n_p}\sum \|\delta\mathbf{t}\|_2$
over $\mathbf{x}_1, \mathbf{x}_2, \mathbf{x}_3$
subject to, for all path points $P_{m,n}$, $m = 1, \dots, 38$, $n = 1, \dots, 3$:
$\mathbf{C}\,\mathbf{w}_e \leq \mathbf{d}, \quad \forall \mathbf{w}_e \in [\mathbf{w}_e]_r$
$d^{cc}_{i,j} \geq \phi_c, \quad \forall i, j = 1, \dots, 8, \; i \neq j$
$d^{cs}_{i,k} \geq \frac{\phi_c + \phi_s}{2}, \quad \forall i = 1, \dots, 8, \; \forall k = 1, \dots, 20$
$-5\text{ cm} \leq \delta t_x, \delta t_y, \delta t_z \leq 5\text{ cm}$
$-0.1\text{ rad} \leq \delta r_x, \delta r_y, \delta r_z \leq 0.1\text{ rad} \quad (60)$
Once the set of feasible solutions have been obtained for each path P i , a list of RCDPRs with a minimum number of exit points, n c , is extracted from the list of feasible RCDPRs. Finally, the most compact and stiff RCDPRs from the list of RCDPRs with a minimum number of exit points are the desired optimal solutions.
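The first criterion, the overall number of exit points shared by a combination of configurations, can be evaluated as sketched below; the coordinates are placeholders.

```python
import numpy as np

def overall_exit_points(configurations, tol=1e-6):
    """V_1 = n_e (sketch): overall number of distinct exit-point locations
    used by a set of configurations; locations closer than 'tol' are merged,
    which rewards configurations sharing exit points."""
    locations = []
    for cfg in configurations:                    # cfg: (m,3) exit-point coordinates
        for point in np.asarray(cfg, dtype=float):
            if not any(np.linalg.norm(point - q) <= tol for q in locations):
                locations.append(point)
    return len(locations)

# Illustrative triplet: the first two configurations share two exit points
C1 = np.array([[0, 0, 3], [4, 0, 3], [4, 3, 3], [0, 3, 3]], dtype=float)
C2 = np.array([[0, 0, 3], [4, 0, 3], [2, 1, 4], [2, 2, 4]], dtype=float)
C3 = np.array([[1, 1, 3], [3, 1, 3], [3, 2, 3], [1, 2, 3]], dtype=float)
print(overall_exit_points([C1, C2, C3]))   # -> 10 instead of 12
```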
Optimisation Results
The feasible robot configurations associated with paths P 1 , P 2 and P 3 have been identified. For each path, a configuration is selected, aiming to minimise the total number of exit points required by the RCDPR to complete the task. These optimal solutions have been computed in two phases. At first, the 4576 feasible robot configurations for path P 1 are compared with the 5579 feasible robot configurations for path P 2 looking for the couple of configurations having the minimum total number of exit points. The resulting couple of configurations is then compared to the feasible robot configurations for path P 3 , and the sets of robot configurations that minimise the overall number n e of exit points along the three paths are retained. According to the discrete optimisation analysis, 16516 triplets of configurations minimise this overall number of exit points.
A generic CDPR composed of eight cables requires eight exit points A i = 1, . . . , 8 on the base. It is the case for the fully constrained configurations C 1 and C 3 . The suspended CDPR presents four coincident couples of exit points. Hence, in the present case study, the maximum overall number of exit points of the RCDPR is equal to 20. The best results provide a reduction of four points. Regarding the configurations C 1 and C 2 , points A 5,2 and A 7,2 can be coincident with points A 3,1 and A 5,1 , respectively. Alternatively, points A 5,2 and A 7,2 can be coincident with points A 1,1 and A 7,1 . As far as configurations C 2 and C 3 are concerned, points A 1,2 and A 3,2 can be coincident with points A 8,3 and A 2,3 , respectively. Likewise, points A 1,2 and A 3,2 can be coincident with points A 4,3 and A 6,3 , respectively. The total volume of the robot has been computed for the 16516 triplets of configurations minimising the overall number of exit points. Ninety six RCDPRs amongst the 16516 triplets of configurations have the smallest size, this minimum size being equal to 5104 m 3 . Selection of the best solutions has been promoted through the third optimisation criterion based on the robot stiffness. Twenty solutions provided a minimum mean of the moving platform displacement equal to 1.392 mm. An optimal solution is illustrated in Fig. 9. The corresponding optimal design parameters are given in Table 3.
Figure 10 illustrates the minimum degree of constraint satisfaction s introduced in [START_REF] Guay | Measuring how well a structure supports varying external wrenches[END_REF] and computed thereafter along the paths P 1 , P 2 , and P 3 , which were discretised into 388 points. It turns out that the moving platform is in a feasible static equilibrium along all the paths because the minimum degree of constraint satisfaction remains negative. Referring to [START_REF] Guay | Measuring how well a structure supports varying external wrenches[END_REF], the minimum degree of constraint satisfaction can be used to test wrench feasibility since it is negative when a platform pose is wrench feasible. Configurations C 1 and C 3 maintain their degree of satisfaction lower than -400 N. On the contrary, configuration C 2 is often close to 0. The poses where s vanishes are such that two cables of the suspended CDPR of configuration C 2 are slack.
The proposed RCDPR design strategy yielded good solutions, but it is time consuming. The whole procedure, performed on an Intel Core TM i7-3630QM 2.40 GHz, required 19 h of computations, on Matlab 2013a. Therefore, the development of more efficient strategies for the design of RCDPRs will be part of our future work. Moreover, the mass of the cables may have to be taken into account.
Conclusions
When the task to be accomplished is complicated and the working environment is extremely cluttered, CDPRs may not succeed in the task execution. The problem can be solved by means of RCDPRs. This chapter focused on RCDPRs whose cable exit points on the base frame can be located on a predefined grid of possible positions. A design strategy for such discrete RCDPRs was introduced. This design strategy assumes that the number of configurations needed to complete the task is defined by the designer according to his or her experience. The designer divides the prescribed trajectory or workspace into a set of partitions. Each partition has to be entirely covered by one configuration. The position of the cable exit points, for all the configurations, is computed by means of an optimisation algorithm. The algorithm optimises one or more global objective function(s) while satisfying a set of user-defined constraints. Examples of possible global objective functions include the RCDPR size, the overall number of exit points, and the number of cable reconfigurations. A case study was presented in order to validate the RCDPR design strategy. The RCDPR has to paint and sandblast three of the four external sides of a tubular structure. Each of these three sides is covered by one configuration. The design strategy provided several optimal solutions to the case study, minimising hierarchically the overall number of cable exit points, the size of the RCDPR, and the moving platform displacements due to the elasticity of the cables. The computation of the optimal solution required nineteen hours of computation. More complicated tasks may thus require higher computation times. An improvement of the proposed RCDPR design strategy should be investigated in order to reduce this computational effort.

Fig. 9: Optimal Reconfigurable Cable-Driven Parallel Robot.

Fig. 10: Minimum degree of constraint satisfaction [START_REF] Guay | Measuring how well a structure supports varying external wrenches[END_REF]. The analysis has been performed by discretising the paths P 1 , P 2 , and P 3 into 388 points.
Fig. 1: Architecture of a CDPR developed in the framework of the IRT Jules Verne CAROCA project.
Fig. 2: CAROCA prototype: a reconfigurable cable-driven parallel robot working in a cluttered environment (Courtesy of IRT Jules Verne and STX France).
Fig. 4: Schematic of a RCDPR. The red points represent the possible locations of the cable exit points, where the pulleys can be fixed.
Fig. 5: Design strategy for RCDPRs.
Fig. 6: Case study model and prescribed paths P 1 , P 2 and P 3 of the moving platform CoM.
Fig. 7: Design variables parametrising the configuration C 1 .
Fig. 8: Design variables parametrising the configuration C 2 .
Table 1: CDPR reconfigurable parameter classification.

Reconfigurable Parameter        | Discrete Domain | Continuous Domain
Exit Point Locations            | Yes             | Yes
Platform Anchor Point Locations | Yes             | Yes
Cable Number                    | Yes             | No
Table 2: Design variables associated with configurations C 1 , C 2 and C 3 .

Configuration | Variable | Lower Bound | Upper Bound | Number of values
C 1           | u 1      | 5.5         | 7.5         | 9
C 1           | u 2      | 8.0         | 12.0        | 9
C 1           | u 3      | 6           | 10          | 5
C 1           | u 4      | 0.5         | 2.5         | 9
C 1           | u 5      | 10          | 14          | 5
C 2           | v 1      | -1          | 1           | 9
C 2           | v 2      | 8.0         | 12.0        | 5
C 2           | v 3      | 7           | 11          | 9
C 2           | v 4      | 5           | 7.5         | 11
C 2           | v 5      | 10          | 14          | 5
C 3           | w 1      | -7.5        | -5.5        | 9
C 3           | w 2      | 8.0         | 12.0        | 9
C 3           | w 3      | 6           | 10          | 5
C 3           | w 4      | 0.5         | 2.5         | 9
C 3           | w 5      | 10          | 14          | 5
Table 3: Design parameters of the selected optimum RCDPR.

Conf. | var. 1 | var. 2 | var. 3 | var. 4 | var. 5
x 1   | 6.25   | 10.0   | 8.0    | 1.0    | 11.0
x 2   | 0      | 10.0   | 8.0    | 5.25   | 11.0
x 3   | -6.25  | 10.0   | 8.0    | 1.0    | 11.0
Acknowledgements This research work is part of the CAROCA project managed by IRT Jules Verne (French Institute in Research and Technology in Advanced Manufacturing Technologies for Composite, Metallic and Hybrid Structures). The authors wish to associate the industrial and academic partners of this project, namely, STX, Naval Group, AIRBUS and CNRS.
Sana Baklouti
Stéphane Caro
Eric Courteille
Sensitivity Analysis of the Elasto-Geometrical Model of Cable-Driven Parallel Robots
This paper deals with the sensitivity analysis of the elasto-geometrical model of Cable-Driven Parallel Robots (CDPRs) to their geometric and mechanical uncertainties. This sensitivity analysis is crucial in order to come up with a robust model-based control of CDPRs. Here, 62 geometrical and mechanical error sources are considered to investigate their effect on the static deflection of the moving-platform (MP) under an external load. A reconfigurable CDPR, named "CAROCA", is analyzed as a case study to highlight the main uncertainties affecting the static deflection of its MP.
Introduction
In recent years, there has been an increasing number of research works on the subject of Cable-Driven Parallel Robots (CDPRs). The latter are very promising for engineering applications due to peculiar characteristics such as large workspace, simple structure and large payload capacity. For instance, CDPRs have been used in many applications like rehabilitation [START_REF] Merlet | MARIONET, a family of modular wire-driven parallel robots[END_REF], pick-and-place [START_REF] Dallej | Towards vision-based control of cable-driven parallel robots[END_REF], sandblasting and painting [START_REF] Gagliardini | Discrete reconfiguration planning for cable-driven parallel robots[END_REF][START_REF] Gagliardini | A reconfiguration strategy for reconfigurable cable-driven parallel robots[END_REF] operations.
Many spatial prototypes are equipped with eight cables for six Degrees of Freedom (DOF) such as the CAROCA prototype, which is the subject of this paper.
Sana Baklouti, Université Bretagne-Loire, INSA-LGCGM-EA 3913, 20 avenue des Buttes de Coësmes, 35043 Rennes, France, e-mail: sana.baklouti@insa-rennes.fr. Stéphane Caro, CNRS, Laboratoire des Sciences du Numérique de Nantes, UMR CNRS n°6004, 1 rue de la Noë, 44321 Nantes, France, e-mail: stephane.caro@ls2n.fr. Eric Courteille, Université Bretagne-Loire, INSA-LGCGM-EA 3913, 20 avenue des Buttes de Coësmes, 35043 Rennes, France, e-mail: eric.courteille@insa-rennes.fr.
To customize CDPRs to their applications and enhance their performances, it is necessary to model, identify and compensate all the sources of errors that affect their accuracy.
Improving accuracy is still possible once the robot is operational through a suitable control scheme. Numerous control schemes were proposed to enhance the CDPRs precision on static tasks or on trajectory tracking [START_REF] Jamshidifar | Adaptive Vibration Control of a Flexible Cable Driven Parallel Robot[END_REF][START_REF] Fang | Motion control of a tendonbased parallel manipulator using optimal tension distribution[END_REF][START_REF] Zi | Dynamic modeling and active control of a cable-suspended parallel robot[END_REF]. The control can be either off-line through external sensing in the feedback signal [START_REF] Dallej | Towards vision-based control of cable-driven parallel robots[END_REF], or on-line control based on a reference model [START_REF] Pott | IPAnema: a family of cable-driven parallel robots for industrial applications[END_REF].
This paper focuses on the sensitivity analysis of the CDPR MP static deflection to uncertain geometrical and mechanical parameters. As an illustrative example, a suspended configuration of the reconfigurable CAROCA prototype, shown in Fig. 1, is studied. First, the manipulator under study is described. Then, its elasto-geometrical model is written while considering cable mass and elasticity in order to express the static deflection of the MP subjected to an external load. An exhaustive list of geometrical and mechanical uncertainties is given. Finally, the sensitivity of the MP static deflection to these uncertainties is analyzed.

Fig. 1: CAROCA prototype: a reconfigurable CDPR (Courtesy of IRT Jules Verne, Nantes)
Parametrization of the CAROCA prototype
The reconfigurable CAROCA prototype illustrated in Fig. 1 was developed at IRT Jules Verne for industrial operations in cluttered environments such as painting and sandblasting large structures [START_REF] Gagliardini | A reconfiguration strategy for reconfigurable cable-driven parallel robots[END_REF][START_REF] Gagliardini | Discrete reconfiguration planning for cable-driven parallel robots[END_REF]. This prototype is reconfigurable because its pulleys can be displaced in a discrete manner on its frame. The frame is 7 m long, 4 m wide and 3 m high. The rotation-resistant steel cables Carl Stahl Technocables Ref 1692 of the CAROCA prototype have a 4 mm diameter. Each cable consists of 18 strands twisted around a steel core, and each strand is made up of 7 steel wires. The cable breaking force is 10.29 kN. ρ denotes the cable linear mass and E the cable modulus of elasticity. In this section, both the sag-introduced and the axial stiffness of the cables are considered in the elasto-geometrical modeling of the CDPR. The inverse elasto-geometrical model and the direct elasto-geometrical model of the CDPR are presented. Then, the variation in static deflection due to external loading is defined as a sensitivity index.
Inverse Elasto-Geometric Modeling (IEGM)
The IEGM of a CDPR aims at calculating the unstrained cable length for a given pose of its MP. If both cable mass and elasticity are considered, the inverse kinematics of the CDPR and its static equilibrium equations should be solved simultaneously. The IEGM is based on geometric closed loop equations, cable sagging relationships and static equilibrium equations. The geometric closed-loop equations take the form:
${}^b\mathbf{p} = {}^b\mathbf{b}_i + {}^b\mathbf{l}_i - {}^b\mathbf{R}_p\,{}^p\mathbf{a}_i \quad (1)$
where b R p is the rotation matrix from F b to F p and l i is the cable length vector.
The cable sagging relationships between the forces i f i = [ i f xi , 0, i f zi ] applied at the end point A i of the ith cable and the coordinates vector i a i = [ i x Ai , 0, i z Ai ] of the same point resulting from the sagging cable model [START_REF] Irvine | Cable structures[END_REF] are expressed in F i as follows:
${}^i x_{A_i} = \dfrac{{}^i f_{x_i} L_{us_i}}{ES} + \dfrac{|{}^i f_{x_i}|}{\rho g}\left[\sinh^{-1}\!\left(\dfrac{{}^i f_{z_i}}{{}^i f_{x_i}}\right) - \sinh^{-1}\!\left(\dfrac{{}^i f_{z_i} - \rho g L_{us_i}}{{}^i f_{x_i}}\right)\right] \quad (2a)$

${}^i z_{A_i} = \dfrac{{}^i f_{z_i} L_{us_i}}{ES} - \dfrac{\rho g L_{us_i}^2}{2ES} + \dfrac{1}{\rho g}\left[\sqrt{{}^i f_{x_i}^2 + {}^i f_{z_i}^2} - \sqrt{{}^i f_{x_i}^2 + \left({}^i f_{z_i} - \rho g L_{us_i}\right)^2}\right] \quad (2b)$
where L usi is the unstrained length of the ith cable, g is the acceleration due to gravity and S is the cross-sectional area of the cables. The static equilibrium equations of the MP are expressed as:
Wt + w ex = 0, (3)
where W is the wrench matrix, w ex is the external wrench vector and t is the 8-dimensional cable tension vector. Those tensions are computed based on the tension distribution algorithm described in [START_REF] Mikelsons | A real-time capable force calculation algorithm for redundant tendon-based parallel manipulators[END_REF].
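As a numerical illustration of Eqs. (2a)–(2b), the sketch below evaluates the sagging-cable profile for assumed force components and cable data; the modulus of elasticity used here is only an assumed order of magnitude, not the identified value discussed later in the paper.

```python
import numpy as np

def sagging_cable_profile(fx, fz, L_us, rho, E, S, g=9.81):
    """Eqs. (2a)-(2b) (sketch): coordinates (x_A, z_A) of the cable end point
    in the cable plane, for force components (fx, fz) applied at that point
    and an unstrained cable length L_us."""
    xA = (fx * L_us) / (E * S) + (abs(fx) / (rho * g)) * (
        np.arcsinh(fz / fx) - np.arcsinh((fz - rho * g * L_us) / fx))
    zA = (fz * L_us) / (E * S) - (rho * g * L_us**2) / (2 * E * S) + (
        np.sqrt(fx**2 + fz**2) - np.sqrt(fx**2 + (fz - rho * g * L_us)**2)) / (rho * g)
    return xA, zA

# Illustrative values close to the CAROCA cable data (E is an assumption here)
rho, diameter = 0.1015, 0.004                  # kg/m, m
S = np.pi * diameter**2 / 4                    # cross-sectional area, m^2
E = 1.0e11                                     # Pa, assumed order of magnitude
print(sagging_cable_profile(fx=500.0, fz=300.0, L_us=8.0, rho=rho, E=E, S=S))
```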
Direct elasto-geometrical model (DEGM)
The direct elasto-geometrical model (DEGM) aims to determine the pose of the moving platform for a given set of unstrained cable lengths. The constraints of the DEGM are the same as those of the IEGM, i.e., Eq. (1) to Eq. (3). If the effect of cable weight on the static cable profile is non-negligible, the direct kinematic model of CDPRs is coupled with the static equilibrium of the MP. For a 6-DOF CDPR with 8 driving cables, there are 22 equations and 22 unknowns. In this paper, the non-linear Matlab function lsqnonlin is used to solve the DEGM.
Static deflection
If the compliant displacement of the MP under the external load is small, the static deflection of the MP can be calculated through its static Cartesian stiffness matrix [START_REF] Carbone | Stiffness analysis and experimental validation of robotic systems[END_REF]. However, once the cable mass is considered, the sag-introduced stiffness should be taken into account. In that case, the small compliant displacement assumption is no longer valid, mainly for heavy and/or long cables with a light moving platform. Consequently, the static deflection cannot be calculated through the Cartesian stiffness matrix. In this paper, the IEGM and DEGM are used to define and calculate the static deflection of the MP under an external load. The CDPR stiffness is characterized by the static deflection of the MP. Note that only the positioning static deflection of the MP is considered in order to avoid the homogenization problem [START_REF] Nguyen | Stiffness Matrix of 6-DOF Cable-Driven Parallel Robots and Its Homogenization[END_REF].
As this paper deals with the sensitivity of the CDPR accuracy to all geometrical and mechanical errors, the elastic deformations of the CDPR are involved. This problem is solved by computing the static deflection of the CDPR, obtained by subtracting the poses calculated with and without an external payload. For a desired pose of the MP, the IEGM gives a set of unstrained cable lengths L_us. This set is used by the DEGM to calculate, first, the pose of the MP under its own weight and, then, the pose of the MP when an external load (mass addition) is applied. The static deflection of the MP is therefore expressed as:
dp j,k = p j,k -p j,1 , (4)
where p j,1 is the pose of the MP considering only its own weight for the j th pose configuration and p j,k is the pose of the MP for the set of the j th pose and k th load configuration.
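The evaluation of this static deflection can be sketched as follows, where iegm() and degm() are hypothetical wrappers around the two models described above.

```python
def static_deflection(pose_desired, robot, w_load):
    """Static deflection dp of Eq. (4): pose under self-weight vs. pose under an added load.

    iegm() and degm() are hypothetical wrappers around the two models described above.
    """
    L_us = iegm(pose_desired, robot)                           # unstrained lengths for this pose
    p_self = degm(L_us, robot, w_ex=robot.w_gravity)           # MP under its own weight only
    p_load = degm(L_us, robot, w_ex=robot.w_gravity + w_load)  # MP with the external load
    return p_load - p_self
```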
Error modeling
This section aims to define the error model of the elasto-geometrical CDPR model. Two types of errors are considered: geometrical errors and mechanical errors.
Geometrical errors
The geometrical errors of the CDPR are described by δb_i, the variation in vector b_i, δa_i, the variation in vector a_i, and δg, the uncertainty vector of the gravity center position, i.e., 51 uncertainties in total. The geometrical errors can be divided into base frame geometrical errors and MP geometrical errors, and are mainly due to manufacturing errors.
Base frame geometrical errors
The base frame geometrical errors are described by vectors δ b i , (i=1..8). As the point B i is considered as part of its correspondent pulley, it is influenced by the elasticity of the pulley mounting and its assembly tolerance. b i is particularly influenced by pulleys tolerances and reconfigurability impact.
Moving-platform geometrical errors
The MP geometrical errors are described by the vectors δa_i, (i=1..8), and δg. The gravity center of the MP is often supposed to coincide with its geometrical center P. This hypothesis means that the moments generated by an inaccurate knowledge of the gravity center position, or by its potential displacement, are neglected. The Cartesian coordinate vector of the geometric center G does not change in frame F_p, but it strongly depends on the real coordinates of the anchor points A_i, which are related to uncertainties in the mechanical welding of the hooks and in the MP assembly.
Mechanical errors
The mechanical errors of the CDPR are described by the uncertainty in the MP mass (δ m) and the uncertainty on the cables mechanical parameters (δ ρ and δ E).
Besides, uncertainties in the cables tension δ t affect the error model. As a result, 11 mechanical error sources are taken into account.
End-effector mass
As the MP is a mechanically welded structure, there may be some differences between the MP mass and inertia matrix given by the CAD software and the real ones. The MP mass and inertia may also vary in operation. In this paper, the MP mass uncertainty δm is set to ±10% of the nominal mass.
Cables parameters
Linear mass: The linear mass ρ of CAROCA cables is equal to 0.1015 kg/m. The uncertainty of this parameter can be calculated from the measurement procedure as:
δρ = (m_c δL + L δm_c) / L², where m_c is the measured cable mass for a cable length L, and δL and δm_c are the measurement errors of the cable length and mass, respectively.
Modulus of elasticity:
This paper uses experimental hysteresis loop to discuss the modulus of elasticity uncertainty. Figure 3 shows the measured hysteresis loop of the 4 mm cable where the unloading path does not correspond to the loading path. The area in the center of the hysteresis loop is the energy dissipated due to internal friction in the cable. It depicts a non-linear correlation in the lower area between load and elongation. Based on experimental data presented in Fig. 3, Table 2 presents the modulus of elasticity of a steel wire cable for different operating margins, when the cable is in loading or unloading phase. This modulus is calculated as follows:
E_{p-q} = L_c (F_{q%} − F_{p%}) / ( S (x_q − x_p) )   (5)
where S is the metallic cross-sectional area, i.e. the value obtained from the sum of the metallic cross-sectional areas of the individual wires in the rope based on their nominal diameters. x p and x q are the elongations at forces equivalent to p% and q% (F p% and F q% ), respectively, of the nominal breaking force of the cable measured during the loading path (Fig. 3). L c is the measured initial cable length. For a given range of loads (Tab. 2), the uncertainty on the modulus of elasticity depends only on the corresponding elongations and tensions measurements. In this case, the absolute uncertainty associated with applied force and resulting elongation measurements from the test bench outputs is estimated to be ± 1 N and ± 0.03 mm, respectively; so, an uncertainty of ± 2 GPa can be applied to the calculation of the modulus of elasticity.
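For illustration, E_{p-q} can be computed from one measured path of the hysteresis loop as sketched below; interpolating the elongations at F_p% and F_q% is an assumption on how the samples are post-processed.

```python
import numpy as np

def modulus_between_load_levels(force, elongation, F_break, S, L_c, p, q):
    """E_{p-q} of Eq. (5) from a measured load-elongation path (loading or unloading).

    force, elongation : monotonically increasing samples of one path of the hysteresis loop
    F_break : nominal breaking force, S : metallic cross-sectional area, L_c : initial length
    p, q    : lower and upper margins, in percent of the breaking force
    """
    F_p, F_q = p / 100.0 * F_break, q / 100.0 * F_break
    x_p = np.interp(F_p, force, elongation)   # elongation at F_p%
    x_q = np.interp(F_q, force, elongation)   # elongation at F_q%
    return L_c * (F_q - F_p) / (S * (x_q - x_p))
```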
According to the International Standard ISO 12076, the modulus of elasticity of a steel wire cable is E 10-30 . However, the CDPR cables do not work always between F 10% and F 30% in real life and the cables can be in loading or unloading phase. The mechanical behavior of cables depends on MP dynamics, which affects the variations in cable elongations and tensions. From Table 2, it is apparent that the elasticity moduli of cables change with the operating point changes. For the same applied force, the modulus of elasticity for loaded and unloaded cables are not the same. While the range of the MP loading is unknown, a large range of uncertainties on the modulus of elasticity should be defined as a function of the cable tensions.
Tension distribution
Two cases of uncertainties of force determination can be defined depending on the control scheme:
The first case is when the control scheme sends to the actuators a tension set-point resulting from the force distribution algorithm. If there is no feedback on the measured tensions, the range of uncertainty is relatively high. Generally, the compensation does not consider dry and viscous friction in the cable drums and pulleys. This non-compensation leads to static errors and delays [START_REF] De Wit | Robust adaptive friction compensation[END_REF] that degrade the CDPR control performance, and thus to a large range of uncertainties in the tensions. As the benefit of the tension distribution algorithm is less important for a suspended CDPR configuration than for a fully-constrained one [START_REF] Lamaury | Contribution a la commande des robots parallles a cbles redondance d'actionnement[END_REF], a range of ±15 N is defined.
The second case is when the tensions are measured. If measurement signals are very noisy, amplitude peaks of the correction signal may lead to a failure of the force distribution. Such a failure may also occur due to variations in the MP and pulleys parameters. Here, the deviation is defined based on the measurement tool precision. However, it remains lower than the deviation of the first case by at least 50%.
Sensitivity Analysis
Due to the non-linearities of the elasto-geometrical model, explicit sensitivity matrix and coefficients [START_REF] Zi | Error modeling and sensitivity analysis of a hybrid-driven based cable parallel manipulator[END_REF][START_REF] Miermeister | An elastic cable model for cable-driven parallel robots including hysteresis effects[END_REF] cannot be computed. Therefore, the sensitivity of the elastogeometrical model of the CDPR to geometrical and mechanical errors is evaluated statistically. Here, MATLAB has been coupled with modeFRONTIER, a process integration and optimization software platform [17] for the analysis.
The RMS (Root Mean Square) of the static deflection of CAROCA MP is studied. The nominal mass of the MP and the additional mass are equal to 180 kg and 50 kg, respectively.
Influence of mechanical errors
In this section, all the uncertain parameters of the elasto-geometrical CAROCA model are defined with uniformly distributed deviations. The uncertainty ranges and discretization steps are given in Tab. 3. On this basis, 2000 Sobol quasi-random observations are created.
Parameter         | m (kg) | ρ (kg/m) | E (GPa) | a_i (m) | b_i (m) | δt_i (N)
Uncertainty range | ±18    | ±0.01015 | ±18     | ±0.015  | ±0.03   | ±15
Step              | 0.05   | 3×10⁻⁵   | 0.05    | 0.0006  | 0.0012  | 0.1
In this configuration, the operating point of the MP is supposed to be unknown. A large variation range of the modulus of elasticity is considered. The additional mass corresponds to a variation in cable tensions from 574 N to 730 N, which corresponds to a modulus of elasticity of 84.64 GPa. Thus, while the operating point of the MP is unknown, an uncertainty of ± 18 GPa is defined with regard to the measured modulus of elasticity E= 102 GPa.
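A minimal sketch of such a quasi-random study is given below, restricted for brevity to the three scalar parameters m, ρ and E; the rms_static_deflection call stands for the IEGM/DEGM evaluation described above.

```python
import numpy as np
from scipy.stats import qmc

# Nominal values and deviations of Table 3, restricted to the three scalar parameters
nominal = dict(m=180.0, rho=0.1015, E=102.0)      # kg, kg/m, GPa
dev     = dict(m=18.0,  rho=0.01015, E=18.0)

u = qmc.Sobol(d=3, scramble=False).random(2000)   # 2000 quasi-random points in [0, 1]^3
samples = {k: nominal[k] - dev[k] + 2.0 * dev[k] * u[:, j]
           for j, k in enumerate(("m", "rho", "E"))}

# rms = [rms_static_deflection(m, r, e)            # hypothetical call to the IEGM/DEGM
#        for m, r, e in zip(samples["m"], samples["rho"], samples["E"])]
```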
Figure 4a displays the distribution fitting of the static deflection RMS. It shows that the RMS distribution follows a quasi-uniform law whose mean µ_1 is equal to 1.34 mm. The RMS of the static deflection of the MP is bounded between a minimum value RMS_min equal to 1.12 mm and a maximum value RMS_max equal to 1.63 mm, i.e., a variation of 0.51 mm under all uncertainties, which represents 38% of the nominal value of the static deflection.
Figure 4b depicts the RMS of the MP static deflection as a function of simultaneous variations in ρ and E, whose values vary from 0.09135 kg/m to 0.11165 kg/m and from 84.2 GPa to 120.2 GPa, respectively. The static deflection is very sensitive to the cable mechanical behavior. The RMS varies from 0.42 mm to 0.67 mm due to the uncertainties of these two parameters only. As a matter of fact, the higher the cable modulus of elasticity, the smaller the RMS of the MP static deflection. Likewise, the smaller the linear mass of the cable, the smaller the RMS of the MP static deflection. Accordingly, the higher the sag-introduced stiffness, the lower the MP static deflection; besides, the higher the axial stiffness of the cable, the lower the MP static deflection. Figure 4c illustrates the RMS of the MP static deflection as a function of variations in ρ and m, the latter varying from 162 kg to 198 kg. The RMS varies from 0.52 mm to 0.53 mm due to the uncertainties of these two parameters only. The MP mass affects the mechanical behavior of the cables: the heavier the MP, the higher the cable tensions and hence the sag-introduced stiffness, and the smaller the MP static deflection. Therefore, a fine identification of m and ρ is very important to establish a good CDPR model.
Compared to the results plotted in Fig. 4b, it is clear that E affects the RMS of the MP static deflection more than m and ρ. As a conclusion, integrating the cable hysteresis effects into the error model is necessary and improves the force distribution algorithms and the identification of the robot geometrical parameters [START_REF] Miermeister | An elastic cable model for cable-driven parallel robots including hysteresis effects[END_REF].
Influence of geometrical errors
In this section, the cable tension set-points during MP operation are supposed to be known; the modulus of elasticity can thus be calculated around the operating point and its confidence interval is reduced to ±2 GPa. The uncertainty ranges and discretization steps are provided in Tab. 4. Figure 5a displays the distribution fitting of the MP static deflection RMS. It shows that the RMS distribution follows a normal law whose mean µ_2 is equal to 1.32 mm and whose standard deviation σ_2 is equal to 0.01 mm. This deviation is relatively small, which indicates that calibrating the robot through static deflection measurements is not straightforward. The RMS of the static deflection of the MP is bounded between a minimum value RMS_min equal to 1.28 mm and a maximum value RMS_max equal to 1.39 mm, i.e., a variation of 0.11 mm under all uncertainties. The modulus of elasticity affects the static compliance of the MP, which means that the uncertainty in E should always be considered when designing a CDPR model.
The bar charts plotted in Fig. 5b and Fig. 5c present, respectively, the effects of the uncertainties in a_i and b_i, (i=1..8), on the static deflection of the CAROCA for a symmetric (0 m, 0 m, 1.75 m) and a non-symmetric (3.2 m, 1.7 m, 3 m) robot configuration. These effects are determined based on the t-Student index of each uncertain parameter. This index is a statistical tool that estimates the relationships between the outputs and the uncertain inputs. The t-Student test compares the difference between the means of two samples of designs taken randomly in the design space:
• M+ is the mean of the n+ values of an objective S in the upper part of the domain of the input variable,
• M− is the mean of the n− values of an objective S in the lower part of the domain of the input variable.
The t-Student index is defined as
t = |M⁻ − M⁺| / √( V_g²/n⁻ + V_g²/n⁺ )
where V_g is the general variance [START_REF] Courteille | Design optimization of a deltalike parallel robot through global stiffness performance evaluation[END_REF].
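A possible computation of this index from the sampled observations is sketched below; splitting the observations at the midpoint of the input domain is an assumption on how the lower and upper parts are defined.

```python
import numpy as np

def t_student_index(x, y):
    """t-Student effect size of an uncertain input x on an output y.

    The observations are split at the midpoint of the input domain; M-, M+ are the
    output means over the lower and upper halves and V_g^2 is the general variance of y.
    """
    x, y = np.asarray(x), np.asarray(y)
    lower = x <= 0.5 * (x.min() + x.max())
    m_minus, m_plus = y[lower].mean(), y[~lower].mean()
    n_minus, n_plus = lower.sum(), (~lower).sum()
    v_g2 = y.var(ddof=1)
    return abs(m_minus - m_plus) / np.sqrt(v_g2 / n_minus + v_g2 / n_plus)
```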
When the MP is in a symmetric configuration, all attachment points have nearly the same effect size. However, when the MP is located close to the points B_2 and B_4, the effect size of their uncertainties becomes high. Moreover, the effect of the corresponding mobile points (A_2 and A_4) increases. It means that the closer the MP is to a given exit point, the higher the effect of variations in the Cartesian coordinates of that point on the MP static deflection. This can be explained by the fact that, in a non-symmetric position, some cables are longer than others and become slack, so that the sag effect increases. Consequently, a good identification of the geometrical parameters is highly required, and a good calibration that minimizes these uncertainties leads to a better error model.
Conclusion
This paper dealt with the sensitivity analysis of the elasto-geometrical model of CDPRs to mechanical and geometrical uncertainties. The CAROCA prototype was used as a case study. The validity and identifiability of the proposed model were examined for the purpose of CDPR model-based control. This revealed the importance of integrating the cable hysteresis effect into the error modeling to enhance the knowledge of the cable mechanical behavior, especially when there is no feedback from tension measurements. It appears that the effect of geometrical errors on the static deflection of the moving-platform is significant too. Some calibration [START_REF] Dit Sandretto | Certified calibration of a cable-driven robot using interval contractor programming[END_REF][START_REF] Joshi | Calibration of a 6-DOF cable robot using two inclinometers[END_REF] and self-calibration [START_REF] Miermeister | Auto-calibration method for overconstrained cable-driven parallel robots[END_REF][START_REF] Borgstrom | Nims-pl: A cable-driven robot with self-calibration capabilities[END_REF] approaches were proposed to enhance the CDPR performances. More efficient strategies for CDPR calibration will be investigated while considering more sources of errors in future work.
Fig. 2: The i-th closed loop of a CDPR
(Fragment of Table 1: B_7 = (-3.5, -2, 3.5), A_7 = (0.2, -0.15, -0.125), B_8 = (3.5, -2, 3.5), A_8 = (-0.2, -0.15, 0.125).)
Fig. 3: Load-elongation diagram of a steel wire cable measured in steady-state conditions at a rate of 0.05 mm/s
Fig. 4: (a) Distribution of the RMS of the MP static deflection; (b) evolution of the RMS under simultaneous variations of E and ρ; (c) evolution of the RMS under simultaneous variations of m and ρ
Fig. 5: (a) Distribution of the RMS of the MP static deflection; (b) effect of uncertainties in a_i; (c) effect of uncertainties in b_i
Table 1: Cartesian coordinates of anchor points A_i (exit points B_i, resp.) expressed in F_p (in F_b, resp.)
Table 2: Modulus of elasticity during the loading and unloading phases

Modulus of elasticity (GPa) | E_1-5 | E_5-10 | E_5-20 | E_5-30 | E_10-15 | E_10-20 | E_10-30 | E_20-30
Loading                     | 72.5  | 83.2   | 92.7   | 97.2   | 94.8    | 98.3    | 102.2   | 104.9
Unloading                   | 59.1  | 82.3   | 96.2   | 106.5  | 100.1   | 105.1   | 115     | 126.8
Table 3: Uncertainties and steps used to design the error model (values listed in the section on the influence of mechanical errors)
Table 4: Uncertainties and steps used to design the error model

Parameter         | m (kg) | ρ (kg/m) | E (GPa) | a_i (m) | b_i (m) | δt_i (N)
Uncertainty range | ±18    | ±0.01015 | ±2      | ±0.015  | ±0.03   | ±15
Step              | 0.05   | 3×10⁻⁵   | 0.05    | 0.0006  | 0.0012  | 0.1
A Direct Dense Visual Servoing Approach using Photometric Moments
Manikandan Bakthavatchalam, Omar Tahri and François Chaumette
Abstract-In this paper, visual servoing based on photometric moments is advocated. A direct approach is chosen by which the extraction of geometric primitives, visual tracking and image matching steps of a conventional visual servoing pipeline can be bypassed. A vital challenge in photometric methods is the change in the image resulting from the appearance and disappearance of portions of the scene from the camera field of view during the servo. To tackle this issue, a general model for the photometric moments enhanced with spatial weighting is proposed. The interaction matrix for these spatially weighted photometric moments is derived in analytical form. The correctness of the modelling, effectiveness of the proposed strategy in handling the exogenous regions and improved convergence domain are demonstrated with a combination of simulation and experimental results. Index Terms-image moments, photometric moments, dense visual servoing, intensity-based visual servoing
I. INTRODUCTION
Visual servoing (VS) refers to a wide spectrum of closed-loop techniques for the control of actuated systems with visual feedback [START_REF] Chaumette | Visual servoing and visual tracking[END_REF]. A task function is defined from a set of selected visual features, based on the currently acquired image I(t) and the reference image I* learnt from the desired robot pose. In a typical VS pipeline, the image stream is subjected to an ensemble of measurement processes, including one or more image processing, image matching and visual tracking steps, from which the visual features are determined. Based on the nature of the visual features used in the control law, VS methods can be broadly classified into geometric and photometric approaches. The earliest geometric approaches employ as visual features parameters observed in the image of geometric primitives (points, straight lines, ellipses, cylinders) [START_REF] Espiau | A new approach to visual servoing in robotics[END_REF]. These approaches are termed Image-based Visual Servoing (IBVS). In Pose-based Visual Servoing (PBVS) [START_REF] Wilson | Relative end-effector control using cartesian position based visual servoing[END_REF], geometric primitives are used to reconstruct the camera pose, which is then used as input for visual servoing. These approaches are thus dependent on the reliable detection, extraction and subsequent tracking of the aforesaid primitives. While PBVS may be affected by instabilities in pose estimation, IBVS designed from image points may be subject to local minima, singularities, inadequate robot trajectories and a limited convergence domain when the six degrees of freedom are controlled and when the image error is large and/or when the robot has a large displacement to achieve in order to reach the desired pose [START_REF] Chaumette | Visual servoing and visual tracking[END_REF]. This is due to the strong non-linearities and coupling in the interaction matrix of image points. To handle these issues, geometric moments were introduced for VS in [START_REF] Chaumette | Image moments: a general and useful set of features for visual servoing[END_REF]- [START_REF] Tahri | Visual servoing based on shifted moments[END_REF], which allowed obtaining a large convergence domain and adequate robot trajectories, thanks to the reduction of the non-linearities and coupling in the interaction matrix of adequate combinations of moments. However, these methods are afflicted by a serious restriction: their dependency on the availability of well-segmented regions or a set of tracked and matched points in the image. Breaking this traditional dependency, the approach proposed in this paper embraces a more general class, known as dense VS, in which the extraction, tracking and matching of sets of points or well-segmented regions is not necessary. Parts of this work have been presented in [START_REF] Bakthavatchalam | Photometric moments: New promising candidates for visual servoing[END_REF] and [START_REF] Bakthavatchalam | An improved modelling scheme for photometric moments with inclusion of spatial weights for visual servoing with partial appearance/disappearance[END_REF]. Manikandan Bakthavatchalam and François Chaumette are with Inria, Univ Rennes, CNRS, IRISA, Rennes, France. e-mail: Manikandan.Bakthavatchalam@inria.fr, Francois.Chaumette@inria.fr. Omar Tahri is with INSA Centre Val de Loire, Université d'Orléans, PRISME EA 2249, Bourges, France. e-mail: omar.tahri@insa-cvl.fr.
In another suite of geometric methods, an homography and a projective homography are respectively used as visual features in [START_REF] Benhimane | Homography-based 2d visual tracking and servoing[END_REF] and [START_REF] Silveira | Direct visual servoing: Vision-based estimation and control using only nonmetric information[END_REF], [START_REF] Silveira | On intensity-based nonmetric visual servoing[END_REF]. These quantities are estimated by solving a geometric or photo-geometric image registration problem, carried out with non-linear iterative methods. However, these methods require a perfect matching of the template considered in the initial and desired images, which strongly limits their practical relevance.
The second type of methods adopted the photometric approach by avoiding explicit geometric extraction and resorting instead to use the image intensities. A learning-based approach was proposed in [START_REF] Nayar | Subspace methods for robot vision[END_REF], where the intensities were transformed using Principal Component Analysis to a reduced dimensional subspace. But it is prohibitive to scale this approach to multiple degrees of freedom [START_REF] Deguchi | A direct interpretation of dynamic images with camera and object motions for vision guided robot control[END_REF]. The set of intensities in the image were directly used as visual features in [START_REF] Collewet | Photometric visual servoing[END_REF] but the high nonlinearity between the feature space and the state space limits the convergence domain of this method and does not allow obtaining adequate robot trajectories. This direct approach was later extended to omnidirectional cameras [START_REF] Caron | Photometric visual servoing for omnidirectional cameras[END_REF] and to depth map [START_REF] Teulière | A dense and direct approach to visual servoing using depth maps[END_REF].
In this work, instead of using directly the raw luminance of all the pixels, we investigate the usage of visual features based on photometric moments. As it has been shown in [START_REF] Tahri | Point-based and region-based image moments for visual servoing of planar objects[END_REF] that considering geometric moments (built from a set of image points) provides a better behavior than considering directly a set of image points, we will show that considering photometric moments (built from the luminance of the pixels in the image) provides a better behavior than considering directly the luminance of the pixels. These moments are a specific case of the Kernel-based formulation in [START_REF] Kallem | Kernelbased visual servoing[END_REF] which synthesized controllers only for 3D translations and rotation around the optic axis. Furthermore, the analytical form of the interaction matrix of the features proposed in [START_REF] Kallem | Kernelbased visual servoing[END_REF] has not been determined, which makes impossible the theoretical sta-bility analysis of the corresponding control scheme. Different from [START_REF] Kallem | Kernelbased visual servoing[END_REF], the interaction matrix is developed in closed-form in this paper, and most importantly taking into account all the six degrees of freedom, which is the first main contribution of this work. It is shown that this is more general as well as consistent with the current state-of-the-art.
Furthermore, an important practical (and theoretical) issue that affects photometric methods stem from the changes in the image due to the appearance of new portions of the scene or the disappearance of previously viewed portions from the camera field-of-view (FOV). This means that the set of measurements varies along the robot trajectory, with a potential large discrepancy between the initial and desired images, leading to an inconsistency between the set of luminances I(t) in the current image and the set I ⇤ in the desired image, and thus also for the photometric moments computed in the current and desired images. In practice, such unmodelled disturbances influence the system behaviour and may result in failure of the control law. Another original contribution of this work is an effective solution proposed to this challenging problem by means of a spatial weighting scheme. In particular, we determine a weighting function so that a closed-form expression of the interaction matrix can be determined.
The main contributions of this paper lie in the modelling issues related to considering photometric moments as inputs of visual servoing and in the study of the improvements it brings with respect to the pure luminance method. The control scheme we have used to validate these contributions is a classical and basic kinematic controller [START_REF] Chaumette | Visual servoing and visual tracking[END_REF]. Let us note that more advanced control schemes, such as dynamic controllers [START_REF] Mahony | A port-Hamiltonian approach to imagebased visual servo control for dynamic systems[END_REF]- [START_REF] Wang | Adaptive visual tracking for robotic systems without imagespace velocity measurement[END_REF], could be designed from these new visual features.
The sequel of the paper is organized as follows: in Section II, the modelling aspects of photometric moments and the associated weighting strategy are discussed in depth. In Section III, the visual features adopted and the control aspects are discussed. Sections IV and V are devoted to simulations and experimental results. Finally, the conclusions drawn are presented in Section VI.
II. MODELLING
Generalizing the classical definition of image moments, we define a weighted photometric moment of order (p + q) as:
m_pq = ∫∫_π x^p y^q w(x) I(x, t) dx dy   (1)
where x = (x, y) is a spatial point on the image plane ⇡ where the intensity I(x, t) is measured at time t and w(x) is a weight attributed to that measurement. By linking the variations of these moments to the camera velocity v c , the interaction matrix of the photometric moments can be obtained.
ṁpq = L mpq v c (2)
where
L_mpq = [ L^vx_mpq  L^vy_mpq  L^vz_mpq  L^ωx_mpq  L^ωy_mpq  L^ωz_mpq ]. Each L^{v/ω}_mpq ∈ R is a scalar, the superscript v denoting a translational velocity and ω a rotational velocity along or around the x, y or z axis of the camera frame.
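As an illustration, these moments can be computed from a grey-level image as sketched below; the coordinate normalisation and the radial weight (introduced later in Section II-B) are assumptions made for the example.

```python
import numpy as np

def photometric_moments(I, order, K=1.0, a=0.0):
    """Weighted photometric moments m_pq, p + q <= order, following Eq. (1).

    I is a grey-level image; a = 0 gives the uniform weighting (UWPM) and a > 0 the
    radial weight of Eq. (25).  The coordinate normalisation below (unit focal length,
    principal point at the image centre) is an illustrative assumption.
    """
    h, w = I.shape
    x = (np.arange(w) - 0.5 * w) / w
    y = (np.arange(h) - 0.5 * h) / w
    X, Y = np.meshgrid(x, y)
    W = K * np.exp(-a * (X**2 + Y**2) ** 2)
    m = {}
    for p in range(order + 1):
        for q in range(order + 1 - p):
            # discrete approximation of the integral, up to the constant pixel-area factor
            m[(p, q)] = float(np.sum(X**p * Y**q * W * I))
    return m
```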
Taking the derivative of the photometric moments in (1), we have
ṁ_pq = ∫∫_π x^p y^q w(x) İ(x, y) dx dy   (3)
The first step is thus to model the variations in the intensity İ(x, y) that appear in (3). In [START_REF] Collewet | Photometric visual servoing[END_REF] which aimed to use raw luminance directly as visual feature, the intensity variations were modelled using the Phong illumination model [START_REF] Phong | Illumination for computer generated pictures[END_REF] resulting in an interaction matrix with parts corresponding to the ambient and diffuse terms. In practice, use of light reflection models requires cumbersome measurements for correct instantiation of the models. Besides, a perfect model should take into account the type of light source, attenuation model and different possible configurations between the light sources, the vision sensor and the target object used in the scene. Since VS is robust to modelling errors, adding such premature complexity to the models can be avoided. Instead, this paper adopts a simpler and more practical approach by using the classical brightness constancy assumption [START_REF] Horn | Determining optical flow[END_REF] to model the intensity variations, as done in [START_REF] Collewet | Visual servoing set free from image processing[END_REF]. This assumption considers that the intensity of a moving point x = (x, y) remains unchanged between successively acquired images. This is encapsulated in the following well-known equation
I(x + x, t + t) = I(x, t) (4)
where x is the infinitesimal displacement undergone by the image point after an infinitesimal increment in time t. A first order Taylor expansion of (4) around x leads to
∇I^⊤ ẋ + İ = 0   (5)
known as the classical optic flow constraint equation (OFCE), where ∇I^⊤ = [∂I/∂x  ∂I/∂y] = [I_x  I_y] is the spatial gradient at the image point x. Further, the relationship linking the variations of the coordinates of a point in the image with the spatial motions of a camera is well established [START_REF] Chaumette | Visual servoing and visual tracking[END_REF]: ẋ = L_x v_c where
L_x = [ −1/Z    0     x/Z    xy       −(1+x²)    y
          0    −1/Z   y/Z    1+y²     −xy       −x ]   (6)
In general, the depth of the scene points can be considered as a polynomial surface expressed as a function of the image point coordinates [START_REF] Chaumette | Image moments: a general and useful set of features for visual servoing[END_REF].
1/Z = Σ_{p≥0, q≥0, p+q≤n} A_pq x^p y^q   (7)
where n is the degree of the polynomial with n = 1 for a planar scene. Equation ( 7) is a general form with the only assumption that the depth is continuous. In this work however, for simplifying the analytical forms presented, only planar scenes have been considered in the modelling 1 . We will see in Section V-D that this simplification is not crucial by considering non planar environments. Therefore, with n = 1, (7) becomes
1/Z = A x + B y + C   (8)
where A(= A 10 ), B(= A 01 ), C(= A 00 ) are scalar parameters that describe the configuration of the plane in the camera frame. From (5), we can write:
İ(x, y) = −∇I^⊤ ẋ   (9)
By plugging (8) and (6) into (9), we obtain
İ(x, y) = L_I v_c = −∇I^⊤ L_x v_c   (10)
where L_I = −∇I^⊤ L_x is given by:
L_I^⊤ = [ I_x (Ax + By + C)
          I_y (Ax + By + C)
          −(x I_x + y I_y)(Ax + By + C)
          −xy I_x − (1 + y²) I_y
          (1 + x²) I_x + xy I_y
          x I_y − y I_x ]   (11)
Substituting ( 10) into (3), we see that
ṁpq = Z Z ⇡ x p y q w(x) L I v c dx dy (12)
By comparing with (2), we can then identify and write down the interaction matrix of the photometric moments as
L mpq = Z Z ⇡ x p y q w(x) L I dx dy (13)
Direct substitution of (11) into the above equation gives us
L^vx_mpq = ∫∫_π x^p y^q w(x) I_x (Ax + By + C) dx dy
L^vy_mpq = ∫∫_π x^p y^q w(x) I_y (Ax + By + C) dx dy
L^vz_mpq = −∫∫_π x^p y^q w(x) (x I_x + y I_y)(Ax + By + C) dx dy
L^ωx_mpq = −∫∫_π x^p y^q w(x) (xy I_x + (1 + y²) I_y) dx dy
L^ωy_mpq = ∫∫_π x^p y^q w(x) ((1 + x²) I_x + xy I_y) dx dy
L^ωz_mpq = ∫∫_π x^p y^q w(x) (x I_y − y I_x) dx dy
We see that the interaction matrix consists of a set of integrodifferential equations. For convenience and fluidity in the ensuing developments, the following compact notation is introduced.
m^∇x_pq = ∫∫_π x^p y^q w(x) I_x dx dy   (14a)
m^∇y_pq = ∫∫_π x^p y^q w(x) I_y dx dy   (14b)
Each component of the interaction matrix in ( 13) can be easily re-arranged and expressed in terms of the above compact notation as follows:
L^vx_mpq = A m^∇x_{p+1,q} + B m^∇x_{p,q+1} + C m^∇x_{p,q}
L^vy_mpq = A m^∇y_{p+1,q} + B m^∇y_{p,q+1} + C m^∇y_{p,q}
L^vz_mpq = −A m^∇x_{p+2,q} − B m^∇x_{p+1,q+1} − C m^∇x_{p+1,q} − A m^∇y_{p+1,q+1} − B m^∇y_{p,q+2} − C m^∇y_{p,q+1}
L^ωx_mpq = −m^∇x_{p+1,q+1} − m^∇y_{p,q} − m^∇y_{p,q+2}
L^ωy_mpq = m^∇x_{p,q} + m^∇x_{p+2,q} + m^∇y_{p+1,q+1}
L^ωz_mpq = −m^∇x_{p,q+1} + m^∇y_{p+1,q}   (15)
The terms m rx pq and m ry pq have to be evaluated to arrive at the interaction matrix. This in turn would require the computation of the image gradient terms I x and I y , an image processing step performed using derivative filters, which might introduce an imprecision in the computed values. In the following, it is shown that a clever application of the Green's theorem can help subvert the image gradients computation.
The Green's theorem helps to compute the integral of a function defined over a subdomain ⇡ of R 2 by transforming it into a line (curve/contour) integral over the boundary of ⇡, denoted here as @⇡:
∫∫_π ( ∂Q/∂x − ∂P/∂y ) dx dy = ∮_∂π P dx + ∮_∂π Q dy   (16)
With suitable choices of functions P and Q, we aim to transform the terms m rx pq and m ry pq . To compute m rx pq , we let Q = x p y q w(x) I(x) and P = 0. We have @P @y = 0 and @Q @x = px p 1 y q w(x)I(x)+x p y q @w @x I(x)+x p y q w(x)I x [START_REF] Kallem | Kernelbased visual servoing[END_REF] Substituting this back into [START_REF] Teulière | A dense and direct approach to visual servoing using depth maps[END_REF], we can write Z Z ⇡ h p x p 1 y q w(x)I(x) + x p y q @w @x I(x)
+ x p y q w(x) I x i dxdy = I @⇡
x p y q w(x) I(x)dy (18)
Recalling our compact notation in (14a) and rearranging (18), we obtain
m^∇x_pq = −∫∫_π ( p x^{p−1} y^q w(x) I(x) + x^p y^q (∂w/∂x) I(x) ) dx dy + ∮_∂π x^p y^q w(x) I(x) dy
Applying (1) to the first term in the RHS, we have
m^∇x_pq = −p m_{p−1,q} − ∫∫_π x^p y^q (∂w/∂x) I(x) dx dy + ∮_∂π x^p y^q w(x) I(x) dy   (19)
In the same manner, the computation of the term m ry pq is again simplified by employing the Green's theorem with P = x p y q w(x) I(x) and Q = 0.
m^∇y_pq = −q m_{p,q−1} − ∫∫_π x^p y^q (∂w/∂y) I(x) dx dy − ∮_∂π x^p y^q w(x) I(x) dx   (20)
The results ( 19) and ( 20) are generic, meaning there are no explicit conditions on the weighting except that the function is differentiable. Clearly, depending on the nature of the weighting chosen for the measured intensities in (1), different analytical results can be obtained. In the following, two variants of the interaction matrix are developed corresponding to two different choices for the spatial weighting.
A. Uniformly Weighted Photometric Moments (UWPM)
First, the interaction matrix is established by attributing the same importance to all the measured intensities on the image plane. These moments are obtained by simply fixing w(x) = 1, ∀x ∈ π, leading to ∂w/∂x = ∂w/∂y = 0. Subsequently, (19) and (20) reduce to
m^∇x_pq = −p m_{p−1,q} + ∮_∂π x^p y^q I(x, y) dy
m^∇y_pq = −q m_{p,q−1} − ∮_∂π x^p y^q I(x, y) dx   (21)
The second terms in m rx pq and m ry pq are contour integrals along @⇡. These terms represent the contribution of information that enter and leave the image due to camera motion. They could be evaluated directly but for obtaining simple closedform expressions, the conditions under which they vanish are studied. Let us denote I @⇡ = H @⇡
x p y q I(x, y) dy.
The limits y = y m and y = y M are introduced at the top and bottom of the image respectively (see Fig 1a). Since y(= y M ) is constant along C1 and y(= y m ) is constant along C3, it is sufficient to integrate along C2 and C4. Along C 2 , y varies from y M to y m while x remains constant at x M . Along C 4 , y varies from y m to y M while x remains constant at x m . Introducing these limits, we get
I @⇡ = x p M ym Z y M y q I(x M , y)dy + x p m y M Z ym y q I(x m , y)dy If I(x M , y) = I(x m , y) = I,
8y, then we have
I @⇡ = (x p M x p m ) I ym Z y M y q dy
Since we want I @⇡ = 0, the only solution is to have I = 0, that is when the acquired image is surrounded by a uniformly colored black2 background. This assumption, named information persistence (IP) was already implicitly done in [START_REF] Kallem | Kernelbased visual servoing[END_REF] [START_REF] Swensen | Empirical characterization of convergence properties for kernel-based visual servoing[END_REF]. It does not need not be strictly enforced. In fact, mild violations of the IP assumption were deliberately introduced in experiments (refer IV-B) and this was quite acceptable in most cases, as evidenced by our results. This assumption gets naturally eliminated when appropriate weighting functions are introduced in the moments formulation as shown in II-B.
Substituting ( 22) into (15), we get the final closed form expression for the interaction matrix.
L^vx_mpq = −A(p + 1) m_pq − B p m_{p−1,q+1} − C p m_{p−1,q}
L^vy_mpq = −A q m_{p+1,q−1} − B(q + 1) m_{p,q} − C q m_{p,q−1}
L^vz_mpq = A(p + q + 3) m_{p+1,q} + B(p + q + 3) m_{p,q+1} + C(p + q + 2) m_{pq}
L^ωx_mpq = q m_{p,q−1} + (p + q + 3) m_{p,q+1}
L^ωy_mpq = −p m_{p−1,q} − (p + q + 3) m_{p+1,q}
L^ωz_mpq = p m_{p−1,q+1} − q m_{p+1,q−1}   (23)
The interaction matrix in ( 23) has a form which is exactly identical to those developed earlier for the geometric moments [START_REF] Chaumette | Image moments: a general and useful set of features for visual servoing[END_REF]. A consistency with previously developed results is thus observed even though the method used for the modelling developments differ completely from [START_REF] Chaumette | Image moments: a general and useful set of features for visual servoing[END_REF]. Consequently, all the useful results available in the state of the art with regards to the developments of visual features [START_REF] Tahri | Point-based and region-based image moments for visual servoing of planar objects[END_REF] [8] are applicable as they are for the proposed photometric moments. Unlike [START_REF] Caron | Photometric visual servoing for omnidirectional cameras[END_REF], the image gradients do not appear anymore in the interaction matrix. Their computation is no longer necessary. The developments presented have led to the elimination of this image processing step required by pure luminance-based visual servoing [START_REF] Collewet | Photometric visual servoing[END_REF]. The computation of the interaction matrix is now reduced to a simple and straight-forward computation of the moments on the image plane. Note also that in order to calculate L mpq , only moments of order upto p + q + 1 are required. In addition, we note that as usual in IBVS, the interaction matrix components corresponding to the rotational degrees of freedom are free from 3D parameters.
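For illustration, one row of (23) can be evaluated from the moments as sketched below, assuming the planar-scene parameters A, B and C are available; the moment container follows the photometric_moments sketch given earlier.

```python
def L_mpq_uwpm(m, p, q, A, B, C):
    """One row of the interaction matrix of the uniformly weighted moment m_pq, Eq. (23).

    m is a dictionary of moments (e.g. as returned by the photometric_moments sketch);
    moments of order up to p+q+1 are needed.  A, B, C describe the plane 1/Z = Ax+By+C.
    """
    def g(i, j):                                   # m_ij, taken as 0 for negative indices
        return m.get((i, j), 0.0) if i >= 0 and j >= 0 else 0.0
    Lvx = -A * (p + 1) * g(p, q) - B * p * g(p - 1, q + 1) - C * p * g(p - 1, q)
    Lvy = -A * q * g(p + 1, q - 1) - B * (q + 1) * g(p, q) - C * q * g(p, q - 1)
    Lvz = (A * (p + q + 3) * g(p + 1, q) + B * (p + q + 3) * g(p, q + 1)
           + C * (p + q + 2) * g(p, q))
    Lwx = q * g(p, q - 1) + (p + q + 3) * g(p, q + 1)
    Lwy = -p * g(p - 1, q) - (p + q + 3) * g(p + 1, q)
    Lwz = p * g(p - 1, q + 1) - q * g(p + 1, q - 1)
    return [Lvx, Lvy, Lvz, Lwx, Lwy, Lwz]
```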
B. Weighted Photometric Moments (WPM)
In order to remove the IP assumption we do not attribute anymore an equal contribution to all the measured intensities (w(x) 6 = 1, 8x 2 @⇡), as was done in Sec II-A. Instead, a lesser importance is attributed to peripheral pixels, on which the appearance and disappearance effects are pronounced. To achieve this, the spatial weighting function is made to attribute maximal importance to the pixels in the area around the image center and smoothly reducing it radially outwards towards 0 at the image periphery. If w(x, y) = 0, 8x 2 @⇡, this still ensures I @⇡ = 0 obviating the need to have any explicit IP assumption anymore.
Weighting scheme: The standard logistic function l(x) = 1 1+e x smoothly varies between 0 and 1 and has simple derivatives. It is a standard function that is used in machine learning. However, if used to design w(x), it is straight-forward to check that the interaction matrix cannot be expressed as functions of the weighted photometric moments. To achieve this, we propose to use functions with the general structure:
F(x) = K exp(p(x))   (24)
with
p(x) = a_0 + a_1 x + (1/2) a_2 x² + (1/3) a_3 x³ + ... + (1/n) a_n x^n.
Indeed, functions of this structure possess the interesting property that their derivatives can be expressed in terms of the function itself:
F'(x) = K exp(p(x)) p'(x) = p'(x) F(x)
with p'(x) = a_1 + a_2 x + a_3 x² + ... + a_n x^{n−1}. In line with the above arguments, we propose the following custom exponential function (see Fig 1b)
w(x, y) = K exp(−a (x² + y²)²)   (25)
where K is the maximum value that w can attain and a can be used to vary the area which receives maximal and minimal weights respectively. This choice allows the interaction matrix to be obtained directly in closed-form as a function of the weighted moments. Therefore, no additional computational overheads are introduced since nothing other than weighted moments upto a specific order are required. In addition, the symmetric function to which the exponential is raised ensures that the spatial weighting does not alter the behaviour of weighted photometric moments to planar rotations. The spatial derivatives of (25) are as follows: 8 > < > :
∂w/∂x = −4 a x (x² + y²) w(x)
∂w/∂y = −4 a y (x² + y²) w(x)   (26)
Substituting ( 26) into ( 19) and ( 20), we obtain
m^∇x_pq = −p m_{p−1,q} + 4a (m_{p+3,q} + m_{p+1,q+2})
m^∇y_pq = −q m_{p,q−1} + 4a (m_{p,q+3} + m_{p+2,q+1})   (27)
By combining (27) with the generic form in [START_REF] Caron | Photometric visual servoing for omnidirectional cameras[END_REF], the interaction matrix of photometric moments w L mpq weighted with the radial function ( 25) is obtained.
w L mpq = h w L vx mpq w L vy mpq w L vz mpq w L !x mpq w L !y mpq w L !z mpq i (28) with w L vx mpq = L vx mpq + 4 a A (m p+4,q + m p+2,q+2 ) + 4 a B (m p+3,q+1 + m p+1,q+3 ) + 4 a C (m p+3,q + m p+1,q+2 ) w L vy mpq = L vy mpq + 4 a A (m p+3,q+1 + m p+1,q+3 ) + 4 a B (m p,q+4 + m p+2,q+2 ) + 4 a C (m p,q+3 + m p+2,q+1 ) w L vz mpq = L vz mpq 4 a A (m p+5,q + 2m p+3,q+2 + m p+1,q+4 ) 4 a B (m p+4,q+1 + 2m p+2,q+3 + m p,q+5 ) 4 a C (m p+4,q + 2m p+2,q+2 + m p,q+4 ) w L !x mpq = L !x 1mpq 4 a(m p+4,q+1 + 2 m p+2,q+3 + m p,q+3 + m p+2,q+1 + m p,q+5 ) w L !y mpq = L !y 1mpq + 4 a(m p+3,q + m p+1,q+2 + m p+5,q + 2 m p+3,q+2 + m p+1,q+4 ) w L !z mpq = L !z mpq = pm p 1,q+1 qm p+1,q 1
We note that the interaction matrix can be expressed as a matrix sum
^w L_mpq = L_mpq + 4a L_w   (29)
where L_mpq has the same form as (23). We note however that the moments are now computed using the weighting function in (25). The matrix L_w is tied directly to the weighting function. Of course, if a = 0, which means w(x) = 1, ∀x ∈ π, we find ^w L_mpq = L_mpq.
To compute L_mpq, moments of order up to (p + q + 1) are required, whereas L_w is a function of moments m_tu, where t + u ≤ p + q + 5. This is in fact a consequence of the term (x² + y²)² to which the exponential is raised (see (25)).
On observation of the last component of w L mpq , we see that it does not contain any new terms when compared to [START_REF] Collewet | Visual servoing set free from image processing[END_REF]. That is, the weighting function has not induced any extra terms, thus retaining the invariance of the classical moment invariants to optic axis rotations. This outcome was of course desired from the symmetry of the weighting function. On the other hand, if we consider the other five components, additional terms are contributed by the weighting function. As a result, moment polynomials developed from the classical moments [START_REF] Tahri | Point-based and region-based image moments for visual servoing of planar objects[END_REF] will not be invariant to translational motions when used with WPM. Thus, there is a need to develop new invariants for use with WPM such that they would retain their invariance to translations. This is an open problem that is not dealt with in this paper. Finally and as usual, the components of the interaction matrix corresponding to the rotational motions are still free from any 3D parameters.
Weighted photometric moments allow visual servoing on scenes prone to appearance and disappearance effects. Moreover, the interaction matrix has been developed in closed-form in order to facilitate detailed stability and robustness analyses.
The above developments would be near-identical for other weighting function choices of the form given by ( 24) [START_REF] Bakthavtachalam | Utilisation of photometric moments in visual servoing[END_REF].
III. VISUAL FEATURES AND CONTROL SCHEME
The photometric moments are image-based measurements m(t) = (m 00 (t), m 10 (t), m 01 (t), ...) obtained from the image I(t). To control n ( 6) degrees of freedom of the robot, a large set of k (> n) individual photometric moments could be used as input s to the control scheme: s = m(t). However, this would lead to redundant features, for which it is well known that, at best, only the local asymptotic stability can be demonstrated [START_REF] Chaumette | Visual servoing and visual tracking[END_REF]. That is why we prefer to use the same strategy as in [START_REF] Chaumette | Image moments: a general and useful set of features for visual servoing[END_REF]- [START_REF] Tahri | Visual servoing based on shifted moments[END_REF], that is, from the set of available measurements m(t), we design a set of n visual features s = s(m(t)) so that L s is of full rank n and has nice decoupling properties. The interaction matrix L s can easily be obtained from the matrices L mpq 2 R 1⇥6 modelled in the previous section. Indeed, we have:
L_s = (∂s/∂m) L_m   (30)
where L m is the matrix obtained by stacking the matrices L mpq . Then, the control scheme with the most basic and classical form has been selected [START_REF] Chaumette | Visual servoing and visual tracking[END_REF]:
v_c = −λ L̂_s^{−1} (s − s*)   (31)
where s* = s(m*) and L̂_s is an estimation or an approximation of L_s. Such an approximation or estimation is indeed necessary since, as detailed in the previous section, the translational components of L_mpq are functions of the 3D parameters A_pq describing the depth map of the scene. Classical choices are L̂_s = L_s(s(t), Ẑ(t)), where Z = (A, B, C), when an estimation of Z is available, L̂_s = L_s(s(t), Ẑ*), or even L̂_s = L_s(s*, Ẑ*). Another classical choice is to use the mean L̂_s = ½ ( L_s(s(t), Ẑ(t)) + L_s(s*, Ẑ*) ) or L̂_s = ½ ( L_s(s(t), Ẑ*) + L_s(s*, Ẑ*) ) since it was shown to be efficient for very large camera displacements [START_REF] Malis | Improving vision-based control using efficient second-order minimization techniques[END_REF].
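A minimal sketch of the resulting controller is given below; the pseudo-inverse is used so that the same expression covers the square and redundant cases.

```python
import numpy as np

def control_law(s, s_star, L_s_hat, lam=1.0):
    """Kinematic controller of Eq. (31): v_c = -lam * inv(L_s_hat) (s - s*)."""
    return -lam * np.linalg.pinv(np.atleast_2d(L_s_hat)) @ (np.asarray(s) - np.asarray(s_star))
```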
With such a control scheme, it is well known that the global asymptotic stability (GAS) of the system in the Lyapunov sense is ensured if the following sufficient condition holds [START_REF] Chaumette | Visual servoing and visual tracking[END_REF]:
L_s L̂_s^{−1} > 0   (32)
Of course, in case c L s = L s , the system is GAS if L s is never singular, and a perfect decoupled exponential decrease of the error s s ⇤ is obtained. Such a perfect behavior is not obtained as long as c L s 6 = L s , but the error norm will decrease and the system will converge if condition (32) is ensured. This explains the fact that a non planar scene can be considered in practice (see Section V-D), even if the modelling developed in the previous section was limited to the planar case.
A. Control of SCARA motions
Photometric moments-based visual features can be used to control not only the subset of SE(3) motions considered in [START_REF] Kallem | Kernelbased visual servoing[END_REF] but also full 6 dof motions. In the former case, the robot is configured for SCARA (3T+1R, n = 4) type actuation to control only the planar translation, translation along the optic axis and rotation around the optic axis. The camera velocity is thus reduced to v cr = (v x , v y , v z , ! z ). Similarly to [START_REF] Tahri | Point-based and region-based image moments for visual servoing of planar objects[END_REF], the following set of 4 visual features is used to control these 4 dofs.
s_r = (x_n, y_n, a_n, α)   (33)
where
x_n = a_n x_g,   y_n = a_n y_g,   a_n = Z* √(m*_00 / m_00),   α = (1/2) arctan( 2 μ_11 / (μ_20 − μ_02) )   (34)
with (x_g, y_g) = (m_10/m_00, m_01/m_00) the coordinates of the center of gravity, m*_00 the value of m_00 at the desired pose, and μ_pq the centered moments.
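These four features can be assembled from the moments as sketched below; the expression used for α, taken from the classical moment-based visual servoing features, is an assumption of the sketch.

```python
import numpy as np

def scara_features(m, Z_star, m00_star):
    """Features s_r = (x_n, y_n, a_n, alpha) of Eqs. (33)-(34) computed from the moments m."""
    m00, m10, m01 = m[(0, 0)], m[(1, 0)], m[(0, 1)]
    xg, yg = m10 / m00, m01 / m00
    a_n = Z_star * np.sqrt(m00_star / m00)
    mu11 = m[(1, 1)] - xg * m01          # centred moments needed for the orientation
    mu20 = m[(2, 0)] - xg * m10
    mu02 = m[(0, 2)] - yg * m01
    alpha = 0.5 * np.arctan2(2.0 * mu11, mu20 - mu02)
    return np.array([a_n * xg, a_n * yg, a_n, alpha])
```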
From the simple relations between s r and m pq , (p + q < 3), it is quite simple to determine the analytical form of the interaction matrix L sr using (30). When the target is parallel to the image plane (A = B = 0), the following sparse matrix is obtained for UWPM.
L_sr = [ L_xn ; L_yn ; L_an ; L_α ] =
[ −1    0     0    y_n
   0   −1     0   −x_n
   0    0    −1     0
   0    0     0    −1 ]   (35)
Let us note that the current value of the depth does not appear anywhere in L sr and only the desired value Z ⇤ intervenes indirectly through x n and y n , and thus in L sr . This nice property and the sparsity in (35) justify the choice of s r . Following the line of analysis at the start of this section, we infer that the control law using d L sr = L sr (s r (t), Z ⇤ ) is GAS since L sr is always of full rank 4 and L sr d L sr
1 = I 4 when c Z ⇤ = Z ⇤ .
Let us now consider the more general case where c Z ⇤ 6 = Z ⇤ . From (35), it is straight-forward to obtain
L_s L̂_s^{−1} =
[ 1  0  0  Y
  0  1  0  X
  0  0  1  0
  0  0  0  1 ]   (36)
where
Y = ( Ẑ*/Z* − 1 ) y_n and X = ( 1 − Ẑ*/Z* ) x_n.
The eigenvalues of the symmetric part of the above matrix product are given by λ = {1, 1, 1 ± √(X² + Y²)/2}. For (32) to hold, all eigenvalues have to be positive, that is, √(X² + Y²)/2 < 1, i.e., X² + Y² < 4.
Back-substitution of X and Y yields the following bounds for system stability:
1 − 2/√(x_n² + y_n²)  <  Ẑ*/Z*  <  1 + 2/√(x_n² + y_n²)   (37)
which are easily ensured in practice since x n and y n are small (0.01 typically).
Let us now consider the case where L̂_s = −I_4, which is a coarse approximation. In that case, we obtain
L_s L̂_s^{−1} =
[ 1  0  0  −y_n
  0  1  0   x_n
  0  0  1    0
  0  0  0    1 ]   (38)
Then, proceeding as previously leads to the following condition for GAS x 2 n + y 2 n < 4 (39) which, once again, is always ensured in practice. Note that these satisfactory theoretical results have not been reported previously and are an original contribution of this work. Unfortunately, exhibiting similar conditions for the WPM case is not so easy since the first three columns of L sr are not as simple as (35) due to the loss of invariance property of WPM.
B. 6 dof control
To control all the 6 dof, two more features in addition to (33) are required. In moments-based VS methods, these features are chosen as ratios of moment polynomials which are invariant to 2D translations, planar rotation and scale. In [START_REF] Tahri | Point-based and region-based image moments for visual servoing of planar objects[END_REF], [START_REF] Tahri | Visual servoing based on shifted moments[END_REF], several moment invariants-based visual features have been introduced. In principle, all these previous results could be adopted for use with the photometric moments proposed in this work. Certainly, an exhaustive exploration of all these choices is impractical. Based on several simulations and experimental convergence trials (see [START_REF] Bakthavtachalam | Utilisation of photometric moments in visual servoing[END_REF]), the following visual feature introduced in [START_REF] Tahri | Visual servoing based on shifted moments[END_REF] was selected:
r = 1 / 2 (40)
with
⇢ 1 = 3μ 30 μ12 + μ2 30 + 3μ 03 μ21 + μ2 03 2 = μ30 μ12 + μ2 21 μ03 μ21 + μ2 12 ( 41
)
where μpq is the shifted moment of order p + q with respect to shift point x sh (x sh , y sh ) defined by [START_REF] Tahri | Visual servoing based on shifted moments[END_REF]: μpq = Z Z (x x g + x sh ) p (y y g + y sh ) q w(x)I(x) dx dy To sum up, the shifted moments in (42) are computed with respect to P 1 and P 2 , resulting in two different sets of shifted moments. Then, the feature in (40) is computed employing these two sets of moments to derive two corresponding visual features r P1 and r P2 . Therefore, the following set of visual features for controlling the 6 dof is obtained:
s = (x n , y n , a n , r P1 , r P2 , ↵) (44)
The interaction matrix developments of r P1 and r P2 are provided in Appendix A. When UWPM are used, the interaction matrix L s exhibits the following sparse structure when the sensor and target planes are parallel. The matrix E is non-singular if its left 2 ⇥ 2 submatrix has a non-zero determinant. When the interaction matrix is computed with moments from shift points (P 1 6 = P 2 ) as described above, this condition is effortlessly ensured. As a result, the interaction matrix L || s is non-singular [START_REF] Tahri | Visual servoing based on shifted moments[END_REF]. On the other hand, when the features are built from WPM, the sparsity in (45) cannot be achieved anymore. This is because L s has a more complex form, except for its last column which remains exactly the same (since behaviour with respect to optic axis rotations is not altered). Nevertheless, the obtained results were quite satisfactory for a variety of scenes and camera displacements, as shown in the next section.
L || s = I 3 D 0 3 E (
IV. VALIDATION RESULTS FOR UWPM
A. Modelling Validation and Comparison to Pure Luminance
In this section, simulation results of 6 dof positioning tasks are presented to demonstrate the correctness of the modelling of the UWPM proposed in Sec.II-A and to compare their behavior to pure luminance. The initial and desired images are shown in Figs. 3a and3b respectively. The background is empty without appearance or disappearance of scene portions in the camera view. The initial pose is chosen far away from the desired one such that the image overlap is small. The displacements required for convergence are a translation of t = [1.0m, 1.0m, 1.0m] and a rotation of R = [25 , 10 , 55 ].
The control law in (31) is used with c L s = L s (s(t), Z(t)). This control law is expected to result in a pure exponential decrease of the errors to 0. In simulation, the depths Z(t) are readily available from ground truth and need not be estimated. A gain of = 1.0 was used for this experiment.
As seen from Fig 3c, a perfect exponential decrease of the errors is indeed obtained as expected. Furthermore, the camera traces a straight-forward path to the goal pose as shown in Fig 3d . This demonstrates the validity of the modelling steps and the design of the visual features. Let us note that no image processing (image matching or visual tracking) were used with the photometric moments in the reported experiments.
Comparison to pure luminance: Then, the same control law configuration was tested using pure luminance directly as visual feature, that is using v
B. Experimental Results with UWPM
Experiments were performed at video rate on a Viper850 6 dof robot. Unlike in Sec IV-A, mild violations of the IP assumption are deliberately allowed. The photometric moments are tested first on SCARA-type motions and then with 6 dof.
1) SCARA motions: For this experiment, the features in (33) are used with their current interaction matrix b L s = c L s (s(t), c Z ⇤ ), with c Z ⇤ = (0, 0, 1/ Ẑ⇤ ), Ẑ⇤ roughly approximated with depth value at the desired pose. A gain of = 1.5 was used. The desired image is shown in Figure 5b. The initial pose is chosen such that the image in 5a is observed by the camera. The target is placed such that very small portions of its corners are slightly outside the field of view (see Fig 5a). Furthermore, the background is not perfectly black, thereby non-zero. It can be observed from Fig 6c that the decrease in errors is highly satisfactory while we recall that only the interaction matrix at the desired configuration and approximate depth were employed. The generated velocity profiles are also smooth as shown in Fig. 6d. Clearly, the camera spatial trajectory is close to a geodesic as shown in Figure IV-B2. Further, an accuracy of [ 0.56mm, 0.08mm, 0.14mm] in translation and [ 0.01 , 0.04 , 0.03 ] in rotation was obtained. The above experimental results showed results with UWPM where there are only mild violations of the IP assumption. Next, we show results on more general scenes with WPM where this restrictive assumption (black background) has been eliminated.
V. VALIDATION RESULTS FOR WPM
For all the experiments presented in this section, the parameter K = 1 is fixed, so maximum weight a pixel can have is 1. Then, a is chosen with a simple heuristic, that 40% of the image pixels will be assigned a weight greater than 0.5 and around 90% a weight greater than 0.01. This is straightforward to compute from the definition of w(x, y). For an image resolution of 640 ⇥ 480 for example, with K = 1, a = 650 satisfies the above simple heuristic. The surface of w(x, y) with these parameters is depicted in Fig. 1b. Let us note that the tuning of these parameters is not crucial. In our case, changing a by ±200 does not introduce any drastic changes in the results.
A. Validation of WPM
In this section, the modelling of WPM is validated using 6 dof positioning tasks in simulation. No specific backgrounds are considered anymore since the WPM designed in Section II-B are equipped to handle such scenarios. Comparisons to both the pure luminance feature and to moments without the weighting strategy are made.
The image learnt from the desired pose is shown in Fig 7b . In the image acquired from the initial robot pose (see Fig 7a ), a large subset of pixels not present in the desired image have appeared. In fact, there is no clear distinction of which pixels constitute the background. These scenarios are more representative of camera-mounted robotic arms interacting with real world objects. For the control, the set of visual features (44) is adopted with the current interaction matrix L s (s(t), Z(t)). The depths are not estimated but available from the ground truth data. A gain of = 1.5 was used for all the experiments. The resulting behaviour is very satisfactory. The errors in the visual features decrease exponentially as shown in Figures 7c and7d. This confirms the correctness of the modelling steps used to obtain the interaction matrix of WPM. Naturally, the successful results also imply the correctness of the visual features obtained from the weighted moments.
Comparison with UWPM: For the comparison, the same experiment is repeated with the same control law but without the weighting strategy. In this case, the errors appear to decrease initially (see Figs. 8a and 8b). However, after about 25 iterations the system diverges (see Fig. 8c) and the servo is stopped after a few more iterations. As expected, the system in this case is clearly affected by the appearance and disappearance of parts of the scene.
Comparison to pure luminance: Next, we also compared the WPM with the pure luminance feature. In this case too, the effect of the extraneous regions is severe and the control law does not converge to the desired pose. The generated velocities do not regulate the errors satisfactorily (see Fig. 8d). This can be compared to the case of the WPM, where the error norm decreases exponentially as shown in Fig. 8e. Also, as mentioned previously, the visual features are redundant and there is no mapping of individual features to the actuated dof. The servoing behaviour depends on the profile of the cost function, which is dependent on all the intensities in the acquired image. The appearance and disappearance of scene portions thus also affects the direct visual servoing method. We therefore see that the extraneous regions have resulted in the worst-case effect, namely non-convergence to the desired pose, both with the UWPM and with the pure luminance feature. Next, we discuss results obtained from servoing on a scene different from the one used in this experiment.
B. Robustness to large rotations
In this simulation, we consider 4 dof and very large displacements such that large scene portions enter and leave the camera field of view (see Figures 9a and 9b).
A rotation of 100° around the optic axis and a translational displacement of ${}^{c*}\mathbf{t}_c$ = [5 cm, 4 cm, 25 cm] are required for convergence. For this experiment, the VS control law in (31) with the features in (33) is used with a gain of λ = 2. For this difficult task, the mean interaction matrix

$\widehat{\mathbf{L}}_\mathbf{s} = \frac{1}{2}\left(\mathbf{L}_\mathbf{s}(\mathbf{s}(t), {}^c\mathbf{Z}^*) + \mathbf{L}_\mathbf{s}(\mathbf{s}^*, {}^c\mathbf{Z}^*)\right)$

has been selected in the control scheme. Note that the depths are not updated at each iteration and are only approximated using Z* = 1. This choice was on purpose, to show that online depth estimation is not necessary and that an approximation of its value at the desired pose is sufficient for convergence. The visual servoing converged to the desired pose with an accuracy of 0.29° in rotation and [0.07 mm, 0.48 mm, 0.61 mm] in translation. The control velocities generated are shown in Fig. 9d and the resulting Cartesian trajectories are shown in Fig. 9e. This experiment demonstrates the robustness of the WPM to very large displacements, even when huge parts of the image appear and disappear. This also affirms that the convergence properties are improved with the proposed WPM.
C. Empirical Convergence Analysis
In this section, we compare through simulations the convergence domain of WPM with pure luminance and UWPM. For this, we considered the 4 dof case as in [START_REF] Swensen | Empirical characterization of convergence properties for kernel-based visual servoing[END_REF]. Artificially generated synthetic scenes in which polygonal blocks are sprinkled at the image periphery were employed. As seen from Fig. 10, this allows simulating in varying degrees the appearance and disappearance of scene portions in the camera FOV. For this analysis, the desired pose to be attained is fixed at a depth of 1.8 m. Positioning tasks starting from 243 different initial poses, consisting of 3 sets of 81 poses each, conducted at 3 different depths of 1.8 m, 1.9 m and 2.0 m, were considered. In all these initial poses, the camera is subjected to a rotation of 25° around the optic axis while the x and y translations vary from -0.2 m to 0.2 m.
The interaction matrix $\widehat{\mathbf{L}}_\mathbf{s} = \mathbf{L}_\mathbf{s}(\mathbf{s}^*, {}^c\mathbf{Z}^*)$ is chosen in the control scheme, just like in previous works on convergence analysis [START_REF] Collewet | Photometric visual servoing[END_REF] [START_REF] Teulière | A dense and direct approach to visual servoing using depth maps[END_REF]. We consider an experiment to have converged if the task error norm ||e|| is reduced to less than 1e-10 in a maximum of 300 iterations. In addition to this condition, we also impose that the SSD error between the final and learnt images, defined by $e_{SSD} = \sum_x [I(x) - I^*(x)]^2 / N_{pix}$, is less than 1.0. This criterion ensures that a non-desired equilibrium point is not wrongly considered as converged. In the reported converged experiments, the final accuracy in pose is less than 1 mm for translations and less than 1° for the planar rotation. The UWPM met with failure in all the cases. No segmentation or thresholding is employed and the servo is subjected to appearance and disappearance effects at the image periphery. A dismal performance resulted, as expected, without the weighting strategy, since the model is not equipped to handle the energy inflow and outflow at the image periphery. With respect to UWPM, the same set of experiments was repeated using a dense texture (see Fig. 11), where the WPM yield a better result than non-weighted moments. The non-weighted moments converged on average only in 55% of the cases. Also note that this is different from the synthetic case at 0%, where they were completely unable to handle the entry and exit of extraneous regions. In comparison, for WPM, only 3 cases failed to converge out of 243 total runs, a very satisfactory convergence rate of 98%. In fact, in the first two sets of experiments, WPM converged for all the generated poses, yielding a 100% convergence rate. No convergence to any undesired equilibrium point was observed, thanks to the textured object. The final accuracy for all the converged experiments was less than 1 mm in translation and less than 1° in rotation. Based on the clear improvements in convergence rate, we conclude that WPM are effective as a solution to the problem of extraneous image regions and result in a larger convergence domain in comparison to classical non-weighted moments. We finally have to note that for larger lateral displacements all methods fail, since the initial and desired images do not share sufficient common information.
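The two acceptance tests above can be written compactly as follows (an illustrative sketch of ours, not the authors' code); the image arrays and error vector are assumed to be NumPy arrays.

    import numpy as np

    def has_converged(task_error, I_final, I_learnt, max_iterations_reached):
        """Convergence test used in the empirical analysis: ||e|| < 1e-10 within the
        iteration budget, plus an SSD check against a non-desired equilibrium."""
        e_ssd = np.sum((I_final.astype(float) - I_learnt.astype(float)) ** 2) / I_final.size
        return (not max_iterations_reached
                and np.linalg.norm(task_error) < 1e-10
                and e_ssd < 1.0)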
D. Robustness to non planar environments
In this section, visual servoing with WPM is demonstrated on a non planar scene with the Viper850 robot by considering 4 dof as previously. A realistic scenario is emulated by placing five 3D objects of varying shape, size and color in the scene, as shown in Fig. 12f. In the initial acquired image (see Fig. 12a), 3 out of these 5 objects are not fully visible. The WPM were in fact conceived for use in such scenarios. A rotational displacement of 10° around the optic axis and translations of ${}^{c*}\mathbf{t}_c$ = [1.5 cm, 1 cm, 8 cm] are required for convergence. Once again, the mean interaction matrix

$\widehat{\mathbf{L}}_\mathbf{s} = \frac{1}{2}\left(\mathbf{L}_\mathbf{s}(\mathbf{s}(t), {}^c\mathbf{Z}^*) + \mathbf{L}_\mathbf{s}(\mathbf{s}^*, {}^c\mathbf{Z}^*)\right)$

has been selected in the control scheme. The depth distributions in the scene are not estimated nor known a priori. An approximation ${}^c\mathbf{Z}^* = (0, 0, 1/\hat{Z}^*)$ with $\hat{Z}^* = 0.5$ m was used. A gain of λ = 0.4 was employed. The control law generates camera velocities that decrease exponentially (see Fig. 12d), which causes a satisfactory decrease in the feature errors (see Fig. 12c). The average accuracy in positioning in translations is 0.6 mm while the rotational accuracy is 0.15°. The camera spatial trajectory is satisfactory as seen from Fig. 12e. The simplification [START_REF] Tahri | Point-based and region-based image moments for visual servoing of planar objects[END_REF] of a planar scene introduced in the modelling (see Section II) is therefore a reasonable tradeoff of complexity, even if it is not possible to demonstrate that the sufficient stability condition (32) is ensured since $\widehat{\mathbf{L}}_\mathbf{s} \neq \mathbf{L}_\mathbf{s}$. This demonstrates the robustness of visual servoing with respect to (moderate) modelling approximations.
E. 6 dof experimental results
Several 6 dof positioning experiments were conducted on the ViPER 850 robot. A representative one is presented below, while the others can be consulted in [START_REF] Bakthavtachalam | Utilisation of photometric moments in visual servoing[END_REF]. For this experiment, the desired robot pose is such that the camera is at 0.5 m in a frontoparallel configuration in front of the target. The image learnt from this pose is shown in Fig. 13b. The initial pose is chosen such that the image in Fig. 13a is observed. Let us note that Lauren Bacal, present in the left part of the desired image, is completely absent from the initial image. The corresponding difference image is shown in Fig. 13c. There is no monotone segmented object, and the assumption about a uniform black background is clearly not valid in this case. Nominal displacements of [0.35 cm, 1.13 cm, 6.67 cm] in translation and [0.33°, 1.05°, 12.82°] in rotation are required for convergence. The control law in (31) with the features in (44) is used, with $\widehat{\mathbf{L}}_\mathbf{s}$ taken as the mean of the desired and current interaction matrices. No pose estimation is performed and the depth is approximated roughly as 0.5 m. The appearance of new scene portions in the camera view from the left side of the image does not affect the convergence of the visual servo. This influx of information is handled gracefully thanks to the improved modelling used by the WPM. The error in the features related to the control of rotational motions is very satisfactory (see Fig. 13e). On the other hand, from the error decrease in the features related to the control of translational motions in Figure 13d, it can be seen that the error in the feature a_n is noisy. This feature is based on the area moment m_00, directly related to the quantity of pixels in the image. Since the lighting conditions are not controlled, this might sometimes contribute some noise to the features. It is also to be noted that when the interaction matrix is updated at each iteration (for the mean configuration in this case), this noise in the features sometimes makes the velocities noisy as well (see Figure 13f). However, this noise does not affect the satisfactory convergence, as evidenced by our results. A satisfactory Cartesian behaviour was obtained, as shown in Fig. 13g. The final accuracy in translation is [0.05 mm, 1.1 mm, 0.08 mm] and in rotation [0.18°, 0.006°, 0.019°]. Let us finally note that a superior strategy would be to use the photometric moments during the beginning of the servo and to switch over to the pure luminance feature near convergence (when the error norm is below a certain lower bound). This strategy would ensure both an enhanced convergence domain thanks to the photometric moments and excellent accuracies at convergence thanks to the luminance feature.
Let us finally note that it is possible to use a normalized intensity level in order to be robust to global lighting variations. Such a normalization can be easily obtained by computing in a first step the smallest and highest values observed in the image. This simple strategy does not modify any modelling step presented in this paper as long as the parts of the scene corresponding to these extremal values do not leave the image (or new portions with higher or smaller intensities do not enter in the camera field of view), which would thus allow obtaining exactly the same results in that case. On the other hand, if the extremal values do not correspond to the same parts of the scene, the induced perturbations may cause the failure of the servoing.
VI. CONCLUSION
This paper proposed a novel visual servoing scheme based on photometric moments, which capture the image intensities in the form of image moments. The analytical form of the interaction matrix has been derived for these new features. Visual servoing is demonstrated on scenes which do not contain a discrete set of points or monotone segmented objects. Most importantly, the proposed enhanced model takes into account the effect of the scene portions which appear and disappear from the camera field of view during the visual servoing. Existing results based on moment invariants are then exploited to obtain visual features from the photometric moments. The control using these visual features is performant for large SCARA motions (where the images acquired during the servo have very little overlap with the desired image), with a large convergence domain in comparison to both the pure luminance feature and to features based on non-weighted moments. The proposed approach can also be used with non planar environments. This paper thus brings notable improvements over the pure luminance feature and existing moments-based VS methods.
The 6 dof control using weighted photometric moments yielded satisfactory results when small displacements have to be realized. The control can be rendered suitable for large displacements if the alteration in invariance properties induced by the weighting function can be prevented. An important future direction of work would therefore be the formulation of alternate weighting strategies that preserve the invariance properties as in the non-weighted moments. This is an open and challenging problem that, once solved, would ease a complete theoretical stability and robustness analysis. Also, it is to be noted that the method will certainly fail when the shared portions between the initial and desired images are too small. Another distinction with respect to geometric approaches is that the performance depends on the image contents, and hence large uniform portions and poorly textured scenes might pose issues for the servoing. Despite these obvious shortcomings, we believe that direct approaches will become more commonplace and lead to highly performant visual servoing methods.
APPENDIX
A. Interaction matrix of r_P1 and r_P2
In (42), on expanding the terms $(x - x_g + x_{sh})^p$ and $(y - y_g + y_{sh})^q$ using the binomial theorem, the shifted moments can be expressed in terms of the centred moments:
$\bar{m}_{pq} = \sum_{k=0}^{p} \sum_{l=0}^{q} \binom{p}{k} \binom{q}{l} \, x_{sh}^{k} \, y_{sh}^{l} \, \mu_{p-k,\,q-l} \qquad (46)$

with the centred moments given by

$\mu_{pq} = \int\!\!\int (x - x_g)^p (y - y_g)^q \, w(x, y) \, I(x, y) \, dx \, dy. \qquad (47)$

Differentiating (46) will yield the interaction matrix of the shifted moments:

$L_{\bar{m}_{pq}} = \sum_{k=0}^{p} \sum_{l=0}^{q} \binom{p}{k} \binom{q}{l} \left[ \left( k \, x_{sh}^{k-1} y_{sh}^{l} \, L_{x_{sh}} + l \, x_{sh}^{k} y_{sh}^{l-1} \, L_{y_{sh}} \right) \mu_{p-k,\,q-l} + x_{sh}^{k} y_{sh}^{l} \, L_{\mu_{p-k,\,q-l}} \right] \qquad (48)$

where

$L_{x_{sh}} = \frac{\cos\theta}{2\sqrt{m_{00}}} L_{m_{00}} - \sqrt{m_{00}} \sin\theta \, L_{\theta}, \qquad L_{y_{sh}} = \frac{\sin\theta}{2\sqrt{m_{00}}} L_{m_{00}} + \sqrt{m_{00}} \cos\theta \, L_{\theta} \qquad (49)$

with θ = α for shift point P1 and θ = α + π/2 for shift point P2. Further, by differentiating (47), we obtain the interaction matrix of the centred moments $L_{\mu_{p-k,\,q-l}}$, with r = p + q - k - l. Knowing (48) and (49), the interaction matrix for any shifted moment of order p + q can be obtained. The next step is to compute L_1 and L_2 by differentiating (41). Finally, the interaction matrix L_r is directly obtained by differentiating (40).

Note that the general analytic form of $L_{m_{pq}}$ could be obtained with n > 1 for non planar scenes, as was done in [START_REF] Chaumette | Image moments: a general and useful set of features for visual servoing[END_REF] for the geometric moments.

Fig. 1. a) Evaluation of contour integrals in the interaction matrix developments; b) custom exponential function w(x, y) = exp(-650 (x² + y²)²) in the domain -0.4 ≤ x ≤ 0.4 and -0.3 ≤ y ≤ 0.3, with a gradual reduction in importance from the maximum (dark red) in the centre outwards to the minimum (blue) at the edges.
Fig. 2. Shift points P1 (x_g + x_sh1) and P2 (x_g + x_sh2) with respect to which the shifted moments are computed. One shift point is selected along the major orientation (θ = α) and the second one orthogonal to it (θ = α + π/2), so that P1 = [x_g + √m00 cos α, y_g + √m00 sin α] and P2 = [x_g + √m00 cos(α + π/2), y_g + √m00 sin(α + π/2)]. The shifted moments in (42) are computed with respect to P1 and P2, resulting in two different sets of shifted moments; the feature in (40) is then computed from each set to derive the two corresponding visual features r_P1 and r_P2. Here α = ½ arctan(2μ11 / (μ20 - μ02)), with μ20 = m20 - m00 x_g², μ02 = m02 - m00 y_g², μ11 = m11 - m00 x_g y_g, and x_g = m10/m00, y_g = m01/m00 the centre of gravity coordinates.
Fig. 3. Simulation results with UWPM in perfect conditions.
Fig. 5. Experimental results in SCARA mode actuation pertaining to Section IV-B1.
Fig. 6. 6 dof experimental results pertaining to Section IV-B2.
Fig. 7. Simulation V-A: 6 dof VS with WPM pertaining to Section V-A.
Fig. 8. Simulation V-A: 6 dof VS comparison to UWPM and pure luminance (see Fig. 7).
Fig. 9. 4 dof simulation results under large rotations (see Section V-B).
Fig. 10. Desired image in (a) and a sampling of different images from the 243 generated initial poses in (b)-(d).
Fig. 12. 4 dof experimental results with a non planar scene (see Section V-D).
Fig. 13. WPM 6 dof experimental results (see Section V-E). |
01758280 | en | [
"info.info-ni"
] | 2024/03/05 22:32:10 | 2017 | https://theses.hal.science/tel-01758280/file/2017IMTA0032_AflatoonianAmin.pdf | Dr Karine Guil
Abstract
Over the past decades, Service Providers (SPs) have gone through several generations of technologies redefining networks and requiring new business models. The economy of an SP depends on its network, which is evaluated by its reliability, availability and ability to deliver new services. The ongoing network transformation brings the opportunity for service innovation while reducing costs and mitigating vendor lock-in. Digitalization and recent virtualization are changing service management methods: traditional network services are shifting towards new on-demand network services. These allow customers to deploy and manage their services independently and optimally through a well-defined interface opened to the SP's platform. To offer this freedom to its customers and to provide on-demand network capabilities, the SP must be able to rely on a dynamic and programmable network control platform. We argue in this thesis that this platform can be provided by Software-Defined Networking (SDN) technology. Indeed, the SDN controller can be used to provide an interface to service customers where they can subscribe on demand to new services and modify or retire existing ones.
To this end we first characterize the perimeter of this class of new services. We identify the weakest management constraints that such services should meet and we integrate them in an abstract model structuring their lifecycle. This one involves two loosely coupled views, one specific to the customer and the other one to the SP. This double-sided service lifecycle is finally refined with a data model completing each of its steps.
The SDN architecture does not support all stages of the previous lifecycle. We extend it through an original Framework allowing the management of all the steps identified in the lifecycle. This Framework is organized around a service orchestrator and a resource orchestrator communicating via an internal interface. Its implementation requires an encapsulation of the SDN controller. The example of the MPLS VPN serves as a guideline to illustrate our approach. A PoC based on the OpenDaylight controller targeting the main parts of the Framework is proposed.
Providing the SP with the mastering of SDN's openness on its northbound side should be largely profitable to both the SP and its customers. We therefore propose to value our Framework by introducing a new and original control model called BYOC (Bring Your Own Control), which formalizes, according to various modalities, the capability of outsourcing an on-demand service by delegating part of its control to an external third party. Opening a control interface and offering granular access to the underlying infrastructure leads us to take into account some characteristics, such as multi-tenancy or security, at the Northbound Interface (NBI) level of the SDN controller. An outsourced on-demand service is divided into a customer part and an SP one. The latter exposes to the former APIs which allow requesting the execution of the actions involved in the different steps of the lifecycle. We present an XMPP-based NBI allowing the opening up of a secured BYOC-enabled API. The asynchronous nature of this protocol, together with its integrated security functions, eases the outsourcing of control into a multi-tenant SDN framework. Delegating the control of all or a part of a service introduces some potential value-added services. Security applications are one of these BYOC-based services that might be provided by an SP. We discuss their feasibility through a BYOC-based Intrusion Prevention System (IPS) service example.

In this thesis we focus on the management of telecommunication services in a controlled environment. The example of the management of a connectivity service (MPLS VPN) enriched with centralized Quality of Service (QoS) control serves as a guideline to illustrate our analysis. Over the last decade, MPLS networks have evolved and have become critical for service providers. MPLS is used both for an optimized usage of resources and for the establishment of VPN connections.
As network transformation becomes a reality and digitalization changes service management methods, traditional network services are progressively being replaced by on-demand network services. On-demand services allow customers to deploy and manage their services autonomously thanks to a well-defined interface opened by the service provider on its platform. This interface allows different customers to manage their own services, each one with its particular features. To offer this operational flexibility to its customers by providing them with on-demand network capabilities, the service provider must be able to rely on a management platform enabling dynamic and programmable control of the network. We show in this thesis that such a platform can be provided by Software-Defined Networking (SDN) technology.
A telecommunication network relies on different technologies providing several types of services. These services are used by several customers, and a misconfiguration of one customer's service can affect the quality of service of the others.
The central position of the SDN controller allows the operator to manage all services and equipment. However, providing a service management and control interface with variable granularity on top of this controller requires an additional service management layer above the controller, allowing the service provider to manage the service lifecycle while offering a service management interface to its customers.
In this thesis we present an SDN-based framework allowing both to manage the lifecycle of a service and to open the service management interface with a controllable granularity. The granularity of this interface makes it possible to provide different levels of abstraction to the customer. For the simplest applications (type 1), the client-side service lifecycle contains two main steps:
- Service creation: the application specifies the service characteristics it needs, negotiates the associated SLA, which will be valid for a limited duration, and finally requests a new service creation.
- Service retirement: the application retires the service at the end of the negotiated duration. This step defines the end of the service lifetime.
Type 2 applications take advantage of the events coming from the NBI to monitor the service. It should be noted that this service may be created by the same application that monitors it.
This type of application adds an additional step to the client-side service lifecycle.
This lifecycle contains three main steps:
- Service creation.
- Service monitoring: once created, the service can be used by the customer for a negotiated duration. During this time, some network and service parameters are monitored thanks to the events and notifications sent by the SDNC to the application.
- Service retirement.
In a more complex case, namely type 3 applications, an application can create the service through the NBI, monitor the service through this interface and, depending on the incoming events, reconfigure the network through the SDNC. This type of control adds a feedback step to the client-side service lifecycle, which then contains four main steps:
- Service creation.
- Service monitoring.
- Service modification: the events reported by the notifications can trigger an algorithm implemented in the application (running north of the SDNC), whose output reconfigures the underlying network resources through the SDNC.
- Service retirement.
A global client-side service lifecycle contains all the previous steps needed to handle the three types of applications discussed above. We introduce in this model a new step triggered by operator-side operations (a minimal sketch of the resulting lifecycle is given after the list):
- Service creation.
- Service monitoring.
- Service modification.
- Service update: the operator's network management may lead to a service update. This update may be issued because of a problem occurring while the service is in use or because of a change in the network infrastructure. It may be minimal, such as the modification of a rule in one of the underlying devices, or it may impact the previous steps, with consequences on the service creation and/or on the service consumption.
- Service retirement.
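A minimal sketch of this client-side lifecycle, as seen by an application, is given below; it is illustrative only, and the NBI method names (create_service, get_notifications, update_service, delete_service) are hypothetical, not the thesis API.

    class OnDemandServiceClient:
        """Client-side view of the lifecycle: create -> monitor -> modify -> retire."""

        def __init__(self, nbi):
            self.nbi = nbi              # handle on the provider's northbound interface
            self.service_id = None

        def create(self, spec, sla):
            # Type 1, 2 and 3 applications: request a new service for a negotiated duration.
            self.service_id = self.nbi.create_service(spec, sla)

        def monitor(self):
            # Type 2 and 3 applications: consume events pushed by the SDNC through the NBI.
            return self.nbi.get_notifications(self.service_id)

        def modify(self, new_config):
            # Type 3 applications: close the loop by reconfiguring the service.
            self.nbi.update_service(self.service_id, new_config)

        def retire(self):
            # All applications: end of the negotiated lifetime.
            self.nbi.delete_service(self.service_id)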
The operator-side service lifecycle, on the other hand, comprises six main steps (a pipeline sketch follows the list):
- Service request: once a service creation or modification request arrives from the users' service portal, the request manager negotiates the SLA and a high-level service specification in order to implement it. It should be noted that before accepting the SLA, the operator must make sure that the existing resources can handle the requested service at the time it will be deployed.
In case of unavailability, the request is queued.
- Service decomposition and compilation: the high-level model of the requested service is decomposed into several elementary service models that are sent to the service compiler. The compiler generates a set of network resource configurations composing this service.
- Service configuration: based on the previous set of network resource configurations, several corresponding virtual resource instances are created, initialized and reserved. The requested service can then be implemented on these virtual resources by deploying the network resource configurations generated by the compiler.
- Service maintenance and monitoring: once a service is implemented, its availability, performance and capacity must be maintained automatically. In parallel, a service log manager monitors the whole service lifecycle.
- Service update: while the service is in operation, the network infrastructure may require modifications because of runtime problems, technical evolution, etc. This leads to an update that may impact the service in different ways. The update may be transparent to the service or may require re-running part of the first steps of the service lifecycle.
- Service retirement: the service configuration is removed from the infrastructure as soon as a retirement request reaches the system. Operator-initiated service retirement is out of the scope of this work.
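The sketch below illustrates how these operator-side steps chain together; it is a toy pipeline of ours with invented component names, not the thesis implementation.

    class Compiler:
        def compile(self, model):
            return {"device_rules": "rules-for-" + model}

    class ResourceOrchestrator:
        def can_host(self, models):
            return True                                   # capacity check before accepting the SLA
        def reserve(self, configs):
            return ["vres-" + str(i) for i, _ in enumerate(configs)]

    class SDNC:
        def apply(self, conf, vres):
            print("pushing", conf, "on", vres)

    def provision_service(elementary_models, compiler, ro, sdnc):
        """Service request -> decomposition/compilation -> configuration -> monitoring."""
        if not ro.can_host(elementary_models):            # service request step (else queued)
            return None
        configs = [compiler.compile(m) for m in elementary_models]   # compilation
        vres = ro.reserve(configs)                        # virtual resource reservation
        for conf, vr in zip(configs, vres):
            sdnc.apply(conf, vr)                          # service configuration via the SDNC
        return vres                                       # then maintained, monitored, updated, retired

    provision_service(["vpn-site-A", "vpn-site-B"], Compiler(), ResourceOrchestrator(), SDNC())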
An SDN service provisioning framework
Service management processes can be divided into two more generic families: the first one handles all the steps executing the service-related tasks, from the negotiation onwards. These models make it possible to derive the type and size of the resources needed to implement the service. The Service Orchestrator (SO) requests the reservation of virtual resources from the lower layer and deploys the service configuration on the virtual resources through an SDNC.
The "Resource Orchestrator" manages the operations on the resources:
- Resource reservation
- Resource monitoring
This orchestrator, which manages the physical resources, reserves and launches the virtual resources.
It maintains and monitors the states of the physical resources using its southbound interface.
The internal architecture of the SO is composed of five main modules:
- Service request manager (SCM): it processes the customers' service requests and negotiates the service specifications.
- Service decomposition and compilation manager (SDCM): it splits every received service request into one or more elementary service models, which are resource configuration models.
- Service configuration manager (SCM): it configures the physical or virtual resources through the SDNC.
- SDN controller (SDNC)
- Service monitoring manager: on the one hand, it receives the alarms and notifications coming from the lower orchestrator, the RO, and on the other hand it forwards the service notifications to the external application through the NBI.
Bring Your Own Control (BYOC)
Conclusion and perspectives
Chapter 1
Introduction
In this chapter we introduce the context of this thesis, followed by the motivation and background of this study. Then we present our main contributions and we conclude with the structure of this document.
Thesis context
Over the past two decades, service providers have gone through several generations of technologies redefining networks and requiring new business models. The economy of a Service Provider depends on its network, which is evaluated by its reliability, availability and ability to deliver services. Due to the introduction of new technologies requiring a pervasive network, new and innovative applications and services are increasing the demand for network access [START_REF] Metzger | Future Internet Apps: The Next Wave of Adaptive Service-Oriented Systems?[END_REF]. Service Providers, on the other hand, are looking for a cost-effective solution to meet this growing demand while reducing network complexity [START_REF] Benson | Unraveling the Complexity of Network Management[END_REF] and costs (i.e. Capital Expenditure (CapEx) and Operating Expenditure (OpEx)), and accelerating service innovation.
The network of an Operator is designed on the basis of equipment that is carefully developed, tested and configured. Due to the importance of this network, operators avoid the risks associated with modifications made to it. Hardware elements, protocols and services require several years of standardization before being integrated into the equipment by suppliers. This hardware lock-in reduces the ability of Service Providers to innovate, integrate and develop new services.
The network transformation brings the opportunity for service innovation while reducing costs and mitigating supplier lock-in. Transformation means making it possible to exploit network capabilities through the power of applications. This transformation converts the Operator network from a simple utility into a digital service delivery platform. The latter not only increases service velocity, but also creates new sources of revenue. Recently, Software-Defined Networking (SDN) [START_REF] Mckeown | Software-defined networking[END_REF][START_REF] Kim | Improving network management with software defined networking[END_REF] and Network Function Virtualization (NFV) [START_REF] Mijumbi | Network Function Virtualization: State-of-the-Art and Research Challenges[END_REF][START_REF] Han | Network function virtualization: Challenges and opportunities for innovations[END_REF] technologies have been proposed to accelerate the transformation of the network. The promise of these technologies is to bring more flexibility and agility to the network while creating cost-effective solutions. This will allow Service Providers to become digital businesses.
The SDN concept is presented to decouple the control and forwarding functionalities of network devices by putting the first one on a central unit called the controller [START_REF] Kreutz | Software-Defined Networking: A Comprehensive Survey[END_REF]. This separation makes it possible to control the network from a central application layer, simplifying network control and management tasks, and the programmability of the controller accelerates the Service Providers' network transformation. As the network transformation is becoming a reality and digitalization is changing service management methods, traditional network services are being replaced with on-demand network services. On-demand services allow customers to deploy and manage their services independently through a well-defined interface opened to the Service Provider's platform.
Motivation and background
This interface allows different customers to manage their own services, each one possessing special features.
For example, to manage a VPN service, a customer might have several types of interactions with the Service Provider platform. For the first case, a customer might request a fully managed VPN interconnecting its sites. For this type of service, the customer owns abstract information about the service and provides a simple service request to the Service Provider.
The second case is a customer, with a more professional profile, who monitors the service by retrieving some network metrics sent from the Provider's platform. The third type consists of a more dynamic and open service sold to customers wishing to control all or part of their services. For this type of service, based on the metrics retrieved from the Service Provider's platform, the customer re-configures the service.
Problem statement
In order to offer this freedom to its customers and to provide on-demand network capability, the Service Provider must be able to rely on a dynamic and programmable network control platform. We argue that this platform can be provided by SDN technology. Indeed, the SDN controller can be used to provide an interface to service customers through which they can subscribe to new services on demand and modify or retire existing ones.
Contributions of this thesis
As part of this thesis we present an SDN-based framework allowing both to manage the lifecycle of a service and to open the service management interface with a fine granularity. The granularity of this interface makes it possible to provide different levels of abstraction to the customer, each one offering part of the capabilities needed by an on-demand service, as discussed in Section 1.2.
The following are the main research contributions of this thesis.
-A double-sided service lifecycle and the associated data model
We first characterise the applications that might be deployed upon the northbound side of an SDN controller, through their lifecycle. The characterisation rests on a classification of the complexity of the interactions between the outsourced applications and the controller. This leads us to a double-sided service lifecycle presenting two articulated points of view: client and operator. The service lifecycle is refined with a data model completing each of its steps.
- An SDN-based framework for on-demand service provisioning and its extension to the Bring Your Own Control (BYOC) model
Document structure
In Chapter 2 we present a state of the art on SDN and NFV technologies. We try to focus our study on SDN control and application layer. We present two classifications of SDN applications. For the first classification we are interested in the functionality of applications and their contribution in the deployment of the controller. And for the second one, we present different types of applications according to the model of the interaction between them and the controller. We discuss in this second classification three types of applications, each one requiring some characteristics at the Northbound Interface (NBI) level.
In Chapter 3 we discuss the deployment of a network service in an SDN environment. In the first part of this chapter, we present MPLS networks with a quick analysis of the control and forwarding planes of these networks in the legacy world. This analysis shows which information is used to configure such a service. This information is, for confidentiality reasons, managed by the operator, and most of it is not manageable by the customer.
In the second part of this chapter, we analyze the deployment of the MPLS service on an SDN network through the OpenDaylight controller. For this analysis we consider two possibilities: (1) deployment of the service using third-party applications developed on the controller (the VPN Services project), and (2) deployment of the service using the northbound Application Programming Interfaces (APIs) provided by the controller's native functions.
The results obtained during the second part together with the case study discussed in the first part, accentuate the lack of a service management system in the current controllers.
This justifies the presentation of a service management framework providing the service management interfaces and managing the service lifecycle.
In order to refine the perimeter of this framework, we first discuss a service lifecycle study in Chapter 4. This analysis is carried out on two sides: customer and operator.
For the service lifecycle analysis from the client-side perspective, we rely on the classification of applications made in Chapter 2. During this analysis we study the additional steps that each application type adds to the lifecycle of a service. For the analysis of the lifecycle from the operator-side viewpoint, we study all the steps an operator takes during the deployment and management of a service.
At the end of this chapter, we discuss the data model allowing to implement each step of the service lifecycle. This data model is based on a two layered approach analyzing a service provisioning system on two layers: service and device. Based on this analysis, we study the data model of each service lifecycle step, helping to define the internal architecture of the service management framework.
The service lifecycle analysis leads us to present, in Chapter 5, the SDN-based service management framework. This framework breaks down all the tasks an operator performs to manage the lifecycle of a service. Through an MPLS VPN service deployment example we detail all of these steps. Part of the tasks are carried out on the service presented to the client, and part of them on the resources managed by the operator. We organize these two parts into two orchestration systems, called respectively the Service Orchestrator and the Resource Orchestrator.
In order to analyze the framework's capability in service lifecycle management, we take the example of MPLS VPN service update. With this example we show how the basic APIs provided by an SDN controller can be used by the framework to deploy and manage a requested service.
The presented framework allows us not only to manage the service lifecycle but also to open an NBI to the client. This interface allows us to provide the different levels of abstraction used by each of the three types of applications discussed earlier.
In Chapter 6, we present for the first time the new service model: Bring Your Own Control (BYOC). This new service allows a customer or a third party operator to participate in the service lifecycle. This is the practical case of a type 3 application, where the client configures a service based on the events coming up from the controller.
We analyze the characteristics of the interface allowing the deployment of such a BYOC-type service. We present in this chapter the XMPP protocol as a good candidate for implementing this new service model.
In Chapter 7, we apply the BYOC model to a network service. For this use case we choose to externalize the control of an IPS. Outsourcing the IPS service control involves implementing the attack detection engine in an external controller, called Guest Controller (GC).
In Chapter 8, we point out the main contributions of this thesis and give the research perspectives in relation to BYOC services in SDN/NFV and 5G networks.
Chapter 2
Programming the network
In this chapter we present, firstly, a state of the art on programmable networks. Secondly, we study Software-Defined Networking (SDN) as a technology allowing to control and program network equipment to provide on-demand services. For this analysis we discuss the general architecture of SDN, its layers and its interfaces. Finally, we discuss SDN applications, their different types and the impact that all applications can have on the internal architecture of an SDN controller.
Technological context
Nowadays the Internet, whose number of users exceeds 3.7 billion [START_REF]World Internet Usage and Population Statistics[END_REF], is massively used in all human activities, from professional to private ones, including academic and administrative ones. The infrastructure supporting Internet services rests on various interconnected communication networks managed by network operators. This continuously growing infrastructure evolves very dynamically and becomes quite huge, complex, and sometimes locally ossified.
Fundamentals of programmable networks
The high performance constraints required for routers in packet switched networks, limit the authorized processing to the sole modification of the packet headers. The strength of this approach is also its weakness because the counterpart of their high performance is their lack of flexibility. The evolution brought by the research on the programmability of the network, has led to the emergence of strong ideas whose relevance can be measured by their intellectual longevity.
The seed of the idea of having APIs allowing a flexible management of the network equipments at least goes back to the OpenSig initiative [START_REF] Campbell | A Survey of Programmable Networks[END_REF] which aimed to develop and promote standard programmable interfaces crafted on network devices [START_REF] Biswas | The IEEE P1520 standards initiative for programmable network interfaces[END_REF]. It is one of the first fundamental steps towards the virtualization of networks the main objectives of which consisted in switching from a strongly coupled network, where the hardware and the software are intimately linked to a network where the hardware and the software are decorrelated.
Concretely, it consists in keeping the data forwarding capability inside the box while outsourcing the control.
In a general setting, the control part of the processing carried out in routers roughly consists in organizing, in a smart and performant way, the local forwarding of each received packet while ensuring a global soundness between all the boxes involved in its path. The outsourcing of the control has been designed according to different philosophies. One aesthetically nice but extreme vision, known as « Active Networks », recommends that each packet may carry, in addition to its own data, the code of the process which will be executed at each crossed node.
Software-Defined Networking (SDN)
SDN is presented as a way to change how networks operate, giving hope of overcoming current network limitations. It enables simple network data-path programming, allows easier deployment of new protocols and innovative services, and opens up network virtualization and management by separating the control and data planes [START_REF] Kim | Improving network management with software defined networking[END_REF]. This paradigm is attracting attention from both academia and industry. SDN breaks the vertical integration of traditional network devices by decoupling the control and data planes, where network devices become simple forwarding devices programmed by a logically centralized application called the controller or network operating system. As the OpenFlow-based SDN community grows, a large variety of OpenFlow-enabled networking hardware and software switches is available on the market. Hardware devices are produced for a wide range of purposes, from small businesses [START_REF] Hp | zl Switch Series[END_REF]18] to high-end ones [START_REF] Ferkouss | A 100Gig network processor platform for openflow[END_REF] used for their high switching capacity. Software switches, on the other hand, are mostly OpenFlow-enabled applications, and are used to provide virtual access points in data centers and to bring up virtualized infrastructures.
Architecture
SDN Southbound Interface (SBI)
The communication between the Infrastructure layer and the Control layer is ensured through a well-defined API called the Southbound Interface (SBI), which is the element separating the data and the control planes. This interface provides the upper layer with a common way to manage physical or virtual devices through a mixture of different southbound APIs and control plug-ins.
The most accepted and implemented of such southbound APIs is OpenFlow [START_REF] Mckeown | OpenFlow: Enabling Innovation in Campus Networks[END_REF] standardized by Open Networking Foundation (ONF) [START_REF] Onf | Open Networking Foundation[END_REF].
OpenFlow Protocol
The SDN paradigm started with the forwarding/control separation idea introduced by the OpenFlow protocol. This protocol enables flow-based programmability of a network device. Indeed, OpenFlow provides the SDN controller with an interface to create, update and delete flow entries reactively or proactively.
Fig. 2.5 illustrates this with an SDN controller, two hosts (Host A and Host B) and three switches (Switch 1, Switch 2 and Switch 3), each switch containing a forwarder and a control agent. Upon receiving the first packet, Switch 1 looks up its flow table; if no match for the flow is found, the switch sends an OpenFlow PACKET_IN message to the SDN controller for instructions.
Based on this message, the controller creates a PACKET_OUT message and sends it to the switch. This message is used to add a new entry to the flow table of the switch.
Programming a network device using OpenFlow can be done in three ways [START_REF] Salisbury | OpenFlow: Proactive vs Reactive Flows[END_REF]:
- Reactive flow instantiation. When a new flow arrives at the switch, it looks up its flow table and, if no entry matches the flow, the switch sends a PACKET_IN message to the controller. In the previous example, shown in Fig. 2.5, the SDN controller programs Switch 1 in a reactive manner.
- Proactive flow instantiation. In contrast to the first case, a flow can be defined in advance. In this case, when a new flow comes to the switch, there is no need to interrogate the controller and the action is taken based on a predefined entry. In our example (Fig. 2.5), the flow programming done for Switches 2 and 3 is proactive. Proactive flow instantiation eliminates the latency introduced by controller interrogation.
- Hybrid flow instantiation. This one is a combination of the two first modes. In our example (Fig. 2.5), for a specific traffic sent by Host A to Host B, the controller programs the related switches using this method: Switch 1 is programmed reactively and the two other switches (Switch 2 and Switch 3) are programmed proactively. Hybrid flow instantiation allows one to benefit from the flexibility of the reactive mode for fine-grained traffic, while preserving low-latency forwarding for the rest of the traffic (a controller-side sketch is given after this list).
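As a concrete illustration of these modes, the sketch below is a minimal OpenFlow 1.3 application written for the Ryu controller (mentioned later in this chapter); it is our own simplified example, not taken from the cited works, and it simply floods unknown traffic: the PACKET_IN handler reacts to the first packet of a flow and then installs a flow entry so that subsequent packets are forwarded without consulting the controller.

    from ryu.base import app_manager
    from ryu.controller import ofp_event
    from ryu.controller.handler import MAIN_DISPATCHER, set_ev_cls
    from ryu.ofproto import ofproto_v1_3

    class ReactiveThenProactiveSwitch(app_manager.RyuApp):
        """Reactive handling of the first packet, then a pre-installed entry for the rest."""
        OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

        @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
        def packet_in_handler(self, ev):
            msg = ev.msg
            dp = msg.datapath
            ofp, parser = dp.ofproto, dp.ofproto_parser
            in_port = msg.match['in_port']
            actions = [parser.OFPActionOutput(ofp.OFPP_FLOOD)]   # toy policy: flood

            # FLOW_MOD: install an entry so the switch stops asking the controller.
            match = parser.OFPMatch(in_port=in_port)
            inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
            dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=1,
                                          match=match, instructions=inst))

            # PACKET_OUT: tell the switch what to do with the packet that triggered PACKET_IN.
            data = msg.data if msg.buffer_id == ofp.OFP_NO_BUFFER else None
            dp.send_msg(parser.OFPPacketOut(datapath=dp, buffer_id=msg.buffer_id,
                                            in_port=in_port, actions=actions, data=data))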
OpenFlow switch
The most recent OpenFlow switch specification (1.5.0) has been defined by the ONF [START_REF]OpenFlow Switch Specification, Version 1.5.0[END_REF]. Its main components are the following:
- OpenFlow Channel: creates a secured channel, over Secure Sockets Layer (SSL), between the switch and a controller. Using this channel, the controller manages the switch via the OpenFlow protocol, allowing commands and packets to be sent from the controller to the switch.
- Flow Table: contains a set of flow entries dictating how the switch should process flows.
These entries include match fields, counters and a set of instructions.
- Group Table: contains a set of groups, each one having a set of actions.
Fig. 2.7 shows an OpenFlow switch flow table. Each flow table contains three columns: rules, actions and counters [START_REF] Mckoewn | Why can't I innovate in my wiring closet?[END_REF]. The rules column contains the header fields used to define a flow. For an incoming packet, the switch looks up the flow table; if a rule matches the header of the packet, the related action from the action column is applied to the packet, and finally the counter value is updated. There are several possible actions to be taken on a packet (Fig. 2.7).
The packet can be forwarded to a switch port, sent to the controller, sent to a group table, modified in some fashion, or dropped.
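The rule/action/counter structure can be pictured as a small record, as in the sketch below (field names are ours and simplified, not the OpenFlow wire format):

    # A toy flow-table entry mirroring the three columns described above.
    flow_entry = {
        "rule": {"in_port": 1, "eth_type": 0x0800, "ipv4_dst": "10.0.0.2"},  # match fields
        "actions": [{"type": "output", "port": 2}],                          # what to do
        "counters": {"packets": 0, "bytes": 0},                              # updated per match
    }

    def apply_entry(entry, packet_len):
        """Update the counters when a packet matches the rule, and return the actions."""
        entry["counters"]["packets"] += 1
        entry["counters"]["bytes"] += packet_len
        return entry["actions"]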
SDN Controller
The Control plane, equivalent to the network operating system [START_REF] Gude | NOX: Towards an Operating System for Networks[END_REF], is the intelligent part of this architecture. It controls the network thanks to its centralized perspective of networks state. On one hand, this logically centralized control simplifies the network configuration, management and evolution through the SBI. On the other hand, it gives an abstract and global view of the underlying infrastructure to the applications through the Northbound Interface (NBI).
While interest in SDN keeps extending to different environments, such as home networks [START_REF] Yiakoumis | Slicing Home Networks[END_REF], data center networks [START_REF] Al-Fares | A Scalable, Commodity Data Center Network Architecture[END_REF], and enterprise networks [START_REF] Casado | Ethane: Taking Control of the Enterprise[END_REF], the number of proposed SDN controller architectures and implemented functions is also growing.
Despite this large number, most existing proposals implement several core network functions. These functions are used by upper layers, such as network applications, to build their own logic. Among the various SDN controller implementations, these logical blocks can be classified into: Topology Manager, Device Manager, Stats Manager, Notification Manager and Shortest Path Forwarding. For instance, a controller should be able to provide a network topology model to the upper-layer applications. It should also be able to receive, process and forward events by creating alarm notifications or state changes.
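To make these core blocks concrete, the sketch below (our illustration, not the API of any specific controller) lists them as a minimal controller-side interface:

    from abc import ABC, abstractmethod

    class ControllerCore(ABC):
        """Core functions most SDN controllers expose to upper-layer applications."""

        @abstractmethod
        def get_topology(self): ...            # Topology Manager: graph of nodes and links

        @abstractmethod
        def get_devices(self): ...             # Device Manager: inventory and capabilities

        @abstractmethod
        def get_statistics(self, node_id): ... # Stats Manager: per-port / per-flow counters

        @abstractmethod
        def subscribe(self, callback): ...     # Notification Manager: events and alarms

        @abstractmethod
        def shortest_path(self, src, dst): ... # Shortest Path Forwarding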
As mentioned previously, numerous commercial and non-commercial communities are nowadays developing SDN controllers and proposing network applications on top of them. Controllers such as NOX [START_REF] Gude | NOX: Towards an Operating System for Networks[END_REF], Ryu [29], Trema [30], Floodlight [START_REF]Floodlight OpenFlow Controller[END_REF], OpenDayLight [START_REF]The OpenDaylight SDN Platform[END_REF] and ONOS [START_REF] Berde | ONOS: Towards an Open, Distributed SDN OS[END_REF] are among today's leading controllers. These controllers implement basic network functions, such as a topology manager or a switch manager, and provide network programmability to applications via the NBI. In order to implement a complex network service on an SDN-based network, service providers face a large number of controllers, each one implementing a large number of core services based on a dedicated workflow and specific properties. R. Khondoker et al. [START_REF] Khondoker | Feature-based comparison and selection of Software Defined Networking (SDN) controllers[END_REF] tried to solve the problem of selecting the most suitable controller by proposing a decision-making template. The decision however requires a deep analysis of each controller and totally depends on the service use case. It is worth mentioning that, in addition to this heterogeneous controller landscape, the diversity of NBI abstraction levels also emphasizes the challenge.
SDN Northbound Interface (NBI)
In the SDN ecosystem the NBI is key. This interface allows applications to be independent of a specific implementation. Unlike the southbound interface, where some standard proposals exist (OpenFlow [START_REF] Mckeown | OpenFlow: Enabling Innovation in Campus Networks[END_REF] and NETCONF [START_REF] Enns | Network Configuration Protocol (NETCONF)[END_REF]), the question of a common and standard NBI remains open. Since use cases are still in development, it is still premature to define a standardized NBI. Contrary to its equivalent in the south (the SBI), the NBI is a software ecosystem, which means that the standardization of this interface requires more maturity and a well-standardized SDN framework. In application ecosystems, implementation is usually the leading engine, while standards emerge later [START_REF] Guis | The SDN Gold Rush To The Northbound API[END_REF].
Open and standard interfaces are essential to promote application portability and interoperability across different control platforms. As illustrated in Table 2.1, existing controllers such as Floodlight, Trema, NOX, ONOS, and OpenDaylight propose and define their own APIs in the north [START_REF] Salisbury | The Northbound API-A Big Little Problem[END_REF]. However, each of them has its own specific definitions. The experience gained in developing various controllers will certainly be the basis for a common application-level interface.
If we consider the SDN controller as a platform allowing applications to be developed on a resource pool, a northbound API can be compared to the Portable Operating System Interface (POSIX) standard in operating systems [START_REF] Josey | POSIX -Austin Joint Working Group[END_REF]. This interface provides generic functions hiding the operational details of the computer hardware. These generic functions allow software to manipulate the hardware while ignoring its technical details. Today, programming languages such as Procera [START_REF] Voellmy | Procera: A Language for Highlevel Reactive Network Control[END_REF] and Frenetic [START_REF] Foster | Frenetic: A Network Programming Language[END_REF] follow this logic by providing an abstraction layer on top of controller functions. The yanc project [START_REF] Monaco | Applying Operating System Principles to SDN Controller Design[END_REF] also offers an abstraction layer simplifying the development of SDN applications. This layer allows programmers to interact with lower-level devices and subsystems through the traditional file system.
It may be concluded that a single northbound interface is unlikely to emerge as a winner, because the requirements of different network applications are quite different. For example, APIs for security applications may differ from those for routing. In parallel with its SDN development work, the ONF has started a vertical effort in its North Bound Interface Working Group (NBI-WG) to propose standardized northbound APIs [START_REF] Menezes | North Bound Interface Working Group (NBI-WG) Charter[END_REF]. This work is still ongoing.
SDN Applications Analysis
SDN Applications
At the top of the SDN architecture, the Application layer programs the network behavior through the NBI offered by the SDN controller. Existing SDN applications implement a large variety of network functionalities, from simple ones, such as load balancing and routing, to more complex ones, such as mobility management in wireless networks. This wide variety of applications is one of the major drivers of SDN adoption in current networks. Regardless of this variety, most SDN applications can be grouped into five main categories [START_REF] Hu | A Survey on Software-Defined Network and Open-Flow: From Concept to Implementation[END_REF]: (I) traffic engineering, (II) mobility and wireless, (III) measurement and monitoring, (IV) security and dependability, and (V) data center networking.
Traffic engineering
The first group of SDN applications consists of proposals that monitor the traffic through the SDN Controller (SDNC) and provide load balancing and energy consumption optimization. Load balancing, one of the first proposed SDN applications [START_REF] Nikhil | Aster*x: Load-Balancing Web Traffic over Wide-Area Networks[END_REF], covers a wide range of network management tasks, from redirecting client request traffic to simplifying the placement of network services. For instance, the work [START_REF] Wang | OpenFlow-based Server Load Balancing Gone Wild[END_REF] proposes the use of wildcard-based rules to aggregate groups of client requests based on their Internet Protocol (IP) prefixes. In [START_REF] Handigol | Plug-n-Serve: Load-Balancing Web Traffic using OpenFlow[END_REF], the network application likewise distributes the network traffic among the available servers based on the network load and the computing capacity of the servers.
The ability to monitor the network load through the SBI enables applications such as energy consumption optimization and traffic optimization. The information received from the SBI can be used by specialized optimization algorithms to achieve up to 50% savings in network energy consumption [START_REF] Heller | ElasticTree: Saving Energy in Data Center Networks[END_REF] by dynamically scaling links and devices in and out. This capacity can also be leveraged to provision dynamic and scalable services, such as Virtual Private Networks (VPN) [START_REF] Scharf | Dynamic VPN Optimization by ALTO Guidance[END_REF], and to increase network efficiency by optimizing rule placement [START_REF] Nguyen | Optimizing Rules Placement in OpenFlow Networks: Trading Routing for Better Efficiency[END_REF].
Mobility and wireless
The programmability of the stack layers of wireless networks [START_REF] Bansal | OpenRadio: A Programmable Wireless Dataplane[END_REF], and the decoupling of the wireless protocol definition from the hardware, introduce new wireless features, such as the creation of on-demand Wireless Access Points (WAP) [START_REF] Vestin | CloudMAC: Towards Software Defined WLANs[END_REF], load balancing [START_REF] Gudipati | SoftRAN: Software Defined Radio Access Network[END_REF], seamless mobility [START_REF] Dely | OpenFlow for Wireless Mesh Networks[END_REF] and Quality of Service (QoS) management [START_REF] Li | Toward Software-Defined Cellular Networks[END_REF]. These traditionally hard-to-implement features are realized with the help of the well-defined logic exposed by the SDN controller.
The decoupling of the wireless hardware from its protocol definition provides a software abstraction that allows Media Access Control (MAC) layers to be shared in order to provide programmable wireless networks [START_REF] Bansal | OpenRadio: A Programmable Wireless Dataplane[END_REF].
Measurement and monitoring
The detailed visibility provided by the centralized logic of the SDN controller permits the introduction of applications that supply network parameters and statistics to other networking services [START_REF] Sundaresan | Broadband Internet Performance: A View from the Gateway[END_REF][START_REF] Kim | Improving network management with software defined networking[END_REF]. These measurement methods can also be used to improve features of the SDN controller itself, such as overload reduction.
Security
The capability of the SDN controller to collect network data and statistics, and to let applications actively program the infrastructure layer, has led to works that propose to improve network security using SDN. In this type of application, the SDN controller is the network policy enforcement point [START_REF] Casado | SANE: A Protection Architecture for Enterprise Networks[END_REF] through which malicious traffic is blocked before entering a specific area of the network. In the same category of applications, the work [START_REF] Braga | Lightweight DDoS flooding attack detection using NOX/OpenFlow[END_REF] uses SDN to actively detect and prevent Distributed Denial of Service (DDoS) attacks.
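To make this concrete, the sketch below shows the skeleton of such a security application in Python; the controller address, the statistics and flow endpoints, the JSON field names and the threshold are illustrative assumptions, not the API of a specific controller or of the cited works.

import requests

CONTROLLER = "http://controller:8181"        # assumed controller address
STATS_URL = CONTROLLER + "/stats/flows"      # assumed statistics endpoint
FLOWS_URL = CONTROLLER + "/flows"            # assumed flow-programming endpoint
THRESHOLD = 10000                            # packets per polling interval

def poll_and_block():
    # Pull per-flow counters from the controller and drop suspicious sources.
    stats = requests.get(STATS_URL).json()
    for flow in stats.get("flows", []):
        if flow["packet_count"] > THRESHOLD:
            drop_rule = {
                "match": {"ipv4_src": flow["ipv4_src"]},
                "actions": [],               # an empty action list means drop
                "priority": 1000,
            }
            requests.post(FLOWS_URL, json=drop_rule)

if __name__ == "__main__":
    poll_and_block()

Run periodically, such a loop implements the policy enforcement role described above: the controller remains the single point through which the blocking decision is pushed to the infrastructure.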
Intuitive classification of SDN applications
As described previously, SDN applications can be analyzed according to different categories. In Section 2.5.1 we categorized SDN applications based on the functionality they add to the SDN controller. In this section we analyze these applications based on their contribution to the network control lifecycle.
SDN applications consist of modules implemented on top of an SDNC which, thanks to the NBI, configure network resources through the SDNC. This configuration may control the network behavior to offer a network service. Applications which configure the network through an SDNC can be classified into three types. Fig. 2.8 presents this classification.
The first type concerns an application configuring a network service which, once initialized and running, will not be modified anymore. A "simple site interconnection" through MultiProtocol Label Switching (MPLS) is a good example of such a service. This type of service requires a one-way, top-down NBI which can be implemented with a RESTful solution. The second type concerns an application which, firstly, configures a service and, secondly, monitors it during the service life. One example of this model is a network monitoring application which monitors the network via the SDNC in order to generate QoS reports. For example, to assure the QoS of an MPLS network controlled by the SDNC, this application might calculate the traffic latency between two network endpoints thanks to metrics received from the SDNC. This model requires a bottom-up communication model at the NBI level so that real-time events can be sent from the controller to the application. Finally, the third type concerns an application resting on, and usually including, the two previous types, and adding specific control treatments executed in the application layer. In this case the application configures the service (type one), listens to network real-time events (type two), and computes specific network configurations in order to re-configure the underlying network accordingly (type one).
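As an illustration of a Type-1 application, the Python sketch below performs a single top-down REST exchange with the controller and then terminates; the NBI URL and the payload structure are assumptions made for the example, not the interface of a particular controller.

import requests

NBI = "http://sdn-controller:8181/nbi/services"   # assumed NBI endpoint

# High-level, declarative description of the requested service.
service_request = {
    "type": "site-interconnection",
    "technology": "mpls",
    "sites": ["site-A", "site-B"],
}

# Type-1 behaviour: configure once, never monitor or reconfigure afterwards.
response = requests.post(NBI, json=service_request)
response.raise_for_status()
print("service created:", response.json().get("service_id"))

A Type-2 application would additionally keep a subscription open to receive events after this call, and a Type-3 application would also send new configuration requests in reaction to those events.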
Impact of SDN Applications on Controller design
The variety of SDN applications developed on top of the SDN controller may modify the internal architecture of the controller and its core functions, described in Section 2.4.4. In this section we analyze some of these applications and their contribution to an SDN controller core architecture.
The Aster*x [START_REF] Nikhil | Aster*x: Load-Balancing Web Traffic over Wide-Area Networks[END_REF] and Plug-n-Serve [START_REF] Handigol | Plug-n-Serve: Load-Balancing Web Traffic using OpenFlow[END_REF] projects propose HTTP load-balancing applications that rely on three functional units implemented in the SDN controller: a "Flow Manager", a unit that monitors the state of the network, and a unit that monitors the load of the HTTP servers and reports it to the Flow Manager. This load-balancing application therefore adds two complementary modules inside the controller, alongside the core functions.
In [START_REF] Wang | OpenFlow-based Server Load Balancing Gone Wild[END_REF] the authors implemented a series of load-balancing modules in a NOX controller that partition client traffic among multiple servers. The partitioning algorithm implemented in the controller receives the clients' Transmission Control Protocol (TCP) connection requests arriving at the load-balancer switch, and balances the load over the servers by generating wildcard rules. The load-balancing application proposed in this work is implemented inside the controller, alongside the controller's other core modules.
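The idea behind these wildcard rules can be sketched as follows: the client address space is split into prefixes and each prefix is mapped to one server by a single rule, so that the controller does not have to handle every individual connection. The code below is a simplification written for illustration; the published algorithm additionally adapts the split to the measured load.

import ipaddress

def partition(client_space, servers):
    # Split a client address space into one wildcard prefix rule per subnet.
    net = ipaddress.ip_network(client_space)
    extra_bits = max(1, (len(servers) - 1).bit_length())   # enough subnets for all servers
    rules = []
    for i, subnet in enumerate(net.subnets(prefixlen_diff=extra_bits)):
        rules.append({
            "match": {"ipv4_src": str(subnet)},             # wildcard match on the prefix
            "action": {"forward_to": servers[i % len(servers)]},
        })
    return rules

for rule in partition("10.0.0.0/8", ["server1", "server2", "server3"]):
    print(rule)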
Adjusting the set of active network devices in order to save data center energy is another type of SDN application. ElasticTree [START_REF] Heller | ElasticTree: Saving Energy in Data Center Networks[END_REF], one of these applications, proposes a "network-wide power manager" that increases network performance and fault tolerance while minimizing power consumption. This system implements three main modules: "Optimize", "Power control" and "Routing". The optimizer finds the minimum-power network subset: it uses the topology and the traffic matrix, and outputs a set of active components to both the power control and routing modules. Power control toggles the power states of the elements. Routing chooses paths for all flows and pushes the routes into the network. In ElasticTree these modules are implemented as a NOX application inside the controller. The application pulls network statistics (flow and port counters), sends them to the Optimizer module, and, based on the calculated subset, adjusts flow routes and port status via the OpenFlow protocol. In order to toggle the elements, such as active ports, linecards, or entire switches, different solutions, such as the Simple Network Management Protocol (SNMP) or power over OpenFlow, can be used.
In the SDN architecture, the network topology is one piece of the information provided to applications. The work [START_REF] Gurbani | Abstracting network state in Software Defined Networks (SDN) for rendezvous services[END_REF] proposes the Application-Layer Traffic Optimization (ALTO) protocol as the topology manager component of this architecture. In this work the authors propose this protocol to provide an abstract view of the network to the applications which, based on this information, can optimize their decisions related to service rendezvous. The ALTO protocol provides the network topology while hiding its internal details and policies. The integration of the ALTO protocol into the SDN architecture introduces an ALTO server inside the SDN controller, through which the controller abstracts the information concerning the routing costs between network nodes. This information is sent to SDN applications in the form of ALTO maps. These maps are used in different types of applications, such as data centers, Content Distribution Networks (CDN) and peer-to-peer applications.
Network Function Virtualization, an approach to service orchestration
Network Function Virtualization (NFV) is an approach to virtualize and orchestrate network functions, traditionally carried out on dedicated hardware, on a Commercial Off-The-Shelf (COTS) hardware platform. This is an important aspect of SDN, particularly studied by service providers, who see in it a solution to better adjust their investment to the needs of their customers. The main advantage of using NFV to deploy and manage VNFs is that the Time To Market (TTM) of an NFV-based service is shorter than that of a legacy service, thanks to the standard hardware platform used in this technology. The second advantage of NFV is a lower Capital Expenditure (CapEx), since standard hardware platforms are usually cheaper than the dedicated hardware used in legacy services. This approach, however, has certain issues. Firstly, in a service operator network, there is no longer a single central (data-center-type) network to manage, but several networks deployed with different technologies, both physical and virtual. At first glance this seems contrary to one of the primary objectives of SDN: the simplification of network operations. The second problem is the complexity that the diversity of NFV architecture elements brings to the service management system. In order to create and manage a service, several VNFs must be created. Each of these VNFs is configured by an EMS, and its lifecycle is managed through the Virtual Network Function Manager (VNFM). All VNFs are deployed within an infrastructure managed by the Virtual Infrastructure Manager (VIM).
For the sake of simplicity, we do not mention the license management systems proposed by VNF vendors to manage the licensing of their products. In order to manage a service, all the systems mentioned above must be managed by the Orchestrator.
Chapter 3
SDN-based Outsourcing Of A Network Service
In this chapter we present MPLS networks, their control plane and their data plane. We then study the processes and the parameters necessary to configure a VPN network. In the second part, we study the deployment of this type of network using SDN. For this analysis, we first analyze the management of the VPN network with non-OpenFlow controllers, such as OpenContrail. Then, we analyze the deployment of the VPN network with one of the most developed OpenFlow-enabled controllers: OpenDaylight.
Introduction to MPLS networks
MPLS [START_REF] Rosen | Multiprotocol Label Switching Architecture, RFC 3031[END_REF] technology supports the separation of traffic flows to create VPNs. It allows the majority of packets to be forwarded at Layer 2 rather than Layer 3 of the service provider network. In an MPLS network, the label determines the route that a packet will follow. The label is injected between the Layer 2 and Layer 3 headers of the packet. A label is a 32-bit word containing the following fields:
-Label: 20 bits
-CoS/EXP: specifies the Class of Service used for QoS, 3 bits
-BoS: indicates whether the label is the last one in the label stack (BoS = 1), 1 bit
-Time-To-Live (TTL): 8 bits
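The layout of this 32-bit word can be expressed directly in code; the following Python sketch packs and unpacks a label entry using the field widths listed above.

def pack_mpls(label, exp, bos, ttl):
    # Build the 32-bit MPLS label entry: Label(20) | EXP(3) | BoS(1) | TTL(8).
    return (label & 0xFFFFF) << 12 | (exp & 0x7) << 9 | (bos & 0x1) << 8 | (ttl & 0xFF)

def unpack_mpls(word):
    # Extract the four fields from a 32-bit MPLS label entry.
    return {
        "label": (word >> 12) & 0xFFFFF,
        "exp":   (word >> 9) & 0x7,
        "bos":   (word >> 8) & 0x1,
        "ttl":   word & 0xFF,
    }

entry = pack_mpls(label=16, exp=0, bos=1, ttl=64)
print(hex(entry), unpack_mpls(entry))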
MPLS data plane
The path taken by an MPLS packet is called a Label Switched Path (LSP). MPLS technology is used by providers to improve their QoS by defining LSPs capable of satisfying Service Level Agreements (SLA) in terms of traffic latency, jitter and packet loss. In general, an MPLS network router is called a Label Switch Router (LSR).
-Customer Edge (CE) equipment is the LAN's gateway from the customer to the core network of the service provider.
-Provider Edge (PE) equipment is the entry point to the core network. The PE labels packets, classifies them and sends them onto an LSP. Each PE can be an Ingress or an Egress LSR. We discussed earlier the way this device injects or removes the label of a packet.
-P routers are the core routers of an MPLS network that switch MPLS packets. These devices are Transit LSRs, whose operation was discussed earlier.
MPLS control plane
Each PE can be connected to one or several client sites (Customer Edge (CE) routers), cf. Fig. 3.3. In order to isolate PE-CE traffic and to separate routing tables within the PE, an instance of Virtual Routing and Forwarding (VRF) is instantiated for each site; this instance is associated with the interface of the router connected to the CE. The routes that the PE receives from the CE are recorded in the appropriate VRF routing table. These routes can be propagated by the Exterior BGP (eBGP) [START_REF] Rekhter | A Border Gateway Protocol 4 (BGP-4), RFC 4271[END_REF] or Open Shortest Path First (OSPF) [START_REF] Moy | OSPF Version 2, RFC 1247[END_REF] protocols. The PE distributes the VPN information via Multiprotocol BGP (MP-BGP) [START_REF] Bates | Multiprotocol Extensions for BGP-4, RFC 4760[END_REF] to the other PEs within the MPLS network. It also installs the Interior Gateway Protocol (IGP) routes learned from the MPLS backbone in its Global Routing Table. We illustrate this configuration with an example: joining one of customer_1's sites (Site D of Figure 2) to its VPN. We assume that the MP-BGP of PE4 is already configured and that the MPLS backbone IGP is already running on this router. To start the configuration, the service provider creates a dedicated VRF, called customer_1, and adds the RD value to this VRF; we use RD = 65000:100 for this example. To allow that VRF to distribute and learn the routes of this VPN, the RT assigned to this customer (65000:100) is configured on the VRF. The provider then associates the physical interface connected to CE4 with the created VRF. A routing protocol (eBGP, OSPF, etc.) is configured between the VRF and CE4. This protocol allows Site D's network prefix to be learned, information that will be used by PE4 to send the MP-BGP update to the other PEs. We discussed this process earlier. Upon receiving this update, all sites belonging to this customer are able to communicate with Site D.
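Viewed programmatically, the parameters handled in this configuration step form a small service-layer model. The structure below is only an illustration reusing the values of the example (the interface name is an assumption); Chapter 4 discusses how such a model is transformed into device configurations.

# Illustrative service-layer model for joining Site D to customer_1's VPN.
site_d_attachment = {
    "customer": "customer_1",
    "vrf_name": "customer_1",
    "rd": "65000:100",
    "rt_import": ["65000:100"],
    "rt_export": ["65000:100"],
    "pe": "PE4",
    "ce": "CE4",
    "pe_ce_interface": "GigabitEthernet0/1",   # assumed interface name
    "pe_ce_routing": "eBGP",                   # or OSPF, etc.
}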
MPLS VPN Service Management
In the network of a service provider, the parameters used to configure the MPLS network of a client are neither managed nor configured by this client. In other words, for the sake of security, the customer does not have any right to configure the PEs connected to its sites or to modify the parameters of its service. For example, if a client A modified the configuration of its VRF by supplying the RTs used for another VPN (of a client B), it could overlap its VPN with that of client B and insert itself into the network of this client. On the other hand, a client can set the parameters of the elements of its sites, for example the addressing plan of its Local Area Network (LAN), and exchange the parameters of its service, e.g. service classes (Class of Service (CoS)). Table 1 summarizes the parameters of an MPLS VPN service that can be modified by the service provider and by its client.
MPLS VPN Parameters       Service Provider   Service Client
LAN IP address            ✗                  ✓
RT                        ✓                  ✗
RD                        ✓                  ✗
Autonomous System (AS)    ✓                  ✗
VRF name                  ✓                  ✗
Routing protocols         ✓                  ✗
VPN Identifier (ID)       ✓                  ✗
SDN-based MPLS
Decoupling the control plane from the forwarding plane of an OpenFlow-based MPLS network permits the centralization of all routing and label distribution protocols (i.e. Border Gateway Protocol (BGP), LDP, etc.) in a logically centralized SDNC. In this architecture, the forwarding elements implement only the three MPLS actions needed to establish an LSP. However, this architecture is not the only one proposed to deploy SDN-based MPLS. MPLS naturally decouples the service (i.e. IP unicast) from the transport by LSPs [START_REF] Szarkowicz | MPLS in the SDN Era[END_REF]. This decoupling is achieved by encoding instructions (i.e. MPLS labels) in packet headers. In [START_REF] Szarkowicz | MPLS in the SDN Era[END_REF] the authors propose to use MPLS as a "key enabler" to deploy SDN. In this work, to achieve data center connectivity, the authors propose to use the OpenContrail controller, which allows overlay networks to be established between Virtual Machines (VM) based on BGP/MPLS protocols.
OpenContrail Solution
OpenContrail [START_REF] Singla | Day One: Understanding OpenContrail Architecture[END_REF] is an open source controller built on a BGP and MPLS service architecture. It decouples the overlay network from the underlay, and the control plane from the forwarding plane, by centralizing network policy management. Its architecture includes, among others:
-Control nodes, which propagate the low-level model to and from the network elements.
-Analytics nodes, which capture real-time data from network elements, abstract it, and present it in a form suitable for applications to consume.
OpenFlow-based MPLS Networks
The OpenDaylight controller considered in this approach implements a set of core modules, including:
-Topology Manager: handles information about the network topology. At boot time, it builds the topology of the network based on the notifications coming from the switches. This topology can be updated according to notifications coming from other modules like the Device Manager and the Switch Manager.
-Statistics Manager: sends statistics requests to the resources (switches), collects statistics and stores them in a database. This component implements an API to retrieve information such as meters, tables, flows, etc.
-Forwarding Rules Manager: manages forwarding rules, resolves conflicts and validates rules. This module communicates with the equipment via the SBI. It deploys the new rules in the switches.
-Switch Manager: provides information on nodes (network equipment) and connectors (ports). When the controller discovers a new device, it stores its parameters in this module. The latter provides an API for retrieving information about nodes and discovered links.
-Host Tracker: provides information on end devices. This information can be the switch type, port type, network address, etc. To retrieve this information, the Host Tracker uses ARP. The database of this module can also be manually enriched via the northbound API.
-Inventory Manager: retrieves information about the switches and their ports to keep its database up to date.
These modules provide APIs at the NBI level allowing the controller to be programmed to install flows. Using these APIs, the modules implemented in the application layer are able to control the behavior of each piece of equipment separately. Programming an OpenFlow switch consists of a tuple of two parts: a match and an action. Using this tuple, for each incoming packet, the controller can decide whether the packet should be treated and, if so, which action should be applied to it. This programming capacity gives rise to a large API allowing almost every packet type to be manipulated, including MPLS packets.
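Conceptually, a flow entry is therefore a (match, action) tuple. The structure below illustrates this idea in Python; the attribute names are generic assumptions and do not follow the exact schema of a given controller.

# Conceptual (match, action) tuple describing one flow entry.
flow_entry = {
    "priority": 500,
    "match": {
        "in_port": 1,
        "eth_type": 0x0800,               # IPv4
        "ipv4_dst": "192.168.10.0/24",
    },
    "actions": [
        {"type": "set_queue", "queue_id": 2},
        {"type": "output", "port": 3},
    ],
}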
OpenDaylight native MPLS API
OpenDaylight proposes native APIs for the three MPLS actions, PUSH, POP, and SWAP, that each LSR may apply to a packet. Using these APIs, the NBI application may install flows on the Ingress LSR pushing a label onto a packet entering the MPLS network. It may install flows on Transit LSRs to swap labels and route the packet along the LSP. It may also install a flow on the Egress LSR to send the packet to its final destination by popping the label.
In order to program the behavior of the underlying network via this native API, the application needs a detailed view of the network and its topology, and control over the MPLS labels used. Table 3.2 summarizes the parameters that an application may control using the OpenDaylight native API.
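Using these native actions, an NBI application typically installs one flow per LSR role along the LSP. The three entries sketched below show the ingress push, the transit swap and the egress pop; the attribute names are illustrative and do not reproduce the exact OpenDaylight request format.

ETH_MPLS = 0x8847   # MPLS unicast ethertype

ingress_push = {    # installed on the Ingress LSR
    "match":   {"eth_type": 0x0800, "ipv4_dst": "10.2.0.0/16"},
    "actions": [{"type": "push_mpls", "ethertype": ETH_MPLS},
                {"type": "set_mpls_label", "label": 100},
                {"type": "output", "port": 2}],
}

transit_swap = {    # installed on each Transit LSR of the LSP
    "match":   {"eth_type": ETH_MPLS, "mpls_label": 100},
    "actions": [{"type": "set_mpls_label", "label": 200},   # swap 100 -> 200
                {"type": "output", "port": 3}],
}

egress_pop = {      # installed on the Egress LSR
    "match":   {"eth_type": ETH_MPLS, "mpls_label": 200},
    "actions": [{"type": "pop_mpls", "ethertype": 0x0800},
                {"type": "output", "port": 1}],
}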
OpenDaylight VPN Service project
Apart from the OpenDaylight core functions, additional modules can be developed in this controller. In order to deploy a specific service, these modules benefit from the information provided by the core modules. As discussed in this example, the VPN Service project and its interfaces are rich enough to deploy a VPN service via OpenDaylight. Nevertheless, in order to create a sufficiently complex VPN service, the user must manage the information concerning the service, its sites and its equipment. Table 3.3 summarizes the information that a user should manage using this project. As shown in this table, the amount of data to manage, information about the BGP routers (local AS number and identifier), information about the BGP neighbors (AS number and IP address), information on the VPN (VPN ID, RD and RT), etc., can quickly grow. This large amount of information can make service management more complex and reduce the QoS. It is important to note that the SDN controller of an operator manages a set of services on different network equipment shared between several clients.
That means that, for the sake of security, most of the listed information will not be made available to the customer.
Outsourcing problematics
Decoupling the control plane from the data plane of MPLS networks and outsourcing the control plane into a controller brings several benefits in terms of service management, service agility and QoS [START_REF] Ali | MPLS-TE and MPLS VPNS with Openflow[END_REF][START_REF] Das | MPLS with a simple OPEN control plane[END_REF]. The centralized control layer offers a service management interface available to the customer. Nevertheless, this outsourcing and openness can create several challenges.
The MPLS backbone is an environment shared among the customers of an operator. To deploy a VPN network, as discussed above, the operator configures a set of devices situated in the core and at the edge of the network. This equipment mostly provides several services to customers in parallel. The customers use the VPN connection as a reliable means of exchanging their confidential data.
Outsourcing the control plane to an SDNC provides a lot of visibility on the traffic exchanged within the network. It is through this controller that a customer can create an on-demand service and manage this service dynamically. Tables 3.2 and 3.3 present in detail the information sent through the NBI to deploy a VPN service. These NBIs are proposed by the two solutions of Sections 3.2.2.2 and 3.2.2.3. The granularity of this information gives the customer more freedom in the creation and management of his service. However, beyond this freedom, a customer having access to the NBI can not only modify the parameters of his own service (i.e. VPN) but can also modify the parameters concerning the services of other customers.
In order to control the customers' access to the services managed by the controller, while maintaining service management agility, we propose to introduce a service management framework on top of the SDNC. From a bottom-up perspective, this framework provides an NBI abstracting all the rich SDNC functions and control complexities discussed in Section 2.5.3.
We strengthen this framework by addressing the question of the client's access to managed resources and services. Indeed, this framework must be able to provide an NBI of variable granularity, through which the customer is able to manage all three types of services discussed in Section 2.5.2:
-Type-1 applications: The service abstraction model brought by the framework's NBI allows the customer-side application to configure a service with a minimum of information communicated between the application and the framework. The restricted access provided by the framework prevents unintentional or intentional data leaking and service misconfiguration.
-Type-2 applications: On the southern side, internal blocks of the framework receive upcoming network events directly from the resources, or indirectly through the SDNC. On the northern side, these blocks open up an API allowing applications to subscribe to metrics used for monitoring purposes. Based on the received network events, these metrics are calculated by the framework's internal blocks and sent to the appropriate application.
-Type-3 applications: The controlled access to SDN-based functions ensured by the framework provides not only a service management API, but also a service control API, open to the customers' applications. This fine-grained control API allows customers to have low-level access to network resources via the framework. Using this API, customers receive upcoming network events sent by the devices, based on which they reconfigure the service.
In order to provide a framework able to implement the mentioned APIs, we need to analyze the service lifecycle in detail. This analysis gives rise to all the internal blocks of the framework and all the steps they may take, from presenting a high-level service and control API to deploying low-level resource allocation and configuration.
Chapter 4
Service lifecycle and Service Data Model

In order to propose different levels of abstraction on top of the service provider's platform, a service orchestrator should be integrated at the top of the SDNC. This system allows a third-party actor, called user or customer, to participate in all or part of his network service lifecycle.
Nowadays, orchestrating an infrastructure based on SDN technology is one of the SDN challenges. To our knowledge, this problem has so far been addressed once, by Tail-F, which proposes a partial proprietary solution [START_REF] Chappell | Creating the Programmable Network, The Business case for Netconf/YANG in network devices[END_REF]. In order to reduce the Operation Support System (OSS) cost and also the TTM of services, the Tail-F Network Control System (NCS) [START_REF]Tail-f Network Control System (NCS) -Datasheet[END_REF] introduces an abstraction layer on top of the NBI in order to implement different services, including Layer 2 or Layer 3 VPNs. It addresses an automated chain from the service request, on the one hand, to the deployment of device configurations in the network, on the other hand. To transform the informal service model into a formal one, this solution uses the YANG data model [START_REF] Bjorklund | YANG -A Data Modeling Language for the Network Configuration Protocol (NETCONF)[END_REF]. The service model is mapped into device configurations as a data model transformation. The proposed work does not, however, cover all management phases of the service lifecycle, especially service monitoring, maintenance, etc., and it does not study the possibility of opening up a control interface to a third-party actor. Due to the proprietary nature of this product, it is not possible to precisely analyze its internal structure.
We present in this chapter a comprehensive solution to this problem by identifying a reasonable set of capabilities of the SDN NBI together with the associated API. Our first contribution rests on a global analysis of an abstract model of the operator platform articulated around a generic but simple service lifecycle, described in Section 4.1, which takes into account the view of the user together with that of the operator. The second part of this chapter, Section 4.2, is dedicated to the service data model analysis, where we describe the data model(s) used in each service lifecycle phase, both on the client side and on the operator side.
Service Lifecycle
The ability to manage the lifecycle of a service is essential to implement it on an operator platform. Existing service lifecycle frameworks are oriented towards human-driven services. For example, if a client needs to introduce or change an existing service, the operator has to configure the service manually. This manual configuration may take hours or sometimes days. It may therefore significantly affect the operator's OpEx. It clearly appears that the operator has to re-think its service implementation in order to provision services dynamically and to develop on-demand services. There are proposals aiming at such on-demand network resource provisioning. For instance, the GYESERS project [START_REF] Demchenko | GYESERS Project, Service Delivery Framework and Services Lifecycle Management in on-demand services/resources provisioning[END_REF] proposed a complex service lifecycle model for on-demand service provisioning.
This model includes five typical stages, namely service request/SLA negotiation, composition/reservation, deployment/registration and synchronization, operation (monitoring), and decommissioning. The main drawback of this model rests on its inherent complexity. We argue that this complexity may be reduced by splitting the global service lifecycle into two complementary and manageable viewpoints: the client view and the operator view. Each of the two views captures only the information useful for the associated actor. The global view may, however, be obtained by composing the two partial views.
In a fully virtualized network based on SDN, the SDNC is administratively managed by the Service Operator. The latter provides a programmable interface, called NBI, at the top of this SDNC allowing the OSS and Service Client applications to configure on-demand services.
In order to analyze the service lifecycle, and to propose a global model of the service lifecycle in this kind of network, an analysis of the application classification is necessary. In Section 2.5.2 we made an intuitive classification of SDN applications. This classification allows us to analyze the service lifecycle on both the operator and client sides.
Client side Service Lifecycle
Based on the application classification discussed in Section 2.5.2, we analyze the client side service lifecycle of the three main application types.
Client side Service Lifecycle managed by Type-1 applications
Type-1 applications consist of applications creating a network service using the NBI. This category neither monitors nor modifies the service based on upcoming network events. The corresponding client-side lifecycle is therefore limited to two phases: Service Creation and Service Retirement.
Client side Service Lifecycle managed by Type-2 applications
This category of applications takes advantage of events coming up through the NBI to monitor the service. It is worth noting that this service may be created by the same application which monitors it.
-Service monitoring: Once created, the service may be used by the client for the negotiated duration. During this time some network and service parameters will be monitored by the application.
Client side Service Lifecycle managed by Type-3 applications
In a more complex case, an application may create the service through the NBI, monitor the service through this interface, and, based on upcoming events, reconfigure the network via the SDNC. This type of control adds a retroactive step to the client-side service lifecycle, as illustrated in Fig. 4
Global Client-side Service Lifecycle
A global client-side service lifecycle is illustrated in Fig. 4.
-Service modification and update: The management of the operator's network may lead to an update of the service. This update can be triggered by a problem occurring during the service consumption or by a modification of the network infrastructure. The update may be minimal, such as modifying a rule in one of the underlying devices, or it may impact the previous steps, with consequences on the service creation and/or on the service consumption.
-Service retirement: cf. Section 4.1.1.1.
Operator Side Service Lifecycle
The Operator-side service lifecycle is illustrated in Fig. 4.5. This service lifecycle consists of six main steps:
-Service request: Once a service creation or modification request arrives from the user's service portal (through the NBI), the request manager negotiates the SLA and a high-level service specification in order to implement it. It is worth noting that before agreeing on the SLA the operator should ensure that the existing resources can cope with the requested service at the time it will be deployed. In case of unavailability, the request will be enqueued.
-Service decomposition and compilation: the negotiated service model is decomposed into elementary service models and compiled into a set of network resource configurations (cf. Section 4.2).
-Service configuration: Based on the previously generated set of network resource configurations, several instances of the corresponding virtual resources will be created, initialized and reserved. The requested service can then be implemented on these virtual resources by deploying the network resource configurations generated by the compiler.
-Service maintain, monitoring and operation: Once a service is implemented, its availability, performance and capacity should be maintained automatically. In parallel, a service log manager will monitor the whole service lifecycle.
-Service update: During the service exploitation, the network infrastructure may require changes due to execution problems, technical evolution requirements, etc. This leads to updates which may impact the service in different ways. The update may be transparent to the service or it may require re-initiating a part of the first steps of the service lifecycle.
-Service retirement: the service configuration will be removed from the infrastructure as soon as a retirement request arrives at the system. Service retirement issued by the operator is out of the scope of this work. We argue that this service lifecycle on the provider side is generic enough to manage the three types of applications discussed in Section 2.5.2.
The global view
The global service lifecycle is the combination of both service lifecycles explained in Sections 4.1.1 and 4.1.2:
-Service Monitoring ↔ Service Maintain, Monitoring and Operating: client-side service monitoring, which is executed during the service consumption, runs in parallel with the operator-side service maintain, monitoring and operation phase.
-Service Update ↔ Service Update: the operator-side service maintain, monitoring and operation phase may lead to the service update phase in the client-side service lifecycle.
-Service Retirement ↔ Service Retirement: at the end of the service life, the client-side service retirement phase is executed in parallel with the operator-side service retirement.
In the following we describe the service model(s) used during each step of the operator-side service lifecycle, discussed in Section 4.1.2:
-Service Request: to negotiate the service with the customer, the operator relies on the service layer model. This model is the same as the model used in the client-side service lifecycle. For example, for a negotiated VPN service, both the Service Request step and the client-side service lifecycle will use the same service layer model. An example of this model is discussed in Section 4.2.1 [START_REF] Moberg | A two-layered data model approach for network services[END_REF].
-Service Decomposition and Compilation: this step receives, on the one hand, the service layer model and generates, on the other hand, device configuration sets. Compared to the proposed two-layered approach, this phase is equivalent to the intermediate layer transforming data models. A service layer model can be a fusion of several service models that, for the sake of simplicity, are merged into a global model. During the decomposition step this global model is broken down into elementary service models which are used in the compilation step. They are finally transformed into sets of device models. The transformation of models can be done through two methods (a sketch of both methods is given after this list):
-Declarative method: a straightforward template that makes a one-to-one mapping of the parameters of a source data model onto a destination one. For example, a service model describing a VPN can be transformed into device configuration sets by a one-to-one mapping of the values given in the service model. In this case it is sufficient that the transformer retrieves the required values from the first model to construct the device model based on a given template.
-Imperative method: an algorithmic expression used to map one data model onto a second one. Usually the source model contains some dynamic parameters, e.g. an unbounded list of interfaces. An example is a VPN service model in which each client site, i.e. CE, has a different number of uplinks (1..n) connected to a different number of PEs (1..m). In this case the transformation is no longer a simple one-to-one mapping, but rather an algorithmic process (here a loop) that creates the required device models from the service model.
Using one of these methods, i.e. declarative or imperative data transformation, has its own advantages and drawbacks, and hardly would one of them be superior to the other [START_REF] Pichler | Imperative versus Declarative Process Modeling Languages: An Empirical Investigation[END_REF][START_REF] Fahland | Declarative versus Imperative Process Modeling Languages: The Issue of Understandability[END_REF]. We argue that the choice of the transformation method used in the compilation phase rests on the service model, its related device model and the granularity of the parameters within each model.
-Service configuration: to configure a resource, the device model generated by the transformation method of the previous step (i.e. compilation) is used. If this model is generated in the same data model known by the network element, no further transformation is needed. Otherwise another data transformation has to be performed, converting the original device model into one compatible with the network element. It is worth noting that, since this transformation is a one-to-one mapping task, it can be done with the declarative method.
-Service maintain, monitoring and operation: since the service maintenance and operation process is performed directly on network elements, the data model used in this phase is the device model. The model used for the monitoring task of this phase, however, depends on the nature of the monitored resource. For example, if the service engineers and operators need to monitor the status of a resource, they might use a monitoring method such as SNMP or BGP signaling-based monitoring [START_REF] Di Battista | Monitoring the status of MPLS VPN and VPLS based on BGP signaling information[END_REF], the result of which is described in a device model. Otherwise, if a service customer needs to monitor its service, e.g. monitoring the latency between the two endpoints of a VPN connection, the monitoring information sent from the operator to the customer is transformed into a service data model. This bottom-up transformation can be done by the declarative or the imperative method.
-Service update: updating a service consists in updating network element configurations, hence the data model used in this phase is a device data model. Nevertheless this update may imply a modification of the service model presented to the customer. In this case, at the end of the service update process, a new service model will be generated based on the final state of the network elements. This new model is the result of a bottom-up data transformation done through the declarative or imperative methods.
-Service retirement: decommissioning a service is made up of all the tasks required to remove the service-related configurations from the network elements, and eventually to remove the resources themselves. In order to remove device configurations, device data models are used. However, during the retirement phase the service model is also used. The data model transformation done in this phase entirely depends on the source of the retirement process. Indeed, if the service retirement is requested by the customer, the request arrives from the client side described in a service model.
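The two transformation methods introduced in the decomposition and compilation step can be sketched as follows. The field names and model structures are assumptions chosen for illustration: the declarative function is a one-to-one template mapping, while the imperative one loops over a variable number of CE uplinks and produces one device model per (uplink, PE) pair.

def declarative_transform(service_model):
    # One-to-one mapping of a VPN service model onto a device model template.
    return {
        "vrf": service_model["vrf_name"],
        "rd": service_model["rd"],
        "rt": service_model["rt"],
        "interface": service_model["pe_ce_interface"],
    }

def imperative_transform(service_model):
    # Algorithmic mapping: one device model per (CE uplink, PE) pair.
    device_models = []
    for site in service_model["sites"]:
        for uplink in site["uplinks"]:                 # 1..n uplinks per CE
            device_models.append({
                "pe": uplink["pe"],                    # 1..m PEs may be involved
                "vrf": service_model["vrf_name"],
                "rd": service_model["rd"],
                "rt": service_model["rt"],
                "interface": uplink["pe_interface"],
            })
    return device_models

The same two styles apply to the bottom-up transformations mentioned above, e.g. mapping device-level counters into a customer-facing latency report.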
Conclusion
In this chapter we conducted an analysis of the service lifecycle in an SDN-based ecosystem.
This analysis has led us to two general service lifecycles: client-side and operator-side. On the first side we discussed how an application implementing network services using an SDNC can contribute to the client-side service lifecycle. For this purpose, for each application category discussed in Section 2.5.2, we presented a client-side service lifecycle model, discussing the additional steps that each category may add to this model. Finally, a global client-side service lifecycle was presented. This global model contains all the steps needed to deploy each type of application. We also presented a global model of the operator-side service lifecycle. It represents the model that an operator may follow to manage a service from the service negotiation to the service retirement phase.
In the second part of this chapter we discussed the data model used in each service lifecycle phase. Through an example we explained in detail the manner in which a data model is transformed from a source model into a destination one.
We argue that presenting a service lifecycle model, on the one hand, allows the implementation of a global SDN orchestration model managed by an operator. On the other hand, this model helps us to understand the behavior of applications, and in this way it will simplify the specification of the NBI in forthcoming studies. Presenting the data model also describes in detail the behavior of the management system at each service lifecycle step. It also permits the definition of the operational blocks and their relations, allowing the implementation of the operator-side service lifecycle.

Chapter 5

An SDN-based Framework For Service Provisioning

In this chapter we present a framework involving a minimal set of functions required to implement the operator-side service lifecycle.
Orchestrator-based SDN Framework
Service management processes, as illustrated in the previous example, can be divided into two more generic families: the first one managing all the steps executing service-based tasks, from service negotiation to service configuration and service monitoring, and the second one managing all resource-based operations. These two families, which together manage the whole operator-side service lifecycle (discussed in Section 4.1.2), can be represented as the framework illustrated in Fig. 5.3.
The model is composed of two main orchestration layers:
-Service Orchestrator (SO)
-Resource Orchestrator (RO)
The "Service Orchestrator" will be dedicated to the service part operations and is conform to the operator side service lifecycle, cf. The "Resource Orchestrator" will manage resource part operations:
-Resource Reservation
-Resource Monitoring
Service Orchestrator (SO): This orchestrator receives service orders and initiates the service lifecycle by decomposing complex, high-level service requests into elementary service models. These models allow the type and the size of the resources needed to implement the service to be derived. The SO will request the reservation of virtual resources from the lower layer and deploy the service configuration on the virtual resources through an SDNC.
Resource Orchestrator (RO): This orchestrator, which manages physical resources, will reserve and initialize virtual resources. It maintains and monitors the state of the physical resources using the southbound interface.
Internal structure of the Service Orchestrator
As mentioned in Fig. 5.3, the first orchestrator, SO, contains five main modules:
-SRM
-SDCM
-SCM
The SCM can be considered as the resource driver of the SO. This module is the interface between the orchestrator and the resources. Creating such a module facilitates the processes run in the upper layers of the orchestrator, where the service can be managed independently of the technologies, controllers and protocols implementing and controlling the resources. On the one hand, this module communicates with the different resources through its SBI. On the other hand, it exposes a universal resource model to the other SO modules, specifically to the SDCM.
Configuring a service with the SCM is decomposed into two tasks: creating the resource in a first step (if the resource does not exist, cf. arrow 4 of Fig. 5.6), and configuring that resource in a second step (cf. arrow 5 of Fig. 5.6). In our example, once the PE3 ID and the required configuration are received from the SDCM side, the SCM firstly fetches the management IP address of PE3 from its database. Secondly, if the requested vRouter is missing on the PE, it creates a vRouter (cf. arrow 4 of Fig. 5.6). And thirdly, it configures that vRouter to fulfill the requested service.
In order to create the required resource (i.e. to create the vRouter on PE3), the SCM sends a resource creation request to the RO (arrow 4 of Fig. 5.6). Once the virtual resource (vRouter) is initialized, the RO acknowledges the creation of the resource by sending the management IP address of that resource to the SCM. All the latter needs to do is to push the generated configuration to that vRouter using its management IP address (arrow 6 of Fig. 5.6). The configuration of the vRouter can be done via different methods. In our example the vRouter is an OpenFlow-enabled device programmable via the NBI of an SDNC. To configure the vRouter, the SCM uses its interface with the SDNC controlling this resource.
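The two-step behaviour of the SCM described above can be summarised as follows; the scm_db, ro and sdnc objects and their methods are assumptions standing in for the real interfaces of the implementation.

def deploy_on_pe(pe_id, device_config, scm_db, ro, sdnc):
    # SCM workflow: create the vRouter if it is missing, then configure it.
    pe_mgmt_ip = scm_db.management_ip(pe_id)            # fetch PE3's management address
    if not scm_db.has_vrouter(pe_id):
        # Step 1: ask the Resource Orchestrator to instantiate the vRouter on the PE.
        vrouter = ro.create_vrouter(host=pe_mgmt_ip, image="vrouter-image")
        scm_db.register_vrouter(pe_id, vrouter)
    # Step 2: push the generated configuration through the SDN controller.
    sdnc.program_flows(dpid=scm_db.vrouter_dpid(pe_id),
                       flows=device_config["flows"])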
SCM -SDN Controller (SDNC) Interface
As we explained, the configuration of part or all of the virtual resources used to fulfill a service can be done through an SDNC. In Section 3.2.2.1 we analyzed the architecture of the OpenDaylight controller, which provides a rich set of modules allowing network elements to be programmed.
This controller exposes on its NBI some Representational State Transfer (REST) APIs allowing flows to be programmed thanks to its internal Flow Programmer module. These APIs allow the behavior of a switch to be programmed based on a "match" and "action" tuple. Among the actions applied to a received packet, the Flow Programmer allows MPLS labels to be pushed, popped and swapped.
In order to program the behavior of the initialized vRouter, we propose to use the API provided by the OpenDaylight Flow Programmer. The vRouter's role is to push an MPLS label onto packets going out from Site D to the other sites (A and C), and to pop the MPLS labels from incoming packets sent by these remote sites. To program each flow, OpenDaylight requires the Datapath ID (DPID) of the vRouter, the inbound and outbound port numbers, the MPLS labels to be pushed, popped or swapped, and the IP address of the next hop to which the packet should be sent. In the following we discuss how this information is managed and sent to the SDNC.
DPID: At resource creation time, the vRouter is programmed to connect automatically to OpenDaylight. The connection establishment between these two entities is described in the OpenFlow specification: the vRouter sends its DPID to the controller in the OpenFlow features reply. This DPID, known by the SCM and the SDNC, is subsequently used as the unique ID of this virtual resource.
Port numbers: The inbound and outbound port numbers are in practice the interface numbers of the vRouter created by the SO. To create a virtual resource, the SCM relies on a resource template describing the interface ordering of that resource. This template describes which interface is used for management purposes, which interface is connected to the CE and which one is connected to the P router inside the MPLS network. The template is stored in the database of the SCM, and this module uses it to generate the REST requests sent to the SDNC.
MPLS labels: MPLS labels are the other parameters needed to program the flows inside the vRouter. These labels are generated and managed by the SDCM. This module controls the consistency of the labels inside a managed MPLS network. Labels are generated in this layer and are sent to the SCM to be used in the service deployment step.
Next hop IP address: When a packet enters the vRouter from the CE side, the MPLS label is pushed onto the packet and the packet is sent to the next LSR. Since the MPLS network, including the LSRs, is managed and configured by the SO, the latter has an up-to-date view of the topology of this network. The IP address of the P router directly connected to the PE is one piece of information that can be exported from the topology database of the SO managed by the SCM.
Once the vRouter is created and configured on PE3, the LSPs of the MPLS network also have to be updated. At the end of the vRouter creation step, the customer owns three sites, each one connected to a PE hosting a vRouter. The SCM configures on each vRouter (1 and 2) the label that should be pushed onto each packet sent to Site D and vice versa (cf. arrows 6, 8, 10 of Fig. 5.6). It also configures the P router directly connected to PE3 to take into account the label used by vRouter3 (cf. arrow 12 of Fig. 5.6).
Service Monitoring Manager (SMM)
In parallel to the three main modules explained previously, the SO contains a monitoring system, called SMM, that monitors vertically the functionality of all the orchestrator's modules, from the SRM to the SCM and its SDNC. This module has two interfaces towards the outside of the orchestrator. On the one hand, it receives upcoming alarms and statistics from the lower orchestrator, the RO; on the other hand, it communicates the service statistics to the external application via the NBI.
Internal architecture of the Resource Orchestrator
As mentioned in Sections 4.1.2 and 5.2.1.3, during the service configuration phase, the RO will be called to initialize the resources required to implement the service. In the service configuration step, if a resource is missing, the SCM will request the RO to initialize the resource at the specified location. The initialized resource can be virtual or physical according to the operator's policy and/or the negotiated service contract.
Existing cloud orchestration systems, such as the OpenStack platform [START_REF] Sefraoui | OpenStack: Toward an Open-source Solution for Cloud Computing[END_REF], are good candidates to implement an RO. OpenStack is a modular cloud orchestrator that permits the provisioning and management of a large range of virtual resources, from computing resources, using its Nova module, to L2/L3 LAN connections between the resources, using its Neutron module. The flexibility of this platform, the variety of supported hypervisors and its optimized resource management [START_REF] Huanle | An OpenStack-Based Resource Optimization Scheduling Framework[END_REF] can help us to automatically provision virtual resources, including virtual servers or virtual network units. We continue exploiting the proposed framework based on an RO implemented with the help of OpenStack.
In order to implement and manage the resources needed to bring up a network service, an interface is created between the SO and the RO, through which the SCM can communicate with the underlying OpenStack platform providing the virtual resource pool. This interface provides a resource management abstraction to the SO. The resource request passes through various internal blocks of OpenStack, such as Keystone, which controls access to the platform. As our study is mostly focused on service management, we do not describe the functionality of each OpenStack module in detail in this proposal. In general, the internal architecture of the required RO is composed of two main modules: one used to provide virtual resources, a composition of the Nova, Cinder, Glance and Swift modules of OpenStack, and another one used to monitor these virtual resources, thanks to the Ceilometer module of OpenStack.
If the RO faces an issue, it informs the SO consuming the resource. The service run-time lifecycle and performance are monitored by the SO. When the SO receives an alarm sent by the RO, or when a service run-time problem occurs on the virtual resources, it will either perform some task to resolve the problem autonomously or send an alarm to the service consumer application (service portal).
Creating a virtual resource requires a set of information, software and hardware specifications, such as the version of the firmware installed inside that resource, the number of physical interfaces, the capacity of its Random-Access Memory (RAM), and its startup configuration, like the IP address of the resource. For example, to deploy a vRouter, the SO needs a software image which installs the firmware of this vRouter. Like all computing resources, a virtual one also requires some amount of RAM and hard disk space. In the OpenStack world, these requirements are gathered within a Flavor. Fig. 5.7 illustrates a REST call, sent from the SCM to the RO, requesting the creation of the vRouter. In this example, the SCM requests the creation of "vPE1", a vRouter, on a resource called "PE1", using an image called "cisco_vrouter" and the flavor "1".
curl -X POST -H "X-Auth-Token:$1" \
     -H "Content-Type: application/json" \
     -d '{
           "server": {
             "name": "vPE1",
             "imageRef": "cisco_vrouter",
             "flavorRef": "1",
             "availability-zone": "SP::PE1",
             "key_name": "OrchKeyPair"
           }
         }' \
     http://resourceorchestrator:8774/v2/admin/servers | python -m json.tool

FIGURE 5.7: REST call allowing to reserve a resource
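The same request can be issued programmatically by the SCM; the sketch below uses the Python requests library against the endpoint of Fig. 5.7, with the token handling left as a placeholder.

# Sketch of the SCM issuing the vRouter creation request of Fig. 5.7 from Python.
# Endpoint, token and identifiers mirror the curl example; adapt them to the deployment.
import json
import requests

NOVA_ENDPOINT = "http://resourceorchestrator:8774/v2/admin/servers"

def create_vrouter(token, name="vPE1", image="cisco_vrouter", flavor="1", zone="SP::PE1"):
    payload = {
        "server": {
            "name": name,
            "imageRef": image,
            "flavorRef": flavor,
            "availability-zone": zone,
            "key_name": "OrchKeyPair",
        }
    }
    headers = {"X-Auth-Token": token, "Content-Type": "application/json"}
    resp = requests.post(NOVA_ENDPOINT, headers=headers, data=json.dumps(payload))
    resp.raise_for_status()
    return resp.json()  # contains the identifier of the newly created resource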
Framework interfaces
The composition of this framework requires the creation of three interfaces (cf. Fig. 5.3). The first one, the NBI, provides an abstracted service model, enriched by some value-added services, to the third-party application or service portal. The second one, the SBI, interconnects the SO to the resource layer through the SDNC; this interface permits the SCM to configure and control virtual or physical resources. The inter-orchestrator (middle) interface is the third interface, presented for the first time in this framework. It interconnects the SO to the ROs. The modularity created by this interface permits the implementation of a distributed orchestration architecture, in which one or several SO(s) control and communicate with one or several RO(s).
Implementation
In order to describe the internal architecture of the framework, we illustrate the implementation of the different layers of the Service Orchestrator through the MPLS VPN deployment example.
Hardware architecture
Fig. 5.8 shows the physical architecture of our implementation. It is mainly composed of three servers, each implementing one of the main blocks:
-Server1 implements the Mininet platform [START_REF] De Oliveira | Using Mininet for emulation and prototyping Software-Defined Networks[END_REF]. For the sake of simplicity, and because of the lack of resources, the infrastructure of our implementation is based on a Mininet platform, which emulates all the resources, routers and hosts, needed to deploy our target architecture.
-Server2 implements the OpenDaylight SDN controller [START_REF]The OpenDaylight SDN Platform[END_REF]. For this implementation we use the Carbon version of OpenDaylight. Through its SBI, this controller manages the resources emulated by the Mininet platform using the OpenFlow protocol.
For this implementation, we study the case where all three customer sites are already connected to the core network and the physical connections between the CE and PE routers are established.
Software architecture
Given that our analysis focuses on the architecture of the SO, in this implementation we study the case where the required resources already exist. In this case, the deployment of the service relies on the SO and its related SDNC. Fig. 5.10 shows the internal architecture and the class diagram of the implemented SO.
The architecture of the orchestrator follows the object-oriented paradigm and is developed in Python 2.7. In our implementation, each SO layer is developed in a separate package:
Service Request Manager (SRM): contains several classes, including Service_request, Customer and Service_model. On the one hand it implements a REST API used by the customer; on the other hand it manages all the services proposed to the customer as well as the service requests arriving from the customer. For this, it uses two other classes, each controlling the resources managed in this layer. The first one, the Customer class, manages the customer, its subscribed services and the services available to him. The second one, the Service_model class, manages the customer-facing service models; this model is used to present the service to the customer.

In the first step, the service configuration logic retrieves from the Topology module the list of PEs connected to each remote site. Using the integrated Dijkstra engine of the Topology module, it computes the shortest path from each PE to the other sites. Then, using the labels generated by the Label_manager and the device model templates managed by the Flow_manager, it generates the list of device models to be deployed on the underlying network devices to create the required LSP (a simplified sketch of this computation is given below).
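The following simplified Python sketch illustrates the path computation and device model generation described above; the topology graph, the toy LabelManager and the structure of the generated models are illustrative stand-ins for the Topology, Label_manager and Flow_manager modules.

# Simplified sketch of the LSP computation described above.
# networkx stands in for the Dijkstra engine of the Topology module.
import itertools
import networkx as nx

class LabelManager(object):
    """Toy stand-in for Label_manager: hands out unique MPLS labels."""
    def __init__(self, start=100):
        self._next = itertools.count(start)
    def allocate(self):
        return next(self._next)

def build_lsps(topology, pe_list, labels):
    """Return one device model per ordered PE pair, one label per direction."""
    device_models = []
    for src, dst in itertools.permutations(pe_list, 2):
        path = nx.shortest_path(topology, src, dst, weight="cost")
        device_models.append({
            "ingress_pe": src,
            "egress_pe": dst,
            "label": labels.allocate(),
            "hops": path,            # routers to be programmed along the LSP
        })
    return device_models

if __name__ == "__main__":
    g = nx.Graph()
    g.add_weighted_edges_from([("PE1", "P1", 1), ("P1", "PE2", 1),
                               ("P1", "P2", 1), ("P2", "PE3", 1)], weight="cost")
    for model in build_lsps(g, ["PE1", "PE2", "PE3"], LabelManager()):
        print(model)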
In our implementation, we use a device model database containing all the models needed to create an MPLS network on an OpenFlow-based infrastructure. This database is managed by the Flow_manager module; each of its entries is a flow template.

{ "service_type": "mpls_vpn",
  "customer_id": "customer_1",
  "properties": {
    "ce_list": [
      { "ce_id": "ce1",
        "lan_net": "19
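A flow-template entry and its instantiation could look like the sketch below; the template fields and the fill-in logic are illustrative and do not reproduce the exact OpenDaylight flow format.

# Toy Flow_manager: device-model templates keyed by role, filled with computed values.
FLOW_TEMPLATES = {
    "pe_push_label": {
        "match": {"in_port": None, "ipv4_dst": None},
        "actions": ["push_mpls", "set_label:{label}", "output:{out_port}"],
    },
}

def instantiate(template_name, match_values, **action_values):
    template = FLOW_TEMPLATES[template_name]
    return {"match": dict(template["match"], **match_values),
            "actions": [a.format(**action_values) for a in template["actions"]]}

print(instantiate("pe_push_label",
                  {"in_port": 1, "ipv4_dst": "192.168.1.0/24"},
                  label=101, out_port=2))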
Conclusion
In this chapter, we proposed an SDN framework derived from the operator-side service lifecycle discussed in Section 4.1.2. This framework, structured in a modular way, encapsulates the SDNC within two orchestrators, the SO and the RO, dedicated respectively to the management of services and resources. The proposed framework is externally bounded by the NBI and the SBI, and internally clarifies the border between the two orchestrators by identifying an internal interface between them, called the middle interface, which provides a virtual resource abstraction layer on top of the RO. Our approach lays the foundation for a rigorous definition of the SDN architecture.
It is important to note the difference between the SO and its complementary system, the RO. The RO provisions, maintains and monitors the physical devices hosting several virtual resources.
It has no view of the running configuration of each virtual resource. Unlike the RO, the SO manages the internal behavior of each resource; it is also responsible for interconnecting several virtual resources to deliver the required service.

The compilation and monitoring tasks performed in the operator-side service lifecycle appear as potentially interesting candidates, some parts of which may be delegated to the GC. Such an outsourcing moreover leads to enriching, in some ways, the APIs described in Fig. 6.2.
Applying the BYOC concept to Type 1 services
Configuring a service in the SO is initiated after the service compilation phase of the operator-side service lifecycle (Section 4.1.2). This phase translates the abstracted network models into detailed network configurations thanks to the integrated network topology and statement databases.
In order to apply the BYOC concept, all or part of the service compilation phase may be outsourced to the application side, represented by the GC. For example, the resource configuration set of a requested VPN service, discussed in Section 5.1, can be generated by a GC.
This delegation needs an interface between the SCM and the GC. We suggest enriching the first API with dedicated primitives allowing the GC to carry out the delegated part of the complete compilation process (cf. the primitive "Outsourced (Service Compilation)" in Fig. 6.3).
It is worth pointing out that the compilation process assigned to the GC could be partial, because the operator may want to maintain the confidentiality of sensitive information, such as the topology of its infrastructure.
Applying the BYOC concept to Type 2 services
In this case the application may configure a service and monitor it via the NBI. This type involves the compilation phase, discussed earlier, and the monitoring phase. Outsourcing the monitoring task from the Controller to the GC, thanks to the BYOC concept, requires an asynchronous API that permits the transfer of real-time network events to the GC during the monitoring phase. The control application implemented in the GC observes the network state thanks to the real-time events sent from the Controller. A recent work [START_REF] Aflatoonian | An asynchronous push/pull communication solution for Northbound Interface of SDN based on XMPP[END_REF] proposed an XMPP-based push/pull solution to implement an NBI that communicates the network's real-time events to the application for monitoring purposes. The outsourced monitoring is located in the second API of Fig. 6.2 and could be expressed by refining some existing primitives of the API (cf. "Outsourced (Service Monitoring Req./Resp.)" of Fig. 6.3).
Applying the BYOC concept to Type 3 services
This type concerns an application that configures a service (Type 1) and monitors it (Type 2), based on which it may modify the network configuration and re-initiate the partial compilation process (Type 1). The second API of Fig. 6.2 should be sufficient to implement such a type of service, even if it may require non-trivial refinements or extensions in order to collect the information needed by the GC. The delegation of control induced by this kind of GC comes precisely from the computation of the new configuration, together with its re-injection into the network, through the SDNC, in order to modify it.
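A Type 3 GC therefore runs a monitor-decide-reconfigure loop over the NBI, as in the following sketch; receive_event and push_configuration are hypothetical NBI primitives used only for illustration.

# Sketch of a Type 3 GC: monitor the service, recompute a configuration, re-inject it.
# nbi is an abstract client object; its methods are placeholders for the real NBI primitives.

def type3_control_loop(nbi, service_id, decide):
    """decide() maps a monitoring event to an optional new partial configuration."""
    nbi.subscribe(service_id)                     # Type 2: service monitoring
    while True:
        event = nbi.receive_event(service_id)     # pushed by the SO
        new_config = decide(event)
        if new_config is not None:
            nbi.push_configuration(service_id, new_config)   # Type 1: partial recompilation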
Northbound Interface permitting the deployment of a BYOC service
Requirements for specification of the NBI
The GC is connected to the SO through the NBI. This is where the service operator communicates with the service customer and, in some cases, couples with the client-side applications, orchestrators and GC(s). In order to accomplish these functionalities, certain packages should be implemented. These packages cover two categories of tasks: 1) service creation, configuration and modification, and 2) service monitoring and BYOC service control.
Synchronous vs Asynchronous interactions
The former uses a synchronous interaction that implements a simple request/reply communication, permitting the client-side application to send service requests and modifications, while the latter uses an asynchronous interaction in which a notification is pushed to the subscribed service application. The asynchronous nature of this package makes it suitable for sending control messages to the GC.
From its SBI, the SO tracks the network events sent by the resources. Based on the service profile related to these resources, it sends events to the relevant modules, implemented either inside the SO or within an external GC.
Push/Pull paradigm to structure the interactions
The communication between the GC and the SO is based on the Push-and-Pull (PaP) algorithm [START_REF] Bhide | Adaptive Push-Pull: Disseminating Dynamic Web Data[END_REF], originally used for HTTP browsing. In this proposal we adapt this algorithm to determine the communication method of the NBI, which uses a publish/subscribe messaging paradigm: the GC subscribes to the SO.
To manage BYOC-type services, Decision Engine (DE) and Service Dispatcher (SD) modules are implemented within the SO. The DE receives the messages sent by network elements and decides, based on their content, whether to treat a message inside the SO or forward it to the GC. Messages that need to be analyzed within a GC are sent by the DE to the SD, which distributes them to every GC that has subscribed to the corresponding service. The resulting workflow is the following:

a: The service customer requests a service from the SRM, through the service portal.
In addition to the service request confirmation, the system sends the subscription details indicating how the service is managed, internally or as BYOC.
b: Using the subscription details, the user connects to the SD unit and subscribes to the relevant service.
c: When a control message, e.g. an OpenFlow PacketIn message, is sent to the DE, the DE creates a notification and sends it to the SD.
d: The SD unit pushes the event to all subscribers of that specific service.

The WebSockets initiative [START_REF] Fette | The WebSocket Protocol[END_REF] could eventually be interesting, but this solution is still under development. As mentioned in the work of Franklin and Zdonik [START_REF] Franklin | Data in Your Face: Push Technology in Perspective[END_REF], push systems are often implemented with the help of a periodic pull, which may place a significant load on the network. Alternative solutions like Asynchronous JAvascript and Xml (AJAX) also rely on client-initiated messages. We argue that XMPP [START_REF] Saint | Extensible Messaging and Presence Protocol (XMPP): Core, RFC 3920[END_REF] is a good candidate: thanks to its maturity and simplicity, it may cope with all the previous requirements.
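The a-d workflow can be summarized by the following sketch of the DE and SD modules; the message fields and the subscription store are simplified assumptions.

# Simplified Decision Engine / Service Dispatcher pair implementing steps a-d.

class ServiceDispatcher(object):
    def __init__(self):
        self.subscribers = {}                        # service_id -> list of GC callbacks

    def subscribe(self, service_id, gc_callback):    # step b
        self.subscribers.setdefault(service_id, []).append(gc_callback)

    def push(self, service_id, event):               # step d
        for callback in self.subscribers.get(service_id, []):
            callback(event)

class DecisionEngine(object):
    def __init__(self, dispatcher, byoc_services):
        self.dispatcher = dispatcher
        self.byoc_services = byoc_services           # services whose control is outsourced

    def on_network_event(self, service_id, event):   # step c
        if service_id in self.byoc_services:
            self.dispatcher.push(service_id, event)  # forward to the GC(s)
        else:
            self.handle_internally(service_id, event)

    def handle_internally(self, service_id, event):
        print("handled inside the SO: %s %s" % (service_id, event))

def gc_callback(event):
    print("pushed to GC: %s" % event)

dispatcher = ServiceDispatcher()
engine = DecisionEngine(dispatcher, byoc_services={"http_filtering"})
dispatcher.subscribe("http_filtering", gc_callback)
engine.on_network_event("http_filtering", {"type": "packet_in", "src": "10.0.0.1"})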
XMPP As An Alternative Solution
XMPP [START_REF] Saint | Extensible Messaging and Presence Protocol (XMPP): Core, RFC 3920[END_REF], also known as Jabber, was originally developed as an Instant Messaging (IM) protocol by the Jabber community. This protocol, formalized by the IETF, uses an XML streaming technology to exchange XML elements, called stanzas, between any two entities across the network, each identified by a unique Jabber ID (JID). The JID format is composed of three elements, "node@domain/resource", where the "node" can be a username, the "domain" is a server and the "resource" can be a device identifier. The XMPP Standards Foundation enlarges the capabilities of this protocol by providing a collection of XMPP Extension Protocols (XEPs) [START_REF]XMPP Extensions[END_REF]; XEP-0072 [START_REF]XEP-0072: SOAP Over XMPP[END_REF], for example, defines methods for transporting SOAP messages over XMPP. Thanks to its flexibility, XMPP is used in a wide range of domains, from simple applications such as instant messaging to larger ones such as remote computing and cloud computing [START_REF] Hornsby | From instant messaging to cloud computing, an XMPP review[END_REF]. The work [START_REF] Wagener | XMPP for cloud computing in bioinformatics supporting discovery and invocation of asynchronous web services[END_REF] shows that XMPP is a compelling solution for cloud services and that its push mechanism eliminates unnecessary polling. XMPP forms a push mechanism in which nodes receive messages and notifications whenever they occur on the server. This asynchronous nature eliminates the need for periodic pull messages.
An XMPP-based NBI can thus support the two main crucial packages listed at the beginning of this section, namely the packages executing 1) service creation, configuration and modification, and 2) service monitoring and BYOC service control. The NBI security problem is also considered in this proposal. The XMPP specifications describe security functions as core parts of the protocol [START_REF] Saint | Extensible Messaging and Presence Protocol (XMPP): Core, RFC 3920[END_REF], and all XMPP libraries support these functionalities by default. XMPP provides secure communication through an encrypted channel (Transport Layer Security (TLS)) and restricts client access via the Simple Authentication and Security Layer (SASL), which permits XMPP servers to accept only encrypted connections. All this signifies that XMPP is well suited for constructing a secured NBI allowing the deployment of a BYOC service.
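As an illustration, the sketch below uses the SleekXMPP library to push a service event from the SO to a subscribed GC; the JIDs, password and payload are placeholders for deployment-specific values.

# Sketch of the SO-side push over an XMPP-based NBI, using the SleekXMPP library.
import json
import sleekxmpp

class EventPusher(sleekxmpp.ClientXMPP):
    def __init__(self, jid, password, gc_jid):
        sleekxmpp.ClientXMPP.__init__(self, jid, password)
        self.gc_jid = gc_jid
        self.add_event_handler("session_start", self.start)

    def start(self, event):
        self.send_presence()
        self.get_roster()

    def push_event(self, service_id, payload):
        body = json.dumps({"service_id": service_id, "event": payload})
        self.send_message(mto=self.gc_jid, mbody=body, mtype="chat")

pusher = EventPusher("so@operator.example", "secret", "gc@customer.example")
if pusher.connect():
    pusher.process(block=False)
    pusher.push_event("http_filtering", {"type": "packet_in", "dpid": "00:01"})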
NBI Data Model
In order to hide the service implementation complexity, services can be represented as a simple resource model described in a data modeling language. The YANG data modeling language [START_REF] Bjorklund | YANG -A Data Modeling Language for the Network Configuration Protocol (NETCONF)[END_REF] is a good candidate for this purpose. A YANG data model can be translated into an equivalent XML syntax called YANG Independent Notation (YIN) which, on the one hand, allows the use of a rich set of XML-based tools and, on the other hand, can easily be transported through the XMPP-based NBI.
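For instance, a service model instance can be serialized into an XML payload ready to be carried inside an XMPP stanza, as in the following sketch; the element names are illustrative and do not follow a normative YANG/YIN mapping.

# Sketch: turning a simple MPLS VPN service model into an XML payload for the NBI.
import xml.etree.ElementTree as ET

def service_model_to_xml(model):
    root = ET.Element("service", {"type": model["service_type"],
                                  "customer": model["customer_id"]})
    for ce in model["properties"]["ce_list"]:
        ET.SubElement(root, "ce", {"id": ce["ce_id"], "lan": ce["lan_net"]})
    return ET.tostring(root)

model = {"service_type": "mpls_vpn", "customer_id": "customer_1",
         "properties": {"ce_list": [{"ce_id": "ce1", "lan_net": "192.168.1.0/24"}]}}
print(service_model_to_xml(model))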
Simulation results
In order to evaluate this proposal, we assessed the performance of the XMPP NBI implementation in terms of delay and overhead, by comparing a simple GC using an XMPP-based NBI with the same GC using a RESTful-based one. When a system is claimed to be "near real-time", the delay is the first parameter to be reduced. In a multi-tenant environment, the system load is another important parameter to take into account.
To measure these parameters and compare the XMPP case with the REST one, we implemented a simple GC that exploits the NBI to monitor packets belonging to a specific service, in our case an HTTP filtering service. The underlying network is simulated thanks to the Mininet project [START_REF] De Oliveira | Using Mininet for emulation and prototyping Software-Defined Networks[END_REF]. We implemented the two NBIs, XMPP-based and RESTful, and accessed them in parallel. We use the term "event" to describe the control messages sent from the SO to the GC. In the XMPP-based NBI case, the event is pushed in near real-time, thanks to the XMPP protocol. For the RESTful one, the event message is stored in a temporary memory until the GC pulls it up through REST requests. In this case, to simulate a real-time process and reduce the delay, REST requests are sent at short time intervals.
In the case of the XMPP-based NBI, the event is sent with a delay of 0.28 ms. The overhead of this NBI is 530 bytes, which is the size of the XMPP message needed to carry the event.
In the other case, with the RESTful NBI, the GC must pull periodically to obtain this information, with a request/response exchange of at least 293 bytes. In order to reduce the delay, the time interval between requests has to be scaled down, so these periodic request/response messages create a considerable overhead on the NBI.
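These figures give a quick way to estimate the overhead of each approach; the event rate and polling interval below are arbitrary examples.

# Back-of-envelope NBI overhead per second, using the sizes reported above.
XMPP_EVENT_BYTES = 530      # one pushed XMPP message per event
REST_POLL_BYTES = 293       # one request/response pair per poll (lower bound)

def xmpp_overhead(events_per_second):
    return XMPP_EVENT_BYTES * events_per_second

def rest_overhead(poll_interval_seconds):
    return REST_POLL_BYTES / poll_interval_seconds

# e.g. 2 events/s pushed vs polling every 10 ms to stay "near real-time"
print(xmpp_overhead(2))      # 1060 bytes/s
print(rest_overhead(0.01))   # ~29300.0 bytes/s, whether or not events occurred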
Conclusion
In the first part of this chapter, we introduced BYOC as a new concept providing a convenient framework structuring the openness of SDN on its northbound side. From the lifecycle characterizing the services deployed in an SDN, we derived the parts of services whose control may be delegated by the operator to an external GC through dedicated APIs located in the NBI.
We presented the EaYB business model, through which the operator monetizes the openness of its SDN platform thanks to the BYOC concept. Several use cases with potential interest for the BYOC concept were briefly presented.
In the second part, we determined the basic requirements for specifying an NBI that tightly couples the SDN framework presented earlier with the GC. We proposed an XMPP-based NBI conforming to the previously discussed requirements and allowing the deployment of BYOC services.
Despite the numerous advantages of the XMPP-based NBI, its main limitation concerns the transfer of large service descriptions. These are restricted by the "maximum stanza size" value, which limits the maximum size of the XMPP messages processed and accepted by the server. This value can however be parameterized when deploying the XMPP server.

This dissertation set out to investigate the role that SDN plays in various aspects of network service control and management, and to use an SDN-based framework as a service management system. In this final chapter, we review the research contributions of this dissertation and discuss directions for future research.
Contributions
The following are the main research contributions of this dissertation.
-A double-sided service lifecycle and data model (Chapter 4)
At the beginning of this dissertation, SDN-based service management was one of the unanswered questions. We organized the service lifecycle analysis around two views, client and operator. On the client side, the analysis is based on a classification of the services a customer can manage through the service management interface. The first type is the customer who requests and configures a service. The second type is the customer who monitors his service, and the third one is the customer who, using the management interface, receives some service parameters based on which he reconfigures or updates that service. Based on this analysis, the client-side service lifecycle can be modified. In this section we analyzed all the phases that each service type might add to the service lifecycle. On the other side, the operator-side service lifecycle analysis presents a lifecycle model representing all the phases an operator should cross to deploy, configure and maintain a service.
This double-sided analysis allows us to determine the actions that the service customer and the operator can each take on a service, the service being the common object between a customer and an operator.
Next, we presented the data model of each side of the lifecycle, based on a double-layered data model approach. In this approach, a service is modeled with two data models, service and device, and an elementary model, called transformation, defines how one of these two models can be transformed into the other. The service model is a general and simplified model of the service presented to the service customer, while the device model is the technical definition of the device configuration generated from the negotiated service model. The service object shared between the operator and the customer is described in the service model. Consequently, the client-side service lifecycle uses the service model and all the phases of that lifecycle are based on this model. The service model crosses down the operator-side lifecycle and is transformed into one or several device models. In Section 4.2 we discussed the model used or generated by each operator-side service lifecycle phase, as well as the type of transformation each step may perform to convert a service model into a device model.

-A service management framework based on the SDN paradigm (Chapter 5)

The service lifecycle analysis gives us a tool to determine all the activities an operator should carry out to manage a service. In Chapter 5, based on the operator-side service lifecycle, we proposed a framework through which the service model presented to the customer is transformed into device models deployed on resources. The architecture of this framework is based on a double-layered system managing the service lifecycle through two orchestrators: the service orchestrator and the resource orchestrator. The first one puts together all the functions allowing the operator to manage a service vertically, and the second one manages the resources needed by the first one to deploy a service.

-Bring Your Own Control (BYOC) service (Chapter 6)

The proposed framework gives rise to a system deploying and managing services, and opens an interface towards the customers' side. In Chapter 6 we presented a new service control model, called Bring Your Own Control (BYOC), that follows the Type 3 application model discussed in Section 2.5.2. In the first part of that chapter, we introduced BYOC as a concept allowing the delegation, through the NBI, of the control of all or part of a service to an external controller, called the Guest Controller (GC). The latter might be managed by the same customer requesting and consuming the service, or by a third-party operator.
Opening a control interface at the top of the SDN platform requires some specifications at the NBI level. In the second part of that chapter, we discussed the requirements of the NBI allowing the BYOC API to be opened. Based on these requirements, we proposed the use of XMPP as the protocol allowing the deployment of such an API.
Future research
The framework and its multilevel service provisioning interface introduced in this dissertation provide a new service type, called BYOC, for future research. While this work has demonstrated the potential of opening tuned control access to a service through the dynamic IPS service of Chapter 7, many opportunities for extending the scope of this thesis remain.
In this section we discuss some of these opportunities.
A detailed study of the theoretical and technical approach of the BYOC
Opening up the control interface to a GC through the BYOC concept may create new revenue sources. Indeed, BYOC not only allows the service customer to implement its personalized control algorithm and fully manage its service, but also allows the operator to monetize the openness of its SDN-based system. We presented the Earn as You Bring (EaYB) business model, allowing the operator to resell to a customer a service controlled by a third-party GC [START_REF] Aflatoonian | BYOC: Bring Your Own Control a new concept to monetize SDN's openness[END_REF].
Opening the control platform and integrating an external controller in a service production chain, however, may create security and complexity problems. One of the fundamental issues concerns the impact of the BYOC concept on the performance of the network controller. In fact, externalizing the control engine of a service to a GC may introduce a significant delay in the decision step of the controller, a delay that has a direct effect on the QoS. The second issue concerns the confidentiality of the information available to the GC.
By opening its control interface, the operator provides the GC with information that may be confidential. To avoid this type of security problem, a data access control mechanism must be put in place, through which the operator controls all the data communicated between the controller and the GC while maintaining the flexibility of the BYOC model [START_REF] Jiang | A Secure Multi-Tenant Framework for SDN[END_REF].
The analysis of the advantages of the BYOC model, and of the complexity and security issues that BYOC may bring to the service management process, can be the subject of future work.
This analysis requires a more sophisticated study of the concept, of the potential business models it can introduce (e.g. EaYB), of the methods and protocols used to implement the northbound interface and to control access to the resources exposed to the GC, and of the real impact of this type of service on service performance.
BYOC as a key enabler to flexible NFV service chaining
An NFV SC defines a set of Service Functions (SFs) and the order in which a packet should pass through them in downlink and uplink traffic. Chaining network elements to create a service is not a new subject. Indeed, legacy network services are made of several network functions which are hardwired back-to-back. These solutions however remain difficult to deploy and expensive to change.
As soon as software-centric networking technologies such as SDN and NFV brought the promise of programmability and flexibility to the network, flexible service chaining became one of the academic challenges. Flexible service chaining consists in choosing the relevant SC through the analysis of traffic. There are several initiatives proposing an architecture for the creation of Service Function Chaining (SFC) [START_REF] Onf | L4-L7 Service Function Chaining Solution Architecture[END_REF][START_REF] Halpern | Service Function Chaining (SFC) Architecture[END_REF][START_REF]ETSI. Network Functions Virtualisation (NFV); Architectural Framework. TS ETSI GS NFV 002[END_REF]. Among these solutions, the IETF [START_REF] Halpern | Service Function Chaining (SFC) Architecture[END_REF] and the ONF [START_REF] Onf | L4-L7 Service Function Chaining Solution Architecture[END_REF] propose to use a traffic classifier at the ingress point of the SC, allowing traffic flows to be classified based on policies. This classification allows a path ID to be assigned to the flow, used to forward the flow on a specific path, called a Service Function Path (SFP). In the ONF proposal the SDNC has a central role: it sets up SFPs by programming the Service Function Forwarders (SFFs) to steer the flow through the sequence of SF instances, and it locates and programs, through the SBI, the flow classifier responsible for classifying the flows.

Applying the BYOC concept to the approach proposed by the ONF consists in opening a control interface between the SDNC and a GC that implements all the functions needed to classify the flows and to reconfigure the SFFs and the flow classifier when new flows arrive at the classifier. Delegating the control of the SFC to the customer gives the customer more flexibility, visibility and freedom to create a flexible SFC based on its customized path computation algorithms and its applications' requirements. On the other hand, a BYOC-based SFC allows the Service Provider to lighten the service OpEx.
BYOC as a key concept leading to 5G dynamic network slicing
5th generation (5G) networks need to support new demands from a wide variety of service groups, from e-health to broadcast services [START_REF]5G white paper[END_REF]. In order to cover all these domains, 5G networks need to support diverse requirements in terms of network availability, throughput, capacity and latency [START_REF] Salah | 5g service requirements and operational use cases: Analysis and metis ii vision[END_REF]. To deliver services to such a wide range of domains and answer these various requirements, network slicing has been introduced in 5G networks [START_REF] Ngmn Alliance | Description of network slicing concept[END_REF][START_REF] Galis | Autonomic Slice Networking-Requirements and Reference Model[END_REF][START_REF] Jiang | Network slicing management & prioritization in 5G mobile systems[END_REF]. Network slicing allows operators to establish different capabilities for each service group and to serve multiple tenants in parallel.
SDN will play an important role in the shift to dynamic network slicing [START_REF] Ordonez-Lucena | Network Slicing for 5G with SDN/NFV: Concepts, Architectures, and Challenges[END_REF], [110], [START_REF] Hakiri | Leveraging SDN for the 5G networks: trends, prospects and challenges[END_REF]. The decoupling of the control and forwarding planes leads to the separation of software from hardware, a concept that allows the infrastructure to be shared between different tenants, each using one or several slices of the network. In [112], the "dynamic programmability and control" brought by SDN is presented as one of the key principles guiding dynamic network slicing.
In this work the authors argue that "the dynamic programming of network slices can be accomplished either by custom programs or within an automation framework driven by analytics and machine learning."
Applying the BYOC concept to 5G networks leads to externalizing the control of one or several slices to a GC owned or managed by a customer, an Over-The-Top (OTT) player, or an OSS.
We argue that this openness is fully in line with the dynamic programmability and control principle of 5G networks presented in [112]. The innovative algorithms implemented within the GC controlling a slice of the network enable promising value-added services and business models. However, this externalization creates some management and orchestration issues, presented previously in [START_REF] Ordonez-Lucena | Network Slicing for 5G with SDN/NFV: Concepts, Architectures, and Challenges[END_REF].
Abstract

Over the past decades, Service Providers (SPs) have gone through several generations of technologies redefining networks and requiring new business models. The ongoing network transformation brings the opportunity for service innovation while reducing costs and mitigating vendor lock-in. Digitalization and recent virtualization are changing service management methods: traditional network services are shifting towards new on-demand network services. The latter allow customers to deploy and manage their services independently and optimally through a well-defined interface opened onto the SP's platform. To offer this freedom to its customers, the SP must be able to rely on a dynamic and programmable network control platform. We argue in this thesis that this platform can be provided by Software-Defined Networking (SDN) technology.
We first characterize the perimeter of this class of new services. We identify the weakest management constraints that such services should meet and integrate them into an abstract model structuring their lifecycle. The latter involves two loosely coupled views, one specific to the customer and the other one to the SP. This double-sided service lifecycle is finally refined with a data model completing each of its steps.
The SDN architecture does not support all the stages of the previous lifecycle. We extend it with an original Framework allowing the management of all the steps identified in the lifecycle. This Framework is organized around a service orchestrator and a resource orchestrator communicating via an internal interface. Its implementation requires an encapsulation of the SDN controller. The MPLS VPN example serves as a guideline to illustrate our approach. A PoC based on the OpenDaylight controller targeting the main parts of the Framework is proposed.
We propose to value our Framework by introducing a new and original control model called BYOC (Bring Your Own Control), which formalizes, according to various modalities, the capability of outsourcing an on-demand service by delegating part of its control to an external third party. An outsourced on-demand service is divided into a customer part and an SP one. The latter exposes to the former APIs which allow requesting the execution of the actions involved in the different steps of the lifecycle. We present an XMPP-based Northbound Interface (NBI) allowing a secured BYOC-enabled API to be opened up. The asynchronous nature of this protocol, together with its integrated security functions, eases the outsourcing of control into a multi-tenant SDN framework. We illustrate the feasibility of our approach through a BYOC-based Intrusion Prevention System (IPS) service example.
Keywords: Software-Defined Networking, Northbound Interface, API, Bring Your Own Control, Outsourcing, Multi-tenancy
une frontière administrative naturelle entre l'orchestrateur SDN, géré par un opérateur, et ses clients potentiels résidant dans la couche application. Fournir à l'opérateur la capacité de maîtriser l'ouverture de SDN sur son côté nord devrait être largement profitable à l'opérateur et aux clients. Nous introduisons une telle fonctionnalité à travers le concept de BYOC : Bring Your Own Control qui consiste à déléguer tout ou partie du contrôle et/ou de la gestion de réseau à une application tierce appelée Guest Controller (GC) et appartenant à un client extérieur. xxviii et qui permet à l'application côté client d'envoyer des demandes de service et des modifications, tandis que la seconde utilise une interaction asynchrone où une notification sera envoyée à l'application de service abonnée. La nature asynchrone de cette librairie la rend utile pour envoyer des messages de contrôle au GC. La communication entre le GC et le SO 'importance d'un système de provisionnement de service est basée sur son NBI qui connecte le portail de service au système de provisionnement de service (SO) dans un environnement SDN. Cette interface fournit une abstraction de la couche service et des fonctions essentielles pour créer, modifier et détruire un service, et, comme décrit ci-dessus, elle prend en compte
est basée sur l'algorithme Push-and-Pull (PaP) essentiellement utilisé dans les applications
web. Dans cette proposition, nous essayons d'adapter cet algorithme pour déterminer la mé-
thode de communication de la NBI qui utilisera le paradigme de publication / soumission
de messagerie.
NBI fait référence aux interfaces logicielles entre le contrôleur et ses applications. Celles-ci sont extraites à travers la couche application consistant en un ensemble d'applications et de systèmes de gestion agissant sur le comportement du réseau en haut de la pile SDN à travers la NBI.
La nature centralisée de cette architecture apporte de grands avantages au domaine de gestion de réseau. Les applications réseau, sur la couche supérieure de l'architecture, atteignent le comportement réseau souhaité sans connaître la configuration détaillée du réseau physique. L'implémentation de la NBI repose sur le niveau d"abstraction du réseau à fournir à l'application et sur le type de contrôle que l'application apporte au contrôleur, appelé SO dans notre travail. xxvii NBI apparaît comme Lles unités de contrôle externalisées appelées GC. Cette interface est un point d'accès partagé entre différents clients, chacun contrôlant des services spécifiques avec un abonnement associé à certains événements et notifications. Il est donc important que cette interface partagée implémente un environnement isolé pour fournir un accès multi-tenant. Celui-ci devrait être contrôlé à l'aide d'un système intégré d'authentification et d'autorisation.
Dans notre travail, nous introduisons une NBI basée sur le protocole XMPP. Ce protocole est développé à l'origine comme un protocole de messagerie instantané (IM) par la communauté. Ce protocole utilise une technologie de streaming pour échanger des éléments XML, appelés stanza, entre deux entités du réseau, chacune identifiée par un unique identifiant JID. La raison principale de la sélection de ce protocole pour implémenter la NBI du système de provisionnement de services repose sur son modèle d'interaction asynchrone qui, à l'aide de son système push intégré, autorise l'implémentation d'un service BYOC.
In order to define an SDN-based service provisioning framework that delineates the control and application layers, an analysis of the service lifecycle was required. We organized this analysis according to two points of view: the customer and the operator. The first view, the client-side service lifecycle, addresses the different phases a service customer (or client) may go through during the life of a service. This analysis builds on the classification of applications and services made earlier. According to this classification, a service customer can use the service management interface to manage three types of services: the first is the case where the customer requests and configures a service; the second is the customer who monitors its service; and the third is the customer who, using the management interface, receives certain service parameters on the basis of which it reconfigures or updates the service. On this basis the client-side service lifecycle can be adapted, and we analysed all the phases that each type of service may add to it. On the other hand, the analysis of the operator-side service lifecycle yields a lifecycle model representing all the phases an operator must go through to deploy, configure and maintain a service. This two-sided analysis makes it possible to determine the actions that each service customer and operator can perform on a service, the service being the common object shared between a customer and an operator. We then presented the data model of each lifecycle based on a two-layered data model approach. In this approach, a service is modelled by two data models, a service model and a device model, and an elementary model, called the transformation, defines how one of these two models is mapped onto the other. The service model is a general and simplified model of the service presented to the service customer, while the device model is the technical definition of the device configuration generated from the negotiated service model. The service object shared between the operator and the customer is described in the service model. Consequently, the client-side service lifecycle uses the service model and all its phases are based on this model; the service model then traverses the operator-side lifecycle and is transformed into one or several resource models.
The service lifecycle analysis gives us a tool to determine all the activities an operator must carry out to manage a service. Based on the operator-side service lifecycle, we propose a framework through which a service model presented to the customer is transformed into resource models deployed on resources. The architecture of this framework relies on a two-layer system managing the service lifecycle via two orchestrators: a service orchestrator and a resource orchestrator. The first gathers all the functions allowing the operator to manage a service vertically, and the second manages the resources the first one needs to deploy a service. The proposed framework gives rise to a service deployment and management system and opens an interface towards the customers. We present a new service control model, called Bring Your Own Control (BYOC), which follows the type-3 application model. We introduce BYOC as a concept allowing the control of all or part of a service to be delegated, through the NBI, to an external controller called the Guest Controller (GC). The latter may be managed by the same customer requesting and consuming the service, or by a third-party operator. Opening a control interface north of the SDN platform requires certain specifications at the NBI level. In the remainder of our work we addressed the NBI requirements for opening the BYOC API and, based on these requirements, we proposed the use of XMPP as the protocol to deploy such an API.
The analysis of the benefits of the BYOC concept, and of the complexity and security issues that BYOC may bring to the service management process, is left for future work. Such an analysis requires a more sophisticated study of the concept, of the potential business model it may introduce (e.g. Earn as You Bring, EaYB), of the methods and protocols used to implement the northbound interface and to control access to the resources exposed to the GC, and of the actual impact of this type of service on service performance. Opening a BYOC-type control interface makes it possible to create new service models not only in the SDN domain but also in the NFV and 5G domains.
- A service management framework based on the SDN paradigm
The service lifecycle analysis gives us a tool to determine all the activities an operator must carry out to manage a service. Based on the operator-side service lifecycle, we propose a framework through which a service model presented to the customer is transformed into device models deployed on resources. This framework is organized into two orchestrator systems, called respectively the Service Orchestrator and the Resource Orchestrator, interconnected by an internal interface. Our approach is illustrated through the analysis of the MPLS VPN service, and a Proof of Concept (POC) of our framework based on the OpenDaylight controller is proposed.
- Bring Your Own Control (BYOC) service model
We exploit the proposed framework by introducing a new and original service control model called Bring Your Own Control (BYOC). It allows the customer or a third-party operator to participate in the service lifecycle following various modalities. We analyse the characteristics of the interfaces allowing the deployment of a BYOC service and we illustrate our approach through the outsourcing of an Intrusion Prevention System (IPS) service.
FIGURE 2.6: OpenFlow Switch Components (source [23]): control channel, datapath with a pipeline of flow tables, a group table, and ports, connected to the controller through the OpenFlow protocol.
TABLE 2.1: SDN Controllers and their NBI
This question has been raised several times and a common conclusion is that northbound APIs are indeed important, but it is too early to define a single standard at this time [38, 39, 40].
The controller consists of three modules: "Flow Manager", "Net Manager" and "Host Manager". The first one, Flow Manager, controls and routes flows based on a specific load-balancing algorithm implemented in this module; it also implements the necessary controller core functions and Layer 2 protocols such as Dynamic Host Configuration Protocol (DHCP), Address Resolution Protocol (ARP) and Spanning Tree Protocol (STP). The second module, Net Manager, keeps track of the network topology, link usage and per-link packet latency. The third module, Host Manager, monitors the state of each host.
TABLE 3.1: MPLS VPN configuration parameters
TABLE 3.3: MPLS VPN configuration parameters accessible via the OpenDaylight VPN Service project
OpenDaylight Native MPLS API: LAN IP address ✓, RT ✓, RD ✓, AS ✓, VRF name ✓, Routing protocols ✓, VPN ID ✓, MPLS labels ✓
[START_REF] Kim | Improving network management with software defined networking[END_REF]. This model contains all the previous steps needed to manage the three types of applications discussed earlier. We introduce to this
[Figure: service lifecycle phases: Service Creation, Service Monitoring, Service Modification & update, Service Retirement]
Chapter 5. An SDN-based Framework For Service Provisioning
Contents: 2.1 Technological context; 2.2 Modeling programmable networks; 2.3 Fundamentals of programmable networks; 2.4 Software-Defined Networking (SDN): 2.4.1 Architecture, 2.4.2 SDN Infrastructure, 2.4.3 SDN Southbound Interface (SBI), 2.4.4 SDN Controller, 2.4.5 SDN Northbound Interface (NBI); 2.5 SDN Applications Analysis: 2.5.1 SDN Applications, 2.5.2 Intuitive classification of SDN applications, 2.5.3 Impact of SDN Applications on Controller design; 2.6 Network Function Virtualization, an approach to service orchestration
Each resource is identified by a unique Uniform Resource Locator (URL). Resources are the application's state and functionality, exposed through a uniform interface used to transfer state between client and server. Unlike most Web Services architectures, REST does not require XML as the data interchange format: its implementation is standard-less and the exchanged information can be formatted as XML, JavaScript Object Notation (JSON), Comma-Separated Values (CSV), plain text, Rich Site Summary (RSS) or even HyperText Markup Language (HTML), i.e. REST is format-agnostic. The simplicity, performance and scalability of REST explain its popularity in the SDN controllers' world: REST is easy to use, flexible, and no expensive tools are required to interact with the Web Services. Compared with the requirements stated in Section 6.2.1, however, its fundamental limitations are the absence of asynchronous capabilities and of secure multi-tenant access management. Traditional Web Services solutions, such as the Simple Object Access Protocol (SOAP), had previously been used to specify NBIs but were quickly abandoned in favor of the RESTful approach.
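The practical consequence of this missing asynchronous capability is that a RESTful NBI has to be polled, which is what the overhead measurement below quantifies. The following Python sketch contrasts the two interaction styles; the endpoint URL, function names and timing values are purely illustrative assumptions and do not reproduce the actual prototype.

```python
import time
import urllib.request

POLL_URL = "http://so.example:8181/events"  # hypothetical RESTful NBI endpoint

def handle(event: bytes) -> None:
    print("event received:", event[:80])

def poll_events(interval_s: float, duration_s: float) -> None:
    """RESTful style: the client repeatedly asks the SO for new events.
    Every request costs a full HTTP round trip even when nothing happened,
    and an event can wait up to 'interval_s' before being seen."""
    deadline = time.time() + duration_s
    while time.time() < deadline:
        with urllib.request.urlopen(POLL_URL) as resp:  # overhead paid at each poll
            handle(resp.read())
        time.sleep(interval_s)

def on_push(event: bytes) -> None:
    """Push style (e.g. an XMPP stanza): the SO calls back only when an event
    actually occurs, so there is no idle polling traffic and no added latency."""
    handle(event)
```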
Figure 6.6 shows the overhead charge of the NBI obtained during this test. The NBI overhead charge remains constant for the XMPP-based case and varies for the RESTful one. The overhead charge of the NBI in the simulated real-time case (less than 1 ms of delay) for the RESTful NBI is about 3 MB. To reduce this charge and achieve the same overhead as the XMPP-based NBI, we need to increase the polling time interval up to 200 ms; this time interval has a direct effect on the event transfer delay.
FIGURE 6.6: NBI Overhead Charge (overhead in bytes versus polling interval in ms, for the RESTful and XMPP-based NBIs)
There were several initiatives to define the perimeter of the SDN architecture layers; several SDN controllers were in the design and development phase, and the SDN controllers and frameworks already developed were each deployed for specific research topics. Some SDN-based services were deployed through the controller's internal functions, and some of them were controlled by applications developed on top of the controller, programming the network via the controller's NBI. Due to the nature of ongoing projects, and the fact that there was no clear definition of the SDN controller core functions and of the northbound applications, defining the border between these two layers, i.e. the SDN controller and the SDN applications, so as to delimit their perimeter and define an NBI, was almost impossible. The ONF had just started the NBI group activities aiming to define an NBI answering the requirements of most applications. However, this work was far from complete, because defining a standard NBI, that is an application interface, requires a careful analysis of several implementations and of the feedback gained from all those implementations.
In order to define a reference SDN-based service provisioning framework allowing the control and application layer edge to be defined, a service lifecycle analysis had to take place. First, in Section 4.1, we presented the service lifecycle analysis from two points of view: client and operator. The first view, the client-side service lifecycle, discusses the different phases in which a service customer (or client) can be during the service lifecycle. This analysis is based on the application and service classification previously made in Section 2.5.2. According to this classification, a service customer can use the service management interface to manage three types of services. The first one is the case where the customer requests and configures a service.
Over the last decades, service providers (SPs) have had to manage several generations of technologies redefining networks and requiring new business models. This continuous evolution of the network gives the SP the opportunity to innovate with new services while reducing costs and limiting its dependency on equipment vendors. The recent emergence of the virtualization paradigm deeply modifies the way network services are managed. They evolve towards the integration of an "on-demand" capability, whose particularity is to allow the SP's customers to deploy and manage these services autonomously and optimally. To offer such operational flexibility, the SP must rely on a management platform allowing dynamic and programmable control of the network. We show in this thesis that such a platform can be provided thanks to SDN (Software-Defined Networking) technology. We first propose a characterization of the class of on-demand network services. The weakest management constraints that these services must satisfy are identified and integrated into an abstract model of their lifecycle. This model determines two loosely coupled views, one specific to the customer and the other to the SP. This lifecycle is completed by a data model specifying each of its steps. The SDN architecture does not support all the steps of this lifecycle. We introduce an original framework that encapsulates the SDN controller and allows the management of all the lifecycle steps. This framework is organized around a service orchestrator and a resource orchestrator communicating via an internal interface. The MPLS VPN example serves as a guiding thread to illustrate our approach. A PoC based on the OpenDaylight controller, targeting the main parts of the framework, is proposed.
In addition to maintaining service management agility, we propose to introduce a service management framework beyond the SDNC. This framework provides an overall view of all functions and controls. We strengthen it by addressing the question of customer access to the managed resources and services. Indeed, this framework must be able to provide a variable granularity, capable of managing all types of services:
- Type 1 applications: the abstract service model provided by the framework's NBI allows the client-side application to configure a service with a minimum of information exchanged between the application and the framework. The restricted access provided by the framework prevents unintentional or intentional data leakage and service misconfiguration.
- Type 2 applications: on the southbound side, the internal blocks of the framework receive network events either directly from the resources or indirectly via the SDNC. On the northbound side, these blocks open an API allowing the applications to subscribe to certain metrics used for monitoring purposes. Based on the events reported by the resources, these metrics are computed by the framework's internal blocks and sent to the appropriate application.
- Type 3 applications: the controlled access to the SDN-based functions provided by the framework offers not only a service management API, but also a service control interface open to the client application. The fine-grained control API allows customers to have low-level access to the network resources through the framework. Using this API, customers receive the network events sent by the equipment, on the basis of which they reconfigure the service.
In order to provide a framework able to implement the aforementioned APIs, we must analyse the service lifecycle in detail. This analysis leads to the identification of all the internal blocks of the framework and of their internal articulations, enabling both the presentation of a service and control API and the deployment, allocation and configuration of resources.
Service lifecycle and service data model
In order to reduce the complexity of lifecycle management, we split the global service lifecycle into two complementary points of view: the customer's view and the operator's view. Each of the two views captures only the information useful to the associated actor; the global view can nevertheless be obtained by composing the two partial views. Based on the classification of applications addressed in our studies, we analyse the client-side service lifecycle for the three main types of applications. Type 1 applications consist of applications creating a network service using the NBI; this category neither monitors nor modifies the service according to network events.
BYOC should clearly reduce the processing load of the controller. Indeed, existing SDN architectures and proposals centralize most of the network control and decision logic in a single entity, which must bear a significant load while providing a large number of services all deployed within the same entity. Such complexity is clearly a problem that BYOC can help solve by outsourcing part of the control to a third-party application. Preserving the confidentiality of the service client application is another important benefit brought by BYOC: centralizing network control in a single system and passing all data through this controller can create privacy issues that may prevent the end user, which we call the SC, from using the SDNC. Last but not least, BYOC can help the operator substantially refine its SDN-based business model by delegating an almost "à la carte" control through dedicated APIs. Such an approach can be exploited intelligently according to the new "Earn as you bring" (EaYB) paradigm that we introduce and describe below. Indeed, an external customer owning a proprietary, sophisticated algorithm may want to commercialize the associated specialized processing to other customers via the SDN operator, which could then act as a broker of this type of capability. It should be emphasized that these benefits of BYOC may be partly offset by the non-trivial task of verifying the validity of the decisions taken by the outsourced intelligence, which must at least comply with the various policies implemented by the operator in the controller. This point, which deserves further investigation, could be the subject of future research. Outsourcing part of the management and control tasks modifies the service lifecycle model: it amounts to moving to the client side parts of certain tasks initially belonging to the operator. A careful analysis allows us to identify the compilation and monitoring tasks, carried out in the operator-side service lifecycle, as potentially interesting candidates, parts of which can be delegated to the GC. The GC is connected to the SO through the NBI, which is where the service operator communicates with the service customer and sometimes with the client-side applications, the orchestrators and the GCs. In order to realize these functionalities, certain libraries have to be implemented. They support two categories of tasks: 1) service creation, configuration and modification, and 2) service monitoring and BYOC service control. The first uses a synchronous interaction implementing a simple request/response exchange, while the second relies on an asynchronous, notification-based interaction.
This aspect is not mentioned in this figure because it falls outside of the scope of the service lifecycle.
To formalize the data model of each layer and the transformation allowing one layer to be mapped onto the other, Moberg et al. [START_REF] Moberg | A two-layered data model approach for network services[END_REF] proposed to use the YANG data model [START_REF] Bjorklund | YANG -A Data Modeling Language for the Network Configuration Protocol (NETCONF)[END_REF] as the formal modeling language. Using a unified YANG modeling for both the service and device layers was also previously proposed by Wallin et al. [START_REF] Wallin | Automating Network and Service Configuration Using NETCONF and YANG[END_REF].
The three elements required to model a service (i.e. the service layer, the device layer and the transformation model) are illustrated in [START_REF] Moberg | A two-layered data model approach for network services[END_REF], where the authors use this approach to specify IP VPN services. At the service layer, the model consists of a formal description of the VPN services presented to the customer, derived from a list of VPN service parameters including the BGP AS number, the VPN name and a list of VPN endpoints (CE and/or PE). At the second layer, the device model is the set of device configurations corresponding to a VPN service; it is defined by all the configurations applied on the PEs connected to the requested endpoints, including the PE IP address, RT, RD, BGP AS number, etc. Finally, for the third element, the transformation template mapping one layer onto the other, the authors propose the use of a declarative model; in their example the template is based on the Extensible Markup Language (XML) and, according to the service model parameters, generates a device model.
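To make the two-layered idea concrete, the following Python sketch mimics such a transformation for the VPN example: a small service-layer model is expanded into per-PE device-layer configurations. The field names and the dictionary-based representation are illustrative assumptions; the actual models in the cited work are written in YANG and the transformation template in XML.

```python
from typing import Dict, List

# Service-layer model: what the customer sees and negotiates (illustrative fields).
vpn_service = {
    "vpn_name": "acme-vpn",
    "bgp_as": 65001,
    "endpoints": [
        {"pe": "PE1", "ce_interface": "GigabitEthernet0/1", "lan_prefix": "10.1.0.0/24"},
        {"pe": "PE2", "ce_interface": "GigabitEthernet0/2", "lan_prefix": "10.2.0.0/24"},
    ],
}

def to_device_models(service: Dict) -> List[Dict]:
    """Transformation step: derive one device-layer model per PE touched by the service."""
    devices = []
    for idx, ep in enumerate(service["endpoints"], start=1):
        devices.append({
            "device": ep["pe"],
            "vrf_name": service["vpn_name"],
            "rd": f'{service["bgp_as"]}:{idx}',   # route distinguisher, one per endpoint
            "rt": f'{service["bgp_as"]}:100',     # shared route target for the VPN
            "interface": ep["ce_interface"],
            "lan_prefix": ep["lan_prefix"],
            "bgp_as": service["bgp_as"],
        })
    return devices

for cfg in to_device_models(vpn_service):
    print(cfg)
```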
Applying the two-layered model approach on Service Lifecycle
The proposed model, based on the YANG data modeling language, brings dynamicity and agility to the service management system. Its modular aspect reduces the cost of creating and modifying services. In this section we apply this model to the proposed service lifecycle discussed in Section 4.1. This analysis aims to introduce a model formalizing the service lifecycle phases and their respective data models. An example of this analysis is presented in Table 4.1 at the end of this section.
Applying two-layered model on client side service lifecycle
The client-side service representation is a minimal model containing informal information about the customer's service. All steps of the client-side service lifecycle rely on the negotiated service model, i.e. the service-layer model of the two-layered approach (Section 4.2.1), which is a representation of the service and its components. From the operator's point of view, the data model used by this lifecycle rests on the service-layer model.
Applying two-layered model on operator side service lifecycle
The integration of new services and the update of the service delivery platform involve creating and updating the data models used by the client-side service lifecycle. Contrary to the client side, the operator-side service lifecycle relies on several data models.
Illustrating Service Deployment Example
The operator side service lifecycle is presented in Section 4.1.2. This model represents all processes an operator may take into account to manage a service. We introduce a service and resource management platform which encapsulates an SDNC and provides through other functional modules, the capabilities to implement each step of the service lifecycle presented before. Fig. 5.1 illustrates this platform with the involved modules together with special data required and generated by each module. It shows the diversity of information needed to manage a service automatically. We will detail the different modules of this platform in the next section. We prefer now to illustrate this model by describing the main processes through the example of a VPN service connecting two remote sites of a client connected to physical routers: PE1 and PE2. In MPLS networks, each CE is connected to a VRF instance hosted in a PE. In our example we call these instances, (i.e. VRFs) respectively vRouter1 and vRouter2. The first step of the service lifecycle which consists in the "Service Creation" gives rise in the nominal case to a call flow the details of which are presented in Fig. 5.2.
In the first step (arrow 1 of Fig. 5.2),
In the last chapter we proposed a new service implementing BYOC-based services. Earlier, in Chapter 4, we analyzed the lifecycle of a service from the viewpoints of its two actors, the client and the operator. Dividing the service lifecycle into two parts refined our analysis and helped us to present, in Chapter 5, a framework through which a negotiated service model can be vertically implemented. This framework derives from the operator-side service lifecycle steps and permits not only the implementation and control of a service, but also the management of its lifecycle. The second part of the service lifecycle, the client side, presents all the steps that each of the application types presented in Section 2.5.2 may take to deploy, monitor and reconfigure a service through the SDNC. In this chapter we briefly showed how the previously presented framework permits the deployment of a BYOC-type service. We also presented an XMPP-based NBI allowing the interface to be opened to the GC.
IPS Control Plane as a Service
Referenced architecture
The architecture of the proposed service is based on the Intrusion Prevention System (IPS) architecture, divided into two entities. The first one is the Intrusion Detection System (IDS)-end, implemented at key points of the network, which observes real-time traffic. The second one, called the Security Manager (SM), is a central management system that relies, among other things, on a rule database. This database is comparable to the database normally used within firewalls, where there are actions such as ACCEPT, REJECT and DROP. The difference between those and the DB presented for BYOC lies in the fields "IPProto" and "Action", where we record the type of message, log or alert (in the IPProto field), and the identifier of the GC, gcId (in the Action field). The values stored in this database are configured by the network administrator (operator) managing the entire infrastructure.
Service Dispatcher (SD) and NBI
The SD is the module directly accessible by GCs. It identifies GCs using the identifier of each one (gcId), registered in the Action field of the DB. We propose here to use the XMPP protocol to implement the interface between the SD and GCs where each endpoint is identified by a JID.
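A minimal sketch of this dispatching logic is given below: an incoming IDS message is matched against the rule database on its IPProto value and forwarded to the GC whose gcId is stored in the Action field, addressed by its JID. The table contents, JIDs and the send_stanza callback are hypothetical; a real implementation would hand the stanza to an XMPP client library rather than print it.

```python
from typing import Callable, Dict

# Hypothetical rule database: IPProto value -> gcId (stored in the "Action" field).
RULE_DB: Dict[int, str] = {143: "gc-log-01", 144: "gc-alert-01"}

# Hypothetical registry mapping a gcId to the XMPP JID of the Guest Controller.
GC_JIDS: Dict[str, str] = {
    "gc-log-01": "gc1@operator.example",
    "gc-alert-01": "gc2@operator.example",
}

def dispatch(ids_message: Dict, send_stanza: Callable[[str, str], None]) -> None:
    """Forward an IDS-end message to the GC subscribed to its IPProto value."""
    gc_id = RULE_DB.get(ids_message["ipproto"])
    if gc_id is None:
        return  # no guest controller registered for this message type
    jid = GC_JIDS[gc_id]
    # Wrap the payload in an XMPP-like message stanza addressed to the GC.
    stanza = f'<message to="{jid}" type="chat"><body>{ids_message["payload"]}</body></message>'
    send_stanza(jid, stanza)

# Usage sketch: an alert (IPProto = 144) reported by an IDS-end.
dispatch({"ipproto": 144, "payload": "possible SYN flood on I.1"},
         send_stanza=lambda jid, s: print("->", jid, s))
```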
Detailed components of the Guest Controller (GC)
Fig. 7.8 shows the detailed components of the SM implemented as the GC. As stated above, the SD sends the log and alert messages arriving from the IDS-ends to the appropriate GC. These messages contain specific values (143, 144) in their IPProto field. Upon receiving a message, the SM needs to know the nature of this message, namely whether it is a log or an alert.
For this we propose to implement a Security Proxy (SP). By examining the IPProto field, the SP decides whether a message relates to a log or to an alert.
Applying the GC decision on the infrastructure
Once a decision is made by the GC, it sends a service update message to the SDCM. This decision may update a series of devices. In our example, to block the attacking traffic, the decision only updates the OpenFlow switch installed in front of the IDS-end. This new configuration, deployed on the switch, allows the GC to block the inbound traffic entering the customer's sites (interface I.1 in Figure 7.5). The update message sent by the GC contains a service data model equivalent to the model presented in the service creation phase. Thanks to this homogeneity of models, the BYOC service update is transparent for the SO and the update process is carried out through the existing blocks of the SO.
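As an illustration of what such a service-update message might carry, the sketch below builds a minimal blocking rule for the switch in front of the IDS-end and wraps it in the same service-model shape used at creation time. All field names, the match on the source prefix and the switch identifier are assumptions made for the example; they do not reproduce the actual OpenDaylight data model used in the prototype.

```python
import json

def build_block_update(switch_id: str, attacker_prefix: str, ingress_port: int) -> dict:
    """Service-update payload asking the SO to install a drop rule on the edge switch."""
    return {
        "service": "byoc-ips",
        "operation": "update",
        "device_model": {
            "switch": switch_id,
            "flow": {
                "priority": 200,                   # evaluated before normal forwarding rules
                "match": {"in_port": ingress_port,  # customer-facing interface (e.g. I.1)
                          "ipv4_src": attacker_prefix},
                "action": "drop",                  # block the attacking traffic
            },
        },
    }

# Usage sketch: the GC reacts to an alert by blocking the offending source prefix.
update = build_block_update("openflow:1", "203.0.113.0/24", ingress_port=1)
print(json.dumps(update, indent=2))
```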
Distributed IPS control plane
Opening a control interface on the IDS-end equipment through the SO allows the inner modules of the SM to be broken down between several GCs. Fig. 7.9 illustrates this example in detail. In this example, an attack signature database is shared between multiple SMs.
Résumé
Structural failure is often caused by the propagation of cracks in the initially sound material of which the structure is made. A thorough knowledge of fracture mechanics is essential for the engineer: it makes it possible to prevent cracking mechanisms and thus guarantee the integrity of civil structures, or, conversely, to promote them, as for instance in the oil and gas industry. From the modeling point of view these problems are similar, complex and difficult. It is therefore fundamental to be able to predict where and when cracks propagate.
This thesis is restricted to the study of brittle and ductile cracks in homogeneous materials under quasi-static loading. We adopt the macroscopic point of view, i.e. the crack is a response of the structure to an excessive loading and is characterized by a surface of discontinuity of the displacement field. The most commonly accepted theory to model cracks is Griffith's. It predicts crack initiation along a pre-established path when the energy release rate equals the material toughness. This type of criterion requires evaluating the variation of the potential energy of the structure at equilibrium for an increment of crack length. But the very essence of Griffith's theory is a competition between the surface energy and the potential energy of the structure.
However, this model is not suited to weak notch singularities, i.e. a notch that does not degenerate into a pre-crack. To remedy this shortcoming, critical-stress criteria have been developed for smooth geometries. Unfortunately they cannot correctly predict crack initiation since the stress is infinite at the notch tip. A second limitation of Griffith's theory is the scale effect. To illustrate this point, consider a structure of unit size cut by a crack of length a. The critical loading of this structure evolves as 1/√a; consequently the admissible loading becomes infinite when the defect size tends to zero.
This makes no physical sense and contradicts experiments. It is known that this limitation comes from the absence of a critical stress (or of a characteristic length) in the model. To overcome this shortcoming, Dugdale and Barenblatt proposed, in their models, to take into account cohesive stresses on the crack lips in order to remove the stress singularity at the notch tip.
More recently, variational phase-field models, also known as gradient damage models [START_REF] Francfort | Revisiting brittle fracture as an energy minimization problem[END_REF][START_REF] Bourdin | The Variational Approach to Fracture[END_REF], appeared in the early 2000s. These models remove the issues related to crack paths and are known to converge to Griffith's model when the regularization parameter tends to 0. Moreover, numerical results show that it is possible to nucleate a crack without a singularity, thanks to the presence of a critical stress. Are these phase-field models of fracture able to overcome the limitations of Griffith's model?
Concerning crack paths, phase-field models have proved remarkably effective at predicting fracture networks under thermal shocks [START_REF] Sicsic | Initiation of a periodic array of cracks in the thermal shock problem: a gradient damage modeling[END_REF][START_REF] Bourdin | Morphogenesis and propagation of complex cracks induced by thermal shocks[END_REF]. In this thesis, the results obtained show that gradient damage models are effective at predicting mode-I crack nucleation and at accounting for size effects. Naturally these models respect the initiation criterion of Griffith's theory and are extended to hydraulic fracturing, as illustrated in the second part of this thesis. However, they cannot account for ductile fracture as such: a coupling with perfect plasticity models is necessary in order to obtain ductile fracture mechanisms similar to those observed in metals.
The manuscript is organized as follows. In the first chapter, a broad introduction is devoted to the variational approach to fracture, going from Griffith to the modern phase-field approach and recalling its main properties. The second chapter studies crack nucleation in geometries for which no exact solution exists: U- and V-notches show that the critical loading evolves continuously from the critical-stress criterion to the toughness criterion as the notch singularity varies. The problem of an elliptical cavity in an elongated or infinite domain is also studied. The third chapter concentrates on hydraulic fracturing, taking into account the influence of a perfect fluid on the crack lips. Numerical results show that fluid injection into a network of parallel cracks of equal length leads to the propagation of a single crack of the network; this configuration turns out to satisfy the principle of least energy. The fourth chapter focuses solely on the perfect plasticity model, going from the classical approach to the variational one. A numerical implementation using alternate minimization of the energy is described and verified on a simple Von Mises case. The last chapter couples gradient damage models with perfect plasticity models. Numerical simulations show that brittle or ductile cracks can be obtained by varying a single parameter; moreover, these simulations qualitatively capture the phenomenology of crack nucleation and propagation along shear bands.
Introduction
Structural failure is commonly due to the propagation of fractures in an initially sound material. A better understanding of defect mechanics is fundamental for engineers, either to prevent cracks and preserve the integrity of civil structures, or to control them as desired, for instance in the energy industry. From the modeling point of view those problems are similar, complex and still face many challenges. Common issues are determining when and where cracks will propagate.
In this work, the study is restricted to brittle and ductile fractures in homogeneous materials for rate-independent evolution problems in continuum mechanics. We adopt the macroscopic point of view, in which the propagation of a macroscopic fracture is a response of the structure and its geometry to the applied loading. A fracture à la Griffith is a surface of discontinuity of the displacement field along which the stress vanishes. In this widely used theory the fracture initiates along an a priori known path when the energy release rate becomes critical; this limit is given by the material toughness. Such a criterion requires quantifying the first derivative of the potential energy with respect to the crack length for a structure at equilibrium. Many years of investigation have focused on notch tips to predict when a fracture initiates, resulting in a growing body of literature on computed stress intensity factors. Griffith's theory is in essence a competition between the surface energy and the recoverable bulk energy: a crack increment reduces the potential energy of the structure, and this reduction is compensated by the creation of surface energy.
However, such a fracture criterion is not appropriate to account for a weak singularity, i.e. a notch angle which does not degenerate into a crack. Conversely, many criteria based on a critical stress are well suited to smooth domains, but fail near stress singularities: a nucleation criterion based solely on the pointwise maximum stress cannot handle crack formation at a singularity point where σ → ∞. A second limitation of Griffith's theory is the scale effect. To illustrate this, consider a structure of unit size cut by a pre-fracture of length a. The critical loading evolves as ∼ 1/√a; consequently, the maximum admissible loading is unbounded as the defect size vanishes. Again, this is physically impossible and inconsistent with experimental observations. It is well accepted that this discrepancy is due to the lack of a critical stress (or of a critical length scale) in Griffith's theory. To overcome these issues, Dugdale and Barenblatt, pioneers of the cohesive fracture theory, proposed to remove the stress singularity at the tip by accounting for cohesive stresses on the fracture lips.
Recently, many variational phase-field models [START_REF] Bourdin | The Variational Approach to Fracture[END_REF] have been shown to converge to a variational Griffith-like model in the vanishing limit of their regularization parameter. They were conceived to handle the issue of the crack path. Furthermore, it has been observed that they can lead to numerical solutions exhibiting crack nucleation without singularities. Naturally, these models raise an interesting question: can Griffith's limitations be overcome by these phase-field models?
Concerning crack paths, phase-field models have proved to be accurate in predicting fracture propagation under thermal shocks [START_REF] Sicsic | Initiation of a periodic array of cracks in the thermal shock problem: a gradient damage modeling[END_REF][START_REF] Bourdin | Morphogenesis and propagation of complex cracks induced by thermal shocks[END_REF]. In this dissertation, numerical examples illustrate that Griffith's limitations, such as nucleation and size effects, can be overcome by the phase-field models referred to as gradient damage models in Chapter 2. Naturally, these models preserve Griffith's propagation criterion, as shown by the extended models for hydraulic fracturing provided in Chapter 3. Of course Griffith's theory is unable to deal with ductile fractures, but in Chapter 5 we show that by coupling perfect plasticity with gradient damage models we are able to capture some ductile fracture features, namely the phenomenology of nucleation and propagation.
The dissertation is organized as follows. In Chapter 1, a broad introduction to phase-field models of brittle fracture is given: we start from Griffith, move to the modern phase-field approach, and recall some of its properties. Chapter 2 studies crack nucleation in commonly encountered geometries for which closed-form solutions are not available. We use U- and V-notches to show that the nucleation load varies smoothly from that predicted by a strength criterion to that of a toughness criterion when the strength of the stress concentration or singularity varies. We present verification and validation of the numerical simulations for both types of geometries. We consider the problem of an elliptical cavity in an infinite or elongated domain to show that variational phase-field models properly account for structural and material size effects. Chapter 3 focuses on fracture propagation in hydraulic fracturing: we extend the variational phase-field models to account for the fluid pressure on the crack lips. We recover the closed-form solution of a perfect fluid injected into a single fracture; for stability reasons, in this example we control the total amount of injected fluid. We then consider the stimulation of a network of parallel fractures. The numerical results show that only a single crack grows, and that this configuration is always a better energy minimizer than a multi-fracking configuration where all fractures propagate. This loss of symmetry in the crack patterns illustrates the variational structure and the global minimization principle of the phase-field model. A third example deals with fracture stability in a pressure-driven laboratory test for rocks: the idea is to capture the different stability regimes predicted by linear elastic fracture mechanics so as to properly design the experiment, and to test whether the phase-field models capture the fracture stability transition (from stable to unstable). Chapter 4 is concerned with the variational perfect plasticity model, its implementation and its verification. We start by recalling the main ingredients of the classical approach to perfect elasto-plasticity and then recast it into the variational structure. The algorithmic strategy is then exposed, together with a verification example. The strength of the proposed algorithm is to solve perfect elasto-plastic materials by prescribing the yield surfaces without dealing with non-differentiability issues. Chapter 5 studies ductile fracture; the proposed model couples the gradient damage models with the perfect plasticity models independently exposed in Chapters 1 and 4. Numerical simulations show that the transition from brittle to ductile fracture is recovered by changing a single parameter. The ductile fracture phenomenology, such as crack initiation at the center of the specimen and propagation along shear bands, is studied in plane-strain specimens and in three-dimensional round bars.
The main research contributions are in Chapters 2, 3 and 5. My apologies to the reader perusing the whole dissertation, which contains repetitive elements due to the self-consistency and independent construction of all chapters.
Chapter 1
Variational phase-field models of brittle fracture
In Griffith's theory, a crack in a brittle material is a surface of discontinuity of the displacement field with vanishing stress along the fracture. Assuming an a priori known crack path, the fracture propagates when the first derivative of the potential energy with respect to the crack length, at equilibrium, becomes critical. This limit, called the fracture toughness, is a material property. The genius of Griffith was to link the crack length to a surface energy, so that the crack propagation condition becomes a competition between the surface energy and the recoverable bulk energy. By essence this criterion is variational and can be recast into a minimality principle. The idea of Francfort and Marigo in the variational approach to fracture [START_REF] Francfort | Revisiting brittle fracture as an energy minimization problem[END_REF] is to keep Griffith's view and extend it to any possible crack geometry and to complex time evolutions. However, the cracks remain unknown and a special method needs to be crafted. The approach is to approximate the fracture by a damage field with a non-zero thickness, in which the material stiffness is deteriorated, leading to a decrease of the sustainable stresses. This stress-softening material model is mathematically ill-posed [START_REF] Comi | On localisation in ductile-brittle materials under compressive loadings[END_REF] due to a missing term limiting the thickness of the damage localization band. Indeed, since the surface energy is proportional to the damage thickness, one can construct a broken bar without paying any surface energy, simply by letting the damaged zone shrink. To overcome this issue, the idea is to regularize the surface energy. The adopted regularization takes its roots in Ambrosio and Tortorelli's functionals [START_REF] Ambrosio | Existence theory for a new class of variational problems[END_REF][START_REF] Ambrosio | On the approximation of free discontinuity problems[END_REF], inspired by Mumford and Shah's work [START_REF] Mumford | Optimal approximation by piecewise smooth functions and associated variational problem[END_REF] in image segmentation. Gradient damage models are closely related to Ambrosio and Tortorelli's functionals and have been adapted to brittle fracture. The introduction of a gradient damage term comes with a regularization parameter. This parameter, denoted ℓ, is also called the internal length and governs the damage thickness. Following Pham and Marigo [START_REF] Pham | Approche variationnelle de l'endommagement : I. les concepts fondamentaux[END_REF][START_REF] Pham | Approche variationnelle de l'endommagement : Ii. les modèles à gradient[END_REF][START_REF] Pham | From the onset of damage until the rupture: construction of the responses with damage localization for a general class of gradient damage models[END_REF], the damage evolution problem is built on three principles: damage irreversibility, stability and balance of the total energy. The beauty of the model is that the unknown discrete crack evolution is approximated by the evolution of a regularized functional which is intimately related to Griffith's model through its variational structure and its asymptotic behavior.
This chapter is devoted to a broad introduction of the gradient damage models which constitute the basis of the numerical simulations performed in the subsequent chapters. The presentation is largely inspired by previous works of Bourdin, Maurini, Marigo, Francfort and many others. In the sequel, Section 1.1 starts from the Griffith point of view and recasts the fracture evolution into a variational problem. By relaxing the pre-supposed crack path constraint of Griffith's theory, Francfort and Marigo's variational approach to fracture is retrieved. We refer the reader to [START_REF] Francfort | Revisiting brittle fracture as an energy minimization problem[END_REF][START_REF] Bourdin | The variational approach to fracture[END_REF] for a complete exposition of the theory. Following the spirit of the variational principle, gradient damage models are then introduced; they constitute the basis of the numerical simulations performed. Section 1.2 focuses on the application to a relevant one-dimensional problem which exhibits several properties, such as nucleation, critical admissible stress, size effects and the optimal damage profile, investigated previously by [START_REF] Pham | Approche variationnelle de l'endommagement : I. les concepts fondamentaux[END_REF][START_REF] Pham | Approche variationnelle de l'endommagement : Ii. les modèles à gradient[END_REF]. To pass from a damage model to a Griffith-like model, the connection needs to be highlighted, i.e. the internal length must be sent to zero. Hence, Section 1.3 is devoted to the Γ-convergence in the one-dimensional setting, to show that gradient damage models behave asymptotically like Griffith's model. Finally, the implementation of such models is exposed in Section 1.4.
Gradient damage models
From Griffith model to its minimality principle
The Griffith model can be settled as follows: consider a perfectly brittle-elastic material, with A the Hooke's law tensor and G_c the critical energy release rate, occupying a region Ω ⊂ R^n in the reference configuration. The domain is partially cut by a fracture set Γ of length l, which grows along an a priori given path. Along the fracture, no cohesive effects or contact between the lips are considered here, so the stress vanishes on Γ(l). The sound region Ω \ Γ is subject to a time-dependent boundary displacement ū(t) on a Dirichlet part ∂_D Ω of its boundary and to a time-dependent traction g(t) = σ · ν on the remainder ∂_N Ω = ∂Ω \ ∂_D Ω, where ν denotes the appropriate normal vector. For the sake of simplicity, body forces are neglected. The infinitesimal total strain e(u) is the symmetric part of the spatial gradient of the displacement field u, i.e. e(u) = (∇u + ∇ᵀu)/2.
In linear elasticity the free energy is a differentiable convex state function given by ψ(e(u)) = ½ Ae(u) : e(u). Thereby, the stress-strain relation naturally follows: σ = ∂ψ(e)/∂e = Ae(u).
By the quasi-static assumption made, the cracked solid is, at each time, in elastic equilibrium with the loads that it supports at that time. The problem is finding the unknown displacement u = u(t, l) for a given t and l = l(t) that satisfies the following constitutive equations,
\mathrm{div}\,\sigma = 0 \ \text{in}\ \Omega\setminus\Gamma(l), \qquad u = \bar{u}(t) \ \text{on}\ \partial_D\Omega\setminus\Gamma(l), \qquad \sigma\cdot\nu = g(t) \ \text{on}\ \partial_N\Omega, \qquad \sigma\cdot\nu = 0 \ \text{on}\ \Gamma(l) \quad (1.1)
At time t and for l(t), let the kinematic field u(t, l) be at equilibrium, i.e. it solves (1.1). Hence the potential energy can be computed; it is composed of the elastic energy and of the work of the external forces:
\mathcal{P}(t,l) = \int_{\Omega\setminus\Gamma(l)} \frac{1}{2} A e(u):e(u)\;\mathrm{d}x - \int_{\partial_N\Omega} g(t)\cdot u\;\mathrm{d}\mathcal{H}^{n-1}
where dH^{n-1} denotes the (n-1)-dimensional Hausdorff measure, i.e. the aggregate length in two dimensions or the surface area in three dimensions. The evolution of the crack is given by Griffith's criterion:
Definition 1 (Crack evolution by Griffith's criterion)
i. The crack can only grow; this is the irreversibility condition: l̇(t) ≥ 0.
ii. The stability condition states that the energy release rate G is bounded from above by its critical value G_c:
G(t,l) = -\frac{\partial \mathcal{P}(t,l)}{\partial l} \;\le\; G_c.
iii. The energy balance guarantees that the energy release rate is critical when the crack grows:
\bigl( G(t,l) - G_c \bigr)\,\dot{l}(t) = 0.
Griffith says in his paper [START_REF] Griffith | The phenomena of rupture and flow in solids[END_REF] that the "theorem of minimum potential energy" may be extended so as to be capable of predicting the breaking loads of elastic solids, if account is taken of the increase of surface energy which occurs during the formation of cracks. Following Griffith, let us demonstrate that the crack evolution criteria are the optimality conditions of a total energy to be minimized. Provided some regularity on P(t, l) and l(t), the minimization problem can formally be stated as: for any loading time t such that the displacement u is at equilibrium, find the crack length l which minimizes the total energy, composed of the potential energy and the surface energy, subject to irreversibility:
\min_{l \ge l(t)} \ \mathcal{P}(t,l) + G_c\, l \quad (1.2)
An optimal solution of the above constrained problem must satisfy the Karush-Kuhn-Tucker (KKT) conditions. A common method consists in computing the Lagrangian, given by
\mathcal{L}(t,l,\lambda) := \mathcal{P}(t,l) + G_c\, l + \lambda\,\bigl(l(t) - l\bigr) \quad (1.3)
where λ denotes the Lagrange multiplier, and then applying the necessary conditions. Substituting the Lagrange multiplier λ given by stationarity into the dual feasibility and complementary slackness conditions recovers the irreversibility, stability and energy balance of Griffith's criterion.
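The necessary conditions mentioned above can be written out explicitly; the following grouping and labels are a reconstruction from the stated Lagrangian, not a quotation of the original:
\frac{\partial \mathcal{L}}{\partial l} = \frac{\partial \mathcal{P}(t,l)}{\partial l} + G_c - \lambda = 0 \quad \text{(stationarity)}, \qquad l \ge l(t) \quad \text{(primal feasibility)},
\lambda \ge 0 \quad \text{(dual feasibility)}, \qquad \lambda\,\bigl(l - l(t)\bigr) = 0 \quad \text{(complementary slackness)}.
With λ = ∂P/∂l + G_c = G_c - G(t,l), dual feasibility gives the stability condition G ≤ G_c, primal feasibility gives the irreversibility, and complementary slackness gives the energy balance.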
Furthermore, let the crack length l and the displacement u be internal variables of a variational problem; note that the displacement no longer depends on l. Provided a smooth enough displacement field and a smooth enough evolution t → l(t), so that the calculations make sense, the evolution problem can be written as a minimality principle, as follows. Definition 2 (Fracture evolution by minimality principle) Find stable evolutions of l(t), u(t) satisfying at all t: i. Initial conditions l(t_0) = l_0 and u(t_0, l_0) = u_0. ii. l(t), u(t) is a minimizer of the total energy
\mathcal{E}(t,l,u) = \int_{\Omega\setminus\Gamma(l)} \frac{1}{2} A e(u):e(u)\;\mathrm{d}x - \int_{\partial_N\Omega} g(t)\cdot u\;\mathrm{d}\mathcal{H}^{n-1} + G_c\, l \quad (1.5)
amongst all l ≥ l(t) and u ∈ C_t := { u ∈ H¹(Ω \ Γ(l)) : u = ū(t) on ∂_D Ω \ Γ(l) }.
iii. The energy balance,
\mathcal{E}(t,l,u) = \mathcal{E}(t_0,l_0,u_0) + \int_{t_0}^{t} \left( \int_{\partial_D\Omega} (\sigma\cdot\nu)\cdot \dot{\bar{u}}\;\mathrm{d}\mathcal{H}^{n-1} - \int_{\partial_N\Omega} \dot{g}(s)\cdot u\;\mathrm{d}\mathcal{H}^{n-1} \right) \mathrm{d}s \quad (1.6)
One observes that stability and irreversibility have been substituted by minimality, and the energy balance takes a variational form. To justify this choice, we show first that irreversibility, stability and kinematic equilibrium are equivalent to the first order optimality conditions of E(t, l, u) for u and l separately. Then, followed by the equivalence of the energy balance adopted in the evolution by minimality principle and within Griffith criterion.
Proof. For a fixed l, u is a local minimizer of E(t, l, u), if for all v ∈ H 1 0 (Ω \ Γ(l)), for some h > 0 small enough, such that u + hv ∈ C t ,
\mathcal{E}(t,l,u+hv) = \mathcal{E}(t,l,u) + h\,\mathcal{E}'(t,l,u)\cdot v + o(h) \;\ge\; \mathcal{E}(t,l,u) \quad (1.7)
thus,
\mathcal{E}'(t,l,u)\cdot v \;\ge\; 0 \quad (1.8)
where E′(t, l, u) · v denotes the first Gâteaux derivative of E at u in the direction v. By standard arguments of the calculus of variations, one obtains
\mathcal{E}'(t,l,u)\cdot v = \int_{\Omega\setminus\Gamma(l)} A e(u):e(v)\;\mathrm{d}x - \int_{\partial_N\Omega} g(t)\cdot v\;\mathrm{d}\mathcal{H}^{n-1} \quad (1.9)
Integrating the term in e(v) by parts over Ω \ Γ(l), and considering both faces of Γ(l) with opposite normals, one gets
\mathcal{E}'(t,l,u)\cdot v = -\int_{\Omega\setminus\Gamma(l)} \mathrm{div}\bigl(A e(u)\bigr)\cdot v\;\mathrm{d}x + \int_{\partial\Omega} \bigl(A e(u)\cdot\nu\bigr)\cdot v\;\mathrm{d}\mathcal{H}^{n-1} - \int_{\Gamma(l)} \bigl(A e(u)\cdot\nu\bigr)\cdot v\;\mathrm{d}\mathcal{H}^{n-1} - \int_{\partial_N\Omega} g(t)\cdot v\;\mathrm{d}\mathcal{H}^{n-1} \quad (1.10)
\mathcal{E}'(t,l,u)\cdot v = -\int_{\Omega\setminus\Gamma(l)} \mathrm{div}\bigl(A e(u)\bigr)\cdot v\;\mathrm{d}x + \int_{\partial_N\Omega} \bigl(A e(u)\cdot\nu - g(t)\bigr)\cdot v\;\mathrm{d}\mathcal{H}^{n-1} - \int_{\Gamma(l)} \bigl(A e(u)\cdot\nu\bigr)\cdot v\;\mathrm{d}\mathcal{H}^{n-1} \quad (1.11)
Taking both v and -v in H¹₀(Ω \ Γ(l)), the optimality condition leads to E′(t, l, u) · v = 0. Formally, by a localization argument (taking v concentrated around each boundary portion and zero almost everywhere else), we obtain that all the integrals must vanish for any v. Since the stress-strain relation is given by σ = Ae(u), we recover the equilibrium constitutive equations,
\mathrm{div}\bigl(A e(u)\bigr) = 0 \ \text{in}\ \Omega\setminus\Gamma(l), \qquad u = \bar{u}(t) \ \text{on}\ \partial_D\Omega\setminus\Gamma(l), \qquad A e(u)\cdot\nu = g(t) \ \text{on}\ \partial_N\Omega, \qquad A e(u)\cdot\nu = 0 \ \text{on}\ \Gamma(l) \quad (1.12)
Now consider that u is given. For any direction l̃ > 0 and h > 0 small enough such that l + h l̃ ≥ l(t), the derivative of E(t, l, u) at l in the direction l̃ satisfies
\mathcal{E}'(t,l,u)\cdot \tilde{l} \;\ge\; 0 \quad\Longleftrightarrow\quad \frac{\partial \mathcal{P}(t,l,u)}{\partial l} + G_c \;\ge\; 0 \quad (1.13)
and this becomes an equality, G(t, l, u) = G_c, when the fracture propagates.
To complete the equivalence between the minimality evolution principle and Griffith's criterion, let us verify the energy balance. Provided a smooth evolution of l, the time derivative of the right-hand side of equation (1.6) is
\frac{\mathrm{d}\mathcal{E}(t,l,u)}{\mathrm{d}t} = \int_{\partial_D\Omega} (\sigma\cdot\nu)\cdot \dot{\bar{u}}\;\mathrm{d}\mathcal{H}^{n-1} - \int_{\partial_N\Omega} \dot{g}(t)\cdot u\;\mathrm{d}\mathcal{H}^{n-1} \quad (1.14)
and the explicit left hand side,
\frac{\mathrm{d}\mathcal{E}(t,l,u)}{\mathrm{d}t} = \mathcal{E}'(t,l,u)\cdot \dot{u} + \mathcal{E}'(t,l,u)\cdot \dot{l} - \int_{\partial_N\Omega} \dot{g}(t)\cdot u\;\mathrm{d}\mathcal{H}^{n-1}. \quad (1.15)
The Gâteaux derivative with respect to u has been calculated above, so E′(t, l, u) · u̇ stands for
\mathcal{E}'(t,l,u)\cdot \dot{u} = -\int_{\Omega\setminus\Gamma} \mathrm{div}\bigl(A e(u)\bigr)\cdot \dot{u}\;\mathrm{d}x + \int_{\partial_D\Omega} \bigl(A e(u)\cdot\nu\bigr)\cdot \dot{\bar{u}}\;\mathrm{d}\mathcal{H}^{n-1} + \int_{\partial_N\Omega} \bigl(A e(u)\cdot\nu\bigr)\cdot \dot{u}\;\mathrm{d}\mathcal{H}^{n-1} - \int_{\partial_N\Omega} g(t)\cdot \dot{u}\;\mathrm{d}\mathcal{H}^{n-1}. \quad (1.16)
Since u satisfies the equilibrium and the admissibility condition u = ū(t) on ∂_D Ω, all kinematic contributions to the elastic body vanish and the energy balance condition becomes
\mathcal{E}'(t,l,u)\cdot \dot{l} = 0 \quad\Longleftrightarrow\quad \left( \frac{\partial \mathcal{P}}{\partial l} + G_c \right) \dot{l} = 0 \quad (1.17)
At this stage the minimality principle is equivalent to Griffith's criterion for smooth evolutions of l(t). Let us give a graphical interpretation of this. Consider a domain partially cut by a pre-fracture of length l_0, subject to a monotonically increasing displacement load ū(t) = t ū on ∂_D Ω and stress free on the remaining part of the boundary. Hence the elastic energy is ψ(e(tū)) = (t²/2) Ae(ū) : e(ū) and the irreversibility condition is l ≥ l_0. The fracture stability condition reads
t^2 \frac{\partial \mathcal{P}(1,l)}{\partial l} + G_c \;\ge\; 0
so that, for any loading t > 0, the energy release rate for a unit loading is bounded by
G(1,l) \;\le\; G_c / t^2.
The fracture evolution is smooth if G(1, l) is strictly decreasing in l, i.e. if P(1, l) is strictly convex, as illustrated in Figure 1.1 (left); the region above the bound G_c/t² is forbidden. In that case, stationarity and local minimality are equivalent. Now imagine that the material properties are not constant in the structure; simply consider a Young's modulus varying in space such that G(1, l) has a concave part, see Figure 1.1 (right). Since G(1, l) is a decreasing function near the local well, by a local minimality argument the fracture grows smoothly until it remains stuck in the local well for any further loading, which is physically inconsistent. Conversely, global minimization allows, at some loading point, the nucleation of a crack in the material, leading to a jump in the fracture evolution.
Extension to Francfort-Marigo's model
In the previous analysis, the minimality principle adopted was a local minimization argument, because it considers small perturbations of the energy. This requires a topology, which includes a concept of distance defining small transformations, whereas the global minimization principle is topology-independent. Without going too deeply into details, arguments in favor of global minimizers are described below. Griffith's theory does not hold for a domain with a weak singularity. By weak singularity, we mean any stress-free acute corner that does not degenerate into a crack (as opposed to a strong singularity). For this problem, using local minimization, the stationary points lead to the purely elastic solution. The reasons for this are that the concept of energy release rate is not defined for a weak singularity and that there is no sustainable stress limit above which the crack initiates. Hence, to overcome the discrepancy due to the lack of a critical stress in Griffith's theory, double criteria have been developed to predict fracture initiation in notched specimens; more details are provided in Chapter 2. Conversely, the global minimization principle has a finite admissible stress allowing crack nucleation; thus cracks can jump from one state to another, passing through energy barriers. For physical reasons, one can blame global minimizers for not enforcing continuity of the displacement and damage fields with respect to time. Nevertheless, global minimization provides a framework in which the fracture model can be derived as a limit of the variational damage evolution presented in Section 1.3. This is quite technical, but global minimizers of the damage model converge, in the sense of Γ-convergence, to global minimizers of the fracture model. Finally, under the assumptions of a pre-existing fracture and strict convexity of the potential energy, global and local minimization are equivalent and follow Griffith. In order to obtain the extended model of Francfort and Marigo's variational approach to fracture [START_REF] Francfort | Revisiting brittle fracture as an energy minimization problem[END_REF][START_REF] Bourdin | The variational approach to fracture[END_REF][START_REF] Marigo | Initiation of cracks in griffith's theory: An argument of continuity in favor of global minimization[END_REF][START_REF] Bourdin | Variational Models and Methods in Solid and Fluid Mechanics, chapter Fracture[END_REF], one has to keep the rate-independent variational principle and the Griffith fracture energy, relax the constraint on the pre-supposed crack path by extending to all possible crack geometries Γ, and consider the global minimization of the following total energy
\[
E(u, \Gamma) := \int_{\Omega\setminus\Gamma} \frac{1}{2}\, Ae(u):e(u)\,dx \;-\; \int_{\partial_N\Omega} g(t)\cdot u\, d\mathcal{H}^{n-1} \;+\; G_c\, \mathcal{H}^{n-1}(\Gamma) \tag{1.18}
\]
associated with the crack evolution problem given by the following definition.

Definition 3 (Crack evolution by global minimizers). The pair (u(t), Γ_t) satisfies the variational evolution associated with the energy E(u, Γ) if the following three conditions hold:
i. t ↦ Γ_t is increasing in time, i.e. Γ_t ⊇ Γ_s for all t_0 ≤ s ≤ t ≤ T.
ii. For any configuration (v, Γ) such that v = ū(t) on ∂_D Ω \ Γ and Γ ⊇ Γ_t,
\[
E(v, \Gamma) \ge E\big(u(t), \Gamma_t\big) \tag{1.19}
\]
iii. For all t,
\[
E\big(u(t), \Gamma_t\big) = E\big(u(t_0), \Gamma_{t_0}\big) + \int_{t_0}^{t}\left(\int_{\partial_D\Omega} (\sigma\cdot\nu)\cdot \dot{\bar u}\, d\mathcal{H}^{n-1} - \int_{\partial_N\Omega} \dot g(t)\cdot u\, d\mathcal{H}^{n-1}\right) ds \tag{1.20}
\]
Gradient damage models
It is convenient to define the weak energy by extending the set of admissible functions to an appropriate space allowing discontinuous displacement fields, while preserving "good" properties.
\[
SBD(\Omega) = \left\{ u \in SBV(\Omega);\; Du = \nabla u\,dx + (u^+ - u^-)\otimes\nu\, \mathcal{H}^{n-1}\lfloor J(u) \right\} \tag{1.21}
\]
where, Du denotes the distributional derivative, J(u) is the jump set of u. Following De Giorgi in [START_REF] De Giorgi | Existence theorem for a minimum problem with free discontinuity set[END_REF], the minimization problem is reformulated in a weak energy form functional of SBV , such as,
\[
\min_{u\in SBV(\Omega)} \int_\Omega \frac{1}{2}\, Ae(u):e(u)\,dx - \int_{\partial_N\Omega} g(t)\cdot u\, d\mathcal{H}^{n-1} + G_c\, \mathcal{H}^{n-1}\big(J(u)\big) \tag{1.22}
\]
For the existence of solutions to the time-discrete and time-continuous evolutions, refer to [START_REF] Francfort | Existence and convergence for quasi-static evolution in brittle fracture[END_REF][START_REF] Babadjian | Existence of strong solutions for quasi-static evolution in brittle fracture[END_REF]. The weak energy formulation will be recalled in section 1.3 for the Γ-convergence in one dimension.
Gradient damage models to brittle fracture
Because the crack path remains unknown, a special method needs to be crafted. The approach is to consider damage as an approximation of the fracture with a finite thickness in which the material properties are modulated continuously. Hence, let the damage α be an internal variable which evolves between two extreme states; up to a rescaling, α can be bounded between 0 and 1, where α = 0 is the sound material and α = 1 refers to the broken state. Intermediate values of the damage can be seen as "micro-cracking", a partial degradation of the Young's modulus. A possible choice is to let the damage variable α induce an isotropic deterioration of the Hooke's tensor, i.e. a(α)A where a(α) is a stiffness function. Naturally the recoverable energy density becomes ψ(α, e) = ½ a(α)Ae(u) : e(u), with the elementary property that ψ(α, e) is monotonically decreasing in α for any fixed u. The difficulty lies in the choice of a correct energy dissipation functional. At this stage of the presentation, one option would be to follow Marigo-Pham [START_REF] Pham | Approche variationnelle de l'endommagement : I. les concepts fondamentaux[END_REF][START_REF] Pham | Approche variationnelle de l'endommagement : Ii. les modèles à gradient[END_REF][START_REF] Pham | From the onset of damage until the rupture: construction of the responses with damage localization for a general class of gradient damage models[END_REF][START_REF] Marigo | An overview of the modelling of fracture by gradient damage models[END_REF][START_REF] Pham | Stability of homogeneous states with gradient damage models: Size effects and shape effects in the three-dimensional setting[END_REF] for a full and self-consistent construction of the model. Their main steps are: assume a dissipation potential k(α), apply the Drucker-Ilyushin postulate, and then introduce a gradient damage term to obtain a dissipation potential of the form k(α, ∇α). Instead, we will follow the historical ideas which arose from the image-processing field with Mumford-Shah [START_REF] Mumford | Optimal approximation by piecewise smooth functions and associated variational problem[END_REF], where a continuous functional was proposed to find the contours of an image by taking into account strong variations of the pixel intensity across boundaries. Later, Ambrosio-Tortorelli [START_REF] Ambrosio | Existence theory for a new class of variational problems[END_REF][START_REF] Ambrosio | On the approximation of free discontinuity problems[END_REF] proposed the following functional, which contains the main ingredients of the regularized damage models,
\[
\int_\Omega \frac{1}{2}(1-\alpha)^2 |\nabla u|^2\,dx + \int_\Omega \left(\frac{\alpha^2}{2\ell} + \frac{\ell}{2}|\nabla\alpha|^2\right) dx
\]
where ℓ > 0 is a regularization parameter called the internal length. One recognizes in the second term the dissipation potential, composed of two parts: a local term depending only on the damage state and a gradient damage term which penalizes sharp localizations of the damage. The regularization parameter comes with the gradient damage term, since it carries the dimension of a length. Following [START_REF] Ambrosio | Approximation of functionals depending on jumps by elliptic functionals via Γ-convergence[END_REF][START_REF] Ambrosio | On the approximation of free discontinuity problems[END_REF][START_REF] Bourdin | Numerical implementation of a variational formulation of quasi-static brittle fracture[END_REF][START_REF] Bourdin | The variational approach to fracture[END_REF][START_REF] Braides | Approximation of Free-Discontinuity Problems[END_REF], we define the regularized total energy of the gradient damage model for a variety of local dissipation and stiffness functions, denoted w(α) and a(α), not only w(α) = α² and a(α) = (1 − α)²:
\[
E_\ell(u, \alpha) = \int_\Omega \frac{1}{2}\, a(\alpha)\, Ae(u):e(u)\,dx - \int_{\partial_N\Omega} g(t)\cdot u\, d\mathcal{H}^{n-1} + \frac{G_c}{4 c_w}\int_\Omega \left(\frac{w(\alpha)}{\ell} + \ell|\nabla\alpha|^2\right) dx \tag{1.23}
\]
where G_c is the critical energy release rate and c_w = \int_0^1 \sqrt{w(\alpha)}\,d\alpha, with w(α) and a(α) satisfying the following elementary properties.
1. The local dissipation potential w(α) is strictly monotonically increasing in α. For a sound material no dissipation occurs hence w(0) = 0, for a broken material the dissipation must be finite, and up to a rescaling we have w(1) = 1.
2. The elastic energy is monotonically decreasing in α for any fixed u. An undamaged material should conserve its elasticity property and no elastic energy can be stored in a fully damaged material such that, the stiffness function a(α) is a decreasing function with a(0) = 1 and a(1) = 0.
3. For numerical optimization reasons, one can assume that a(α) and w(α) are continuous and convex.
A large variety of models with different material responses can be constructed just by choosing different functions for a(α) and w(α). A non-exhaustive list of functions used in the literature is provided in Table 1.1. Among the many available models, we will mainly focus on AT1 and sometimes refer to AT2 for numerical simulations. Now let us focus on the damage evolution of E_ℓ(u, α) defined in (1.23). First, remark that to obtain a finite energy the damage gradient must lie in L²(Ω). Consequently, the trace of the damage can be defined at the boundary, so damage values can be prescribed there. Accordingly, let the sets of admissible displacements and admissible damage fields C_t and D, equipped with their natural H¹ norm, be
C t = u ∈ H 1 (Ω) : u = ū(t) on ∂ D Ω , D = α ∈ H 1 (Ω) : 0 ≤ α ≤ 1, ∀x ∈ Ω .
The evolution problem is formally similar to one defined in Definition 2 and reads as,
Name  | a(α)                                                   | w(α)
AT2   | (1 − α)²                                               | α²
AT1   | (1 − α)²                                               | α
LS_k  | (1 − w(α)) / (1 + (c₁ − 1) w(α))                       | 1 − (1 − α)²
KKL   | 4(1 − α)³ − 3(1 − α)⁴                                  | α²(1 − α)²/4
Bor   | c₁((1 − α)³ − (1 − α)²) + 3(1 − α)² − 2(1 − α)³        | α²
SKBN  | (1 − c₁)(1 − exp(−c₂(1 − α)^{c₃})) / (1 − exp(−c₂))    | α

Table 1.1: Variety of possible damage models, where c₁, c₂, c₃ are constants. AT2 was introduced by Ambrosio-Tortorelli and used by Bourdin [START_REF] Bourdin | Numerical implementation of a variational formulation of quasi-static brittle fracture[END_REF], the AT1 model was initially introduced by Pham-Amor [START_REF] Pham | Gradient damage models and their use to approximate brittle fracture[END_REF], LS_k appears in Alessi-Marigo [START_REF] Alessi | Gradient damage models coupled with plasticity and nucleation of cohesive cracks[END_REF], KKL stands for Karma-Kessler-Levine, used in dynamics [START_REF] Karma | Phase-field model of mode III dynamic fracture[END_REF], Bor for Borden [START_REF] Borden | A phase-field description of dynamic brittle fracture[END_REF], and SKBN for Sargado-Keilegavlen-Berre-Nordbotten [START_REF] Sargado | High-accuracy phase-field models for brittle fracture based on a new family of degradation functions[END_REF].
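As an aside, the constitutive pairs of Table 1.1 are straightforward to manipulate numerically. The short Python sketch below (an illustration assuming NumPy/SciPy, not the thesis' own code) encodes the AT1 and AT2 pairs and evaluates the normalization constant c_w = ∫₀¹ √(w(s)) ds, recovering the prefactors 3G_c/8 and G_c/2 used for these two models.

import numpy as np
from scipy.integrate import quad

models = {
    "AT1": {"a": lambda al: (1 - al) ** 2, "w": lambda al: al},
    "AT2": {"a": lambda al: (1 - al) ** 2, "w": lambda al: al ** 2},
}

for name, m in models.items():
    # c_w = int_0^1 sqrt(w(s)) ds, the normalization constant of (1.23).
    c_w, _ = quad(lambda s: np.sqrt(m["w"](s)), 0.0, 1.0)
    print(f"{name}: w(1) = {m['w'](1.0)}, a(1) = {m['a'](1.0)}, c_w = {c_w:.4f}")
# Expected: c_w = 2/3 for AT1 (prefactor 3 Gc/8) and c_w = 1/2 for AT2 (prefactor Gc/2).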
Definition 4 (Damage evolution by minimality principle) For all t find (u, α) ∈ (C t , D) that satisfies the damage variational evolution:
i. Initial condition α t 0 = α 0 and u t 0 = u 0 ii. (u, α) is a minimizer of the total energy, E (u, α)
E (u, α) = Ω 1 2 a(α)Ae(u) : e(u) dx - ∂ N Ω g(t) • u dH n-1 + G c 4c w Ω w(α) + |∇α| 2 dx (1.24)
amongst all α ≥ α(t)
iii. Energy balance,
E (u t , α t ) = E (u 0 , α 0 ) + t t 0 ∂ D Ω (σ • ν) • u dH n-1 - ∂ N Ω ġ(t) • u dH n-1 ds. (1.25)
This damage evolution is written in a weak form. In order to obtain the damage criterion in a strong formulation, we have to write the first-order necessary optimality conditions of the constrained minimization of E_ℓ at (u, α), given by
E (u, α)(v, β) ≥ 0 ∀(v, β) ∈ H 1 0 (Ω) × D (1.26)
Using calculus of variation argument, one gets,
\[
E_\ell'(u,\alpha)(v,\beta) = \int_\Omega a(\alpha)Ae(u):e(v)\,dx - \int_{\partial_N\Omega} g(t)\cdot v\, d\mathcal{H}^{n-1} + \int_\Omega \frac{1}{2}a'(\alpha)Ae(u):e(u)\,\beta\,dx + \frac{G_c}{4c_w}\int_\Omega \left(\frac{w'(\alpha)}{\ell}\,\beta + 2\ell\,\nabla\alpha\cdot\nabla\beta\right) dx. \tag{1.27}
\]
Integrating by parts the first term in e(v) and the last term in ∇α • ∇β, the expression leads to,
\[
\begin{aligned}
E_\ell'(u,\alpha)(v,\beta) = & -\int_\Omega \operatorname{div}\big(a(\alpha)Ae(u)\big)\cdot v\,dx + \int_{\partial_N\Omega} \big[a(\alpha)Ae(u) - g(t)\big]\nu\cdot v\, d\mathcal{H}^{n-1} \\
& + \int_\Omega \left(\frac{1}{2}a'(\alpha)Ae(u):e(u) + \frac{G_c}{4c_w}\Big(\frac{w'(\alpha)}{\ell} - 2\ell\,\Delta\alpha\Big)\right)\beta\, dx + \frac{G_c}{4c_w}\int_{\partial\Omega} 2\ell\,(\nabla\alpha\cdot\nu)\,\beta\, d\mathcal{H}^{n-1}.
\end{aligned} \tag{1.28}
\]
This holds for all β ≥ 0 and for all v ∈ H¹₀(Ω); thus one can take β = 0 and replace v by −v. Necessarily, the first two integrals are equal to zero. We thus recover the equilibrium equations with the prescribed boundary conditions, since σ = a(α)Ae(u),
\[
\begin{cases} \operatorname{div}\big(a(\alpha)Ae(u)\big) = 0 & \text{in } \Omega\\ a(\alpha)Ae(u)\cdot\nu = g(t) & \text{on } \partial_N\Omega\\ u = \bar u(t) & \text{on } \partial_D\Omega \end{cases} \tag{1.29}
\]
The damage criterion and its associated boundary condition arise for any β ≥ 0: taking v = 0 in (1.28), we obtain that the third and fourth integrals are non-negative,
\[
\begin{cases} \dfrac{1}{2}a'(\alpha)Ae(u):e(u) + \dfrac{G_c}{4c_w}\Big(\dfrac{w'(\alpha)}{\ell} - 2\ell\,\Delta\alpha\Big) \ge 0 & \text{in } \Omega\\[4pt] \nabla\alpha\cdot\nu \ge 0 & \text{on } \partial\Omega \end{cases} \tag{1.30}
\]
The damage satisfies criticality when (1.30) becomes an equality.
Before continuing with the energy balance expression, let us focus for a moment on the damage criterion. Notice that it is composed of a homogeneous part depending on w'(α) and a localized contribution in Δα. Assume the structure is in a homogeneous damage state, such that α is constant everywhere; hence the Laplacian term vanishes. In that case, the elastic domain in strain space is given by
\[
Ae(u):e(u) \le \frac{G_c}{2 c_w \ell}\, \frac{w'(\alpha)}{-a'(\alpha)}, \tag{1.31}
\]
and in stress space by
\[
A^{-1}\sigma:\sigma \le \frac{G_c}{2 c_w \ell}\, \frac{w'(\alpha)\,a(\alpha)^2}{-a'(\alpha)}. \tag{1.32}
\]
This last expression is required to be bounded so that the structure has a maximum admissible stress,
\[
\max_{\alpha} \frac{w'(\alpha)}{c'(\alpha)} < C, \tag{1.33}
\]
where c(α) = 1/a(α) is the compliance function.
If α ↦ −w'(α)/a'(α) is increasing, the material response is strain-hardening; if α ↦ w'(α)/c'(α) is decreasing, the behavior is stress-softening. This leads to
\[
w'(\alpha)\,a''(\alpha) > w''(\alpha)\,a'(\alpha) \quad\text{(strain-hardening)},\qquad
w''(\alpha)\,c'(\alpha) < w'(\alpha)\,c''(\alpha) \quad\text{(stress-softening)}. \tag{1.34}
\]
Those conditions restrict proper choice for w(α) and a(α).
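These inequalities are easy to verify symbolically. The sketch below (assuming SymPy; it simply differentiates the constitutive pairs of Table 1.1) checks the two margins of (1.34) for AT1 and AT2: AT1 satisfies both for all α in (0, 1), while AT2 becomes stress-softening only once α exceeds 1/4, i.e. beyond the peak of its homogeneous response.

import sympy as sp

al = sp.symbols("alpha")
for name, (a, w) in {"AT1": ((1 - al) ** 2, al),
                     "AT2": ((1 - al) ** 2, al ** 2)}.items():
    c = 1 / a  # compliance function
    strain_hardening = sp.simplify(sp.diff(w, al) * sp.diff(a, al, 2)
                                   - sp.diff(w, al, 2) * sp.diff(a, al))   # > 0 required
    stress_softening = sp.simplify(sp.diff(w, al) * sp.diff(c, al, 2)
                                   - sp.diff(w, al, 2) * sp.diff(c, al))   # > 0 required
    print(name, "| strain-hardening margin:", strain_hardening,
          "| stress-softening margin:", sp.factor(stress_softening))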
Let us turn our attention back to find the strong formulation of the problem using the energy balance. Assuming a smooth evolution of damage in time and space, the time derivative of the energy is given by,
\[
\frac{d E_\ell(u,\alpha)}{dt} = E_\ell'(u,\alpha)(\dot u, \dot\alpha) - \int_{\partial_N\Omega} \dot g(t)\cdot u\, d\mathcal{H}^{n-1} \tag{1.35}
\]
The first term has already been calculated by replacing (v, β) with ( u, α) in (1.27), so that,
\[
\begin{aligned}
\frac{d E_\ell(u,\alpha)}{dt} = & -\int_\Omega \operatorname{div}\big(a(\alpha)Ae(u)\big)\cdot\dot u\,dx + \int_{\partial_N\Omega}\big[a(\alpha)Ae(u) - g(t)\big]\nu\cdot\dot u\,d\mathcal{H}^{n-1}\\
& + \int_{\partial_D\Omega} \big(a(\alpha)Ae(u)\big)\nu\cdot\dot{\bar u}\,d\mathcal{H}^{n-1} - \int_{\partial_N\Omega}\dot g(t)\cdot u\,d\mathcal{H}^{n-1}\\
& + \int_\Omega\left(\frac{1}{2}a'(\alpha)Ae(u):e(u) + \frac{G_c}{4c_w}\Big(\frac{w'(\alpha)}{\ell}-2\ell\,\Delta\alpha\Big)\right)\dot\alpha\,dx + \frac{G_c}{4c_w}\int_{\partial\Omega}2\ell\,(\nabla\alpha\cdot\nu)\,\dot\alpha\,d\mathcal{H}^{n-1}
\end{aligned} \tag{1.36}
\]
The first line vanishes by the equilibrium and boundary conditions, and the second line is equal to the right-hand side of the energy balance definition (1.25). Since the irreversibility \dot{α} ≥ 0 and the damage criterion (1.30) hold, the remaining integrals are non-negative; therefore the energy balance condition gives
\[
\begin{cases}\left(\dfrac{1}{2}a'(\alpha)Ae(u):e(u) + \dfrac{G_c}{4c_w}\Big(\dfrac{w'(\alpha)}{\ell}-2\ell\,\Delta\alpha\Big)\right)\dot\alpha = 0 & \text{in } \Omega\\[4pt] (\nabla\alpha\cdot\nu)\,\dot\alpha = 0 & \text{on }\partial\Omega\end{cases} \tag{1.37}
\]
Notice that the first condition in (1.37) is similar to the energy balance of Griffith, in the sense that the damage criterion is satisfied when damage evolves. Finally, the evolution problem is given by the damage criterion (1.30), the energy balance (1.37) and the kinematic admissibility (1.29).
The next section is devoted to the construction of the optimal damage profile, by applying the damage criterion to a one-dimensional bar in traction for a given ℓ. The critical energy release rate is then identified as the energy required to break the bar and to create an optimal damage profile.
Application to a bar in traction
The one-dimension problem
The aim of this section is to apply the gradient damage model to a one-dimensional bar in traction. Relevant results are obtained with this example, such as the role of the critical admissible stress, the process of damage nucleation due to stress-softening, the creation of an optimal damage profile for a given ℓ, and the role of the gradient damage term which forbids spatial jumps of the damage.
In the sequel, we follow Pham-Marigo [START_REF] Pham | Construction et analyse de modèles d'endommagement à gradient[END_REF][START_REF] Pham | From the onset of damage until the rupture: construction of the responses with damage localization for a general class of gradient damage models[END_REF][START_REF] Pham | Approche variationnelle de l'endommagement : I. les concepts fondamentaux[END_REF] by considering a one-dimensional evolution problem for a homogeneous bar of length 2L, stretched by a time-controlled displacement at its boundaries; no damage value is prescribed at the extremities, so that the admissible displacement and damage sets are, respectively,
\[
C_t := \{u : u(-L) = -tL,\; u(L) = tL\},\qquad D := \{\alpha : 0 \le \alpha \le 1 \text{ in } [-L, L]\} \tag{1.38}
\]
with the initial condition u 0 (x) = 0 and α 0 (x) = 0. Since no external force is applied, the total energy of the bar is given by,
\[
E_\ell(u,\alpha) = \int_{-L}^{L}\frac{1}{2}\,a(\alpha)\,E\,u'^2\,dx + \frac{G_c}{4c_w}\int_{-L}^{L}\left(\frac{w(\alpha)}{\ell} + \ell\,|\alpha'|^2\right)dx \tag{1.39}
\]
where E is the Young's modulus, ℓ > 0, and (•)' = ∂(•)/∂x. For convenience, let the compliance be the inverse of the stiffness, c(α) = a⁻¹(α). Assume that α is at least continuously differentiable; a special treatment would be required for α = 1, which is out of the scope of this example. The pair (u_t, α_t) ∈ C_t × D is a solution of the evolution problem if the following conditions hold:
1. The equilibrium,
σ t (x) = 0, σ t (x) = a(α t (x))Eu t (x), u t (-L) = -tL and u t (L) = tL
The stress is constant along the bar. Hence it is only a function of time, such that,
\[
2tLE = \sigma_t \int_{-L}^{L} c\big(\alpha_t(x)\big)\,dx \tag{1.40}
\]
Once the damage field is known. The equation (1.40) gives the stress-displacement response.
2. The irreversibility, αt (x) ≥ 0 (1.41)
3. The damage criterion in the bulk,
\[
-\frac{c'(\alpha_t(x))}{2E}\,\sigma_t^2 + \frac{G_c}{4c_w}\left(\frac{w'(\alpha_t(x))}{\ell} - 2\ell\,\alpha''_t(x)\right) \ge 0 \tag{1.42}
\]
4. The energy balance in the bulk,
\[
\left(-\frac{c'(\alpha_t(x))}{2E}\,\sigma_t^2 + \frac{G_c}{4c_w}\Big(\frac{w'(\alpha_t(x))}{\ell} - 2\ell\,\alpha''_t(x)\Big)\right)\dot\alpha_t(x) = 0 \tag{1.43}
\]
5. The damage criterion at the boundary, α'_t(−L) ≥ 0 and α'_t(L) ≤ 0   (1.44)
6. The energy balance at the boundary,
α t (±L) αt (±L) = 0 (1.45)
For smooth or brutal damage evolutions, first-order stability enforces α'_t(±L) = 0 in order to respect E'_ℓ(u, α) = 0. Thus the damage boundary condition is replaced by α'_t(±L) = 0 when the damage evolves. All equations are now settled to solve the evolution problem. Subsequently, we study a uniform damage state in the bar and then focus on the localized damage solution.
The homogeneous damage profile
Consider the case of a uniform damage field in the bar, α_t(x) = α_t, called the homogeneous solution. We will see that the damage response depends on the evolution of α ↦ w'(α)/c'(α): for stress-hardening (increasing function) the damage evolves uniformly in the bar, while it localizes in the stress-softening configuration. Suppose first that the damage does not evolve and remains equal to its initial value, α_t = α_0 = 0. Then, using the damage criterion in the bulk (1.42), the admissible stress must satisfy
σ 2 t ≤ 2EG c 4c w w (0) c (0) (1.46)
and the response remains elastic until the loading time t e , such that,
t 2 ≤ - G c 2E c w w (0) a (0) = t 2 e (1.47)
Suppose the damage evolves uniformly belongs to the bar, using the energy balance (1.43) and the damage criterion (1.42) we have,
\[
\sigma_t^2 \le \frac{2EG_c}{4c_w\ell}\,\frac{w'(\alpha_t)}{c'(\alpha_t)},\qquad \left(\sigma_t^2 - \frac{2EG_c}{4c_w\ell}\,\frac{w'(\alpha_t)}{c'(\alpha_t)}\right)\dot\alpha_t = 0 \tag{1.48}
\]
The homogeneous damage evolution is possible only if α t → w (α)/c (α) is growing, this is the stress-hardening condition. Since αt > 0, the evolution of the stress is given by,
\[
\sigma_t^2 = \frac{2EG_c}{4c_w\ell}\,\frac{w'(\alpha_t)}{c'(\alpha_t)} \le \max_{0<\alpha<1} \frac{2EG_c}{4c_w\ell}\,\frac{w'(\alpha)}{c'(\alpha)} = \sigma_c^2 \tag{1.49}
\]
where σ c is the maximum admissible stress for the homogeneous solution. One can define the maximum damage state α c obtained when σ t = σ c . This stage is stable until the loading time t c ,
t 2 ≤ - G c 2E c w w (α c ) a (α c ) = t 2 c (1.50)
Since w'(α)/c'(α) is bounded and ℓ > 0, a fundamental property of gradient damage models is that there exists a maximum value of the stress, called the critical stress, which allows cracks to nucleate through the minimality principle.
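For the two models used in this work, the elastic limit and the critical stress of the homogeneous response can be computed directly from (1.49), with the internal length ℓ restored. The following SymPy sketch (an illustration, not part of the thesis; E, G_c and ℓ are kept symbolic) performs the maximization and recovers σ_c = √(3EG_c/(8ℓ)) for AT1, and a vanishing elastic limit with a stress peak at α_c = 1/4 for AT2.

import sympy as sp

al, E, Gc, ell = sp.symbols("alpha E G_c ell", positive=True)
for name, (a, w, c_w) in {"AT1": ((1 - al) ** 2, al, sp.Rational(2, 3)),
                          "AT2": ((1 - al) ** 2, al ** 2, sp.Rational(1, 2))}.items():
    c = 1 / a
    # sigma^2 = (2 E Gc / (4 c_w ell)) * w'(alpha) / c'(alpha), from (1.49)
    sigma2 = 2 * E * Gc / (4 * c_w * ell) * sp.diff(w, al) / sp.diff(c, al)
    sigma_e = sp.sqrt(sigma2.subs(al, 0))            # elastic limit (alpha = 0)
    alpha_c = sp.solve(sp.diff(sigma2, al), al)      # stationary points of sigma(alpha)
    sigma_c = sp.sqrt(sp.Max(*[sigma2.subs(al, ac) for ac in alpha_c + [0]]))
    print(name, "| elastic limit:", sp.simplify(sigma_e),
          "| critical stress:", sp.simplify(sigma_c))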
The localized damage profile
The homogeneous solution is no longer stable if α ↦ w'(α)/c'(α) is decreasing after α_c. To prove it, consider any damage state such that α_t(x) > α_c and use the stress-softening property, leading to
\[
0 \le \frac{2EG_c}{4c_w\ell}\,\frac{w'(\alpha_t(x))}{c'(\alpha_t(x))} \le \frac{2EG_c}{4c_w\ell}\,\frac{w'(\alpha_c)}{c'(\alpha_c)} = \sigma_c^2 \tag{1.51}
\]
By integrating the damage criterion (1.42) over (-L, L) and using (1.44), we have,
\[
\frac{\sigma_t^2}{2E}\int_{-L}^{L} c'(\alpha_t(x))\,dx \le \frac{G_c}{4c_w}\left(\int_{-L}^{L}\frac{w'(\alpha_t(x))}{\ell}\,dx + 2\ell\big(\alpha'_t(L)-\alpha'_t(-L)\big)\right) \le \frac{G_c}{4c_w}\int_{-L}^{L}\frac{w'(\alpha_t(x))}{\ell}\,dx \tag{1.52}
\]
Then, put (1.51) into (1.52) to conclude that σ_t ≤ σ_c, and use the equilibrium (1.40) to obtain σ_t ≥ 0. Therefore, using (1.52), the damage field can no longer remain uniform once the stress decreases, 0 ≤ σ_t ≤ σ_c. Assume α_t(x) is monotonic over (−L, x_0), with α_t(−L) = α_c and the damage maximum at x_0, i.e. α_t(x_0) = max_x α_t(x) > α_c. Multiplying equation (1.42) by α'_t(x) and integrating over [−L, x) for x < x_0, we get
\[
\ell^2\alpha_t'^2(x) = -\frac{2c_w\ell\,\sigma_t^2}{EG_c}\big(c(\alpha_t(x)) - c(\alpha_c)\big) + w(\alpha_t(x)) - w(\alpha_c) \tag{1.53}
\]
Plugging this above equation into the total energy restricted to the (-L, x 0 ) part,
\[
E_\ell(u_t,\alpha_t)\big|_{(-L,x_0)} = \int_{-L}^{x_0}\frac{\sigma_t^2}{2a(\alpha_t(x))E}\,dx + \frac{G_c}{4c_w}\int_{-L}^{x_0}\left(\frac{w(\alpha_t(x))}{\ell}+\ell\,\alpha_t'^2(x)\right)dx = \int_{-L}^{x_0}\frac{\sigma_t^2}{2a(\alpha_c)E}\,dx + \frac{G_c}{4c_w}\int_{-L}^{x_0}\frac{2w(\alpha_t(x)) - w(\alpha_c)}{\ell}\,dx \tag{1.54}
\]
Note that the energy does not depend on α anymore, we just have two terms: the elastic energy and the surface energy which depends on state variation of w(α).
The structure is broken when the damage is fully localized, α(x_0) = 1. From the equilibrium (1.40), the ratio of the stress to the stiffness function is bounded, |σ_t c(α)| < C; thus σ_t → 0, the stress term in (1.53) vanishes, and (1.53) becomes
\[
\ell^2\alpha_t'^2(x) = w(\alpha_t(x)) - w(\alpha_c), \qquad \forall x \in (-L, x_0)
\]
Remark that the derivatives of the damage and of u across the point x_0 where α = 1 remain finite. By the change of variable β = α_t(x), the total energy of the partial bar (−L, x_0) is
\[
\begin{aligned}
E_\ell(u_t,\alpha_t)\big|_{(-L,x_0)} &= \lim_{x\to x_0}\frac{G_c}{4c_w}\int_{-L}^{x}\frac{2w(\alpha_t(x)) - w(\alpha_c)}{\ell}\,dx
= \lim_{\beta\to 1}\frac{G_c}{4c_w}\int_{\alpha_c}^{\beta}\frac{2w(\beta)-w(\alpha_c)}{\sqrt{w(\beta)-w(\alpha_c)}}\,d\beta \\
&= \lim_{\beta\to 1}\frac{G_c}{4c_w}\int_{\alpha_c}^{\beta}\left(2\sqrt{w(\beta)-w(\alpha_c)} + \frac{w(\alpha_c)}{\sqrt{w(\beta)-w(\alpha_c)}}\right)d\beta
= \frac{G_c}{2c_w}\,k(\alpha_c)
\end{aligned} \tag{1.55}
\]
with
\[
k(\alpha_c) := \int_{\alpha_c}^{1}\sqrt{w(\beta)-w(\alpha_c)}\,d\beta + w(\alpha_c)\,\frac{D}{4\ell},
\]
where D is the size of the damage profile between the homogeneous and the fully localized state, given by
\[
D = 2\int_{\alpha_c}^{1}\frac{\ell}{\sqrt{w(\beta)-w(\alpha_c)}}\,d\beta. \tag{1.56}
\]
Note that the right side of the bar (x_0, L) contributes exactly the same total energy as the left one (−L, x_0). Different damage responses are observed depending on the choice of w(α) and a(α). The AT1 model, for instance, has an elastic phase, thus α_c = 0 and the energy released during the breaking process of a 1d bar is equal to G_c. Models with a homogeneous response before localization, AT2 for example, overshoot G_c because of the homogeneous damage accumulated before localization. One way to overcome this issue is to consider that partial damage does not contribute to the dissipated energy; it can be relaxed after localization by removing the irreversibility. Another way is to re-evaluate c_w as c_w = k(α_c).
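The width D of (1.56) is readily evaluated numerically. The sketch below (assuming SciPy, with the internal length ℓ restored in the formula) gives the familiar band width D = 4ℓ for AT1 (α_c = 0), and D ≈ 4.13ℓ for AT2 localizing from the peak of its homogeneous response, α_c = 1/4.

import numpy as np
from scipy.integrate import quad

ell = 1.0
for name, (w, alpha_c) in {"AT1": (lambda b: b, 0.0),
                           "AT2": (lambda b: b ** 2, 0.25)}.items():
    # D = 2 * ell * int_{alpha_c}^{1} dbeta / sqrt(w(beta) - w(alpha_c))
    integral, err = quad(lambda b: 1.0 / np.sqrt(w(b) - w(alpha_c)), alpha_c, 1.0)
    print(f"{name}: D = {2.0 * ell * integral:.3f} * ell  (quadrature error {err:.1e})")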
Limit of the damage energy
From inception to completion, gradient damage models follow the variational structure of Francfort-Marigo's [START_REF] Francfort | Revisiting brittle fracture as an energy minimization problem[END_REF][START_REF] Bourdin | The variational approach to fracture[END_REF] approach, seen as an extension of Griffith, but the connections between the two need to be highlighted. Passing from damage to fracture, i.e. letting ℓ → 0, requires ingredients adapted from Ambrosio-Tortorelli [START_REF] Ambrosio | Approximation of functionals depending on jumps by elliptic functionals via Γ-convergence[END_REF][START_REF] Ambrosio | On the approximation of free discontinuity problems[END_REF] on the convergence of global minimizers of the total energy. A framework to study the connections between the damage and fracture variational models is that of Γ-convergence, which we briefly introduce below. We refer the reader to [START_REF] Braides | Approximation of Free-Discontinuity Problems[END_REF][START_REF] Braides | Gamma-convergence for Beginners[END_REF][START_REF] Dal | An introduction to Γ-convergence[END_REF] for a complete exposition of the underlying theory.
In the sequel, we restrict the study to a one-dimensional structure, an interval Ω ⊂ R whose size is large compared to the internal length, with a unit Young's modulus. We prescribe a boundary displacement ū on a part ∂_D Ω and a stress-free condition on the remaining part ∂_N Ω := ∂Ω \ ∂_D Ω. We set aside the issue of damage boundary conditions for now and define the weak fracture energy
E(u, α, Ω) = F(u, Ω) if u ∈ SBV (Ω) +∞ otherwise (1.57)
and
F(u, Ω) := 1 2 Ω (u ) 2 dx + G c #(J(u)) (1.58)
where #(J(u)) denotes the cardinality of jumps in the set of u. Derived from E(u, α, Ω) its associated regularized fracture energy is,
E (u, α, Ω) = F (u, α, Ω) if u ∈ W 1,2 (Ω), α ∈ W 1,2 (Ω; [0, 1]) +∞ otherwise (1.59)
and
F (u, α, Ω) := 1 2 Ω a(α)(u ) 2 dx + G c 4c w Ω w(α) + α 2 dx (1.60)
To prove that up to a subsequence minimizers for E converge to global minimizers of E we need the fundamental theorem of the Γ-convergence given in the Appendix A.
We first show the compactness of the sequence of minimizers of E_ℓ, then the Γ-convergence of E_ℓ to E. Before we begin, let us state the truncation and optimal damage profile lemmas.

Lemma 1. Let u (resp. (u, α)) be a kinematically admissible global minimizer of F (resp. F_ℓ). Then ‖u‖_{L^∞(Ω)} ≤ ‖ū‖_{L^∞(Ω)}.

Proof. Let M = ‖ū‖_{L^∞} and u* = inf{sup{−M, u}, M}. Then F(u*) ≤ F(u), with equality if u = u*.
Lemma 2. Let α_ℓ be the optimal profile of
\[
S_\ell(\alpha_\ell) := \int_I \left(\frac{w(\alpha_\ell)}{\ell} + \ell\,(\alpha_\ell')^2\right) dx
\]
where I ⊂ R; then S_ℓ(α_ℓ) = 4c_w.
Proof. In order to construct α we solve the optimal profile problem: Let γ be the solution of the following problem: find γ ∈ C 1 [-δ, x 0 ) such that γ(-δ) = 0 and lim x→x 0 γ(x) = ϑ, and which is a minimum for the function,
F (γ) = x 0 -δ f (γ(x), γ (x), x)dx (1.61)
where
f (γ(x), γ (x), x) := w(γ(x)) + γ 2 (x) (1.62)
Note that the first derivative of f is continuous. We will apply the first necessary optimality condition to solve the optimization problem described above, if γ is an extremum of F , then it satisfies the Euler-Lagrange equation,
\[
2\ell^2\gamma'' = w'(\gamma) \qquad\text{and}\qquad \gamma'(-\delta) = 0 \tag{1.63}
\]
Note that w'(γ) ≥ 0 implies that γ is convex, thus γ' is monotonic in [−δ, x_0). Multiplying by γ' and integrating from −δ to x, we obtain
\[
\gamma'^2(x) - \gamma'^2(-\delta) = \frac{w(\gamma(x)) - w(\gamma(-\delta))}{\ell^2} \tag{1.64}
\]
Since γ (-δ) = 0 and w(γ(-δ)) = 0, one gets,
\[
\gamma'(x) = \frac{\sqrt{w(\gamma(x))}}{\ell} \tag{1.65}
\]
Let us define α_ℓ from γ, centered at x_0:
\[
\alpha_\ell(x) := \begin{cases}\gamma\big({-|x-x_0|}\big) & \text{if } |x-x_0|\le\delta\\ 0 & \text{otherwise}\end{cases} \tag{1.66}
\]
Note that α_ℓ is continuous at x_0, where it takes the value ϑ. We have
\[
S_\ell(\alpha_\ell) = \int_I \left(\frac{w(\alpha_\ell)}{\ell} + \ell(\alpha_\ell')^2\right)dx = 2\int_{-\delta}^{x_0}\left(\frac{w(\gamma)}{\ell} + \ell(\gamma')^2\right)dx \tag{1.67}
\]
Plug (1.65) into the last integral term, and change the variables β = γ (x), it turns into
\[
S_\ell(\alpha_\ell) = 2\int_{-\delta}^{x_0}\frac{2\,w(\gamma)}{\ell}\,dx = 2\int_{\gamma(-\delta)}^{\gamma(x_0)}\frac{2\,w(\beta)}{\ell\,\gamma'}\,d\beta = 4\int_0^{\vartheta}\sqrt{w(\beta)}\,d\beta \tag{1.68}
\]
The fully damaged profile is obtained once ϑ → 1; we get
\[
S_\ell(\alpha_\ell) = \lim_{\vartheta\to 1} 4\int_0^{\vartheta}\sqrt{w(\beta)}\,d\beta = 4c_w.
\]
This will be useful for the recovery sequence in higher dimensions.
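For AT1 the optimal profile can be written in closed form, which gives a quick numerical check of Lemma 2. The sketch below (plain NumPy quadrature, not part of the thesis; the closed-form profile follows from γ' = √(w(γ))/ℓ with w(α) = α) verifies that S_ℓ(α_ℓ) = 4c_w = 8/3 independently of ℓ.

import numpy as np

ell = 0.3
delta = 2.0 * ell                              # AT1 profile reaches 1 at distance 2*ell
x = np.linspace(-delta, 0.0, 200001)           # left half of the symmetric profile
gamma = ((x + delta) / (2.0 * ell)) ** 2       # solves gamma' = sqrt(gamma)/ell, gamma(-delta)=0
dgamma = np.gradient(gamma, x)
S_half = np.trapz(gamma / ell + ell * dgamma ** 2, x)
print("S =", 2.0 * S_half, "  4*c_w =", 4.0 * (2.0 / 3.0))   # both ~2.667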
Compactness
Theorem 1. Let φ(x) := \int_0^x \sqrt{w(s)}\,ds, and assume that there exists C > 0 such that 1 − φ(s) ≤ C\sqrt{a(s)} for any 0 ≤ s ≤ 1. Let (u_ℓ, α_ℓ) be a kinematically admissible global minimizer of E_ℓ. Then there exist a subsequence (still denoted by (u_ℓ, α_ℓ)) and a function u ∈ SBV(Ω) such that u_ℓ → u in L²(Ω) and α_ℓ → 0 a.e. in Ω as ℓ → 0.

Proof. Note that the technical hypothesis is probably not optimal, but it is sufficient to account for the AT1 and AT2 functionals. Testing with α = 0 and an arbitrary kinematically admissible displacement field ũ, we get that
E (u , α ) ≤ E (ũ, 0) ≤ 1 2 Ω |ũ | 2 dx ≤ C (1.69)
So that E (u , α ) is uniformly bounded by some C > 0. Also, this implies that w(α ) → 0 almost everywhere in Ω, and from properties of w, that α → 0 almost everywhere in Ω.Using the inequality a 2 + b 2 ≥ 2|ab| on the surface energy part, we have that,
Ω 2 w(α )|α | dx ≤ Ω w(α ) + (α ) 2 dx ≤ C (1.70)
In order to obtain the compactness of the sequence u_ℓ, let v_ℓ := (1 − φ(α_ℓ)) u_ℓ; by the truncation Lemma 1, v_ℓ is uniformly bounded in L^∞(Ω). Then,
\[
|v_\ell'| = \big|(1-\varphi(\alpha_\ell))\,u_\ell' - \sqrt{w(\alpha_\ell)}\,\alpha_\ell'\, u_\ell\big| \le (1-\varphi(\alpha_\ell))|u_\ell'| + \sqrt{w(\alpha_\ell)}\,|\alpha_\ell'|\,|u_\ell| \le C\sqrt{a(\alpha_\ell)}\,|u_\ell'| + \sqrt{w(\alpha_\ell)}\,|\alpha_\ell'|\,|u_\ell| \tag{1.71}
\]
From the uniform bound on E_ℓ(u_ℓ, α_ℓ), we get that the first term is bounded in L²(Ω), while (1.70) and the truncation Lemma 1 show that the second term is bounded in L¹(Ω), thus in L²(Ω). Finally,
i. v_ℓ is uniformly bounded in L^∞(Ω),
ii. v_ℓ' is uniformly bounded in L²(Ω),
iii. J(v_ℓ) = ∅.
Invoking Ambrosio's compactness theorem in SBV (see Appendix A), we get that there exists v ∈ SBV(Ω) such that v_ℓ → v strongly in L²(Ω). To conclude, since u_ℓ = v_ℓ/(1 − φ(α_ℓ)) and α_ℓ → 0 almost everywhere, we have
u_ℓ → u in L²(Ω).
Remark the proof above applies unchanged to the higher dimension case.
Gamma-convergence in 1d
The second part of the fundamental theorem of Γ-convergence requires that E_ℓ Γ-converges to E. The definition of Γ-convergence is given in Appendix A. The first condition means that E provides an asymptotic common lower bound for the E_ℓ; the second condition means that this lower bound is optimal. The Γ-convergence is performed in the 1d setting and is decomposed into two steps as follows: first prove the lower inequality, then construct the recovery sequence.
Lower semi-continuity inequality in 1d
We want to show that for any u ∈ SBV (Ω), and any (u , α ) such that u → u and α → 0 almost everywhere in Ω, we have,
lim inf →0 E (u , α , Ω) ≥ 1 2 Ω (u ) 2 dx + G c #(J(u)) (1.72)
Proof. Consider any interval I ⊂ Ω ⊂ R, such that,
lim inf →0 E (u , α , I) ≥ 1 2 I (u ) 2 dx if u ∈ W 1,2 (I) (1.73)
and,
lim inf →0 E (u , α , I) ≥ G c otherwise (1.74)
If lim inf →0 E (u , α , I) = ∞, both statements are trivial, so we can assume that there exist 0 ≤ C < ∞ such that,
lim inf →0 E (u , α , I) ≤ C (1.75)
We focus on (1.73) first, and assume that u ∈ W^{1,2}(I). From (1.75) we deduce that w(α_ℓ) → 0 almost everywhere in I; consequently α_ℓ → 0 almost everywhere in I. By Egoroff's theorem, for any ε > 0 there exists I_ε ⊂ I such that |I_ε| < ε and α_ℓ → 0 uniformly on I \ I_ε. Thus, for any δ > 0 we have
1 − δ ≤ a(α_ℓ) on I \ I_ε,
for all ℓ and ε small enough, so that
\[
\int_{I\setminus I_\varepsilon}(1-\delta)(u_\ell')^2\,dx \le \int_{I\setminus I_\varepsilon} a(\alpha_\ell)(u_\ell')^2\,dx \le \int_I a(\alpha_\ell)(u_\ell')^2\,dx \tag{1.76}
\]
Since u → u in W 1,2 (I) , and taking the lim inf on both sides, one gets,
(1 -δ) 2 I\I (u ) 2 dx ≤ lim inf →0 1 2 I a(α ) (u ) 2 dx (1.77)
we obtain the desired inequality (1.73) by letting → 0 and δ → 0.
To prove the second assertion (1.74), we first show that lim →0 sup x∈I α = 1, proceeding by contradiction. Suppose there exists δ > 0 such that α < 1δ on I. Then,
I a(1 -δ) (u ) 2 dx ≤ I a(α ) (u ) 2 dx
Taking the lim inf on both sides and using (1.75) , we get that,
lim inf →0 I (u ) 2 dx ≤ C a(1 -δ)
So u is uniformly bounded in W 1,2 (I), and therefore u ∈ W 1,2 (I), which contradicts our hypothesis. Reasoning as before, we have that α → 0 almost everywhere in I. Proceeding the same way on the interval (b , c ), one gets that,
lim inf →0 G c 4c w I w(α ) + (α ) 2 dx ≥ G c
which is (1.74). In order to obtain (1.72), we apply (1.74) on arbitrary small intervals centered around each points in the jump set of u and (1.73) on each remaining intervals in I.
Recovery sequence for the Γ-limit in 1d
The construction of the recovery sequence is more instructive. Given (u, α) we need to buid a sequence (u , α ) such that lim sup F (u , α ) ≤ F(u, α).
Proof. If F (u, α) = ∞, we can simply take u = u and α = α, so that we can safely assume that F(u, α) < ∞. As in the lower inequality, we consider the area near discontinuity points of u and away from them separately. Let (u, α) be given, consider an open interval I ⊂ R and a point x 0 ∈ J(u) ∩ I. Without loss of generality, we can assume that x 0 = 0 and I = (-δ, δ) for some δ > 0 . The construction of the recovery sequence is composed of two parts, first the recovery sequence for the damage, then one for the displacement.
The optimal damage profile obtained in the Lemma 2, directly gives,
lim sup →0 G c 4c w δ -δ w(α ) + (α ) 2 dx ≤ G c , (1.81)
this is the recovery sequence for the damage. Now let us focus on the recovery sequence for the bulk term. We define b_ℓ (with b_ℓ → 0 as ℓ → 0) and
\[
u_\ell(x) := \begin{cases}\dfrac{x}{b_\ell}\,u(x) & \text{if } -b_\ell \le x \le b_\ell\\ u(x) & \text{otherwise}\end{cases} \tag{1.82}
\]
Since a(α_ℓ) ≤ 1, we get that
\[
\int_{-\delta}^{-b_\ell} a(\alpha_\ell)(u_\ell')^2\,dx \le \int_{-\delta}^{-b_\ell}(u')^2\,dx \tag{1.83}
\]
and similarly on (b_ℓ, δ) (1.84), while on (−b_ℓ, b_ℓ),
\[
\int_{-b_\ell}^{b_\ell} a(\alpha_\ell)(u_\ell')^2 dx \le \int_{-b_\ell}^{b_\ell}(u_\ell')^2 dx \le \int_{-b_\ell}^{b_\ell}\Big(\frac{u}{b_\ell}+\frac{x\,u'}{b_\ell}\Big)^2 dx \le 2\int_{-b_\ell}^{b_\ell}\Big(\frac{u}{b_\ell}\Big)^2 dx + 2\int_{-b_\ell}^{b_\ell}\Big(\frac{x\,u'}{b_\ell}\Big)^2 dx \le \frac{2}{b_\ell^2}\int_{-b_\ell}^{b_\ell}|u|^2\,dx + 2\int_{-b_\ell}^{b_\ell}(u')^2\,dx \tag{1.85}
\]
Since |u| ≤ M, the first term vanishes when b_ℓ → 0. Combining (1.83), (1.84) and (1.85), then taking the lim sup on both sides and using ∫_I |u'|² dx < ∞, we get that
lim sup →0 1 2 δ -δ a(α ) (u ) 2 dx ≤ 1 2 δ -δ (u ) 2 dx (1.86)
Finally combining (1.81) and (1.86), one obtains
\[
\limsup_{\ell\to0}\left(\int_{-\delta}^{\delta}\frac{1}{2}a(\alpha_\ell)(u_\ell')^2\,dx + \frac{G_c}{4c_w}\int_{-\delta}^{\delta}\left(\frac{w(\alpha_\ell)}{\ell}+\ell(\alpha_\ell')^2\right)dx\right) \le \frac{1}{2}\int_{-\delta}^{\delta}(u')^2\,dx + G_c \tag{1.87}
\]
For the final construction of the recovery sequence, notice that we are free to assume that #(J(u)) is finite, and we choose δ ≤ inf{|x_i − x_j|/2 : x_i, x_j ∈ J(u), x_i ≠ x_j}. For each x_i ∈ J(u), we define I_i = (x_i − δ, x_i + δ) and use the construction above on each I_i, whereas on I \ ∪I_i we choose u_ℓ = u and α_ℓ linear and continuous at the end points of the I_i. With this construction, it is easy to see that α_ℓ → 0 uniformly in I \ ∪I_i and that
\[
\limsup_{\ell\to0}\int_{I\setminus\cup I_i}\frac{1}{2}a(\alpha_\ell)(u_\ell')^2\,dx \le \int_I (u')^2\,dx, \tag{1.88}
\]
and
\[
\limsup_{\ell\to0}\int_{I\setminus\cup I_i}\left(\frac{w(\alpha_\ell)}{\ell} + \ell(\alpha_\ell')^2\right)dx = 0. \tag{1.89}
\]
Altogether, we obtain the upper estimate of the Γ-limit for limit pairs of finite energy, i.e.
\[
\limsup_{\ell\to0} F_\ell(u_\ell, \alpha_\ell) \le F(u, \Omega). \tag{1.90}
\]
Extension to higher dimensions
To extend the Γ-limit to higher dimensions, the lower inequality part is technical and is not developed here. The idea is to use Fubini's theorem to build the higher-dimensional estimate from 1d slices of the domain and to use the lower semi-continuity on each section, see [START_REF] Ambrosio | Existence theory for a new class of variational problems[END_REF][START_REF] Braides | Approximation of Free-Discontinuity Problems[END_REF]. The recovery sequence is more intuitive: a possible construction is to consider a smooth Γ ⊂ Ω and to compute the distance to the crack J(u),
d(x) = dist(x, J(u)) (1.91)
and let the volume of the region bounded by p-level set of d, such that,
s(y) = |{x ∈ R n ; d(x) ≤ y}| (1.92)
Figure 1.2: Iso distance to the crack J(u) for the level set b and δ
Following [START_REF] Evans | Measure theory and fine properties of functions[END_REF][START_REF] Evans | On the partial regularity of energy-minimizing, areapreserving maps[END_REF], the co-area formula from Federer [START_REF] Federer | Geometric measure theory[END_REF] is,
Ω f (x) ∇g(x) dx = +∞ -∞ g -1 (y) f (x)dH n-1 (x) dy (1.93)
In particular, take g(x) = d(x), which is 1-Lipschitz, i.e. |∇d(x)| = 1 almost everywhere. We get, for the volume s(y),
\[
s(y) = \int_{\{d(x)\le y\}} |\nabla d(x)|\,dx = \int_0^y \mathcal{H}^{n-1}(\{x;\ d(x)=t\})\,dt \tag{1.94}
\]
and
s (y) = H n-1 ({x; d(x) = y}) (1.95)
In particular,
s (0) = lim y→0 s(y) y = 2H n-1 (J(u)) (1.96)
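Property (1.96) can be checked numerically on a simple configuration. The sketch below (a hypothetical 2d example on a Cartesian grid, not from the thesis) takes J(u) to be a straight segment of unit length and verifies that s(y)/y approaches twice its length as y → 0.

import numpy as np

n = 2001
xs = np.linspace(-1.0, 2.0, n)
X, Y = np.meshgrid(xs, xs, indexing="ij")
cell = (xs[1] - xs[0]) ** 2
# Distance to the segment {(t, 0), 0 <= t <= 1}.
T = np.clip(X, 0.0, 1.0)
d = np.sqrt((X - T) ** 2 + Y ** 2)
for y in (0.05, 0.02, 0.01):
    s = np.sum(d <= y) * cell          # area of the tube {d(x) <= y}
    print(f"y = {y:.3f}   s(y)/y = {s / y:.3f}   (expected -> 2)")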
Consider the damage,
α (d(x)) := 1 if d(x) ≤ b γ (d(x)) if b ≤ d(x) ≤ δ 0 otherwise (1.97)
The surface energy term is,
\[
\int_\Omega\left(\frac{w(\alpha_\ell)}{\ell} + \ell|\nabla\alpha_\ell|^2\right)dx = \frac{1}{\ell}\int_{\{d(x)\le b_\ell\}}dx + \int_{\{b_\ell\le d(x)\le\delta\}}\left(\frac{w(\alpha_\ell(d(x)))}{\ell} + \ell|\nabla\alpha_\ell(d(x))|^2\right)dx \tag{1.98}
\]
The first integral term is the volume bounded by the iso-contour at distance b_ℓ from the crack, i.e.
\[
s(b_\ell) = \int_{\{d(x)\le b_\ell\}}dx = \int_0^{b_\ell}\mathcal{H}^{n-1}(\{x;\ d(x)=y\})\,dy, \tag{1.99}
\]
and the remaining surface term, after rescaling the distance by ℓ, is bounded by
\[
\int_0^{\delta/\ell}\Big(w(\alpha_\ell(x')) + \alpha_\ell'(x')^2\Big)\, s'(\ell x')\,dx'. \tag{1.102}
\]
Passing to the limit ℓ → 0 and using Remark 1 on the invariance of the optimal profile, we get
\[
\limsup_{\ell\to0}\frac{G_c}{4c_w}\int_\Omega\left(\frac{w(\alpha_\ell(x))}{\ell} + \ell|\nabla\alpha_\ell(x)|^2\right)dx \le G_c\,\mathcal{H}^{n-1}(J(u)) \tag{1.103}
\]
For the bulk term, consider the displacement,
u (x) := d(x) b u(x) if d(x) ≤ b u(x) otherwise (1.104)
Similarly to the 1d, one gets,
lim sup →0 Ω 1 2 a(α )(∇u ) 2 dx ≤ Ω 1 2 (∇u ) 2 dx (1.105)
Therefore,
\[
\limsup_{\ell\to0}\left(\int_\Omega\frac{1}{2}a(\alpha_\ell)|\nabla u_\ell|^2\,dx + \frac{G_c}{4c_w}\int_\Omega\left(\frac{w(\alpha_\ell)}{\ell} + \ell|\nabla\alpha_\ell|^2\right)dx\right) \le \int_\Omega\frac{1}{2}|\nabla u|^2\,dx + G_c\,\mathcal{H}^{n-1}(J(u)) \tag{1.106}
\]
Numerical implementation
In a view to numerically implement gradient damage models, it is common to consider time and space discretization. Let's first focus on the time-discrete evolution, by considering a time interval [0, T ] subdivided into (N + 1) steps, such that,
0 = t 0 < t 1 < • • • < t i-1 < t i < • • • < t N = T .
At any step i, the sets of admissible displacement and damage fields C_i and D_i are
\[
C_i := \left\{u \in H^1(\Omega) : u = \bar u_i \text{ on } \partial_D\Omega\right\},\qquad D_i := \left\{\beta\in H^1(\Omega) : \alpha_{i-1}(x) \le \beta\le 1,\ \forall x\in\Omega\right\}. \tag{1.107}
\]
For any i, find (u_i, α_i) ∈ (C_i, D_i) that satisfies the discrete evolution by local minimization if the following hold:
i. Initial condition: α_{t_0} = α_0 and u_{t_0} = u_0.
ii. For some h_i > 0, find (u_i, α_i) ∈ (C_i, D_i) such that, for all (v, β) with ‖(v, β) − (u_i, α_i)‖ ≤ h_i,
\[
E_\ell(u_i, \alpha_i) \le E_\ell(v, \beta) \tag{1.108}
\]
where,
E (u, α) = Ω 1 2 a(α)Ae(u) : e(u) dx - ∂ N Ω g(t) • u dH n-1 + G c 4c w Ω w(α) + |∇α| 2 dx (1.109)
One observes that our time-discrete evolution does not enforce energy balance. Since a(α) and w(α) are convex, the total energy E_ℓ(u, α) is separately convex with respect to u and to α, but it is not jointly convex. Hence, the proposed alternate minimization algorithm is guaranteed to converge to a critical point of the energy satisfying the irreversibility condition [START_REF] Bourdin | Numerical implementation of a variational formulation of quasi-static brittle fracture[END_REF][START_REF] Burke | An adaptive finite element approximation of a variational model of brittle fracture[END_REF]. The idea is that, for each time step t_i, we minimize the energy with respect to any kinematically admissible u for a given α, then fix u and minimize E_ℓ(u, α) with respect to α subject to the irreversibility constraint α_i ≥ α_{i−1}, and repeat the procedure until the variation of the damage is small. This gives Algorithm 1, where δ_α is a fixed tolerance parameter.
For the space discretization of E_ℓ(u, α), we use the finite element method with linear Lagrange elements for u and α. The elastic problem is solved with preconditioned conjugate gradient solvers, and the constrained minimization with respect to the damage is implemented using the variational inequality solvers provided by PETSc [START_REF] Balay | PETSc Web page[END_REF][START_REF] Balay | PETSc users manual[END_REF][START_REF] Balay | Efficient management of parallelism in object oriented numerical software libraries[END_REF]. All computations were performed using the open source code mef90.
Due to the non-convexity of E_ℓ, solutions satisfying irreversibility and stationarity might not be unique. Among the remaining solutions, a selection study can be performed: for instance, looking at solutions which satisfy the energy balance, or selecting displacement and damage fields which are continuous in time. Another way is to compare the result with all previous ones in order to avoid undesirable local minimizers (see [START_REF] Bourdin | The Variational Approach to Fracture[END_REF][START_REF] Bourdin | The variational formulation of brittle fracture: numerical implementation and extensions[END_REF] for more details on the backtracking idea). This method selects global minimizers from the set of computed solutions.
Algorithm 1 (alternate minimization at time step t_i):
1: Let j = 0 and α_0 := α_{i−1}
2: repeat
3:   Compute the equilibrium: u_{j+1} := argmin_{u ∈ C_i} E_ℓ(u, α_j)
4:   Compute the damage: α_{j+1} := argmin_{α ∈ D_i, α ≥ α_{i−1}} E_ℓ(u_{j+1}, α)
5:   j := j + 1
6: until ‖α_j − α_{j−1}‖_{L^∞} ≤ δ_α
7: Set u_i := u_j and α_i := α_j
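For illustration, a minimal one-dimensional transcription of Algorithm 1 is sketched below in Python (finite differences, the AT1 model, and SciPy's bound-constrained L-BFGS-B for the damage step). The material values, the residual stiffness, and the small toughness defect used to trigger localization are arbitrary choices, and this stand-in is not the finite element, PETSc and mef90 implementation used in the thesis.

import numpy as np
from scipy.optimize import minimize

E, Gc, ell, L = 1.0, 1.0, 0.05, 1.0
eta = 1e-6                                   # residual stiffness
N = 200; h = L / N
x = np.linspace(0.0, L, N + 1)
pref = 3.0 * Gc / 8.0 * np.ones(N)           # Gc/(4 c_w), c_w = 2/3 for AT1, per element
pref[N // 2] *= 0.95                         # small toughness defect (triggers localization)

def solve_u(alpha, t):
    """Equilibrium step: minimize the elastic energy at fixed damage (linear solve)."""
    am = 0.5 * (alpha[:-1] + alpha[1:])
    k = E * ((1.0 - am) ** 2 + eta) / h
    K = np.zeros((N + 1, N + 1)); idx = np.arange(N)
    K[idx, idx] += k; K[idx + 1, idx + 1] += k
    K[idx, idx + 1] -= k; K[idx + 1, idx] -= k
    u = np.zeros(N + 1); u[-1] = t * L
    u[1:-1] = np.linalg.solve(K[1:-1, 1:-1], -K[1:-1, :] @ u)
    return u

def damage_energy(alpha, u):
    """Total energy and its gradient with respect to the nodal damage (AT1)."""
    e = np.diff(u) / h
    am = 0.5 * (alpha[:-1] + alpha[1:]); da = np.diff(alpha) / h
    energy = np.sum((0.5 * E * ((1 - am) ** 2 + eta) * e ** 2
                     + pref * (am / ell + ell * da ** 2)) * h)
    g_mid = (-E * (1 - am) * e ** 2 + pref / ell) * h * 0.5
    g = np.zeros_like(alpha)
    g[:-1] += g_mid; g[1:] += g_mid
    g[:-1] -= 2.0 * pref * ell * da
    g[1:] += 2.0 * pref * ell * da
    return energy, g

alpha = np.zeros(N + 1)
for t in np.linspace(0.0, 4.0, 41):
    alpha_prev = alpha.copy()
    for _ in range(200):                     # alternate minimization loop
        u = solve_u(alpha, t)
        res = minimize(damage_energy, alpha, args=(u,), jac=True, method="L-BFGS-B",
                       bounds=[(a0, 1.0) for a0 in alpha_prev])   # irreversibility
        converged = np.max(np.abs(res.x - alpha)) < 1e-4
        alpha = res.x
        if converged:
            break
    sigma = E * ((1 - 0.5 * (alpha[:-1] + alpha[1:])) ** 2 + eta) * np.diff(u) / h
    print(f"t = {t:.2f}  max(alpha) = {alpha.max():.3f}  stress = {sigma.mean():.3f}")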
Conclusion
The strength of phase-field models of brittle fracture is the variational structure of the model, conceived as an approximation of Griffith, and its evolution based on three principles: irreversibility of the damage, stability, and energy balance of the total energy. A fundamental property of the model is the maximum admissible stress, illustrated in the one-dimensional example. This also constrains the thickness of the damage band, since both are governed by ℓ. Numerically, the fracture path is obtained by alternately searching for the damage field which decreases the total energy and for the elastic solution of the problem.
Appendix A
Theorem 2 (Ambrosio's compactness and lower semicontinuity on SBV)
Let (f_n)_n be a sequence of functions in SBV(Ω) such that there exist non-negative constants C_1, C_2 and C_3 such that
i. f_n is uniformly bounded in L^∞(Ω),
ii. ∇f_n is uniformly bounded in L^q(Ω; R^n) with q > 1,
iii. H^{n−1}(J(f_n)) is uniformly bounded.
Then there exist f ∈ SBV(Ω) and a subsequence f_{k(n)} such that
i. f_{k(n)} → f strongly in L^p(Ω) for all p < ∞,
ii. ∇f_{k(n)} → ∇f weakly in L^q(Ω; R^n),
iii. H^{n−1}(J(f)) ≤ lim inf_n H^{n−1}(J(f_n)).

Theorem 3 (Fundamental theorem of Γ-convergence)
If E Γ -converges to E, u is a minimizer of E , and (u ) is compact in X, then there exists u ∈ X such that u is a minimizer of E, u → u, and E (u ) → E(u).
Definition 6 (Γ-convergence)
Let E_ℓ : X → R and E : X → R, where X is a topological space. Then E_ℓ Γ-converges to E if the following two conditions hold for any u ∈ X:
i. Lower semi-continuity inequality: for every sequence (u_ℓ) ∈ X such that u_ℓ → u,
E(u) ≤ lim inf_{ℓ→0} E_ℓ(u_ℓ);
ii. Existence of a recovery sequence: there exists a sequence (u_ℓ) ∈ X with u_ℓ → u such that
lim sup_{ℓ→0} E_ℓ(u_ℓ) ≤ E(u).
Chapter 2
Crack nucleation in variational phase-field models of brittle fracture
Despite its many successes, Griffith's theory of brittle fracture [START_REF] Griffith | The phenomena of rupture and flow in solids[END_REF] and its heir, Linear Elastic Fracture Mechanics (LEFM), still faces many challenges. In order to identify a crack path, additional branching criteria whose choice is still unsettled have to be considered. Accounting for scale effects in LEFM is also challenging, as illustrated by the following example: consider a reference structure of unit size rescaled by a factor L.
The critical loading at the onset of fracture scales then as 1/ √ L, leading to a infinite nucleation load as the structure size approaches 0, which is inconsistent with experimental observation for small structures [START_REF] Bažant | Scaling of quasibrittle fracture: asymptotic analysis[END_REF][START_REF] Issa | Size effects in concrete fracture: Part I, experimental setup and observations[END_REF][START_REF] Chudnovsky | Slow crack growth, its modeling and crack-layer approach: A review[END_REF].
It is well accepted that this discrepancy is due to the lack of a critical stress (or a critical lengthscale) in Griffith's theory. Yet, augmenting LEFM to account for a critical stress is very challenging. In essence, the idea of material strength is incompatible with the concept of elastic energy release rate near stress singularity, the pillar of Griffith-like theories, as it would imply crack nucleation under an infinitesimal loading. Furthermore, a nucleation criterion based solely on pointwise maximum stress will be unable to handle crack formation in a body subject to a uniform stress distribution.
Many approaches have been proposed to provide models capable of addressing the aforementioned issues. Some propose to stray from Griffith fundamental hypotheses by incorporating cohesive fracture energies [START_REF] Ortiz | Finite-deformation irreversible cohesive elements for three-dimensional crack-propagation analysis[END_REF][START_REF] Del Piero | A diffuse cohesive energy approach to fracture and plasticity: the one-dimensional case[END_REF][START_REF] De Borst | Cohesive-zone models, higher-order continuum theories and reliability methods for computational failure analysis[END_REF][START_REF] Charlotte | Initiation of cracks with cohesive force models: a variational approach[END_REF] or material non-linearities [START_REF] Gou | Modeling fracture in the context of a strain-limiting theory of elasticity: A single plane-strain crack[END_REF]. Others have proposed dual-criteria involving both elastic energy release rate and material strength such as [START_REF] Leguillon | Strength or toughness? A criterion for crack onset at a notch[END_REF], for instance. Models based on the peridynamics theory [START_REF] Silling | Reformulation of elasticity theory for discontinuities and long-range forces[END_REF] may present an alternative way to handle these issues, but to our knowledge, they are still falling short of providing robust quantitative predictions at the structural scale.
Francfort and Marigo [START_REF] Francfort | Revisiting brittle fracture as an energy minimization problem[END_REF] set to devise a formulation of brittle fracture based solely on Griffith's idea of competition between elastic and fracture energy, yet capable of handling the issues of crack path and crack nucleation. However, as already pointed-out in [START_REF] Francfort | Revisiting brittle fracture as an energy minimization problem[END_REF], their model inherits a fundamental limitation of the Griffith theory and LEFM: the lack of an internal length scale and of maximum allowable stresses.
Amongst many numerical methods originally devised for the numerical implemen-tation of the Francfort-Marigo model [START_REF] Bourdin | Implementation of an adaptive finite-element approximation of the Mumford-Shah functional[END_REF][START_REF] Negri | Numerical minimization of the Mumford-Shah functional[END_REF][START_REF] Fraternali | Free discontinuity finite element models in two-dimensions for inplane crack problems[END_REF][START_REF] Schmidt | Eigenfracture: An eigendeformation approach to variational fracture[END_REF], Ambrosio-Tortorelli regularizations [START_REF] Ambrosio | Approximation of functionals depending on jumps by elliptic functionals via Γ-convergence[END_REF][START_REF] Ambrosio | On the approximation of free discontinuity problems[END_REF], originally introduced in [START_REF] Bourdin | Numerical experiments in revisited brittle fracture[END_REF], have become ubiquitous. They are known nowadays as phase-field models of fracture, and share several common points with the approaches coming from Ginzburg-Landau models for phase-transition [START_REF] Karma | Phase-field model of mode III dynamic fracture[END_REF]. They have been applied to a wide variety of fracture problems including fracture of ferro-magnetic and piezo-electric materials [START_REF] Abdollahi | Phase-field modeling of crack propagation in piezoelectric and ferroelectric materials with different electromechanical crack conditions[END_REF][START_REF] Wilson | A phase-field model for fracture in piezoelectric ceramics[END_REF], thermal and drying cracks [START_REF] Maurini | Crack patterns obtained by unidirectional drying of a colloidal suspension in a capillary tube: experiments and numerical simulations using a two-dimensional variational approach[END_REF][START_REF] Bourdin | Morphogenesis and propagation of complex cracks induced by thermal shocks[END_REF], or hydraulic fracturing [START_REF] Bourdin | A variational approach to the numerical simulation of hydraulic fracturing[END_REF][START_REF] Wheeler | An augmented-lagrangian method for the phase-field approach for pressurized fractures[END_REF][START_REF] Chukwudozie | Application of the Variational Fracture Model to Hydraulic Fracturing in Poroelastic Media[END_REF][START_REF] Wilson | Phase-field modeling of hydraulic fracture[END_REF] to name a few. They have been expended to account for dynamic effects [START_REF] Larsen | Existence of solutions to a regularized model of dynamic fracture[END_REF][START_REF] Bourdin | A time-discrete model for dynamic fracture based on crack regularization[END_REF][START_REF] Borden | A phase-field description of dynamic brittle fracture[END_REF][START_REF] Hofacker | A phase field model of dynamic fracture: Robust field updates for the analysis of complex crack patterns[END_REF], ductile behavior [START_REF] Alessi | Gradient damage models coupled with plasticity and nucleation of cohesive cracks[END_REF][START_REF] Miehe | Phase field modeling of fracture in multi-physics problems. Part II. 
coupled brittle-to-ductile failure criteria and crack propagation in thermo-elastic-plastic solids[END_REF][START_REF] Ambati | Phase-field modeling of ductile fracture[END_REF], cohesive effects [START_REF] Crismale | Viscous approximation of quasistatic evolutions for a coupled elastoplastic-damage model[END_REF][START_REF] Conti | Phase field approximation of cohesive fracture models[END_REF][START_REF] Freddi | Numerical insight of a variational smeared approach to cohesive fracture[END_REF], large deformations [START_REF] Ambati | A phase-field model for ductile fracture at finite strains and its experimental verification[END_REF][START_REF] Miehe | Phase field modeling of ductile fracture at finite strains. a variational gradient-extended plasticity-damage theory[END_REF][START_REF] Borden | A phasefield formulation for fracture in ductile materials: Finite deformation balance law derivation, plastic degradation, and stress triaxiality effects[END_REF], or anisotropy [START_REF] Li | Phase-field modeling and simulation of fracture in brittle materials with strongly anisotropic surface energy[END_REF], for instance.
Although phase-field models were originally conceived as approximations of Francfort and Marigo's variational approach to fracture in the vanishing limit of their regularization parameter, a growing body of literature is concerned with their links with gradient damage models [START_REF] Frémond | Damage, gradient of damage and principle of virtual power[END_REF][START_REF] Lorentz | Analysis of non-local models through energetic formulations[END_REF]. In this setting, the regularization parameter is kept fixed and interpreted as a material's internal length [START_REF] Pham | The issues of the uniqueness and the stability of the homogeneous response in uniaxial tests with gradient damage models[END_REF][START_REF] Freddi | Regularized variational theories of fracture: A unified approach[END_REF][START_REF] Del | A variational approach to fracture and other inelastic phenomena[END_REF]. In particular, [START_REF] Pham | Approche variationnelle de l'endommagement : I. les concepts fondamentaux[END_REF][START_REF] Pham | Approche variationnelle de l'endommagement : Ii. les modèles à gradient[END_REF] proposed an evolution principle for an Ambrosio-Tortorelli like energy based on irreversibility, stability and energy balance, where the regularization parameter is kept fixed and interpreted as a material's internal length. This approach, which we refer to as variational phase-field models, introduces a critical stress proportional to 1/ √ . As observed in [START_REF] Pham | The issues of the uniqueness and the stability of the homogeneous response in uniaxial tests with gradient damage models[END_REF][START_REF] Bourdin | Morphogenesis and propagation of complex cracks induced by thermal shocks[END_REF][START_REF] Nguyen | On the choice of parameters in the phase field method for simulating crack initiation with experimental validation[END_REF], it can potentially reconcile stress and toughness criteria for crack nucleation, recover pertinent size effect at small and large length-scales, and provide a robust and relatively simple approach to model crack propagation in complex two-and three-dimensional settings. However, the few studies providing experimental verifications [START_REF] Pham | Experimental validation of a phase-field model for fracture[END_REF][START_REF] Nguyen | On the choice of parameters in the phase field method for simulating crack initiation with experimental validation[END_REF][START_REF] Bourdin | Morphogenesis and propagation of complex cracks induced by thermal shocks[END_REF] are still insufficient to fully support this conjecture.
The goal of this chapter is precisely to provide such evidences, focusing on nucleation and size-effects for mode-I cracks. We provide quantitative comparisons of nucleation loads near stress concentrations and singularities with published experimental results for a range of materials. We show that variational phase-field models can reconcile strength and toughness thresholds and account for scale effect at the structural and the material length-scale. In passing, we leverage the predictive power of our approach to propose a new way to measure a material's tensile strength from the nucleation load of a crack near a stress concentration or a weak singularity. In this study, we focus solely on the identification of the critical stress at the first crack nucleation event and are not concerned by the post-critical fracture behavior.
The chapter is organized as follows: in Section 2.1, we introduce variational phasefield models and recall some of their properties. Section 2.2 focuses on the links between stress singularities or concentrations and crack nucleation in these models. We provide validation and verification results for nucleation induced by stress singularities using Vshaped notches, and concentrations using U-notches. Section 2.3 is concerned with shape and size effects. We investigate the role of the internal length on nucleation near a defect, focusing on an elliptical cavity and a mode-I crack, and discussing scale effects at the material and structural length scales.
Variational phase-field models
We start by recalling some important properties of variational phase-field models, focussing first on their construction as approximation method of Francfort and Marigo's variational approach to fracture, then on their alternative formulation and interpretation as gradient-damage models.
Regularization of the Francfort-Marigo fracture energy
Consider a perfectly brittle material with Hooke's law A and critical elastic energy release rate G c occupying a region Ω ⊂ R n , subject to a time dependent boundary displacement ū(t) on a part ∂ D Ω of its boundary and stress-free on the remainder ∂ N Ω. In the variational approach to fracture, the quasi-static equilibrium displacement u i and crack set Γ i at a given discrete time step t i are given by the minimization problem (see also [START_REF] Bourdin | The variational approach to fracture[END_REF])
\[
(u_i, \Gamma_i) = \operatorname*{argmin}_{\substack{u = \bar u_i \text{ on } \partial_D\Omega\\ \Gamma\supset\Gamma_{i-1}}} E(u,\Gamma) := \int_{\Omega\setminus\Gamma}\frac{1}{2}\,Ae(u)\cdot e(u)\,dx + G_c\,\mathcal{H}^{n-1}\big(\Gamma\cap\overline{\Omega}\setminus\partial_N\Omega\big), \tag{2.1}
\]
where H n-1 (Γ) denotes the Hausdorff n -1-dimensional measure of the unknown crack Γ, i.e. its aggregate length in two dimensions or surface area in three dimensions, and e(u) := 1 2 (∇u + ∇ T u) denotes the symmetrized gradient of u. Because in (2.1) the crack geometry Γ is unknown, special numerical methods had to be crafted. Various approaches based for instance on adaptive or discontinuous finite elements were introduced [START_REF] Bourdin | Implementation of an adaptive finite-element approximation of the Mumford-Shah functional[END_REF][START_REF] Giacomini | A discontinuous finite element approximation of quasi-static growth of brittle fractures[END_REF][START_REF] Fraternali | Free discontinuity finite element models in two-dimensions for inplane crack problems[END_REF]. Variational phase-field methods, take their roots in Ambrosio and Tortorelli's regularization of the Mumford-Shah problem in image processing [START_REF] Ambrosio | Approximation of functionals depending on jumps by elliptic functionals via Γ-convergence[END_REF][START_REF] Ambrosio | On the approximation of free discontinuity problems[END_REF], adapted to brittle fracture in [START_REF] Bourdin | Numerical experiments in revisited brittle fracture[END_REF]. In this framework, a regularized energy E depending on a regularization length > 0 and a "phase-field" variable α taking its values in [0, 1] is introduced. A broad class of such functionals was introduced in [START_REF] Braides | Approximation of Free-Discontinuity Problems[END_REF]. They are
\[
E_\ell(u,\alpha) = \int_\Omega\frac{a(\alpha)+\eta_\ell}{2}\,Ae(u)\cdot e(u)\,dx + \frac{G_c}{4c_w}\int_\Omega\left(\frac{w(\alpha)}{\ell} + \ell|\nabla\alpha|^2\right)dx, \tag{2.2}
\]
where a and w are continuous monotonic functions such that a(0) = 1, a(1) = 0, w(0) = 0, and w(1) = 1, η_ℓ = o(ℓ), and c_w := ∫_0^1 √(w(s)) ds is a normalization parameter. The approximation of E by E_ℓ takes place within the framework of Γ-convergence (see [START_REF] Maso | An introduction to Γ-convergence[END_REF][START_REF] Braides | Gamma-convergence for Beginners[END_REF] for instance). More precisely, if E_ℓ Γ-converges to E, then the global minimizers of E_ℓ converge to those of E. The Γ-convergence of a broad class of energies, including the ones above, was achieved with various degrees of refinement going from static scalar elasticity to time-discrete and time-continuous quasi-static evolutions in linearized elasticity, and their finite element discretizations [START_REF] Bellettini | Discrete approximation of a free discontinuity problem[END_REF][START_REF] Bourdin | Image segmentation with a finite element method[END_REF][START_REF] Braides | Approximation of Free-Discontinuity Problems[END_REF][START_REF] Giacomini | A discontinuous finite element approximation of quasi-static growth of brittle fractures[END_REF][START_REF] Chambolle | An approximation result for special functions with bounded variations[END_REF][START_REF] Chambolle | Addendum to "An Approximation Result for Special Functions with Bounded Deformation[END_REF][START_REF] Giacomini | Ambrosio-Tortorelli approximation of quasi-static evolution of brittle fractures[END_REF][START_REF] Burke | An adaptive finite element approximation of a variational model of brittle fracture[END_REF][START_REF] Burke | An adaptive finite element approximation of a generalized Ambrosio-Tortorelli functional[END_REF][START_REF] Iurlano | A density result for gsbd and its application to the approximation of brittle fracture energies[END_REF].
Throughout this chapter, we focus on two specific models:
\[
E_\ell(u,\alpha) = \int_\Omega\frac{(1-\alpha)^2+\eta_\ell}{2}\,Ae(u)\cdot e(u)\,dx + \frac{G_c}{2}\int_\Omega\left(\frac{\alpha^2}{\ell} + \ell|\nabla\alpha|^2\right)dx, \tag{AT2}
\]
introduced in [START_REF] Ambrosio | On the approximation of free discontinuity problems[END_REF] for the Mumford-Shah problem and in [START_REF] Bourdin | Numerical experiments in revisited brittle fracture[END_REF] for brittle fracture, and
\[
E_\ell(u,\alpha) = \int_\Omega\frac{(1-\alpha)^2+\eta_\ell}{2}\,Ae(u)\cdot e(u)\,dx + \frac{3G_c}{8}\int_\Omega\left(\frac{\alpha}{\ell} + \ell|\nabla\alpha|^2\right)dx \tag{AT1}
\]
used in [START_REF] Bourdin | Morphogenesis and propagation of complex cracks induced by thermal shocks[END_REF].
The "surfing" problem introduced in [START_REF] Hossain | Effective toughness of heterogeneous media[END_REF] consists in applying a translating boundary displacement on ∂Ω given by ū(x, y) = ū_I(x − Vt, y), where ū_I denotes the asymptotic far-field displacement field associated with a mode-I crack along the x-axis with tip at (0, 0), V is a prescribed loading "velocity", and t a loading parameter ("time"). Figure 2.1 (left) reports the outcome of this test on a domain with an initial crack Γ_0 = [0, l_0] × {0} for several values of ℓ. The AT1 model is used, assuming plane stress conditions, and the mesh size h is adjusted so that ℓ/h = 5, keeping the "effective" numerical toughness G_eff := G_c (1 + h/(4c_w ℓ)) fixed (see [START_REF] Bourdin | The variational approach to fracture[END_REF]). The Poisson ratio is ν = 0.3, the Young's modulus is E = 1, the fracture toughness is G_c = 1.5, and the loading rate V = 4. As expected, after a transition stage, the crack length depends linearly on the loading parameter with slope 3.99, 4.00 and 4.01 for ℓ = 0.1, 0.05 and 0.025, respectively. The elastic energy release rate G, computed using the G-θ method [START_REF] Destuynder | Sur une interprétation mathématique de l'intégrale de Rice en théorie de la rupture fragile[END_REF][START_REF] Sicsic | From gradient damage laws to Griffith's theory of crack propagation[END_REF][START_REF] Li | Gradient damage modeling of brittle fracture in an explicit dynamics context[END_REF], is very close to G_eff. Even though Γ-convergence only mandates that the elastic energy release rate in the regularized energy converges to that of Griffith as ℓ → 0, we observe that as long as ℓ is "compatible" with the discretization size and domain geometry, its influence on crack propagation is insignificant. Similar observations were reported in [START_REF] Klinsmann | An assessment of the phase field formulation for crack growth[END_REF][START_REF] Zhang | Numerical evaluation of the phasefield model for brittle fracture with emphasis on the length scale[END_REF][START_REF] Pham | Experimental validation of a phase-field model for fracture[END_REF]. Figure 2.1 (right) repeats the same experiment for a crack propagating along a circular path. Here, the boundary displacement is given by Muskhelishvili's exact solution for a crack propagating in mode-I along a circular path [START_REF] Muskhelishvili | Some Basic Problems of the Mathematical Theory of Elasticity: Fundamental Equations, Plane Theory of Elasticity, Torsion, and Bending (translated from Russian)[END_REF]. The Young's modulus, fracture toughness, and loading rate are set to 1. Again, we see that even for a fixed regularization length, the crack obeys Griffith's criterion.
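For reference, a common closed form of the asymptotic mode-I displacement field used as the surfing boundary condition is sketched below (standard LEFM expressions; the plane-stress assumption and the numerical values are illustrative only).

import numpy as np

def mode_I_displacement(x, y, KI=1.0, E=1.0, nu=0.3, plane_stress=True):
    """Asymptotic mode-I displacement (u_x, u_y) at (x, y) relative to the crack tip."""
    mu = E / (2.0 * (1.0 + nu))
    kappa = (3.0 - nu) / (1.0 + nu) if plane_stress else 3.0 - 4.0 * nu
    r = np.hypot(x, y)
    th = np.arctan2(y, x)
    fac = KI / (2.0 * mu) * np.sqrt(r / (2.0 * np.pi))
    ux = fac * np.cos(th / 2.0) * (kappa - np.cos(th))
    uy = fac * np.sin(th / 2.0) * (kappa - np.cos(th))
    return ux, uy

# Example: boundary value at a point, for a crack tip currently located at (V*t, 0).
V, t = 4.0, 0.5
print(mode_I_displacement(1.0 - V * t, 0.3))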
When crack nucleation is involved, the picture is considerably different. Consider a one-dimensional domain of length L, fixed at one end and submitted to an applied displacement ū = eL at the other end. For the lack of an elastic singularity, LEFM is incapable of predicting crack nucleation here, and predicts a structure capable of supporting arbitrarily large loads without failing. A quick calculation shows that the global minimizer of (2.1) corresponds to the uncracked elastic solution if $e < e_c := \sqrt{2G_c/(EL)}$, while at e = e_c, a single crack nucleates at an arbitrary location (see [START_REF] Francfort | Revisiting brittle fracture as an energy minimization problem[END_REF][START_REF] Bourdin | The variational approach to fracture[END_REF]). The failure stress is $\sigma_c = \sqrt{2G_c E/L}$, which is consistent with the scaling law $\sigma_c = O(1/\sqrt{L})$ mentioned in the introduction. The uncracked configuration is always a stable local minimizer of (2.1), so that if local minimization of (2.1) is considered, nucleation never takes place. Just as before, one can argue that for the lack of a critical stress, an evolution governed by the generalized Griffith energy (2.1) does not properly account for nucleation and scaling laws.
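Spelled out, per unit cross-sectional area, the quick calculation compares the elastic energy of the uncracked bar with the surface energy of a single transverse crack (whose elastic energy vanishes):

$$\tfrac12 E e^2 L \;\ge\; G_c \quad\Longleftrightarrow\quad e \;\ge\; e_c := \sqrt{\frac{2G_c}{EL}}, \qquad \sigma_c = E\,e_c = \sqrt{\frac{2 G_c E}{L}}.$$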
When performing global minimization of (2.2) using the backtracking algorithm of [START_REF] Bourdin | Numerical implementation of a variational formulation of quasi-static brittle fracture[END_REF] for instance, a single crack nucleates at an ℓ-dependent load. As predicted by the Γ-convergence of $E_\ell$ to E, the critical stress at nucleation converges to $\sqrt{2G_c E/L}$ as ℓ → 0. Local minimization of (2.2) using the alternate minimization algorithm of [START_REF] Bourdin | Numerical experiments in revisited brittle fracture[END_REF], or presumably any gradient-based monotonically decreasing scheme, leads to the nucleation of a single crack at a critical load e_c, associated with a critical stress $\sigma_c = O\!\left(\sqrt{G_c E/\ell}\right)$, as described in [START_REF] Bourdin | Numerical implementation of a variational formulation of quasi-static brittle fracture[END_REF] for example. In the limit of vanishing ℓ, local and global minimization of (2.2) therefore inherit the weaknesses of Griffith-like theories when dealing with scaling properties and crack nucleation.
Variational phase-field models as gradient damage models
More recent works have sought to leverage the link between σ_c and ℓ. Ambrosio-Tortorelli functionals are then seen as the free energy of a gradient damage model [START_REF] Frémond | Damage, gradient of damage and principle of virtual power[END_REF][START_REF] Lorentz | Analysis of non-local models through energetic formulations[END_REF][START_REF] Benallal | Bifurcation and stability issues in gradient theories with softening[END_REF][START_REF] Pham | Approche variationnelle de l'endommagement : I. les concepts fondamentaux[END_REF][START_REF] Pham | Approche variationnelle de l'endommagement : Ii. les modèles à gradient[END_REF] where α plays the role of a scalar damage field. In [START_REF] Pham | The issues of the uniqueness and the stability of the homogeneous response in uniaxial tests with gradient damage models[END_REF], a thorough investigation of a one-dimensional tension problem led to interpreting ℓ as a material's internal or characteristic length, linked to the material's tensile strength. An overview of this latter approach, which is the one adopted in the rest of this work, is given below.
In all that follows, we focus on a time-discrete evolution but refer the reader to [START_REF] Pham | Approche variationnelle de l'endommagement : I. les concepts fondamentaux[END_REF][START_REF] Pham | Approche variationnelle de l'endommagement : Ii. les modèles à gradient[END_REF][START_REF] Marigo | An overview of the modelling of fracture by gradient damage models[END_REF] for a time-continuous formulation which can be justified within the framework of generalized standard materials [START_REF] Halphen | Sur les matériaux standard généralisés[END_REF] and rate-independent processes [START_REF] Mielke | Evolution of rate-independent systems[END_REF]. At any time step i > 1, the sets of admissible displacement and damage fields C i and D i , equipped with their natural H 1 norm, are
$$\mathcal{C}_i = \left\{ u \in H^1(\Omega) : u = \bar u_i \text{ on } \partial_D\Omega \right\}, \qquad \mathcal{D}_i = \left\{ \beta \in H^1(\Omega) : \alpha_{i-1}(x) \le \beta(x) \le 1,\ \forall x \in \Omega \right\},$$
where the constraint $\alpha_{i-1}(x) \le \beta(x) \le 1$ in the definition of $\mathcal{D}_i$ mandates that the damage be an increasing function of time, accounting for the irreversible nature of the damage process. The damage and displacement fields (u_i, α_i) are then local minimizers of the energy $E_\ell$, i.e. there exists h_i > 0 such that
$$\forall (v, \beta) \in \mathcal{C}_i \times \mathcal{D}_i \ \text{such that}\ \|(v, \beta) - (u_i, \alpha_i)\| \le h_i, \qquad E_\ell(u_i, \alpha_i) \le E_\ell(v, \beta), \qquad (2.3)$$
where ‖·‖ denotes the natural H¹ norm of $\mathcal{C}_i \times \mathcal{D}_i$. We briefly summarize the solution of the uniaxial tension of a homogeneous bar [START_REF] Pham | Gradient damage models and their use to approximate brittle fracture[END_REF][START_REF] Pham | The issues of the uniqueness and the stability of the homogeneous response in uniaxial tests with gradient damage models[END_REF], referring the reader to the recent review [START_REF] Marigo | An overview of the modelling of fracture by gradient damage models[END_REF] for further details: as one increases the applied strain, the damage field remains 0 and the stress field spatially uniform until the stress reaches the elastic limit
$$\sigma_e = \sqrt{\frac{G_c E}{c_w \ell}\,\frac{w'(0)}{2 s'(0)}}, \qquad (2.4)$$
where E is the Young modulus of the undamaged material, and s(α) = 1/a(α). If the applied displacement is increased further, the damage field increases but remains spatially constant. Stress hardening is observed until the peak stress σ_c, followed by stress softening. A stability analysis shows that for long enough domains (i.e. when L ≫ ℓ), the homogeneous solution is never stable in the stress-softening phase, and that a snap-back to a fully localized solution such that $\max_{x\in(0,L)} \alpha(x) = 1$ is observed. The profile of the localized solution and the width D of the localization band can be derived explicitly from the functions a and w. With the choice of normalization of (2.2), the surface energy associated with the fully localized solution is exactly G_c and its elastic energy is 0, so that the overall response of the bar is that of a brittle material with toughness G_c and strength σ_c.
Knowing the material's toughness G_c and the Young's modulus E, one can then adjust ℓ in such a way that the peak stress σ_c matches the nominal material's strength.
Let us denote by
$$\ell_{ch} = \frac{G_c E'}{\sigma_c^2} = \frac{K_{Ic}^2}{\sigma_c^2}, \qquad (2.5)$$
the classical material's characteristic length (see [START_REF] Rice | The mechanics of earthquake rupture[END_REF][START_REF] Falk | A critical evaluation of cohesive zone models of dynamic fracture[END_REF], for instance), where E' = E in three dimensions and in plane stress, or E' = E/(1 − ν²) in plane strain, and $K_{Ic} = \sqrt{G_c E'}$ is the mode-I critical stress intensity factor. The identification above gives
$$\ell_1 := \frac{3}{8}\,\ell_{ch}; \qquad \ell_2 := \frac{27}{256}\,\ell_{ch}, \qquad (2.6)$$
for the AT₁ and AT₂ models, respectively. Table 2.1 summarizes the specific properties of the AT₁ and AT₂ models. The AT₁ model has some key conceptual and practical advantages over the AT₂ model used in previous works, which were leveraged in [START_REF] Bourdin | Morphogenesis and propagation of complex cracks induced by thermal shocks[END_REF] for instance: it has a non-zero elastic limit, preventing diffuse damage at small loadings; the width D of the localization band is finite, so that equivalence with the Griffith energy is obtained even for a finite value of ℓ, and not only in the limit ℓ → 0, as predicted by Γ-convergence [START_REF] Sicsic | From gradient damage laws to Griffith's theory of crack propagation[END_REF]; and by remaining quadratic in the α and u variables, its numerical implementation using the alternate minimizations originally introduced in [START_REF] Bourdin | Numerical experiments in revisited brittle fracture[END_REF] is very efficient.
Model | w(α) | a(α) | c_w | σ_e | σ_c | D | ℓ_ch
AT₁ | α | (1−α)² | 2/3 | √(3G_cE′/(8ℓ)) | √(3G_cE′/(8ℓ)) | 4ℓ | (8/3)ℓ
AT₂ | α² | (1−α)² | 1/2 | 0 | (3/16)√(3G_cE′/ℓ) | ∞ | (256/27)ℓ

Table 2.1: Properties of the gradient damage models considered in this work: the elastic limit σ_e, the material strength σ_c, the width of the damage band D, and the conventional material length ℓ_ch defined in (2.5). We use the classical convention E′ = E in three dimensions and in plane stress, and E′ = E/(1 − ν²) in plane strain.
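The entries of Table 2.1 follow from the homogeneous one-dimensional response. As an illustrative check (a sketch, not part of the simulation toolchain), the AT₂ peak stress and the identification ℓ₂ = (27/256)ℓ_ch can be recovered symbolically:

```python
import sympy as sp

e, alpha, E, Gc, ell = sp.symbols("e alpha E G_c ell", positive=True)

# AT2 model: a(alpha) = (1-alpha)^2, w(alpha) = alpha^2, c_w = 1/2, so the
# homogeneous (gradient-free) 1-D energy density reads
W = sp.Rational(1, 2) * (1 - alpha) ** 2 * E * e ** 2 + Gc / (2 * ell) * alpha ** 2

# damage criterion: stationarity of W with respect to alpha
alpha_star = sp.solve(sp.Eq(sp.diff(W, alpha), 0), alpha)[0]

# homogeneous stress response sigma(e) and its peak value
sigma = sp.simplify(sp.diff(W, e).subs(alpha, alpha_star))
e_c = [r for r in sp.solve(sp.Eq(sp.diff(sigma, e), 0), e) if r.is_positive][0]
sigma_c = sp.simplify(sigma.subs(e, e_c))

print(sigma_c)                                   # = (3*sqrt(3)/16) * sqrt(E*G_c/ell)
print(sp.simplify(ell / (Gc * E / sigma_c ** 2)))  # = 27/256, i.e. ell = (27/256) * ell_ch
```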
In all the numerical simulations presented below, the energy (2.2) is discretized using linear Lagrange finite elements, and minimization is performed by alternating minimization with respect to u and α. Minimization with respect to u is a simple linear problem solved using preconditioned conjugate gradients, while constrained minimization with respect to α is reformulated as a variational inequality and implemented using the variational inequality solvers provided by PETSc [START_REF] Balay | Efficient management of parallelism in object oriented numerical software libraries[END_REF][START_REF] Balay | PETSc users manual[END_REF][START_REF] Balay | PETSc Web page[END_REF]. All computations were performed using the open source implementations mef90 and gradient-damage.
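A minimal one-dimensional illustration of the alternate minimization loop is sketched below; the grid size, loading program, tolerances and the generic bound-constrained solver are illustrative stand-ins for the actual finite element implementation:

```python
import numpy as np
from scipy.optimize import minimize

# Sketch: AT1 functional on a bar (0, L) under an imposed end displacement t*L.
E, Gc, ell, L = 1.0, 1.0, 0.05, 1.0
n, eta = 61, 1e-6
x = np.linspace(0.0, L, n)
h = x[1] - x[0]

def energy(u, a):
    du, da = np.diff(u) / h, np.diff(a) / h
    am = 0.5 * (a[:-1] + a[1:])                         # element-wise damage
    elastic = 0.5 * ((1.0 - am) ** 2 + eta) * E * du ** 2
    surface = 3.0 * Gc / 8.0 * (am / ell + ell * da ** 2)
    return h * np.sum(elastic + surface)

def solve_u(a, t):
    # For fixed damage a 1-D bar carries a uniform stress, so the
    # displacement subproblem can be solved in closed form.
    am = 0.5 * (a[:-1] + a[1:])
    k = ((1.0 - am) ** 2 + eta) * E                     # element stiffness
    sigma = t * L / (h * np.sum(1.0 / k))               # end displacement t*L
    return np.concatenate(([0.0], np.cumsum(sigma / k * h)))

def solve_a(u, a_lb, a0):
    # Bound-constrained damage subproblem, with irreversibility a >= a_lb.
    return minimize(lambda a: energy(u, a), a0, method="L-BFGS-B",
                    bounds=[(lo, 1.0) for lo in a_lb]).x

alpha = np.zeros(n)
for t in np.linspace(0.0, 3.0, 31):                     # quasi-static loading
    alpha_prev = alpha.copy()                           # irreversibility bound
    for _ in range(50):                                 # alternate minimization
        u = solve_u(alpha, t)
        alpha_new = solve_a(u, alpha_prev, alpha)
        converged = np.max(np.abs(alpha_new - alpha)) < 1e-4
        alpha = alpha_new
        if converged:
            break
print("max damage at final load:", alpha.max())
```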
Effect of stress concentrations
The discussion above suggests that variational phase-field models, as presented in Section 2.1.2, can account for strength and toughness criteria simultaneously, on an idealized geometry. We propose to investigate this claim further by focusing on more general geometries: a V-shaped notch to illustrate nucleation near stress singularities and a U-shaped notch for stress concentrations. There is a wealth of experimental literature on crack initiation in such geometries using three-point bending (TPB), four-point bending (FPB), single or double edge notch tension (SENT and DENT), allowing us to provide qualitative validation and verification simulations of the critical load at nucleation.
Initiation near a weak stress singularity: the V-notch
Consider a V-shaped notch in a linear elastic isotropic homogeneous material. Let (r, θ) be the polar coordinate system emanating from the notch tip with θ = 0 corresponding to the notch symmetry axis, shown on Figure 2.2(left). Assuming that the notch lips Γ⁺ ∪ Γ⁻ are stress-free, the mode-I component of the singular part of the stress field in plane strain is given in [START_REF] Leguillon | Computation of Singular Solutions in Elliptic Problems and Elasticity[END_REF]:
$$\sigma_{\theta\theta} = k\, r^{\lambda-1} F(\theta), \qquad \sigma_{rr} = k\, r^{\lambda-1}\, \frac{F''(\theta) + (\lambda+1)F(\theta)}{\lambda(\lambda+1)}, \qquad \sigma_{r\theta} = -k\, r^{\lambda-1}\, \frac{F'(\theta)}{\lambda+1}, \qquad (2.7)$$
where
$$F(\theta) = (2\pi)^{\lambda-1}\, \frac{\cos((1+\lambda)\theta) - f(\lambda, \omega)\cos((1-\lambda)\theta)}{1 - f(\lambda, \omega)}, \qquad (2.8)$$
and
$$f(\lambda, \omega) = \frac{(1+\lambda)\,\sin((1+\lambda)(\pi-\omega))}{(1-\lambda)\,\sin((1-\lambda)(\pi-\omega))}, \qquad (2.9)$$
and the exponent of the singularity λ ∈ [1/2, 1]. The amplitude k in (2.7) is the generalized stress intensity factor; with the normalization of F in (2.8), it satisfies $k = \lim_{r\to 0} \sigma_{\theta\theta}(r, 0)\,(2\pi r)^{1-\lambda}$, see (2.11).
Note that this definition differs from the one often encountered in the literature by a factor (2π)^{λ−1}, so that when ω = 0 (i.e. when the notch degenerates into a crack), k corresponds to the mode-I stress intensity factor, whereas when ω = π/2, k is the tangential stress. Note also that the physical dimension [k] ≡ N·m^{−(λ+1)} is not a constant but depends on the singularity power λ.
If ω < π/2, the stress field is singular at the notch tip, so that a nucleation criterion based on maximum pointwise stress will predict crack nucleation for any arbitrarily small loading. Yet, as long as ω > 0, the exponent of the singularity is sub-critical in the sense of Griffith, so that LEFM forbids crack nucleation, regardless of the magnitude of the loading.
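For a given opening angle, λ is the smallest root in [1/2, 1] of the classical Williams characteristic equation for a traction-free wedge of half-angle π − ω, namely sin(2λ(π − ω)) + λ sin(2(π − ω)) = 0 (quoted here as a standard result, i.e. an assumption rather than an equation of this chapter). A minimal numerical sketch:

```python
import numpy as np
from scipy.optimize import brentq

def singularity_exponent(omega):
    """Smallest root lambda in (1/2, 1) of the Williams characteristic
    equation sin(2*lam*abar) + lam*sin(2*abar) = 0, with abar = pi - omega,
    for a traction-free V-notch of half-opening angle omega (radians)."""
    abar = np.pi - omega
    f = lambda lam: np.sin(2.0 * lam * abar) + lam * np.sin(2.0 * abar)
    return brentq(f, 0.5, 1.0)

# lambda increases monotonically from 1/2 (crack, omega = 0)
# to 1 (flat edge, omega = 90 degrees)
for deg in (5.0, 15.0, 30.0, 45.0, 60.0, 75.0, 85.0):
    print(deg, singularity_exponent(np.radians(deg)))
```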
The associated displacement field, used below to load the Pac-Man geometry, is
$$\bar u_r = \frac{r^\lambda}{E}\, \frac{(1-\nu^2)\, F''(\theta) + (\lambda+1)\left[1 - \nu\lambda - \nu^2(\lambda+1)\right] F(\theta)}{\lambda^2 (\lambda+1)}, \qquad \bar u_\theta = \frac{r^\lambda}{E}\, \frac{(1-\nu^2)\, F'''(\theta) + \left[2(1+\nu)\lambda^2 + (\lambda+1)\left(1 - \nu\lambda - \nu^2(\lambda+1)\right)\right] F'(\theta)}{\lambda^2 (1-\lambda^2)}. \qquad (2.12)$$
In the mode-I Pac-Man test, we apply a boundary displacement on the outer edge of the domain ∂_D Ω of the form tū on both components of u, t being a monotonically increasing loading parameter. We performed series of numerical simulations varying the notch angle ω and the regularization parameter ℓ for the AT₁ and AT₂ models. Up to a rescaling, and without loss of generality, it is always possible to assume that E = 1 and G_c = 1. The Poisson ratio was set to ν = 0.3. We either prescribed the value of the damage field on Γ⁺ ∪ Γ⁻ to 1 (we refer to this as "damaged notch conditions") or let it free ("undamaged notch conditions"). The mesh size was kept at a fixed ratio of the internal length, h = ℓ/5.
For "small" enough loadings, we observe an elastic or nearly elastic phase during which the damage field remains 0 or near 0 away from an area of radius o( ) near the notch tip. Then, for some loading t = t c , we observed the initiation of a "large" add-crack associated with a sudden jump of the elastic and surface energy. Figure 2.3 shows a typical mesh, the damage field immediately before and after nucleation of a macroscopic crack and the energetic signature of the nucleation event.
Figure 2.4 shows that up to the critical loading, the generalized stress intensity factor can be accurately recovered by averaging σ_θθ(r, 0)/(2πr)^{λ−1} along the symmetry axis of the domain, provided that the region r ≤ 2ℓ be excluded.
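In practice this amounts to sampling the hoop stress ahead of the notch tip and rescaling it by the singular factor. A minimal sketch of this post-processing, with purely illustrative array names and values:

```python
import numpy as np

def generalized_sif(r, sigma_tt, lam, ell):
    """Estimate k by averaging sigma_tt(r, 0) * (2*pi*r)**(1 - lam)
    over sample points with r > 2*ell (the damaged zone is excluded)."""
    r, sigma_tt = np.asarray(r), np.asarray(sigma_tt)
    mask = r > 2.0 * ell
    return np.mean(sigma_tt[mask] * (2.0 * np.pi * r[mask]) ** (1.0 - lam))

# illustrative use with a synthetic field sigma_tt = k * (2*pi*r)**(lam - 1)
lam, ell, k_exact = 0.6, 0.01, 1.3
r = np.linspace(0.005, 0.5, 200)
sigma_tt = k_exact * (2.0 * np.pi * r) ** (lam - 1.0)
print(generalized_sif(r, sigma_tt, lam, ell))   # ~1.3
```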
Figure 2.5(left) shows the influence of the internal length ℓ on the critical generalized stress intensity factor for a sharp notch (ω = 0.18°) for the AT₁ and AT₂ models, using damaged and undamaged notch boundary conditions on the damage field. In this case, with the normalization (2.11), the generalized stress intensity factor coincides with the standard mode-I stress intensity factor. As suggested by the surfing experiment of Section 2.1, the critical value k_c^{AT} remains close to the material toughness K_Ic = √(G_c E′).
As reported previously in [START_REF] Klinsmann | An assessment of the phase field formulation for crack growth[END_REF] for instance, undamaged notch conditions lead to overestimating the critical load. We speculate that this is because with undamaged notch condition, the energy barrier associated with bifurcation from an undamaged (or partially damaged) state to a fully localized state needs to be overcome. As expected, this energy barrier is larger for the AT 1 model than for the AT 2 model for which large damaged areas ahead of the notch tip are observed.
For flat notches (2ω = 179.64°), as shown in Figure 2.5(right), the generalized stress intensity factor k takes the dimension of a stress, and crack nucleation is observed when k_c reaches the ℓ-dependent value σ_c given in Table 2.1, i.e. when σ_θθ|_{θ=0} = σ_c, as in the uniaxial tension problem. In this case the type of damage boundary condition on the notch seems to have little influence. For intermediate values of ω, we observe in Figure 2.6 that the critical generalized stress intensity factor varies smoothly and monotonically between its extreme values and remains very close to K_Ic for opening angles as high as 30°, which justifies the common numerical practice of replacing initial cracks with slightly open sharp notches and damaged notch boundary conditions. See Table 2.3 for numerical data.
[Figure 2.5: k_c/(K_Ic)_eff versus ℓ/ℓ_ch for ω = 0.18° (left) and ω = 89.82° (right); curves for AT₁-U, AT₁-D, AT₂-U and AT₂-D.]
Validation
For intermediate values 0 < 2ω < π, we focus on validation against experiments from the literature based on measurements of the generalized stress intensity factor at a V-shaped notch. Data from single edge notch tension (SENT) tests of soft annealed tool steel (AISI O1 at -50 °C) [START_REF] Strandberg | Fracture at V-notches with contained plasticity[END_REF], four point bending (FPB) experiments of PVC foams (Divinycell® H80, H100, H130, and H200) [START_REF] Grenestedt | On cracks emanating from wedges in expanded PVC foam[END_REF], and double edge notch tension (DENT) experiments of poly methyl methacrylate (PMMA) and Duraluminium [START_REF] Seweryn | Brittle fracture criterion for structures with sharp notches[END_REF], were compiled in [START_REF] Gómez | A fracture criterion for sharp V-notched samples[END_REF]. We performed a series of numerical simulations of Pac-Man tests using the material properties reported in [START_REF] Gómez | A fracture criterion for sharp V-notched samples[END_REF] and listed in Table 2.2. In all cases, the internal length was computed using (2.6). The simulations are compared with the experimental values reported in the literature for V-notches with varying aperture, using the definition (2.11) for k. For the AT₁ model, we observe a good agreement for the entire range of notch openings, as long as damaged notch conditions are used for small notch angles and undamaged notch conditions for large notch angles. For the AT₂ model, the same is true, but the agreement is not as good for large notch angles, due to the presence of large areas of distributed damage prior to crack nucleation.
[Table 2.2: material properties used in the simulations — Young's modulus E [MPa], Poisson ratio ν, toughness K_Ic [MPa√m], strength σ_c [MPa], and source — for each material considered, including Al₂O₃-7%ZrO₂.]
[Figure: critical generalized stress intensity factor k_c [MPa·m^{1−λ}] versus notch angle for the steel SENT experiments of Strandberg, compared with the AT₁-U, AT₁-D, AT₂-U and AT₂-D simulations.]
The numerical values of the critical generalized stress intensity factors for the AT₁ model and the experiments from the literature are included in Tables 2.4, 2.5, 2.6, and 2.7, using the convention of (2.11) for k. As suggested by Figure 2.5 and reported in the literature (see [START_REF] Klinsmann | An assessment of the phase field formulation for crack growth[END_REF]), nucleation is best captured if damaged notch boundary conditions are used for sharp notches and undamaged notch conditions for flat ones.
These examples strongly suggest that variational phase-field models of fracture are capable of predicting mode-I nucleation in stress- and toughness-dominated situations, as seen above, but also in the intermediate cases. Conceptually, toughness and strength (or equivalently internal length) could be measured by matching generalized stress intensity factors in experiments and simulations. When doing so, however, extreme care has to be exerted in order to ensure that the structural geometry has no impact on the measured generalized stress intensity factor. Similar experiments were performed in [START_REF] Dunn | Fracture initiation at sharp notches: correlation using critical stress intensities[END_REF][START_REF] Yosibash | Failure criteria for brittle elastic materials[END_REF] for three- and four-point bending experiments on PMMA and Aluminum oxide-Zirconia ceramic samples. While the authors kept the notch angle fixed, they performed three- and four-point bending experiments or varied the relative depth of the notch as a fraction of the sample height (see Figure 2.9).
Figure 2.9: Schematic of the geometry and loading in the four point bending experiments of [START_REF] Yosibash | Failure criteria for brittle elastic materials[END_REF] (left) and three point bending experiments of [START_REF] Dunn | Fracture initiation at sharp notches: correlation using critical stress intensities[END_REF] (right). The geometry of the three point bending experiment of [START_REF] Yosibash | Failure criteria for brittle elastic materials[END_REF] is identical to that of their four point bending, up to the location of the loading devices.
Figure 2.10 compares numerical values of the generalized stress intensity factor using the AT 1 model with experimental measurements, and the actual numerical values are included in Table 2.8 and 2.9.
For the Aluminum oxide-Zirconia ceramic, we observe that the absolute error between measurement and numerical prediction is typically well within the standard deviation of the experimental measurement. As expected, damaged notch boundary conditions lead to better approximation of k_c for small angles, and undamaged notches are better for larger values of ω.
[Figure 2.10: k_c [MPa·m^{1−λ}] for the Al₂O₃-7%ZrO₂ FPB and TPB experiments of Yosibash et al., compared with the AT₁-U and AT₁-D simulations.]
For the three point bending experiments in PMMA of [START_REF] Dunn | Fracture initiation at sharp notches: correlation using critical stress intensities[END_REF], later reported in [START_REF] Yosibash | Failure criteria for brittle elastic materials[END_REF], the experimental results suggest that the relative depth a/h of the notch has a significant impact on k_c. We therefore performed full-domain numerical simulations using the geometry and loading from the literature, and compared the critical force at which a crack nucleates in experiments and simulations. All computations were performed using the AT₁ model in plane strain with undamaged notch boundary conditions. Figure 2.11 compares the experimental and simulated values of the critical load at failure, listed in Tables 2.10 and 2.11.
These simulations show that a robust quantitative prediction of the failure load in geometries involving a broad range of stress singularity power can be achieved numerically with the AT 1 model, provided that the internal length be computed using (2.6), which involves only material properties. In other words, our approach is capable of predicting crack nucleation near a weak stress singularity using only elastic properties, fracture toughness G c , the tensile strength σ c , and the local energy minimization principle (2.3).
In light of Figure 2.11, we suggest that both toughness and tensile strength (or equivalently toughness and internal length) can be measured by matching full-domain or Pac-Man computations and experiments involving weak elastic singularities of various powers (TPB, FPB, SENT, DENT with varying notch depth or angle), instead of measuring σ_c directly. We expect that this approach will be much less sensitive to imperfections than the direct measurement of tensile strength, which is virtually impossible. Furthermore, since our criterion is not based on crack-tip asymptotics, using full-domain computations does not require that the experiments be specially designed to isolate the notch-tip singularity from structural-scale deformations.
[Figure 2.11: experimental versus simulated critical loads for the Al₂O₃-7%ZrO₂ FPB/TPB experiments of Yosibash et al. and the PMMA TPB experiments of Dunn et al. with relative notch depths a/h = 0.1, 0.2, 0.3, 0.4, compared with AT₁-U simulations.]
Initiation near a stress concentration: the U-notch
Crack nucleation at a U-shaped notch is another classical problem that has attracted a wealth of experimental and theoretical work. Consider a U-shaped notch of width ρ and length a ≫ ρ subject to a mode-I local loading (see Figure 2.12 for a description of the notch geometry in the context of a double edge notch tension sample). Assuming "smooth" loadings and applied boundary displacements, elliptic regularity mandates that the stress field be non-singular near the notch tip, provided that ρ > 0. Within the realm of Griffith fracture, this of course makes crack nucleation impossible. As is the case for the V-notch, introducing a nucleation principle based on a critical stress is also not satisfying, as it will lead to a nucleation load going to 0 as ρ → 0, instead of converging to that of an infinitely thin crack given by Griffith's criterion. There is a significant body of literature on "notch mechanics", seeking to address this problem by introducing stress-based criteria, generalized stress intensity factors, or intrinsic material lengths and cohesive zones. A survey of such models, compared with experiments on a wide range of brittle materials, is given in [START_REF] Gómez | Failure criteria for linear elastic materials with U-notches[END_REF].
In what follows, we study crack nucleation near stress concentrations in the AT₁ and AT₂ models and compare with the experiments gathered in [START_REF] Gómez | Failure criteria for linear elastic materials with U-notches[END_REF]. The core of their analysis consists in defining a generalized stress intensity factor
$$K_U = K_t\, \sigma^{c}_\infty\, \sqrt{\frac{\pi\rho}{4}}, \qquad (2.13)$$
where K_t, the notch stress concentration factor, is a parameter depending on the local (a and ρ) as well as global sample geometry and loading. Through a dimensional analysis, they studied the dependence of the critical generalized stress intensity factor at the onset of fracture on the notch geometry. We considered three notch geometries, with characteristic ratios of [...], 0.25, and 0.5, for which the value of K_t, computed in [START_REF] Lazzarin | A generalized stress intensity factor to be applied to rounded v-shaped notches[END_REF], is respectively 5.33, 7.26, and 11.12. In each case, we leveraged the symmetries of the problem by performing computations with the AT₁ and AT₂ models on a quarter of the domain, for a number of values of the internal length corresponding to ρ/ℓ_ch between 0.05 and 20. In all cases, undamaged notch boundary conditions were used.
In Figure 2.13, we overlay the outcome of our simulations over the experimental results gathered in [START_REF] Gómez | Failure criteria for linear elastic materials with U-notches[END_REF]. As for the V-notch, we observe that the AT₂ model performs poorly for weak stress concentrations (large values of ρ/ℓ_ch), as the lack of an elastic phase leads to the creation of large partially damaged areas. For sharp notches (ρ ≈ 0), our simulations concur with the experiments in predicting crack nucleation when K_U = K_Ic. As seen earlier, the AT₁ model slightly overestimates the critical load in this regime when undamaged notch boundary conditions are used. In light of Figure 2.13, we claim that numerical simulations based on the variational phase-field model AT₁ provide a simple way to predict crack nucleation that does not require the computation of a notch stress concentration factor K_t or the introduction of an ad-hoc criterion.
Size effects in variational phase-field models
Variational phase-field models are characterized by the intrinsic length ℓ, or ℓ_ch. In this section, we show that this length scale introduces physically pertinent scale effects, corroborating its interpretation as a material length. To this end, we study the nucleation of a crack in the uniaxial traction of a plate (−W, W) × (−L, L) with a centered elliptical hole with semi-axes a and ρa (0 ≤ ρ ≤ 1) along the x- and y-axes respectively, see Figure 2.14. In Section 2.3.1, we study the effect of the size and shape of the cavity, assumed to be small with respect to the dimensions of the plate (a ≪ W, L). In Section 2.3.2, we investigate material and structural size effects for a plate of finite width in the limit case of a perfect crack (ρ = 0). For a small hole (a ≪ W, L), up to a change of scale, the problem can be fully characterized by two dimensionless parameters: a/ℓ and ρ. For a linear elastic and isotropic material occupying an infinite domain, a closed-form expression of the stress field as a function of the hole size and aspect ratio is given in [START_REF] Inglis | Stresses in plates due to the presence of cracks and sharp corners[END_REF]. The stress is maximum at the points A = (a, 0) and A' = (−a, 0), where the radial stress is zero and the hoop stress is given by:
$$\sigma_{\max} = t\left(1 + \frac{2}{\rho}\right), \qquad (2.14)$$
t denoting the applied tensile stress along the upper and lower edges of the domain, i.e. the applied macroscopic stress at infinity. We denote by ū the corresponding displacement field for t = 1, which is given in [START_REF] Gao | A general solution of an infinite elastic plate with an elliptic hole under biaxial loading[END_REF].
As for the case of a perfect bar, (2.14) exposes a fundamental issue: if ρ > 0, the stress remains finite, so that Griffith-based theories will only predict crack nucleation if ρ = 0. In that case the limit load given by the Griffith's criterion for crack nucleation is
$$t = \sigma_G := \sqrt{\frac{G_c E'}{a\pi}}. \qquad (2.15)$$
However, as ρ → 0, the stress becomes singular so that the critical tensile stress σ c is exceeded for an infinitesimally small macroscopic stress t.
Following the findings of the previous sections, we focus our attention on the AT₁ model only, and present numerical simulations assuming a Poisson ratio ν = 0.3 and plane-stress conditions. We perform our simulations in a domain of finite size, here a disk of radius R centered around the defect. Along the outer perimeter of the domain, we apply a boundary displacement u = tū, where ū is as in [START_REF] Inglis | Stresses in plates due to the presence of cracks and sharp corners[END_REF], and we use the macroscopic stress t as the loading parameter. Assuming a symmetric solution, we perform our computations on a quarter domain. For the circular case ρ = 1, we use a reference mesh size h = ℓ_min/10, where ℓ_min is the smallest value of the internal length in the set of simulations. For ρ < 1, we selectively refine the element size near the expected nucleation site (see Figure 2.14-right). In order to minimize the effect of the finite size of the domain, we set R = 100a.
We performed numerical simulations varying the aspect ratio a/ℓ from 0.1 to 50 and the ellipticity ρ from 0.1 to 1.0. In each case, we started from an undamaged state and monotonically increased the loading. In all numerical simulations, we observe two critical loadings t_e and t_c, the elastic limit and the structural strength, respectively. For 0 ≤ t < t_e the solution is purely elastic, i.e. the damage field α remains identically 0 (see Figure 2.15-left). For t_e ≤ t < t_c, partial distributed damage is observed. The damage field takes its maximum value α_max < 1 near the point A (see Figure 2.15-center). At t = t_c, a fully developed crack nucleates, then propagates for t > t_c (see Figure 2.15-right). As for the Pac-Man problem, we identify the crack nucleation with a jump in surface energy, and focus on the loading at the onset of damage. From the one-dimensional problem of Section 2.1.2 and [START_REF] Pham | Gradient damage models and their use to approximate brittle fracture[END_REF][START_REF] Pham | The issues of the uniqueness and the stability of the homogeneous response in uniaxial tests with gradient damage models[END_REF], we expect damage nucleation to take place when the maximum stress σ_max reaches the nominal material strength σ_c = √(3G_cE′/(8ℓ)) (see Table 2.1), i.e. for a critical load
$$t_e = \frac{\rho}{2+\rho}\,\sigma_c = \frac{\rho}{2+\rho}\sqrt{\frac{3 G_c E'}{8\ell}}. \qquad (2.16)$$
Figure 2.16-left confirms this expectation by comparing the ratio t_e/σ_c to its expected value ρ/(2 + ρ) for ρ ranging from 0.1 to 1. Figure 2.16-right highlights the absence of size effect on the damage nucleation load by comparing t_e/σ_c for multiple values of a/ℓ while keeping ρ fixed at 0.1 and 1.
Figure 2.17 focuses on the crack nucleation load t_c, showing its dependence on the defect shape (left) and size (right). Figure 2.17-right shows the case of a circular hole (ρ = 1) and of an elongated ellipse, which can be identified with a crack (ρ = 0.1). It clearly highlights a scale effect including three regimes: i. For "small" holes (a ≪ ℓ), crack nucleation takes place when t = σ_c, as in the uniaxial traction of a perfect bar without the hole: the hole has virtually no effect on crack nucleation. In this regime the strength of a structure is completely determined by that of the constitutive material. Defects of this size do not reduce the structural strength and can be ignored at the macroscopic level.
ii. Holes with length of the order of the internal length (a = O(ℓ)) have a strong impact on the structural strength. In this regime the structural strength can be approximated by
$$\log(t_c/\sigma_c) = D \log(a/\ell) + c, \qquad (2.17)$$
where D is a dimensionless coefficient depending on the defect shape. For a circular hole (ρ = 1), we have D ≈ −1/3.
iii. When a ≫ ℓ, the structural failure is completely determined by the stress distribution surrounding the defect. We observe that for weak stress singularities (ρ ≈ 1), nucleation takes place when the maximum stress reaches the elastic limit σ_e, whereas the behavior as ρ → 0 is consistent with Griffith's criterion, i.e. the nucleation load scales as 1/√a.
Figure 2.17-right shows that the shape of the cavity has a significant influence on the critical load only in the latter regime, a ≫ ℓ. Indeed, for a/ℓ of the order of unity or smaller, the critical loads t_c for circular and highly elongated cavities are almost indistinguishable. This small sensitivity of the critical load to the shape is the result of the stress-smoothing effect of the damage field, which is characterized by a cut-off length of the order of ℓ. Figure 2.17-left shows the critical stress t_c at nucleation when varying the aspect ratio ρ for a/ℓ = 48, for which σ_G/σ_c = 2/15. As expected, the critical stress varies smoothly from the value σ_G (2.15) predicted by the Griffith theory for a highly elongated cavity identified with a perfect crack, to t_e (2.16) for circular holes, where the crack nucleates as soon as the maximum stress σ_max attains the elastic limit.
This series of experiments is consistent with the results of Section 2.2.2, showing that variational phase-field models are capable of simultaneously accounting for a critical elastic energy release rate and a critical stress. Furthermore, they illustrate how the internal length can be linked to a critical defect size, as the nucleation load for a vanishing defect of size less than ℓ approaches that of a flawless structure.
Competition between material and structural size effects
We can finally conclude the study of size effects in variational phase-field models by focusing on the competition between material and structural size effects. For that matter, we study the limit case ρ = 0 of a perfect crack of finite length 2a in a plate of finite width 2W (see Figure 2.18-left). Under the hypotheses of LEFM, the critical load upon which the crack propagates is
$$\sigma_G(a/\ell_{ch}, a/W) = \sqrt{\frac{G_c E' \cos\!\left(\frac{a\pi}{2W}\right)}{a\pi}} = \sigma_c\sqrt{\frac{1}{\pi}\,\frac{\ell_{ch}}{a}\cos\!\left(\frac{a\pi}{2W}\right)}, \qquad (2.18)$$
which reduces to (2.15) for a large plate (W/a → ∞). As before, we note that σ_G/σ_c → ∞ as a/ℓ_ch → 0, so that for any given load, the material's tensile strength is exceeded for short enough initial cracks. We performed series of numerical simulations using the AT₁ model on a quarter of the domain with W = 1, L = 4, ν = 0.3, ℓ = W/25, h = ℓ/20, and the initial crack's half-length a ranging from 0.025ℓ to 12.5ℓ (i.e. 0.001W to 0.5W). The pre-existing crack was modeled as a geometric feature and undamaged crack lip boundary conditions were prescribed. The loading was applied by imposing a uniform normal stress of amplitude t on its upper and lower edges. The results, summarized in Figure 2.18-right, are consistent with theories linking size effects to the strength of the material [START_REF] Bažant | Scaling of Structural Strength[END_REF]. When a ≫ ℓ, i.e. when the defect is large compared to the material's internal length, crack initiation is governed by Griffith's criterion (2.18). As noted earlier, the choice of undamaged notch boundary conditions on the damage field leads to slightly overestimating the nucleation load. Our numerical simulations reproduce the structural size effect predicted by LEFM when the crack length is comparable to the plate width W.
When a ≪ ℓ, we observe that the macroscopic structural strength is very close to the material's tensile strength. Again, below the material's internal length, defects have virtually no impact on the structural response. LEFM and Griffith-based models cannot account for this material size effect. These effects are introduced in variational phase-field models by the additional material parameter ℓ.
In the intermediate regime a = O(ℓ), we observe a smooth transition between strength and toughness criteria, where the tensile strength is never exceeded. When a ≫ ℓ, our numerical simulations are consistent with predictions from Linear Elastic Fracture Mechanics, shown as a dashed line in Figure 2.18, whereas when a ≪ ℓ, the structural effect of the small crack disappears, and nucleation takes place at or near the material's tensile strength, i.e. t_c/σ_c ≈ 1.
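The trend discussed above can be summarized by the simple strength–toughness envelope min(σ_c, σ_G); the sketch below evaluates (2.18) together with the strength cap, as an illustration only (E′ = G_c = 1 are assumed values, while W and ℓ are those quoted above), and is not a substitute for the phase-field computations:

```python
import numpy as np

def sigma_G(a, W, Gc, Eprime):
    """LEFM nucleation stress (2.18) for a centered crack of half-length a
    in a plate of half-width W."""
    return np.sqrt(Gc * Eprime * np.cos(a * np.pi / (2.0 * W)) / (a * np.pi))

Eprime, Gc, ell, W = 1.0, 1.0, 1.0 / 25.0, 1.0     # ell = W/25 as above
sigma_c = np.sqrt(3.0 * Gc * Eprime / (8.0 * ell))  # AT1 strength, Table 2.1
for a in (0.001, 0.01, 0.04, 0.1, 0.5):
    print(a, min(sigma_c, sigma_G(a, W, Gc, Eprime)))
# for a << ell the strength cap sigma_c governs; for a >> ell the Griffith
# branch sigma_G takes over, reproducing the trend of Figure 2.18
```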
Conclusion
In contrast with most of the literature on phase-field models of fracture, which focuses on validation and verification in the context of the propagation of "macroscopic" cracks [START_REF] Mesgarnejad | Validation simulations for the variational approach to fracture[END_REF][START_REF] Pham | Experimental validation of a phase-field model for fracture[END_REF], we have studied crack nucleation and initiation in multiple geometries. We confirmed observations reported elsewhere in the literature that, although they are mathematically equivalent in the limit ℓ → 0, damaged notch boundary conditions lead to a more accurate computation near strong stress singularities, whereas away from singularities, undamaged notch boundary conditions are to be used. Our numerical simulations also highlight the superiority of phase-field models such as AT₁, which exhibit an elastic phase in the one-dimensional tension problem, over those which do not (such as AT₂), when nucleation away from a strong singularity is involved. Our numerical simulations suggest that it is not possible to accurately account for crack nucleation near "weak" singularities using the AT₂ model. We infer that a strictly positive elastic limit σ_e is a required feature of a phase-field model that properly accounts for crack nucleation.
We have shown that, as suggested by the one-dimensional tension problem, the regularization parameter ℓ must be understood (up to a model-dependent multiplicative constant) as the material's characteristic or internal length ℓ_ch = G_c E′/σ_c², and linked to the material strength σ_c. With this adjustment, we show that variational phase-field models are capable of quantitative prediction of crack nucleation in a wide range of geometries, including three- and four-point bending with various types of notches, single and double edge notch tests, and a range of brittle materials, including steel and Duraluminium at low temperatures, PVC foams, PMMA, and several ceramics.
We recognize that measuring a material's tensile strength is difficult and sensitive to the presence of defect, so that formulas (2.6) may not be a practical way of computing a material's internal length. Instead, we propose to perform a series of experiments such as three point bending with varying notch depth, radius or angle, as we have demonstrated in Figure 2.11 that with a properly adjusted internal length, variational phase-field models are capable of predicting the nucleation load for any notch depth or aperture. Furthermore, since variational phase-field models do not rely on any crack-tip asymptotic, this identification can be made even in a situation where generalized stress or notch intensity factors are not known or are affected by the sample's structural geometry.
We have also shown that variational phase-field models properly account for size effects that cannot be recovered from Griffith-based theories. By introducing the material's internal length, they can account for the vanishing effect of small defects on the structural response of a material, or reconcile the existence of a critical material strength with the existence of stress singularities. Most importantly, they do not require introducing ad-hoc criteria based on local geometry and loading. On the contrary, we see that in most situations, criteria derived from the asymptotic analysis of a micro-geometry can be recovered a posteriori. Furthermore, variational phase-field models are capable of quantitative prediction of the crack path after nucleation. Again, they do so without resorting to additional ad-hoc criteria, but rely only on a general energy minimization principle.
In short, we have demonstrated that variational phase-field models address some of the most vexing issues associated with brittle fracture: scale effects, nucleation, existence of a critical stress, and path prediction.
Of course, there are still remaining issues that need to be addressed. Whereas the models are derived from irreversibility, stability and energy balance, our numerical simulations do not enforce energy balance, as indicated by a drop of the total energy upon crack nucleation without strong singularities. Note that to this day, devising an evolution principle combining the strength of (2.3) while ensuring energy balance is still an open problem.
Appendix B
Tables of experimental and numerical data for V-notch experiments.
Experiments of [START_REF] Yosibash | Failure criteria for brittle elastic materials[END_REF] using three point bending of a PMMA sample, compared to full-domain numerical simulations using the AT₁ model with undamaged notch boundary conditions. The value a/h refers to the ratio of the notch depth to the sample thickness. See Figure 2.9 for geometry and loading.
[Table: critical generalized stress intensity factors k_c for the AT₁-U, AT₁-D, AT₂-U and AT₂-D cases, as a function of the notch angle ω and singularity exponent λ.]
Chapter 3
A phase-field model for hydraulic fracturing in low permeability reservoirs: propagation of stable fractures
Hydraulic fracturing is a process to initiate and extend fractures by injecting fluid into the subsurface. Mathematical modeling of hydraulic fracturing requires the coupled solution of models for fluid flow and reservoir-fracture deformation. The governing equations for these processes are fairly well understood and include, for example, Reynolds' equation, the cubic law, the diffusivity equation and Darcy's law for fluid flow modeling, the linear poro-elasticity equations for reservoir-fracture deformation, and Griffith's criterion for fracture propagation. Considering that fracture propagation is a moving boundary problem, the numerical and computational challenges of solving these governing equations on the fracture domain limit the ability to comprehensively model hydraulic fracturing. These challenges include, but are not limited to, finding efficient ways of representing numerically the fracture and reservoir domains in the same computational framework while still ensuring hydraulic and mechanical coupling between both subdomains. To address these issues, several authors have assumed a known propagation path that is limited to a coordinate direction of the computational grid [START_REF] Carrier | Numerical modeling of hydraulic fracture problem in permeable medium using cohesive zone model[END_REF][START_REF] Boone | A numerical procedure for simulation of hydraulically driven fracture propagation in poroelastic media[END_REF], while some others simply treated fractures as external boundaries of the reservoir computational domain [START_REF] Ji | A novel hydraulic fracturing model fully coupled with geomechanics and reservoir simulation[END_REF][START_REF] Dean | Hydraulic-fracture predictions with a fully coupled geomechanical reservoir simulator[END_REF]. Special interface elements called zero-thickness elements have also been used to handle fluid flow in fractures embedded in continuum media [START_REF] Carrier | Numerical modeling of hydraulic fracture problem in permeable medium using cohesive zone model[END_REF][START_REF] Segura | On zero-thickness interface elements for diffusion problems[END_REF][START_REF] Segura | Coupled hm analysis using zero-thickness interface elements with double nodes. part I: Theoretical model[END_REF][START_REF] Segura | Coupled hm analysis using zero-thickness interface elements with double nodes part II: Verification and application[END_REF][START_REF] Boone | A numerical procedure for simulation of hydraulically driven fracture propagation in poroelastic media[END_REF][START_REF] Lobão | Modelling of hydrofracture flow in porous media[END_REF]. Despite the simplicity of these approaches, and contrary to field evidence of complex fracture geometries and propagation paths, they have limited ability to reproduce realistic fracture behaviors.
Where attempts have been made to represent fractures and the reservoir in the same computational domain, for instance using the extended finite element method (XFEM) [START_REF] Mohammadnejad | An extended finite element method for hydraulic fracture propagation in deformable porous media with the cohesive crack model[END_REF][START_REF] Dahi | Analysis of hydraulic fracture propagation in fractured reservoirs: an improved model for the interaction between induced and natural fractures[END_REF] and the generalized finite element method (GFEM) [START_REF] Gupta | Simulation of non-planar three-dimensional hydraulic fracture propagation[END_REF], the computational cost is high and the numerics cumbersome, characterized by continuous remeshing to provide grids that explicitly match the evolving fracture surface. Some of these challenges can be overcome using a phase-field representation of fractures, as evident in the work of [START_REF] Bourdin | A variational approach to the numerical simulation of hydraulic fracturing[END_REF] and [START_REF] Bourdin | A variational approach to the modeling and numerical simulation of hydraulic fracturing under in-situ stresses[END_REF]. This chapter extends the work of [START_REF] Bourdin | A variational approach to the numerical simulation of hydraulic fracturing[END_REF] by applying the variational phase-field model to a network of fractures. The hydraulic fracture model is developed by incorporating the fracturing fluid pressure in Francfort and Marigo's variational approach to fracture [START_REF] Bourdin | The Variational Approach to Fracture[END_REF]. Specifically, the fracture model recasts Griffith's propagation criterion into a total energy minimization problem, where the global energy is the sum of the elastic and fracture surface energies, the work of the fracturing fluid pressure forces and the work done by the in-situ stresses. We assume quasi-static fracture propagation and, in this setting, the fractured state of the reservoir is the solution of a series of minimizations of this total energy with respect to any kinematically admissible crack set and displacement field. The numerical implementation of the model is based on a phase-field representation of the fracture and a subsequent regularization of the total energy functional. The phase-field technique avoids the need for explicit knowledge of the fracture location; it permits the use of a single computational domain for fracture and reservoir representation. The strength of this method is to provide a unified setting for handling path determination, nucleation and growth of an arbitrary number of stable cracks in any dimension, based on the energy minimization principle. This work focuses on fracture propagation stability through various examples: a pressurized single fracture stimulated by a controlled injected volume in a large domain, a network of multiple parallel fractures, and a pressure-driven laboratory experiment to measure rock toughness.
The chapter is organized as follows: Section 3.1 recalls the phase-field model for hydraulic fracturing in the toughness-dominated regime, with no fluid loss to the impermeable elastic reservoir [START_REF] Detournay | The near tip region of a fluid driven fracture propagating in a permeable elastic solid[END_REF]. Our numerical implementation scheme and algorithm for volume-driven hydraulic fracturing simulations is then exposed in Section 3.1.3. Though the toughness-dominated regime does not cover the whole spectrum of fracture propagation, it provides an appropriate framework for verification since it does not require the solution of a flow model. Section 3.2 is concerned with comparisons between our numerical results and the closed-form solutions provided by Sneddon [START_REF] Sneddon | The opening of a griffith crack under internal pressure[END_REF][START_REF] Sneddon | Crack problems in the classical theory of elasticity[END_REF] for the fluid pressure, fracture length/radius and fracture volume in the single crack case. Section 3.3 focuses on the propagation of infinite pressurized parallel fractures and compares it with the derived solution. Section 3.4 is devoted to the study of pre-fracture stability in the burst experiment at controlled pressure. This test, proposed by Abou-Sayed [START_REF] Abou-Sayed | An Experimental Technique for Measuring the Fracture Toughness of Rocks under Downhole Stress Conditions[END_REF], is designed to measure the fracture toughness of rocks and replicates situations encountered downhole, with a borehole and a bi-wing fracture.
A phase fields model for hydraulic fracturing
A variational model of fracture in a poroelastic medium
Consider a reservoir consisting of a perfectly brittle, isotropic, homogeneous linear poroelastic material with Hooke's law tensor A and critical energy release rate G_c, occupying a domain Ω ⊂ Rⁿ, n = 2 or 3, in its reference configuration. The domain is partially cut by a sufficiently regular crack set Γ ⊂ Ω with Γ ∩ ∂Ω = ∅. A uniform pressure denoted by p applies on both faces of the fracture lips, i.e. Γ = Γ⁺ ∪ Γ⁻, and a pore pressure denoted by p_p applies in the porous material, whose poroelastic behavior is characterized by the Biot coefficient λ. The sound region Ω \ Γ is subject to a time-independent boundary displacement ū(t) = 0 on the Dirichlet part of its boundary ∂_D Ω and to a time-dependent stress g(t) = σ · ν on the remainder ∂_N Ω = ∂Ω \ ∂_D Ω, where ν denotes the appropriate normal vector. For the sake of simplicity, body forces are neglected, so that at equilibrium the stress satisfies div σ = 0,
where the Cauchy stress tensor follows Biot's theory [START_REF] Biot | General theory of three-dimensional consolidation[END_REF], i.e. σ = σ′ − λ p_p I, σ′ being the effective stress tensor. The infinitesimal total deformation e(u) is the symmetric part of the spatial gradient of the displacement field u,
$$e(u) = \frac{\nabla u + \nabla^T u}{2}.$$
The stress–strain relation for the effective stress is σ′ = A e(u), so that,
$$\sigma = A\left(e(u) - \frac{\lambda}{3\kappa}\, p_p\, I\right),$$
where 3κ is the material's bulk modulus. These equations can be rewritten in variational form by multiplying the equilibrium equation by a virtual displacement v ∈ H¹₀(Ω \ Γ; Rⁿ) and using Green's formula over Ω \ Γ. After calculation, we get that,
$$\int_{\Omega\setminus\Gamma} \sigma : e(v)\, dx - \int_{\partial_N\Omega} g(t)\cdot v\, d\mathcal{H}^{n-1} - \int_{\Gamma} p\, v\cdot\nu\, d\mathcal{H}^{n-1} = 0 \qquad (3.1)$$
where $\mathcal{H}^{n-1}$ denotes the (n−1)-dimensional Hausdorff measure, i.e. the aggregate length in 2 dimensions and surface area in 3 dimensions. Finally, we remark that the above equation (3.1) can be seen as the Euler-Lagrange equation for the minimization of the elastic energy,
$$\mathcal{E}(u, \Gamma) = \int_{\Omega\setminus\Gamma} \frac12 A\left(e(u) - \frac{\lambda}{3\kappa}p_p I\right) : \left(e(u) - \frac{\lambda}{3\kappa}p_p I\right) dx - \int_{\partial_N\Omega} g(t)\cdot u\, d\mathcal{H}^{n-1} - \int_{\Gamma} p\, u\cdot\nu\, d\mathcal{H}^{n-1} \qquad (3.2)$$
amongst all displacement fields u ∈ H¹(Ω \ Γ; Rⁿ) such that u = 0 on ∂_D Ω.
Remark 2. Of course, fluid equilibrium mandates continuity of pressure, so that p_p = p along Γ. Our choice to introduce two pressure fields is motivated by our focus on low-permeability reservoirs. In this situation, assuming very small leak-off, it is reasonable to assume that for short injection times the pore pressure is "almost" constant away from the crack, hence that p ≠ p_p.
We follow the formalism of [START_REF] Francfort | Revisiting brittle fracture as an energy minimization problem[END_REF][START_REF] Bourdin | The variational approach to fracture[END_REF] and propose a time-discrete variational model of crack propagation. To any crack set Γ ⊂ Ω and any kinematically admissible displacement field u, we associate the fracture energy,
$$\mathcal{E}(u, \Gamma) = \int_{\Omega\setminus\Gamma} \frac12 A\left(e(u) - \frac{\lambda}{3\kappa}p_p I\right) : \left(e(u) - \frac{\lambda}{3\kappa}p_p I\right) dx - \int_{\partial_N\Omega} g(t)\cdot u\, d\mathcal{H}^{n-1} - \int_{\Gamma} p\, u\cdot\nu\, d\mathcal{H}^{n-1} + G_c\, \mathcal{H}^{n-1}(\Gamma) \qquad (3.3)$$
Considering then a time interval [0, T ] and a discrete set of time steps 0 = t 0 < t 1 < • • • < t N = T , and denoting by p i , p p i and g i , the crack pressure, pore pressure and external stress at time t i (i > 0), we postulate that the displacement and crack set (u i , Γ i ) are minimizers of E amongst all kinematically admissible displacement fields u and all crack sets Γ satisfying a growth condition Γ j ⊂ Γ for all j < i, with Γ 0 possibly representing pre-existing cracks.
It is worth emphasizing that in this model, no assumptions are made on the crack geometry Γ i . As in Francfort and Marigo's pioneering work [START_REF] Francfort | Revisiting brittle fracture as an energy minimization problem[END_REF], minimization of the total fracture energy is all that is needed to fully identify the crack geometry (path) and topology (nucleation, merging, branching).
Variational phase-field approximation
Several techniques have been proposed for the numerical implementation of the fracture energy E, the main difficulty being to handle discontinuous displacements along unknown surfaces. In recent years, variational phase-field models, originally devised in [START_REF] Ambrosio | On the approximation of free discontinuity problems[END_REF][START_REF] Ambrosio | Approximation of functionals depending on jumps by elliptic functionals via Γ-convergence[END_REF], and extended to brittle fracture [START_REF] Bourdin | Numerical experiments in revisited brittle fracture[END_REF] have become very popular.
We follow this approach by introducing a regularization length ℓ, an auxiliary field α with values in [0, 1] representing the unknown crack surface, and the regularized energy
$$E_\ell(u, \alpha) = \int_\Omega \frac12 A\left((1-\alpha)e(u) - \frac{\lambda}{3\kappa}p_p I\right) : \left((1-\alpha)e(u) - \frac{\lambda}{3\kappa}p_p I\right) dx - \int_{\partial_N\Omega} g(t)\cdot u\, d\mathcal{H}^{n-1} + \int_\Omega p\, u\cdot\nabla\alpha\, dx + \frac{3G_c}{8}\int_\Omega \left(\frac{\alpha}{\ell} + \ell|\nabla\alpha|^2\right) dx \qquad (3.4)$$
where α = 0 corresponds to the undamaged material and α = 1 to the broken part. One can recognize the AT₁ model introduced in Chapter 1, which differs from the one used in [START_REF] Chukwudozie | Application of the Variational Fracture Model to Hydraulic Fracturing in Poroelastic Media[END_REF].
At each time step, the constrained minimization of the fracture energy E is then replaced with that of E_ℓ, with respect to all (u_i, α_i) such that u_i is kinematically admissible and 0 ≤ α_{i−1} ≤ α_i ≤ 1.
The Γ-convergence of (3.4) to (3.3), which constitutes the main justification of variational phase-field models is a straightforward extension of [START_REF] Chambolle | An approximation result for special functions with bounded variations[END_REF][START_REF] Chambolle | Addendum to "An Approximation Result for Special Functions with Bounded Deformation[END_REF], or [START_REF] Iurlano | A density result for gsbd and its application to the approximation of brittle fracture energies[END_REF]. It is quite technical and not quoted here. The form of the regularization of the surface energy in (3.4) is slightly different from the one originally proposed in [START_REF] Bourdin | A variational approach to the modeling and numerical simulation of hydraulic fracturing under in-situ stresses[END_REF][START_REF] Bourdin | A variational approach to the numerical simulation of hydraulic fracturing[END_REF] but this choice is motivated by the work of [START_REF] Tanné | Crack nucleation in variational phase-field models of brittle fracture[END_REF][START_REF] Bourdin | Morphogenesis and propagation of complex cracks induced by thermal shocks[END_REF].
In the context of poro-elasticity, the regularization of the elastic energy of the form of,
$$\int_\Omega \frac12 A\left((1-\alpha)e(u) - \frac{\lambda}{3\kappa}p_p I\right) : \left((1-\alpha)e(u) - \frac{\lambda}{3\kappa}p_p I\right) dx$$
is different from that of [START_REF] Mikelic | A quasistatic phase field approach to fluid filled fractures[END_REF] and follow-up work, or [START_REF] Miehe | Minimization principles for the coupled problem of Darcy-Biot-type fluid transport in porous media linked to phase field modeling of fracture[END_REF][START_REF] Wilson | A phase-field model for fracture in piezoelectric ceramics[END_REF] which use a regularization of the form
$$\int_\Omega \frac12 (1-\alpha)^2 A\left(e(u) - \frac{\lambda}{3\kappa}p_p I\right) : \left(e(u) - \frac{\lambda}{3\kappa}p_p I\right) dx.$$
This choice is consistent with the point of view that damage takes place at the sub-pore scale, so that the damage variable α should impact the Cauchy stress and not the effective poro-elastic stress. Note that as ℓ → 0, both expressions Γ-converge to E.
A fundamental requirement of hydraulic fracturing modeling is volume conservation, that is the sum of the fracture volume and fluid lost to the surrounding reservoir must equal the amount of fluid injected denoted V . In the K-regime, the injected fluid is inviscid and no mass is transported since the reservoir is impermeable. Of course, reservoir impermeability means no fluid loss from fracture to reservoir and this lack of hydraulic communication means that the reservoir pressure p p and fracture fluid pressure p are two distinct and discontinuous quantities. Furthermore, the zero viscosity of the injected fluid is incompatible with any fluid flow model, leaving global volume balance as the requirement for computing the unknown fracturing fluid pressure p. In the sequel we set aside the reservoir pressure p p and consider this as a hydrostatic stress offset in the domain, which can be recast by applying a constant pressure on the entire boundary of the domain.
Numerical implementation
The numerical implementation of the variational phase-field model is well established. In the numerical simulations presented below, we discretize the regularized fracture energy using linear or bilinear finite elements. We follow the classical alternate minimization approach of [START_REF] Bourdin | Numerical experiments in revisited brittle fracture[END_REF] and adapt it to volume-driven fractures; the main steps are: i. For a given (α, p), the minimization of E_ℓ with respect to u is an elastic problem with the prescribed boundary conditions. To solve it, we employ a preconditioned conjugate gradient solver.
ii. The minimization of E_ℓ with respect to α for fixed (u, p) and subject to irreversibility (α ≥ α_{i-1}) is solved using the variational inequality solvers provided by PETSc [START_REF] Balay | Efficient management of parallelism in object oriented numerical software libraries[END_REF][START_REF] Balay | PETSc users manual[END_REF][START_REF] Balay | PETSc Web page[END_REF].
iii. For a fixed (u, α), the total volume of fluid can be computed, such that,
V = −∫_Ω u · ∇α dx.
The idea is to rescale the fluid pressure using the secant method (a root-finding algorithm) based on a recurrence relation.
A possible algorithm to solve volume-driven hydraulic fracturing is to use nested loops. The inner loop solves the elastic problem i. and rescales the pressure following iii. until the error between the target and the computed volume falls below a fixed tolerance. The outer loop consists of ii. followed by the previous procedure, and exits once the damage has converged. This leads to Algorithm 2 below, where δ_V and δ_α are fixed tolerances. Remark that the inner loop solves a linear problem; hence, finding the pressure p associated with the target volume V should converge in strictly fewer than four iterations. All computations were performed with the open source code mef90.
In-situ stresses play a major role in hydraulic fracture propagation, and the ability to incorporate them in a numerical model is an important requirement for robust hydraulic fracturing modeling. Our numerical model easily accounts for these compressive stresses on the boundaries of the reservoir. However, the simulated in-situ stresses cannot exceed the maximum admissible stress of the material, given by σ_c = √(3EG_c/(8ℓ)). We run a series of two- and three-dimensional computations to verify our numerical model and investigate the stability of fractures.
Numerical verification case of a pressurized single fracture in two and three dimensions
Using Algorithm 2, a pressurized line fracture and a penny-shaped fracture have been simulated in two and three dimensions respectively, and their results compared with the closed form solutions. Both problems have a symmetry axis, i.e. a reflection axis in 2d and a rotation axis in 3d, leading to an invariant geometry drawn in Figure 3.1. All geometric and material parameters are set up identically for both problems and summarized in Table 3.1. The closed form solutions provided by Sneddon in [START_REF] Sneddon | Crack problems in the classical theory of elasticity[END_REF][START_REF] Sneddon | The opening of a griffith crack under internal pressure[END_REF] are recalled in Appendix C and assume an infinite domain with vanishing stress and displacement at the boundary. To satisfy those boundary conditions we performed simulations on a very large domain clamped at the boundary, where the reservoir size is 100 times larger than the pre-fracture length, as reported in Table 3.1. To moderate the number of elements in the domain, a casing (W, H) with a constant refined mesh of resolution h is placed around the fracture. Outside the casing a coarser mesh is used; see Figure 3.1.
Algorithm 2 Volume-driven hydraulic fracturing algorithm at the step i
1: Let j = 0 and α^0 := α_{i-1}
2: repeat
3:    Set p_i^{k-1} = p_i^k and V_i^{k-1} = V_i^k
4:    repeat
5:       p_i^{k+1} := p_i^k − V_i^k (p_i^k − p_i^{k-1})/(V_i^k − V_i^{k-1})
6:       Compute the equilibrium, u^{k+1} := argmin_{u ∈ C_i} E_ℓ(u, α^j)
7:       Compute the volume of fractures, V_i^{k+1} := −∫_Ω u^{k+1} · ∇α^j dx
8:       k := k + 1
9:    until |V_i^k − V_i|_{L∞} ≤ δ_V
10:   Compute the damage, α^{j+1} := argmin_{α ∈ D_i, α ≥ α_{i-1}} E_ℓ(u^{j+1}, α)
11:   j := j + 1
12: until |α^j − α^{j-1}|_{L∞} ≤ δ_α
13: Set u_i := u^j and α_i := α^j

Figure 3.1: computational domain with a refined mesh of size h in a casing around the fracture, a coarser mesh outside, and the symmetry axis.

Table 3.1: Parameters used for the simulation of a single fracture in two and three dimensions.
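For readers who prefer code to pseudo-code, the nested structure of Algorithm 2 can be sketched in a few lines of Python. This is only an illustrative sketch under stated assumptions: solve_equilibrium, solve_damage and fracture_volume are hypothetical callables standing for the finite element solves of steps 6 and 10 and for the quadrature of V = −∫_Ω u·∇α dx; they are not part of the actual mef90 implementation.

import numpy as np

def volume_driven_step(V_target, alpha_prev, p0, p1,
                       solve_equilibrium, solve_damage, fracture_volume,
                       tol_V=1e-6, tol_alpha=1e-4, max_iter=200):
    """One loading step of the volume-driven scheme (sketch of Algorithm 2)."""
    alpha = alpha_prev.copy()
    while True:
        # inner loop: secant iterations on the pressure at fixed damage
        p_old, p_new = p0, p1
        u = solve_equilibrium(p_old, alpha)
        V_old = fracture_volume(u, alpha)
        for _ in range(max_iter):
            u = solve_equilibrium(p_new, alpha)
            V_new = fracture_volume(u, alpha)
            if abs(V_new - V_target) <= tol_V:
                break
            # secant update of the fracturing fluid pressure
            p_old, p_new, V_old = (p_new,
                                   p_new - (V_new - V_target) * (p_new - p_old) / (V_new - V_old),
                                   V_new)
        # outer loop: damage update at fixed (u, p), with irreversibility alpha >= alpha_prev
        alpha_new = solve_damage(u, p_new, lower_bound=alpha_prev)
        if np.max(np.abs(alpha_new - alpha)) <= tol_alpha:
            return u, alpha_new, p_new
        alpha = alpha_new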
A loading cycle is performed by pressurizing the fracture until propagation and then pumping all the fluid out of the crack. The pre-fracture length l_0 is measured from an isovalue contour plot at α = .8 before refilling the fracture with fluid. The reason for this is that we do not initially have an optimal damage profile at the fracture tips, which leads to an underestimate of the critical pressure p_c. Similar issues have been observed during the nucleation process in [START_REF] Tanné | Crack nucleation in variational phase-field models of brittle fracture[END_REF], where G_c is overshot due to the stability of the solution. Snapshots of the damage before and after the loading cycle in Figure 3.3 illustrate the differences between damage profiles at the crack tips. Since the critical crack pressure is a decreasing function of the crack length, the maximum value is obtained at the loading point when the crack initiates (for the pre-fracture). One can see in Figure 3.4 that the penny-shaped fracture growth is not necessarily symmetric with respect to the geometry, but the fracture remains disk-shaped, which is consistent with the invariant closed form solution.
We know from prior work [START_REF] Bourdin | The variational approach to fracture[END_REF] that the "effective" numerical toughness is quantified by (G_c)_eff = G_c (1 + 3h/(8ℓ)) in two dimensions. However, for the penny-shaped crack (G_c)_eff = G_c (1 + 3h/(8ℓ) + 2h/l), where 2h is the thickness of the crack and l its radius. The additional term 2h/l comes from the lateral surface contribution, which becomes negligible for thin fractures.
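The corrections above are simple enough to be wrapped in small helper functions; the following sketch is a direct transcription of the expressions quoted in this section (effective toughness in 2d, penny-shaped correction and AT1 critical stress), with the numerical values of Table 3.2 used purely as an example.

import math

def gc_effective_2d(gc, h, ell):
    # (G_c)_eff = G_c (1 + 3h / (8 ell)) for a line crack
    return gc * (1.0 + 3.0 * h / (8.0 * ell))

def gc_effective_penny(gc, h, ell, l):
    # (G_c)_eff = G_c (1 + 3h/(8 ell) + 2h/l) for a penny-shaped crack of radius l
    return gc * (1.0 + 3.0 * h / (8.0 * ell) + 2.0 * h / l)

def sigma_c_at1(E, gc, ell):
    # maximum admissible stress of the AT1 model, sigma_c = sqrt(3 E G_c / (8 ell))
    return math.sqrt(3.0 * E * gc / (8.0 * ell))

# example with E = 1, G_c = 1, h = 0.005 and ell = 3h (values of Table 3.2)
print(gc_effective_2d(1.0, 0.005, 0.015), sigma_c_at1(1.0, 1.0, 0.015))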
The closed form solution for the fluid pressure p and the fracture length l with respect to the total injected volume of fluid V is provided by [START_REF] Sneddon | Crack problems in the classical theory of elasticity[END_REF] and is recalled in Appendix C. Figure 3.2 shows a perfect match between the numerical results and the closed form solution for the line fracture and the penny-shaped crack. In both cases, as long as V ≤ V_c the crack does not grow, and once V > V_c the pressure drops as p ∼ V^{-1/3} (line fracture) and p ∼ V^{-1/5} (penny-shaped crack). Notice that the pressure decreases when the crack grows, hence a pressure-driven crack is necessarily unstable: there is no admissible pressure above the maximum value p_c.
Remark 3 The Griffith regime requires σ_c = √(3E G_c/(8ℓ)) ≥ √(πE G_c/(4l)) = p_c in two dimensions, leading to l ≥ 2πℓ/3. Therefore, the pre-fracture must be longer than twice the material internal length to avoid the size effect phenomena reported in Chapter 2.
Those simulations show that the variational phase-field model of hydraulic fracturing recovers Griffith's initiation and propagation for a single pressurized crack. Even if this can be seen as a toy example because the fracture propagation is rectilinear, multi-fracking can be simulated without any change to the implementation, as illustrated in Figure 3.5. Fracture paths are obtained by total energy minimization and satisfy Griffith's propagation criterion.
Multi fractures in two dimensions
One of the most important features of our phase-field hydraulic fracturing model is its ability to handle multiple fractures without more computational or modeling effort than is required to simulate a single fracture. This capability is highlighted in the following study of the stimulation of a network of parallel fractures. All cracks are subject to the same pressure and we control the total amount of fluid injected into the cracks, i.e. fluid can migrate from one crack to another via a wellbore.
The case where all fractures of a parallel network propagate (the multi-fracking scenario) is often postulated. However, considering the variational structure of Griffith's theory leads to a different conclusion. For the sake of simplicity, consider only two parallel fractures. A virtual extension of one of the cracks (the variational argument) induces a drop of pressure in both fractures. Consequently the shorter fracture is sub-critical and remains unchanged since its pressure satisfies p < p_c. Moreover, the longer fracture requires less pressure to propagate than the shorter one because the critical pressure decreases with the crack length. Hence the longer crack continues to propagate. This argument extends without restriction to multiple parallel fractures of the same size. In the sequel, we propose to revisit the multi-fracking hypothesis by performing numerical simulations using Algorithm 2. Consider a network of infinitely many parallel cracks with the same pressure p, of individual length l and spacing δ, drawn in Figure 3.6 (left). At the initial state all pre-cracks have the same length, denoted l_0, and no in-situ stress is applied on the reservoir domain.
Multi-fracking closed form solution
This network of parallel cracks is a duplication of an invariant geometry, precisely a strip domain Ω = (-∞, +∞) × [-δ, δ] cut in the middle by a fracture Γ = [-l, l] × {0}. An asymptotic solution of this cell domain problem is provided by Sneddon in [START_REF] Sneddon | Crack problems in the classical theory of elasticity[END_REF]
V(ρ) = (8pδ²/(E π)) ρ² f(ρ),   (3.5)
where the density of fractures ρ = lπ/(2δ) and
f (ρ) = 1 -ρ 2 /2 + ρ 4 /3 + o(ρ 6 ).
The Taylor series of f(ρ) at 0 provided by Sneddon differs from the one given in the reference [START_REF] Murakami | Handbook of stress intensity factors[END_REF], where f(ρ) = 1 − ρ²/2 + 3ρ⁴/8 + o(ρ⁶). The latter is exactly the first three terms of the expansion of
f(ρ) = 1/√(1 + ρ²).   (3.6)
The critical pressure satisfying Griffith propagation for this network of fractures problem is
p(ρ) = √( E G_c / (δ (ρ² f(ρ))′) ),   (3.7)
where ′ denotes the derivative with respect to ρ.
Of course the closed form expression considers that all cracks grow symmetrically. For numerical reasons it is convenient to consider a half domain and a half fracture (a crack lip) of the reference geometry, so that we have (Ω_1, Γ_1) and, by symmetry expansion, (Ω_2, Γ_2), (Ω_4, Γ_4), ..., (Ω_{2n}, Γ_{2n}), as illustrated in Figure 3.6 (right).
Numerical simulation of multi-fracking by unit cell construction
The idea is to reproduce numerically the multi-fracking scenario; simulations are thus performed on strips of length 2L with pre-fractures of length 2l_0, such that the geometries considered are:
Ω_{2n} = [−L, L] × [0, (2n − 2)δ],   Γ_{0,2n} = [−l_0, l_0] × ⋃_{k=1}^{n} {2(k − 1)δ}   (3.8)
for n ≥ 1, n being the number of crack lips (a crack is naturally composed of two lips). The prescribed boundary displacements are u_y(0) = u_y(2(n − 1)δ) = 0 on the top and bottom extremities, and u(±L) = 0 on the left and right ones. All numerical parameters are given in Table 3.2.
h       L     δ     l_0      E     ν     G_c     ℓ
0.005   10    1     0.115    1     0     1       3h

Table 3.2: Parameters used in the numerical simulations for infinite cracks.

Using the same loading cycle technique as in Section 3.2, and after pumping enough fluid into the system of cracks, we observed in all the simulations performed that only one fracture grows, namely the one at the boundary, as illustrated in Figure 3.7.
By using reflection symmetry we obtain a periodicity of large fractures of 1/n. We notice that the simulations never stimulate the middle fracture; indeed, doing so would, after reflections, lead to higher periodicity cases. In Figure 3.7 the pseudo-color blue denotes undamaged material and turns white where α ≤ .01 (for visibility). The full colors correspond to the simulated cell domains (see Table 3.2), and the transparent colors to the solution rebuilt using symmetries. In all simulations only one crack propagates in the domain. Using the multiplicity, the pictures from left to right give a fracture propagation periodicity, denoted period., of 6/6, 3/6, 1.5/6 and 1/6.
To compare the total injected fluid V between simulations, we introduce the fluid volume density, i.e. the fluid volume per unit geometry cell, given by 2V/n. The evolutions of the normalized pressure, volume of fluid per cell and length are plotted in Figure 3.8 and show that the multi-fracking situation (periodicity one) matches perfectly the closed form solution provided by equations (3.7), (3.5) and (3.6). One can also see that Sneddon's approximation is not accurate for dense fractures. We observe from the simulations in Figure 3.8 that a lower periodicity (1/n) of growing cracks implies a reduction of the pressure evolution. Notice also that the rate of pressure drop increases when the number of long cracks decreases, so that a rapid pressure drop may indicate a poor stimulation. This loss of multi-fracking stimulation also decreases the fracture surface area available for resource recovery. The case where all cracks propagate simultaneously is not stable, in the sense that there exists a lower energy state with fewer growing cracks. However, as will be discussed in Section 3.3.3, multi-fracking may work for low fracture densities since the interactions between fractures are then negligible.
Multi-fracking for dense fractures
In the following we investigate the critical pressure with respect to the fracture density for different periodicities. In Figure 3.10, colored plots are numerical results for different domain sizes Ω_1, Ω_2, Ω_4 and Ω_6; the solid black line is the closed form solution and the gray line the approximate solution given by Sneddon [START_REF] Sneddon | Crack problems in the classical theory of elasticity[END_REF].
Let us focus on fracture propagation when the interactions become stronger, i.e. at higher fracture density ρ = lπ/(2δ). We start by normalizing the multi-fracking pressure relation (3.7) by p = √(E G_c/(lπ)), the single fracture pressure studied in Section 3.2.
r_p(ρ) = √( 2ρ / (ρ² f(ρ))′ ) = √( 2(ρ² + 1)^{3/2} / (ρ² + 2) ).   (3.9)
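The closed form expressions (3.5)-(3.7) and the normalized pressure (3.9) can be evaluated with a few lines of Python; the sketch below uses a centered finite difference for the derivative of ρ²f(ρ) and reproduces the values r_p(0) = 1 and r_p(1) ≈ 1.4 used in the discussion that follows.

import math

def f(rho):
    # exact interaction function, f(rho) = 1 / sqrt(1 + rho^2)
    return 1.0 / math.sqrt(1.0 + rho ** 2)

def d_rho2_f(rho, eps=1e-6):
    # derivative of rho^2 f(rho) with respect to rho (centered finite difference)
    g = lambda r: r ** 2 * f(r)
    return (g(rho + eps) - g(rho - eps)) / (2.0 * eps)

def critical_pressure(rho, E, gc, delta):
    # equation (3.7): p(rho) = sqrt(E G_c / (delta * d/drho[rho^2 f(rho)]))
    return math.sqrt(E * gc / (delta * d_rho2_f(rho)))

def r_p(rho):
    # equation (3.9): ratio to the single line-fracture pressure
    return math.sqrt(2.0 * (rho ** 2 + 1.0) ** 1.5 / (rho ** 2 + 2.0))

print(r_p(1e-9))   # -> 1.0: widely spaced cracks behave as isolated fractures
print(r_p(1.0))    # -> ~1.37: propagating the whole dense network costs ~40% more pressure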
Remark that r_p(0) = 1 means that the critical pressure for widely spaced fractures is identical to that of a line fracture in an infinite domain: the cracks behave without interacting with each other. We ran a set of numerical simulations using the same parameters as previously recalled in Table 3.2, except that δ varies. For a high fracture density we discovered another loss of symmetry, shown in Figure 3.9, such that the fracture grows in one direction only. Figure 3.9: domains in the deformed configuration for Ω_2 and Ω_4 respectively, with 2δ = .5. The pseudo-color blue denotes undamaged material and turns white where α ≤ .01 (for visibility). The full colors correspond to the simulated domain and the transparent colors to the solution rebuilt using symmetries. In all simulations only one crack tip propagates, in one direction, in the simulated domain.
We report the pressures obtained numerically as a function of the fracture density in Figure 3.10, together with the comparison with (3.9). One can see that the closed form solution is in good agreement with the numerical simulations for periodicity one, and also for lower periodicities, obtained by substituting ρ ← ρ/n in equation (3.9). For low fracture densities the critical pressure is equal to that of a line fracture in an infinite domain. For higher fracture densities, interactions become stronger and propagating all fractures requires a higher pressure than growing only one of them. As an example, for a network of pre-fractures of length l = 6.36 m spaced by δ = 10 m, thus ρ = 1, the required pressure is r(1) K_Ic/√(lπ) with r(1) = 1.4 to propagate all cracks together, compared to r(1) = 1 for a single fracture. Naturally the system bifurcates towards fewer propagating fractures, leading to a drop of the fluid pressure.
Fracture stability in the burst experiment with a confining pressure
This section focuses on the stability of fracture propagation in the burst experiment. This laboratory experiment was conceived to measure the resistance to fracturing K_Ic (also called the fracture toughness) of rock under confining pressure, a critical parameter to match the breakdown pressure in mini-frac simulations. The idea is to provide a value of K_Ic for hydraulic fracturing simulations in the K-regime [START_REF] Detournay | Propagation regimes of fluid-driven fractures in impermeable rocks[END_REF]. However, past experimental studies suggest that the fracture toughness of rock depends on the confining pressure to which the rock is subjected. Various methodologies exist for the measurement of K_Ic under confining pressure and the results differ in each study. (In Figure 3.10, the black dashed line is r_p(ρ/n) with 1/n the periodicity, and the colored lines are numerical results for periodicities 6/6, 3/6 and 1.5/6 respectively.) The most accepted methodology in the petroleum industry is the so-called burst experiment, proposed by Abou-Sayed [START_REF] Abou-Sayed | An Experimental Technique for Measuring the Fracture Toughness of Rocks under Downhole Stress Conditions[END_REF], as the experimental geometry replicates a situation encountered downhole, with a borehole and a bi-wing fracture. Under linear elastic fracture mechanics, stable and unstable crack growth regimes have been calculated depending on the confining pressure and geometry. During unstable crack propagation the phase-field models for hydraulic fracturing do not bring information. Instead we perform a Stress Intensity Factor (SIF) analysis along the fracture path to determine propagation stability regimes; in this respect this section departs from the phase-field spirit of the dissertation. However, at the end we verify the ability of the phase-field model to capture the fracture stability transition from stable to unstable.
The burst experiment
The effect of confining pressure on the fracture toughness was first studied by Schmidt and Huddle [START_REF] Schmidt | Effect of Confining Pressure on Fracture Toughness of Indiana Limestone[END_REF] on Indiana limestone using single-edge-notch samples in a pressure vessel. In their experiments, increase in the fracture toughness up to four fold have been reported. Other investigations to quantify the confining pressure dependency were performed on the three point bending [START_REF] Müller | Brittle crack growth in rocks[END_REF][START_REF] Vásárhelyi | Influence of pressure on the crack propagation under mode i loading in anisotropic gneiss[END_REF], modified ring test [START_REF] Thiercelin | Fracture Toughness and Hydraulic Fracturing[END_REF], chevron notched Brazillian disk [START_REF] Roegiers | Rock fracture tests in simulated downhole conditions[END_REF], cylinder with a partially penetrating borehole [START_REF] Holder | Measurements of effective fracture toughness values for hydraulic fracture: Dependence on pressure and fluid rheology[END_REF][START_REF] Sitharam Thallak | The pressure dependence of apparent hydrofracture toughness[END_REF], and thick wall cylinder 3.4. Fracture stability in the burst experiment with a confining pressure with notches [START_REF] Abou-Sayed | An Experimental Technique for Measuring the Fracture Toughness of Rocks under Downhole Stress Conditions[END_REF][START_REF] Chen | Laboratory measurement and interpretation of the fracture toughness of formation rocks at great depth[END_REF] and without notches [START_REF] Stoeckhert | Mode i fracture toughness of rock under confining pressure[END_REF]. Published results on Indiana limestone are shown in Figure 3.11 and the data suggest the fracture toughness dependency on the confining pressure with a linear relationship. Provided increasing reports on confining pressure dependent fracture toughness, theoretical works to describe the mechanisms focus mainly on process zones ahead of the fracture as a culprit of the "apparent" fracture toughness including Dugdale type process zone [START_REF] Zhao | Determination of in situ fracture toughness[END_REF][START_REF] Sato | Cohesive crack analysis of toughness increase due to confining pressure[END_REF], Barenblatt cohesive zone model [START_REF] Allan M Rubin | Tensile fracture of rock at high confining pressure: implications for dike propagation[END_REF], and Dugdale-Barenblatt tension softening model [START_REF] Hashida | Numerical simulation with experimental verification of the fracture behavior in granite under confining pressures based on the tension-softening model[END_REF][START_REF] Fialko | Numerical simulation of high-pressure rock tensile fracture experiments: Evidence of an increase in fracture energy with pressure?[END_REF]. The burst experiment developed by Abou-Sayed [START_REF] Abou-Sayed | An Experimental Technique for Measuring the Fracture Toughness of Rocks under Downhole Stress Conditions[END_REF] is one of the most important methods to determine the critical stress intensity factor of rocks subject to confining pressure in the petroleum industry as the geometry closely represents actual downhole conditions of hydraulic fracturing stimulation (Figure 3.12). A hydraulic internal pressure is applied on a jacketed borehole of the thick-walled cylinder with pre-cut notches. Also, a confining pressure is applied on the outer cylinder. 
The inner and outer pressures are increased while keeping a constant ratio of the outer to the inner pressure until complete failure of the sample occurs, at which point the inner and outer pressures abruptly equilibrate to the ambient pressure. This test has great advantages in terms of sample preparation, absence of fluid leak-off into the rock, and ease of measurement at various confining pressures. In this section, we first revisit the derivation of the stress intensity factor and analyze the stability of fracture growth from actual burst experiment results. The subsequent analytical results indicate that fracture growth is not necessarily unstable and can have a stable phase in our experiments. In fact, stable fracture propagation has also been observed in past studies with PMMA samples [START_REF] Clifton | Determination of the critical-stress-intensity factor KIc from internally pressurized thick-walled vessels -The critical-stress-intensity factor of thick-walled rock specimens is determined from the pressure at failure[END_REF] and with sandstone and shale rocks without confining pressure [START_REF] Zhixi | Determination of rock fracture toughness and its relationship with acoustic velocity[END_REF].
Evaluation and computation of the stress intensity factor for the burst experiment
Under Griffith's theory and for a given geometry (a, b, L) (see Figure 3.12), the fracture stability is governed by
K I (P i , L, b, a, P o ) ≤ K Ic
where K Ic is a material property named the critical fracture toughness. The stress intensity factor (SIF) denoted K I is such that, K I < 0 when crack lips interpenetrate and K I ≥ 0 otherwise. Let us define dimensionless parameters as,
w = b/a,   l = L/(b − a),   r = P_o/P_i   (3.10)
Hence, the dimensionless crack stability becomes
K*_I(1, l, w, r) ≤ K_Ic/(P_i √(aπ))   (3.11)
where K * I (1, l, w, r) = K I (1, l, w, r)/ √ aπ. Necessarily, the inner pressure must be positive
P i > 0 to propagate the crack.
For a given thick wall ratio w and pressure confinement r, we are able to evaluate the fracture toughness of the material by computing K * I if the experiment provides a value of the inner pressure P i and the crack length L at the time when the fracture propagates. The difficulty is to measure the fracture length in-situ during the experiment whose technique is yet to be established. However the burst experiment should be designed for unstable crack propagation. The idea is to maintain the crack opening by keeping the tensile load at the crack tips all along the path, so that the sample bursts (unstable crack propagation) after initiation. Therefore the fracture toughness is computed for the pre-notch length and the critical pressure measured.
Let us study the evolution of K*_I(1, l, w, r) with the crack length l for various parameters (w, r), in order to identify the crack propagation stability regimes.
Using Linear Elastic Fracture Mechanics (LEFM), the burst problem, denoted (B), is decomposed into the following elementary problems: a situation where pressure is applied only on the inner cylinder, called the jacketed problem (J), and a problem with only a confining pressure applied on the outer cylinder, named (C). This decomposition is illustrated in Figure 3.13. Therefore, the SIF for (B) can be obtained by superposition as
K_I^{B*}(1, l, w, r) = K_I^{J*}(1, l, w) − r K_I^{C*}(1, l, w)   (3.12)
where K_I^{C*}(1, l, w) is positive for a negative applied external pressure P_o. In Abou-Sayed [START_REF] Abou-Sayed | An Experimental Technique for Measuring the Fracture Toughness of Rocks under Downhole Stress Conditions[END_REF] the burst problem is decomposed following Figure 3.14, such that the decomposition is approximated by the jacketed problem (J) and the unjacketed problem (U), in which the fluid pressurizes all internal sides. We get the following SIF,
K_I^{B*}(1, l, w, r) ≈ K_I^{J*}(1, l, w) − r K_I^{U*}(1, l, w)   (3.13)
where K U * I (1, l, w) ≥ 0 for a positive P o applied in the interior of the geometry. Note that in our decomposition, no pore pressure (P p ) is considered in the sample, i.e. a drain evacuates the embedded pressure in the rock.
Figures 3.13 and 3.14: schematic decompositions of the burst problem (B) into the jacketed (J) and confining (C) sub-problems, and its approximation by the jacketed (J) and unjacketed (U) sub-problems, for a sample of inner diameter 2a, outer diameter 2b and notch length L.
The normalized stress intensity factors for the jacketed and unjacketed problems have been derived in Clifton [START_REF] Clifton | Determination of the critical-stress-intensity factor KIc from internally pressurized thick-walled vessels -The critical-stress-intensity factor of thick-walled rock specimens is determined from the pressure at failure[END_REF]. Figure 3.15 shows a good agreement between our results (computational SIF based on the G_θ method) and the ones provided by Clifton [START_REF] Clifton | Determination of the critical-stress-intensity factor KIc from internally pressurized thick-walled vessels -The critical-stress-intensity factor of thick-walled rock specimens is determined from the pressure at failure[END_REF]. The G_θ technique [START_REF] Destuynder | Sur une interprétation mathématique de l'intégrale de Rice en théorie de la rupture fragile[END_REF][START_REF] Suo | On the application of g (θ) method and its comparison with de lorenzi's approach[END_REF] estimates the derivative of the potential energy with respect to the crack length through a virtual perturbation of the domain (vector θ) in the crack propagation direction. The SIF is then calculated from the computed G using Irwin's formula K_I = √(EG/(1 − ν²)).
Influence of the confinement and wall thickness ratio on stability of the initial crack
Based on the above result we compare K C * I with K U * I (Figures 3.13 and 3.14), and we found out their relative error is less than 15% for l ∈ [.2, .8] and w ∈ {3, 7, 10}. So, in a first approximation both problems are similar.
For the burst experiment, fracture propagation occurs when (3.11) becomes an equality, thus P_i = K_Ic/(K_I^{B*} √(aπ)). A decreasing K_I^{B*} induces a growing P_i; conversely, a growing K_I^{B*} implies a decreasing inner pressure, which contradicts the burst experiment set up (monotonically increasing pressure). Consequently the fracture growth is unstable (brutal) where K_I^{B*} grows with the crack length, and stable where it decreases. In Figure 3.16 we show the evolution of the stress intensity factor with the crack length for various wall thickness ratios and confinements. We observe that when the confining pressure ratio r increases the fracture propagation is contained, and the same effect is noticed for larger thick wall ratios w.
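The stability argument can be automated once the normalized SIF curves are available: the sign of the slope of K_I^{B*}(l) separates stable from unstable propagation, and Irwin's formula converts a computed energy release rate into a SIF. The sketch below illustrates this classification; the arrays kj and kc are hypothetical placeholders standing for the G_θ post-processing output, not actual data.

import numpy as np

def k_from_g(G, E, nu):
    # Irwin formula in plane strain: K_I = sqrt(E G / (1 - nu^2))
    return np.sqrt(E * G / (1.0 - nu ** 2))

def burst_sif(kj, kc, r):
    # superposition (3.12): K_I^{B*} = K_I^{J*} - r K_I^{C*}
    return kj - r * kc

def propagation_regimes(l, k_bstar):
    # unstable where K^{B*} increases with l, stable where it decreases,
    # arrested (no propagation) where it is negative
    dk = np.gradient(k_bstar, l)
    return np.where(k_bstar < 0.0, "arrest",
                    np.where(dk > 0.0, "unstable", "stable"))

# placeholder data standing in for computed K^{J*}, K^{C*} curves
l = np.linspace(0.1, 0.9, 9)
kj = 1.0 + 0.5 * l          # hypothetical jacketed SIF
kc = 2.0 + 1.5 * l          # hypothetical confinement SIF
print(propagation_regimes(l, burst_sif(kj, kc, r=0.2)))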
Depending on where the pre-fracture tip is located we can identify different fracture regimes, summarized in three possible evolutions (see Figure 3.17): (a) For this evolution K_I^{B*} is strictly increasing, thus for any pre-fracture length l_0 the sample will burst. The fracture initiates once the pressure reaches the critical value, then propagates through the sample until failure. A sudden drop of the pressure is measured, the signature of the initiation pressure. By recording this pressure P_i the fracture toughness K_Ic is calculated using equation (3.11).
(b) With a pre-fracture l_0 ≥ l_SU, this leads to the same conclusion as (a).
However, for l_US ≤ l_0 ≤ l_SU the fracture propagation is stable. To get an estimate of the fracture toughness, we then need to track the fracture and measure its length; otherwise the measurement is vain. A risky shortcut is to assume that the fracture length at initiation of the burst is the inflection point l_SU. The reasons are that the critical point can have a plateau shape, leading to an imprecise measure of l_SU, and that, since the rock is not a perfectly brittle material, l_SU can be slightly different.
(c) For Griffith's theory and any cohesive model which assumes compressive forces in front of the notch tips, fracture propagation is not possible. Of course other initiation criteria are possible, for instance a critical stress.
Application to sandstone experiments
A commercial rock mechanics laboratory provided fracture toughness results for different pressure ratios on sandstones, with the geometries summarized in Table 3.3. As their end-caps and hardware are built for a 0.25" center hole diameter with a 2.5" diameter sample, the w values are restricted to 9. Considering no pore pressure and applying stricto sensu the equation

P_i √(aπ) K_I^{B*}(1, l, w, r) = K_Ic,

with l taken equal to the dimensionless pre-notch length l_0 and P_i equal to the recorded critical pressure P_ic, we obtain that the fracture toughness K_Ic is influenced by the confining pressure r, as reported in the last column of Table 3.3. (In Figure 3.17, l_US denotes the critical point from unstable to stable crack propagation, and vice versa for l_SU; the fracture does not propagate at the stop point denoted l_ST.) However, the evolution of K_I^{B*} with respect to l in Figure 3.18 (right) shows that all confining experiments (Id 1-5) have a compressive area in front of the fracture tips. Moreover, the pre-fractures are located in the stable propagation regime, so that the sample cannot break according to Griffith's theory.
Table 3.3: Sample ID, 2a [in], w, P_ic [Psi], r, l_0 and K_Ic [Psi √in] for the sandstone burst experiments.

The wall thickness ratio w of the cylinder and the confining pressure ratio r play a fundamental role in the crack stability regime: to obtain a brutal fracture propagation after initiation, smaller (w, r) are required. A possible choice is to take w = 3 for r = {1/8, 1/6}, as shown in Figure 3.18.
A stable-unstable regime is observed for (r = 1/6, w = 5). We performed a numerical simulation with the phase-field model for hydraulic fracturing to verify its ability to capture the bifurcation point. For that we fix K_Ic = 1, the geometric parameters (a = 1, b = 5, l_0 = .15, r = 1/5) and the internal length ℓ = 0.01. Then, by pressurizing the sample (pressure-driven loading), the damage grows until the critical point. After this loading, the damage jumps to the external boundary and breaks the sample. The normalized SIF is computed as K_Ic/(P_i √(aπ)) for different fracture lengths and reported in Figure 3.19.
Remark 4 The stability analysis can also be done by volume-driven injection into the inner cylinder using phase-field models. This provides stable fracture propagation, and the normalized stress intensity factor can be reconstructed from the simulation outputs.
Conclusion
Throughout this chapter we have shown that the phase-field model for hydraulic fracturing is a good candidate to simulate fracture propagation in the toughness dominated regime. The verification is done for a single fracture and for a multi-fracking propagation scenario.
Simulations show that, energetically speaking, propagating all fractures simultaneously is the worst case, contrary to the growth of a single fracture in the network, which is the best total energy minimizer. Moreover, the bifurcation to a loss of symmetry (e.g. single fracture tip propagation) is intensified by the density of fractures in the network. The pressure-driven burst experiment focuses on fracture stability. The confining pressure and the thickness of the sample may contain the fracture growth. By carefully selecting those two parameters (confinement pressure ratio and geometry), the experiment can be designed to measure the fracture toughness of rocks.
In short, those examples illustrate the potential of the variational phase-field models for hydraulic fracturing, associated with the minimization principle, to account for stable volume-driven fractures. The loss of symmetry in the multi-fracking scenario is a relevant example to illustrate the concept of variational argument. The same results are confirmed by coupling this model with fluid flow, as detailed in Chukwudozie [START_REF] Chukwudozie | Application of the Variational Fracture Model to Hydraulic Fracturing in Poroelastic Media[END_REF].
Substituting (3.19) into (3.14), the fluid pressure is obtained.
p = ( 2 G_c² E / (π V) )^{1/3}   (3.20)
Similarly, the fracture length during propagation is obtained by substituting (3.16) into (3.14).
l = ( E V² / (4π G_c) )^{1/3}   (3.21)
Penny-Shaped Fracture (3d domain):
For a penny-shaped fracture in a 3d domain, the fracture volume is
V = 16 p l³ / (3E)   (3.22)
where l denotes the radius, while the critical fluid pressure [START_REF] Sneddon | The opening of a griffith crack under internal pressure[END_REF] is
p_c = √( π G_c E / (4 l_0) )   (3.23)
For an initial fracture radius l 0 , the critical volume is,
V_c = √( 64 π l_0⁵ G_c / (9E) )   (3.24)
If one follows a procedure similar to that for the line fracture, we will obtain the following relationships for the evolution of the fluid pressure and fracture radius
p = ( π³ G_c³ E² / (12 V) )^{1/5},   l = ( 9 E V² / (64π G_c) )^{1/5}   (3.25)
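For convenience, the closed form relations of this appendix can be gathered into a short routine returning the pressure and the fracture size as functions of the injected volume, for both the line and the penny-shaped fracture. This is a plain transcription of (3.20)-(3.25); the pre-critical branches assume Sneddon's opening volume relations (V = 2πpl²/E for the line fracture and (3.22) for the penny-shaped one).

import math

def line_fracture(V, l0, E, gc):
    # 2d line fracture: below V_c the crack does not grow, above it p ~ V^(-1/3)
    pc = math.sqrt(E * gc / (math.pi * l0))
    Vc = 2.0 * math.pi * pc * l0 ** 2 / E
    if V <= Vc:
        return E * V / (2.0 * math.pi * l0 ** 2), l0
    p = (2.0 * gc ** 2 * E / (math.pi * V)) ** (1.0 / 3.0)     # eq. (3.20)
    l = (E * V ** 2 / (4.0 * math.pi * gc)) ** (1.0 / 3.0)     # eq. (3.21)
    return p, l

def penny_fracture(V, l0, E, gc):
    # 3d penny-shaped fracture: below V_c the radius stays at l0, above it p ~ V^(-1/5)
    pc = math.sqrt(math.pi * gc * E / (4.0 * l0))              # eq. (3.23)
    Vc = math.sqrt(64.0 * math.pi * l0 ** 5 * gc / (9.0 * E))  # eq. (3.24)
    if V <= Vc:
        return 3.0 * E * V / (16.0 * l0 ** 3), l0
    p = (math.pi ** 3 * gc ** 3 * E ** 2 / (12.0 * V)) ** 0.2  # eq. (3.25)
    l = (9.0 * E * V ** 2 / (64.0 * math.pi * gc)) ** 0.2      # eq. (3.25)
    return p, l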
Chapter 4. Variational models of perfect plasticity

Definition 7 (Generalized standard plasticity models)
i. A choice of independent state variables, which includes one or more internal variables.
ii. A convex set in which the thermodynamical forces lie.
Concerning i. we choose the plastic strain tensor (symmetric) p and the infinitesimal total deformation denoted e(u). The total strain is the symmetrical part of the spatial gradient of the displacement u, i.e.
e(u) = (∇u + ∇^T u)/2.
Kinematic admissibility requires the total strain to be the sum of the elastic strain, denoted ε, and the plastic strain: e(u) = ε + p.
For ii. consider a free energy density ψ a differentiable convex state function which depends on internal variables. Naturally, thermodynamical forces are defined from the free energy by
σ = ∂ψ/∂e (e, p),   τ = −∂ψ/∂p (e, p).   (4.1)
Commonly the free energy takes the form ψ(e, p) = ½ A(e(u) − p) : (e(u) − p), where A is the Hooke's law tensor. It follows that σ = τ = A(e(u) − p); however, for clarity we continue to use τ. The internal variables (e, p) and their duals (σ, τ) are second order symmetric tensors and become n × n symmetric matrices, denoted M^n_s, after a choice of an orthonormal basis, with n the space dimension of the domain (Ω ⊂ R^n). To complete the second statement ii., let K be a non-empty closed convex subset of M^n_s in which τ lies. This subset is called the elastic domain for τ. Assume that K is fixed and time independent, and that its boundary is described by the convex yield function f_Y : M^n_s → R, defined by
τ ∈ K = {τ* ∈ M^n_s : f_Y(τ*) ≤ 0}.   (4.2)
Precisely, for any τ that lies in the interior of K, denoted int(K), the yield function is strictly negative; otherwise τ belongs to the boundary, denoted ∂K, and the yield function vanishes:
f_Y(τ) < 0 if τ ∈ int(K),   f_Y(τ) = 0 if τ ∈ ∂K.   (4.3)
Let us apply the normality rule to obtain the plastic evolution law. In the case where ∂K is differentiable, the plastic flow rule is defined as
ṗ = η ∂f_Y/∂τ (τ), with η = 0 if f_Y(τ) < 0 and η ≥ 0 if f_Y(τ) = 0,   (4.4)
where η is the Lagrange multiplier. Sometimes the convex K has corners and the outer normal cannot be defined (f Y is not differentiable), thus, the normality rule is written using Hill's principle, also known as maximum dissipation power principle, i.e.,
τ ∈ K, (τ -τ * ) : ṗ ≥ 0, ∀τ * ∈ K. (4.5)
This is equivalent to saying that ṗ lies in the outer normal cone of K at τ,
ṗ ∈ N_K(τ) := {q ∈ M^n_s : q : (τ* − τ) ≤ 0, ∀τ* ∈ K}.   (4.6)
However we prefer to introduce the indicator function of τ ∈ K, and write equivalently the normality rule as, ṗ lies in the subdifferential set of the indicator function. For that, the indicator function is,
I_K(τ) = 0 if τ ∈ K,   +∞ if τ ∉ K   (4.7)
and is convex by construction. The normality rule is recovered by applying the definition of subgradient, such that, ṗ is a subgradient of I K at a point τ ∈ K for any τ * ∈ K, given by,
τ ∈ K, I_K(τ*) ≥ I_K(τ) + ṗ : (τ* − τ), ∀τ* ∈ K   ⇔   ṗ ∈ ∂I_K(τ), τ ∈ K   (4.8)
where the set of all sub-gradients at τ is the sub-differential of I K at τ and is denoted by ∂I K (τ ). At this stage of the analysis, Hill's principle is equivalent to convex properties of the elastic domain K and the normality plastic strain flow rule.
For τ ∈ K, Hill ⇔ ṗ ∈ N K (τ ) ⇔ ṗ ∈ ∂I K (τ ) (4.9)
Dissipation of energy during plastic deformations
All the ingredients are now in place: we have the variables (u, p) and their duals (σ, τ), the latter lying in the convex set K, and the plastic evolution law ṗ ∈ ∂I_K(τ).
It is convenient to compute the plastic dissipated energy during a plastic deformation process. For that, the dissipated plastic power density can be constructed from the Clausius-Duhem inequality. To construct such dissipation energy let us define first the support function
q ∈ M^3_s → H(q) := sup_{τ∈K} {τ : q} ∈ (−∞, +∞]   (4.10)
The support function is convex and positively 1-homogeneous,
H(λq) = λH(q), ∀λ > 0, ∀q ∈ M n s (4.11)
and it satisfies the triangle inequality, i.e. H(q_1 + q_2) ≤ H(q_1) + H(q_2) for every q_1, q_2 ∈ M^n_s.   (4.12)
The support function of the plastic strain rate, H(ṗ), is null if the plastic flow is zero, non-negative when 0 ∈ K, and can take the value +∞ when K is not bounded. Using the Clausius-Duhem inequality for an isothermal transformation, the dissipation power is defined by
D = σ : ė − ψ̇,   (4.13)
and the second law of thermodynamics enforces the dissipation to be non-negative,
D = τ : ṗ ≥ 0. (4.14)
Using Hill's principle, the definition of the support function and some convex analysis, one can show that the plastic dissipation is equal to the support function of the plastic flow.
D = H( ṗ) (4.15)
The starting point to prove (4.15) is the Hill's principle or equivalently the plastic strain flow rule.
For τ ∈ K, τ : ṗ ≥ τ * : ṗ, ∀τ * ∈ K. By passing the right term to the left and taking the supremum over all ṗ ∈ M n s , we get,
sup ṗ∈M n s {τ : ṗ -H( ṗ)} ≥ 0. (4.18)
Since K is a non-empty closed convex set and H(ṗ) is convex and lower semi-continuous, we have built the convex conjugate of H in the sense of Legendre-Fenchel. Moreover, one observes that the conjugate of the support function is the indicator function, given by
I_K(τ) := sup_{ṗ ∈ M^n_s} {τ : ṗ − H(ṗ)} = 0 if τ ∈ K,   +∞ if τ ∉ K   (4.19)
Hence, the following equality holds for τ ∈ K,
D = τ : ṗ = H( ṗ). (4.20)
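As a concrete instance of the abstract objects above, for the von Mises criterion used later in this chapter the elastic domain is K = {τ : √(3/2 dev(τ):dev(τ)) ≤ σ_p} and the support function of a deviatoric flow reduces to H(q) = σ_p √(2/3 q:q). The small numerical check below, written under these assumptions, verifies the identity D = τ : ṗ = H(ṗ) for an arbitrary deviatoric flow direction.

import numpy as np

sigma_p = 1.0

def dev(t):
    return t - np.trace(t) / 3.0 * np.eye(3)

def support_vonmises(q):
    # H(q) = sup_{tau in K} tau : q = sigma_p * sqrt(2/3 q:q) for deviatoric q
    return sigma_p * np.sqrt(2.0 / 3.0 * np.tensordot(q, q))

# a deviatoric plastic flow direction and the stress realizing the supremum
p_dot = dev(np.diag([1.0, -0.3, 0.1]))
tau = sigma_p * np.sqrt(2.0 / 3.0) * p_dot / np.linalg.norm(p_dot)
print(np.tensordot(tau, p_dot), support_vonmises(p_dot))  # both equal: D = tau:p_dot = H(p_dot)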
Remark 5 The conjugate subgradient theorem says that, for K a non-empty closed convex set and τ ∈ K,
ṗ ∈ ∂I K (τ ) ⇔ D = τ : ṗ = H( ṗ) + I K (τ ) ⇔ τ ∈ ∂H( ṗ)
Finally, once the plastic dissipation power is defined, integrating it over a time interval [t_a, t_b] for a smooth evolution of p gives the dissipated plastic energy,
D(p; [t_a, t_b]) = ∫_{t_a}^{t_b} H(ṗ(s)) ds   (4.21)
This problem is rate independent because the dissipation does not depend on the strain rate, i.e. D(ė, ṗ) = D(ṗ), and it is positively 1-homogeneous.
Variational formulation of perfect plasticity models
Consider a perfect elasto-plastic material with free energy ψ(e, p) occupying a smooth region Ω ⊂ R^n, subject to a time dependent boundary displacement ū(t) on a Dirichlet part ∂_D Ω of its boundary. For the sake of simplicity the domain is stress free and no body force applies, so that σ · ν = 0 on the complementary portion ∂_N Ω = ∂Ω \ ∂_D Ω, where ν denotes the outward normal vector. Assume the initial state of the material to be (e_0, p_0) = (0, 0) at t = 0. The internal variables e(u) and p are supposed to be continuous-time solutions of the quasi-static evolution problem. At each time the body is in elastic equilibrium with the prescribed loads, i.e. it satisfies the following equations,
σ = ∂ψ/∂e (e, p) in Ω,
τ = −∂ψ/∂p (e, p) ∈ ∂H(ṗ) in Ω,
div(σ) = 0 in Ω,
u = ū(t) on ∂_D Ω,
σ · ν = 0 on ∂_N Ω.
We set aside problems where the plastic strain may develop at the interface ∂_D Ω. The problem can equivalently be written in a variational formulation, based on two principles:
i. Energy balance
ii. Stability condition
Let the total energy density be defined as the sum of the elastic energy and the dissipated plastic energy, E_t(e(u), p) = ψ(e(u), p) − ψ(e_0, p_0) + D(p; [0, t]).
Energy balance
The concept of energy balance is related to the evolution of the state variables at a material point; it enforces the total energy rate to be equal to the mechanical power at each time, i.e. Ė_t = σ_t : ė_t.   (4.22)
The total energy rate is,
Ė_t = ∂ψ/∂e(e_t, p_t) : ė_t + ∂ψ/∂p(e_t, p_t) : ṗ_t + H(ṗ_t),   (4.23)
and using the definitions τ = −∂ψ/∂p and σ = ∂ψ/∂e, we obtain
τ_t : ṗ_t = sup_{τ∈K} {τ : ṗ_t}   (4.24)
Stability condition for the plastic strain
The stability condition for p is finding stable p t ∈ M n s for a given loading deformation e t . We propose to approximate the continuous time evolution by a time discretization, such that,
0 = t_0 < ⋯ < t_i < ⋯ < t_N = t_b, with max_i |t_i − t_{i-1}| → 0 in the limit. At the current time t_i = t, let the material be in the state e_{t_i} = e and p_{t_i} = p, with previous state (e_{t_{i-1}}, p_{t_{i-1}}). The discretized plastic strain rate is ṗ_t ≈ (p − p_{t_{i-1}})/(t − t_{i-1}). During the lapse of time from t_{i-1} to t, the increment of dissipated plastic energy is ∫_{t_{i-1}}^{t} H(ṗ_t) ds ≈ H(p − p_{t_{i-1}}). Hence, taking into account all the previous small plastic dissipation events, the total dissipation is approximated by
D(p) := H(p -p t i-1 ) + D(p t i-1 ) (4.25)
At the current time, a plastic strain perturbation is performed at fixed total strain, changing the system from (e, p) to (e, q). The definition of the stability condition adopted here is written as a variation of the total energy between these two states: p stable for e given ⇔ ψ(e, q) + H(q − p_{t_{i-1}}) ≥ ψ(e, p) + H(p − p_{t_{i-1}}), ∀q ∈ M^3_s.   (4.26)
We wish to highlight that the adopted stability definition reduces, for infinitesimal perturbations, to the flow rule.
H(q − p_{t_{i-1}}) ≥ H(p − p_{t_{i-1}}) − (ψ(e, q) − ψ(e, p)), ∀q ∈ M^n_s, q ≠ p   (4.27)
Consider small variations of the plastic strain p in a given direction with growing total energy, such that for some h > 0 small enough the perturbed plastic strain remains in M^n_s. Using the Legendre transform, we get
τ ∈ ∂H(p -p t i-1 ) ⇔ (p -p t i-1 ) ∈ ∂I K (τ ). (4.29)
To recover the continuous-time evolution stability for p, divide by δt = t − t_{i-1} and let δt → 0. We recover the flow rule ṗ ∈ ∂I_K(τ), or equivalently, in the conjugate space, τ ∈ ∂H(ṗ).
Let us justify the adopted definition of stability by showing that no lower energy state can be reached for a given e_t. Without loss of generality, consider a continuous straight smooth path p(t) starting at p(0) = p and finishing at p(1) = q, such that

t ∈ [0, 1] → p(t) = (1 − t)p + tq, ∀q ∈ M^n_s.   (4.31)

For any τ* ∈ K, the dissipation along this path satisfies ∫_0^1 H(ṗ(s)) ds ≥ ∫_0^1 τ* : ṗ(s) ds = τ* : (q − p).
The right-hand side is path independent; taking the infimum over all plastic strain paths, we get
inf_{t ↦ p(t), p(0)=p, p(1)=q} ∫_0^1 H(ṗ(s)) ds ≥ τ* : (q − p)   (4.32)
The left-hand side does not depend on τ*; taking the supremum over all τ* ∈ K and applying the triangle inequality for any p_{t_{i-1}}, one obtains
inf_{t ↦ p(t), p(0)=p, p(1)=q} ∫_0^1 H(ṗ(s)) ds ≥ H(q − p) ≥ H(q − p_{t_{i-1}}) − H(p − p_{t_{i-1}}),   (4.33)
which justifies a posteriori the adopted definition of stability.
The stability condition for the displacement was derived in the first chapter; it simply recovers the equilibrium and constitutive equations of the elastic problem with the prescribed boundary conditions.
Numerical implementation and verification of perfect elasto-plasticity models
− ∫_{∂_N Ω} g(t) · u dH^{n−1},
where H^{n−1} denotes the (n − 1)-dimensional Hausdorff measure of the boundary. Typical plastic yield criteria used for metals are von Mises or Tresca, which are well known to bound only the deviatoric part of the stress and are therefore insensitive to any hydrostatic stress contribution. Consequently, the plastic strain rate is also deviatoric, ṗ ∈ dev(M^n_s), and it is not restrictive to assume that p ∈ dev(M^n_s). To be more precise, but without going into details, existence and uniqueness are given for the problem solved in the stress field, σ ∈ L²(Ω; M^n_s) (or e(u) ∈ L²(Ω; M^n_s)), with the yield surface constraint σ ∈ L^∞(Ω; dev(M^n_s)). Experimentally it is observed that plastic deformations concentrate into shear bands; from a macroscopic point of view this localization creates sharp surface discontinuities of the displacement field. In general the displacement field cannot be sought in a Sobolev space, but finds a natural representation in the space of bounded deformations, u ∈ BD(Ω), while the plastic strain becomes a Radon measure, p ∈ M(Ω ∪ ∂_D Ω; dev(M^3_s)). The problem of finding (u, p) minimizing the total energy and satisfying the boundary conditions is solved by finding stable trajectories of the state variables, i.e. stationary points. This quasi-static evolution problem is numerically approximated by solving the incremental time problem, i.e. for a given time interval [0, T] subdivided into (N + 1) steps we have
0 = t 0 < t 1 < • • • < t i-1 < t i < • • • < t N = T .
The discrete problem converges to the continuous-time evolution provided max_i(t_i − t_{i-1}) → 0, and the total energy at time t_i in the discrete setting is
E_{t_i}(u_i, p_i) = ∫_Ω [ ½ A(e(u_i) − p_i) : (e(u_i) − p_i) + D_i(p_i) ] dx − ∫_{∂_N Ω} g(t_i) · u_i dH^{n−1}
where,
D i (p i ) = H(p i -p i-1 ) + D i-1 (4.34)
for a prescribed u_i = ū_i on ∂_D Ω. Let i be the current time step; the problem is to find (u_i, p_i) minimizing the discrete total energy, i.e.

(u_i, p_i) := argmin_{u ∈ C_i, p ∈ M(Ω∪∂_D Ω; dev(M^3_s))} E_{t_i}(u, p)   (4.35)
where p = (ū_i − u) ⊙ ν on ∂_D Ω and C_i is the set of admissible displacements,
C i = {u ∈ H 1 (Ω) : u = ūi on ∂ D Ω}.
The total energy E(u, p) is quadratic and strictly convex in u and p separately. For a fixed u or p, the minimizer of E(•, p) or E(u, •) exists, is unique and can easily be computed. Thus, a natural algorithm technique employed is the alternate minimization detailed in Algorithm 3, where δ p is a fixed tolerance.
More precisely, at the loading time t_i, for a given p_i^j, we find u_i^j that minimizes E(u, p_i^j); noticing that the plastic dissipation energy does not depend on the strain e(u), we have
u_i^j := argmin_{u ∈ C_i} ∫_Ω ½ A(e(u) − p_i^j) : (e(u) − p_i^j) dx − ∫_{∂_N Ω} g(t) · u dH^{n−1}   (4.36)
This is a linear elastic problem. Then, for a given u_i^j, we find p on each element cell such that it minimizes E(u_i^j, p). This problem is not easy to solve in the primal formulation,
p_i^j := argmin_{p ∈ M(Ω∪∂_D Ω; dev(M^n_s))} ½ A(e(u_i^j) − p) : (e(u_i^j) − p) + H(p − p_{i-1}),
but from the previous analysis, the stability condition of this problem is
A(e(u_i^j) − p) ∈ ∂H(p − p_{i-1}). Using the Legendre transform, the stability of the conjugate problem is given by
(p -p i-1 ) ∈ ∂I K (A(e(u j i ) -p)).
One can recognize the flow rule in the discretized time. This is the stability condition of the problem,
p_i^j := argmin_{p ∈ M(Ω∪∂_D Ω; dev(M^n_s)), A(e(u_i^j)−p) ∈ K} ½ A(p − p_{i-1}) : (p − p_{i-1}).
The minimization with respect to u is a simple linear problem solved using preconditioned conjugate gradients, while the minimization with respect to p can be reformulated as a local projection onto the yield surface, performed cell by cell.

Algorithm 3 Alternate minimization for perfect plasticity at the step i
1: Let j = 0 and p^0 := p_{i-1}
2: repeat
3:    Solve the equilibrium, u^{j+1} := argmin_{u ∈ C_i} E_i(u, p^j)
4:    Solve the plastic strain projection on each cell, p^{j+1} := argmin_p E_i(u^{j+1}, p)
5:    j := j + 1
6: until |p^j − p^{j-1}|_{L∞} ≤ δ_p
7: Set u_i := u^j and p_i := p^j
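To make the structure of Algorithm 3 concrete, a Python sketch is given below for the von Mises criterion. It is only a sketch under stated assumptions: solve_elastic is a hypothetical callable standing for the linear finite element solve of (4.36), eps_of_u returns the cell-wise strain tensors, and the plastic update is the closed-form radial return of the trial stress onto the yield surface, applied cell by cell.

import numpy as np

def dev(t):
    return t - np.trace(t) / 3.0 * np.eye(3)

def vonmises_projection(eps, p_old, mu, sigma_p):
    """Closed-form minimizer of 1/2 A(eps - p):(eps - p) + H(p - p_old) on one cell.
    Only the shear modulus mu enters, since p is deviatoric."""
    s_trial = 2.0 * mu * dev(eps - p_old)                 # deviatoric trial stress
    norm = np.sqrt(1.5 * np.tensordot(s_trial, s_trial))  # von Mises equivalent stress
    if norm <= sigma_p:
        return p_old                                      # elastic step, no plastic flow
    dgamma = (norm - sigma_p) / (3.0 * mu)                # radial return along the trial deviator
    return p_old + 1.5 * dgamma * s_trial / norm

def alternate_minimization(eps_of_u, solve_elastic, p_old_cells, mu, sigma_p,
                           tol=1e-8, max_iter=500):
    p = [q.copy() for q in p_old_cells]
    for _ in range(max_iter):
        u = solve_elastic(p)                              # minimization with respect to u
        p_new = [vonmises_projection(e, q, mu, sigma_p)   # projection relative to p_{i-1}
                 for e, q in zip(eps_of_u(u), p_old_cells)]
        err = max(np.max(np.abs(a - b)) for a, b in zip(p_new, p))
        p = p_new
        if err <= tol:
            break
    return u, p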
Numerical verifications
A way to do a numerical verification is to recover the closed form solution of a bi-axial test in 3D provided in [START_REF] Gilles A Francfort | A case study for uniqueness of elasto-plastic evolutions: The bi-axial test[END_REF].
In the fixed orthonormal basis (e 1 , e 2 , e 3 ), consider a domain Ω = (-d/2, d/2) × (-l/2, l/2) × (0, l), (d < l), with the boundary conditions:
σ_11 = 0 on x_1 = ±d/2,
σ_22 = g_2 on x_2 = ±l/2,
σ_13 = σ_23 = 0 on x_3 = 0, l,
and, in addition,
u_3 = 0 on x_3 = 0,   u_3 = tl on x_3 = l.
Considering the classical problem to solve,
div(σ) = 0 in Ω,
σ = A(e(u) − p) in Ω,
e(u) = (∇u + ∇^T u)/2 in Ω,
constrained by a von Mises plasticity yield criterion,
√( (3/2) dev(σ) : dev(σ) ) ≤ σ_p
It is shown in [START_REF] Gilles A Francfort | A case study for uniqueness of elasto-plastic evolutions: The bi-axial test[END_REF] that the domain remains elastic until the plasticity is triggered at a critical loading time t c as long as
0 ≤ g_2 ≤ σ_p/√(1 − ν + ν²),   t_c = (1/(2E)) [ (1 − 2ν) g_2 + √(4σ_p² − 3g_2²) ]
where (E, ν) denote respectively the Young's modulus and the Poisson ratio. For 0 ≤ t ≤ t c the elastic solution stands for
σ(t) = g_2 e_2 ⊗ e_2 + (ν g_2 + E t) e_3 ⊗ e_3,
e(t) = −ν(1 + ν) (g_2/E) e_1 ⊗ e_1 + (1 − ν²) (g_2/E) e_2 ⊗ e_2 + t(−ν e_1 ⊗ e_1 − ν e_2 ⊗ e_2 + e_3 ⊗ e_3),

while for t ≥ t_c the solution reads

σ(t) = g_2 e_2 ⊗ e_2 + σ̄_3 e_3 ⊗ e_3,   σ̄_3 = ½ ( g_2 + √(4σ_p² − 3g_2²) ),
p(t) = (t − t_c) [ −(g_2 + σ̄_3)/(2σ̄_3 − g_2) e_1 ⊗ e_1 + (2g_2 − σ̄_3)/(2σ̄_3 − g_2) e_2 ⊗ e_2 + e_3 ⊗ e_3 ],
u(t) = [ −ν(1 + ν) g_2/E − ν t_c − (g_2 + σ̄_3)/(2σ̄_3 − g_2) (t − t_c) ] x_1 e_1
     + [ (1 − ν²) g_2/E − ν t_c + (2g_2 − σ̄_3)/(2σ̄_3 − g_2) (t − t_c) ] x_2 e_2 + t x_3 e_3.   (4.38)

A numerical simulation has been performed on a domain parametrized by l = .5 and d = .2, pre-stressed on opposite faces by g_2 = .5, with material parameters E = 1, σ_p = 1 and a Poisson ratio ν = .3. For those parameters, the numerical results and the exact solution are plotted in Figure 4.1 and match perfectly.
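The critical loading time and the limit axial stress of the closed form solution are easy to evaluate for the parameters used in the simulation; the short check below is only a numerical transcription of the formulas above with E = 1, ν = 0.3, σ_p = 1 and g_2 = 0.5.

import math

E, nu, sigma_p, g2 = 1.0, 0.3, 1.0, 0.5

# admissibility of the lateral pre-stress: 0 <= g2 <= sigma_p / sqrt(1 - nu + nu^2)
g2_max = sigma_p / math.sqrt(1.0 - nu + nu ** 2)

# critical loading time and limit axial stress
t_c = ((1.0 - 2.0 * nu) * g2 + math.sqrt(4.0 * sigma_p ** 2 - 3.0 * g2 ** 2)) / (2.0 * E)
sigma3_bar = 0.5 * (g2 + math.sqrt(4.0 * sigma_p ** 2 - 3.0 * g2 ** 2))

print(g2_max, t_c, sigma3_bar)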
One difficulty is to obtain closed form solutions for other geometries and plasticity criteria. The alternate minimization technique converges to the exact solution in this example for the von Mises criterion in 3D.
Conclusion
The adopted strategy to model a perfect elasto-plastic material is to prescribe the elastic stress domain (a closed convex set) through plastic yield functions, without dealing with corners, and to approximate the continuous evolution problem by discretized time steps. The implemented algorithm alternately solves the elastic problem and the plastic projection onto the yield surface. Hence, there is no difficulty in implementing other perfect plastic yield criteria. A verification is performed on the biaxial test for the von Mises yield criterion.
Chapter 5
Variational phase-field models of ductile fracture by coupling plasticity with damage

Phase-field models, referred to as gradient damage models of brittle fracture, are very efficient at predicting crack initiation and propagation in brittle and quasi-brittle materials [START_REF] Pham | Approche variationnelle de l'endommagement : I. les concepts fondamentaux[END_REF][START_REF] Pham | Approche variationnelle de l'endommagement : Ii. les modèles à gradient[END_REF][START_REF] Bourdin | Numerical experiments in revisited brittle fracture[END_REF][START_REF] Bourdin | Numerical implementation of a variational formulation of quasi-static brittle fracture[END_REF][START_REF] Bourdin | The variational approach to fracture[END_REF]. They were originally conceived as an approximation of Francfort and Marigo's variational formulation [START_REF] Francfort | Revisiting brittle fracture as an energy minimization problem[END_REF], which is based on Griffith's idea of competition between elastic and fracture energy. Their model inherits a fundamental limitation of Griffith's theory, namely that the displacement discontinuity lies within the damage localization strip, which is not what is observed during fracture nucleation in ductile materials. Moreover, they cannot be used to predict cohesive-ductile fractures since no permanent deformations are accounted for. Plasticity models [START_REF] Suquet | Sur les équations de la plasticité: existence et régularité des solutions[END_REF][START_REF] Salençon | Elasto-plasticité[END_REF][START_REF] Halphen | Sur les matériaux standard généralisés[END_REF][START_REF] Maso | Quasistatic crack growth in elasto-plastic materials: The two-dimensional case[END_REF][START_REF] Babadjian | Quasi-static evolution in nonassociative plasticity: the cap model[END_REF] are widely used to handle the aforementioned effects through the introduction of the plastic strain variable. To capture ductile fracture patterns, the idea is to couple the plastic strain coming from plasticity models with the damage in phase-field approaches to fracture.
The goal of this chapter is to extend the work of Alessi, Marigo and Vidoli [START_REF] Alessi | Variational approach to fracture mechanics with plasticity[END_REF][START_REF] Alessi | Gradient damage models coupled with plasticity: variational formulation and main properties[END_REF][START_REF] Alessi | Gradient damage models coupled with plasticity and nucleation of cohesive cracks[END_REF][START_REF] Alessi | Coupling damage and plasticity for a phase-field regularisation of brittle, cohesive and ductile fracture: One-dimensional examples[END_REF] by considering any associated perfect plasticity, and to provide a general algorithm to solve the problem in any dimension. We provide a qualitative comparison of crack nucleation in various specimens with published experimental results on metallic materials. We show the capability of the model to recover crack patterns characteristic of brittle and ductile fracture. Once the set of parameters is adjusted to recover ductile fracture, we focus solely on this regime to study the phenomenology of crack nucleation and propagation in mildly notched specimens.
The chapter is organized as follows. Section 5.1.1 starts by aggregating some experiments illustrating the mechanisms of ductile fracture, which constitute the basis of the numerical comparisons provided in the last part of this chapter. Section 5.1.2 is devoted to the introduction of variational phase-field models coupled with perfect plasticity and recalls some of their properties. Section 5.1.3 focuses on a one-dimensional bar in traction to derive the cohesive response of the material and draw some fundamental properties, similarly to [START_REF] Alessi | Variational approach to fracture mechanics with plasticity[END_REF][START_REF] Alessi | Gradient damage models coupled with plasticity: variational formulation and main properties[END_REF][START_REF] Alessi | Gradient damage models coupled with plasticity and nucleation of cohesive cracks[END_REF][START_REF] Alessi | Coupling damage and plasticity for a phase-field regularisation of brittle, cohesive and ductile fracture: One-dimensional examples[END_REF]. A numerical implementation technique to solve such coupled models is provided in Section 5.2. In the remainder we investigate the phenomenology of ductile fracture by performing simulations on various geometries, namely a rectangular specimen, a mildly notched 2d plane strain specimen and a 3d round bar, respectively exposed in the later sections of this chapter.

Numerous experimental evidences show a common phenomenology of fracture nucleation in ductile materials. To illustrate this, we have selected relevant experiments showing fracture nucleation and propagation in a plate and in a round bar.
For instance, in [START_REF] Spencer | The influence of iron content on the plane strain fracture behaviour of aa 5754 al-mg sheet alloys[END_REF] the role of ductility and the influence of the iron content on the formation of shear bands have been investigated. Experiments on the aluminum alloy AA 5754 Al-Mg show fracture nucleation and evolution in the thickness direction of the plate specimen, illustrated in Figure 5.2.
The tensile round bar is another widely used test to investigate ductile fracture. However, tracking fracture nucleation inside the material is a challenging task that requires special equipment, such as tomography imaging. Nevertheless, the results of Benzerga [START_REF] Benzerga | Ductile fracture by void growth to coalescence[END_REF][START_REF] Amine Benzerga | Synergistic effects of plastic anisotropy and void coalescence on fracture mode in plane strain[END_REF] and Luu [START_REF] Luu | Déchirure ductile des aciers à haute résistance pour gazoducs (X100)[END_REF] show pictures of crack nucleation and propagation inside those types of samples, see Figure 5.19. A simpler method is fractography, which consists in studying the fracture surfaces of the materials after failure of the samples.

5.1.2 Variational phase-field models coupled with perfect plasticity

The variational setting offers a powerful approach to study these problems theoretically and to solve them numerically. The coupling between both models is done at the level of the proposed total energy. We start by recalling some important properties of variational phase-field models interpreted as gradient damage models and of variational perfect plasticity.
Consider an elasto-plastic-damageable material with Hooke's law tensor A, occupying a region Ω ⊂ R^n in the reference configuration. The region Ω is subject to a time-dependent boundary displacement ū(t) on a Dirichlet part ∂_D Ω of its boundary and to a time-dependent surface load g(t) = σ · ν on the remainder ∂_N Ω = ∂Ω \ ∂_D Ω, where ν denotes the appropriate normal vector. A safe load condition is required on g(t) to set aside well-known issues in plasticity theory. For the sake of simplicity, body forces are neglected so that, at equilibrium, the stress satisfies div(σ) = 0 in Ω.
The infinitesimal total deformation e(u) is the symmetric part of the spatial gradient of the displacement field u, i.e.

e(u) = (∇u + ∇^T u)/2
Since the material undergoes permanent deformations, it is usual in small-strain plasticity to introduce the (symmetric) plastic strain tensor p, such that the kinematic admissibility is an additive decomposition, e(u) = ε + p, where ε is the elastic strain tensor.
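As a minimal illustration of these kinematic relations (an aside, assuming plain numpy arrays for the displacement gradient and plastic strain; the function names are illustrative only):

```python
import numpy as np

def total_strain(grad_u):
    """Infinitesimal strain e(u) = (grad u + grad u^T) / 2."""
    return 0.5 * (grad_u + grad_u.T)

def elastic_strain(grad_u, p):
    """Elastic strain from the additive split e(u) = eps + p."""
    return total_strain(grad_u) - p

# usage: a 2d displacement gradient and a trace-free plastic strain
grad_u = np.array([[1e-3, 2e-4],
                   [0.0, -5e-4]])
p = np.array([[2e-4, 0.0],
              [0.0, -2e-4]])
eps = elastic_strain(grad_u, p)
```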
The material depends on the damage variable α, which is bounded between two extreme states: α = 0 is the undamaged state of the material and α = 1 refers to the broken one. The damage deteriorates the material properties through an isotropic modulation of the Hooke's law tensor, a(α)A, where the stiffness function a(α) is continuous and decreasing with a(0) = 1 and a(1) = 0. In linearized elasticity the recoverable energy density of the material reads

ψ(e(u), α, p) := ½ a(α) A(e(u) − p) : (e(u) − p)
Consequently, the relation between the stress tensor σ and the strain is

σ = a(α) A(e(u) − p)

Plasticity occurs in the material once the stress reaches a critical value defined by the plastic yield function f_Y : M^{n×n}_s → R, convex and such that f_Y(0) < 0. We propose to couple the damage with the admissible stress set through the coupling function b(α), such that the stress is constrained by σ ∈ b(α)K, where K := {τ ∈ M^{n×n}_s : f_Y(τ) ≤ 0} is a non-empty closed convex set. The elastic stress domain is thus subject to an isotropic transformation by b(α), a state function of the damage. Naturally, to recover a stress-softening response, the coupling function b(α) is continuous and decreasing with b(0) = 1 and b(1) = η_b, where η_b is a residual. Considering associated plasticity, the plastic potential is equal to the yield function and plastic flow occurs once the stress hits the yield surface, i.e. σ ∈ b(α)∂K. At this moment the plastic evolution is driven by the normality rule, such that the plastic flow lies in the subdifferential of the indicator function I of b(α)K at σ, written as

ṗ ∈ ∂I_{b(α)K}(σ)

One can recognize Hill's principle by applying the definition of the subdifferential and of the indicator function. Since b(α)K is a non-empty closed convex set, using Legendre-Fenchel duality the conjugate form of the plastic flow rule is σ ∈ b(α)∂H(ṗ), where the plastic dissipation potential H(q) = sup_{τ∈K} {τ : q} is convex, subadditive and positively 1-homogeneous for all q ∈ M^{n×n}_s. The dissipated plastic energy is obtained by integrating the plastic dissipation power over time, such that,
φ_p := ∫_0^t b(α) H(ṗ(s)) ds   (5.1)
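As an illustration of the dissipation potential H (a direct computation given as an aside, not taken from the text), consider the Von Mises set used later in Section 5.3.1, K = {τ : ||τ||_eq ≤ σ_p}. Since the hydrostatic part of τ is unconstrained by this criterion while its deviatoric part is bounded, the support function is

H(q) = σ_p √((n−1)/n) ||q||   if tr q = 0,   and   H(q) = +∞   otherwise,

so that only trace-free (volume-preserving) plastic strain rates dissipate a finite amount of energy.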
The plastic dissipation is not the only one: we also have to take into account the surface energy produced by the fracture. Inspired by the phase-field models of brittle fracture [START_REF] Ambrosio | On the approximation of free discontinuity problems[END_REF][START_REF] Bourdin | The variational approach to fracture[END_REF][START_REF] Pham | Gradient damage models and their use to approximate brittle fracture[END_REF][START_REF] Pham | Stability of homogeneous states with gradient damage models: Size effects and shape effects in the three-dimensional setting[END_REF], we define the surface dissipation term as,
φ_d := ∫_0^t [ σ_c²/(2Ek) ( w'(α) α̇ + 2ℓ² ∇α · ∇α̇ ) + b'(α) α̇ ∫_0^t H(ṗ(s)) ds ] dt   (5.2)
where the first term is the classical approximated surface energy of brittle fracture and the last term is artificially introduced so as to be combined with φ_p. Precisely, after summation of the free energy ψ(e(u), α, p), the work of the external forces, the dissipated plastic energy φ_p and the dissipated damage energy φ_d, the total energy takes the following form,
E_t(u, α, p, p̄) = ∫_Ω ½ a(α) A(e(u) − p) : (e(u) − p) dx − ∫_{∂_N Ω} g(t) · u dH^{n−1} + ∫_Ω b(α) ∫_0^t H(ṗ(s)) ds dx + σ_c²/(2Ek) ∫_Ω ( w(α) + ℓ² |∇α|² ) dx   (5.3)
where p̄ = ∫_0^t ṗ(s) ds is the cumulated plastic strain, which is embedded in the cumulated plastic dissipation energy ∫_0^t H(ṗ(s)) ds. The surface dissipation potential w(α) is a continuous increasing function such that w(0) = 0 and, up to a rescaling, w(1) = 1. Since the damage is a dimensionless variable, the introduction of ∇α requires ℓ > 0, a regularization parameter which has the dimension of a length. Note that the total energy (5.3) is composed of the two coupled dissipation potentials φ_p and φ_d, where
φ_p = ∫_Ω b(α) ∫_0^t H(ṗ(s)) ds dx,    φ_d = σ_c²/(2Ek) ∫_Ω ( w(α) + ℓ² |∇α|² ) dx.   (5.4)
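For concreteness, a pointwise evaluation of the density appearing in (5.3) can be sketched as follows. This is an illustration only: isotropic elasticity with Lamé parameters and the AT 1 functions of Table 5.1 are assumed, plast_diss stands for the local cumulated plastic dissipation ∫_0^t H(ṗ(s)) ds, and the external work term is omitted.

```python
import numpy as np

def energy_density(eps, p, alpha, grad_alpha, plast_diss,
                   lam, mu, sigma_c, E, k, ell, eta_b=1e-6):
    """Pointwise density of the total energy (5.3), illustrative sketch."""
    a = (1.0 - alpha) ** 2          # stiffness function a(alpha), AT1
    b = a + eta_b                   # coupling function b(alpha), AT1
    w = alpha                       # damage potential w(alpha), AT1
    e_el = eps - p                  # elastic strain e(u) - p
    # 1/2 a(alpha) A(e - p):(e - p) for isotropic A
    elastic = 0.5 * a * (lam * np.trace(e_el) ** 2
                         + 2.0 * mu * np.tensordot(e_el, e_el))
    plastic = b * plast_diss        # b(alpha) * cumulated plastic dissipation
    damage = sigma_c ** 2 / (2.0 * E * k) * (
        w + ell ** 2 * np.dot(grad_alpha, grad_alpha))
    return elastic + plastic + damage
```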
Taking p = 0 in (5.3), the admissible stress space is bounded by

A^{-1}σ : σ ≤ (σ_c²/(Ek)) max_α [ w'(α)/c'(α) ]

where E is the Young's modulus, c(α) = 1/a(α) is the compliance function, and k = max_α w'(α)/c'(α). Therefore, without plasticity and in the one-dimensional setting, the upper bound of the stress is σ_c.
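As an illustration (an aside, using the AT 1 functions introduced later in Table 5.1, a(α) = (1 − α)² and w(α) = α), one has c(α) = (1 − α)^{−2}, c'(α) = 2(1 − α)^{−3} and w'(α) = 1, so that

w'(α)/c'(α) = (1 − α)³/2,    k = max_α w'(α)/c'(α) = 1/2,

and the maximal admissible one-dimensional stress without plasticity is then indeed σ_c.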
A first conclusion is that the total energy (5.3) is composed of two coupled dissipation potentials associated with two yield surfaces, whose evolutions will be discussed later.
In the context of a smooth triplet of state variables ζ = (u, α, p), and since the above total energy (5.3) must be finite, we have α ∈ H¹(Ω) while e(u) and p belong to L²(Ω). However, it is observed experimentally that the plastic strain concentrates into shear bands. In our model, since ṗ ∈ b(α)K, the plastic strain concentration is driven by the damage localization, and both variables intensify on the same confined region denoted J(ζ), where J is a set of "singular parts" which a priori depends on all internal variables. Also, the damage is continuous across the normal surfaces of J(ζ), but not the damage gradient, which may jump. Accordingly, the displacement field cannot be sought in a Sobolev space, but finds a natural representation in the space of special functions of bounded deformation SBD if the Cantor part of e(u) vanishes, so that the strain measure can be written as,
e(u) = e(u) + [[u]] ⊙ ν H^{n−1}   on J(ζ(x))
where e(u) is the Lebesgue (absolutely continuous) part and ⊙ denotes the symmetrized tensor product. For the sake of simplicity, we consider the jump set of the displacement to be a smooth enough surface, i.e. the normal ν is well defined, and to have no intersection with the boundary, so that J(ζ) ∩ ∂Ω = ∅. The plastic strain turns into a Dirac measure on the surface J(ζ). Without going into details, the plastic strain lies in a non-conventional topological space of measures, the space of Radon measures denoted M.
Until now, the damage evolution has not been set up and the plastic flow rule is hidden in the adopted total energy. Let us highlight this by letting the total energy be governed by three principles: damage irreversibility, the stability of E_t(u, α, p) with respect to all admissible variables (u, α, p), and the energy balance.
We focus on the time-discrete evolution by considering a time interval [0, T] subdivided into (N + 1) steps such that

0 = t_0 < t_1 < ··· < t_{i−1} < t_i < ··· < t_N = T.
The following discrete problem converges to the continuous-time evolution provided max_i (t_i − t_{i−1}) → 0. At any time t_i, the sets of admissible displacement, damage and plastic strain fields, respectively denoted C_i, D_i and Q_i, are:
C_i = { u ∈ SBD(Ω) : u = ū(t_i) on ∂_D Ω },
D_i = { α ∈ H¹(Ω) : α_{i−1} ≤ α < 1 in Ω },
Q_i = { p ∈ M(Ω̃; M^{n×n}_s) : p = [[u]] ⊙ ν on J(ζ(x)) }   (5.5)
and because plastic strains may develop at the boundary, we know from prior works on plasticity [START_REF] Dal Maso | Quasistatic evolution problems for linearly elastic-perfectly plastic materials[END_REF] that we cannot expect the displacement boundary condition to be satisfied; thus we set p = (ū(t_i) − u) ⊙ ν on ∂_D Ω.
It is convenient to introduce a larger computational domain Ω̃ ⊃ Ω which includes the jump set and ∂_D Ω; this will become clearer below. Note that the damage irreversibility is encoded in the damage set D_i. The total energy of the time-discrete problem is composed of (5.3) on the regular part and of b(α)D_i([[u]] ⊙ ν) on the singular part, such that,
E_{t_i}(u, α, p) = ∫_{Ω\J(ζ)} ½ a(α) A(e(u) − p) : (e(u) − p) dx − ∫_{∂_N Ω} g(t_i) · u dH^{n−1} + ∫_{Ω̃} b(α) D_i(p) dx + σ_c²/(2Ek) ∫_{Ω\J(ζ)} ( w(α) + ℓ² |∇α|² ) dx   (5.6)
where
D_i(p) = H(p − p_{i−1}) + D_{i−1}   (5.7)
The total energy is thus defined over the regular and singular parts of the domain, and the evolution is governed by the following definition.

Definition 8 (Time-discrete coupled plasticity-damage evolution by local minimization)
At every time t_i, find a stable triplet (u_i, α_i, p_i) ∈ C_i × D_i × Q_i that satisfies the variational evolution:

i. Initial conditions: u_0 = 0, α_0 = 0 and p_0 = 0.

ii. The triplet ζ_i = (u_i, α_i, p_i) minimizes the total energy E_{t_i}(u, α, p).
iii. Energy balance:

E_{t_i}(u_i, α_i, p_i) = E_{t_0}(u_0, α_0, p_0) + Σ_{k=1}^{i} [ ∫_{∂_D Ω} (σ_k ν) · (ū_k − ū_{k−1}) dH^{n−1} − ∫_{∂_N Ω} (g(t_k) − g(t_{k−1})) · u_k dH^{n−1} ]   (5.8)
The damage and plasticity criteria are obtained by writing the necessary first-order optimality conditions of the minimization of E_{t_i}(u, α, p). Explicitly, for h > 0 small enough and (u_i + hv, α_i + hβ, p_i + hq) ∈ C_i × D_i × Q_i,

E_{t_i}(u_i + hv, α_i + hβ, p_i + hq) ≥ E_{t_i}(u_i, α_i, p_i)   (5.9)
Note that the displacement variation v around u_i might extend the jump set by J(v). The total energy E_{t_i}(u_i + hv, α_i + hβ, p_i + hq) is equal to,

∫_{Ω\(J(ζ_i)∪J(v))} ½ a(α_i + hβ) A( e(u_i + hv) − (p_i + hq) ) : ( e(u_i + hv) − (p_i + hq) ) dx
− ∫_{∂_N Ω} g(t_i) · (u_i + hv) dH^{n−1}
+ ∫_{Ω\(J(ζ_i)∪J(v))} b(α_i + hβ) D_i(p_i + hq) dx
+ ∫_{J(ζ_i)∪J(v)} b(α_i + hβ) D_i( [[u_i + hv]] ⊙ ν ) dH^{n−1}
+ σ_c²/(2Ek) ∫_{Ω\(J(ζ_i)∪J(v))} ( w(α_i + hβ) + ℓ² |∇(α_i + hβ)|² ) dx   (5.10)

Note that the plastic dissipation term is split over the regular and singular parts, and for simplicity we set aside the plastic strain localization on the Dirichlet boundary.
Equilibrium and kinematic admissibility:
Take β = 0 and q = 0 in (5.9) and (5.10), so that E_{t_i}(u_i + hv, α_i, p_i) ≥ E_{t_i}(u_i, α_i, p_i). Using (5.7), we only have to deal with the current plastic potential H, which is subadditive and 1-homogeneous. Hence, the fourth term in (5.10) becomes,

2. The damage yield criterion in the bulk,
f_D(σ_t, α_t(x), p̄_t(x)) := −(c'(α_t(x))/(2E)) σ_t² + (σ_c²/(2kE)) ( w'(α_t(x)) − 2ℓ² α_t''(x) ) + b'(α_t(x)) σ_p p̄_t(x) ≥ 0   (5.31)

3. The damage yield criterion at x_0,

b'(α_t(x_0)) [[u]](x_0) σ_p − 2ℓ² (σ_c²/(kE)) α_t'(x_0) ≥ 0   (5.32)
4. The damage yield criterion at ±L,

α_t'(−L) ≥ 0,  α_t'(L) ≤ 0   (5.33)
5. The plastic yield criterion in the bulk and on the jump,

f_Y(σ_t, α_t(x)) := |σ_t| − b(α_t(x)) σ_p ≤ 0   (5.34)

6. The plastic flow rule in the bulk,

b(α_t(x)) σ_p |ṗ_t(x)| − σ_t ṗ_t(x) = 0   (5.35)

7. The damage consistency in the bulk, on the jump and at the boundary,

f_D(σ_t, α_t(x), p̄_t(x)) α̇_t(x) = 0,   ( b'(α_t(x_0)) [[u]](x_0) σ_p − 2ℓ² (σ_c²/(kE)) α_t'(x_0) ) α̇_t(x_0) = 0,   α_t'(±L) α̇_t(±L) = 0   (5.36)

8. The energy balance at the boundary,

α_t'(±L) α̇_t(±L) = 0   (5.37)

9. The irreversibility, which applies everywhere in Ω,

0 ≤ α_t(x) ≤ 1,  α̇_t(x) ≥ 0   (5.38)

We restrict our study to r_Y = σ_c/σ_p > 1, meaning that the plastic yield surface is below the damage one. Consequently, after the elastic stage the bar behaves plastically. During the plastic stage, the accumulation of plastic strain decreases f_D until the damage yield criterion is reached. In the third stage both damage and plasticity evolve simultaneously, such that f_D = 0 and f_Y = 0 on the jump x_0. Of course, there is no displacement jump in the bar before the third stage. Let us expose the solution (u, α, p) for the elastic, plastic and plastic-damage stages.
The elastic response of the bar ends once the tension reaches u_t = σ_p/E. During this regime the damage and the plastic strain remain equal to zero. After this loading point, the plastic stage begins and we have a uniform p = p̄ = u_t − σ_p/E and α = 0 in Ω. Since b'(α) < 0 and p̄ increases during the plastic stage, the damage yield criterion f_D decreases until the inequality (5.31) becomes an equality. At this loading time both criteria are satisfied, f_Y = 0 and f_D = 0. Hence, plugging equation (5.34) into (5.31), we get,
−b'(α_t(x)) p̄_t(x) = (σ_p/E) [ −½ c'(α_t(x)) b²(α_t(x)) + (r_Y²/(2k)) ( w'(α_t(x)) − 2ℓ² α_t''(x) ) ]   (5.39)
By taking α t (x) = 0 in the above equation, we get the condition when the plastic stage ends, for a uniform plastic strain,
p̄ = u_t − σ_p/E = σ_p/((−b'(0)) E) [ (r_Y²/(2k)) w'(0) − ½ c'(0) b²(0) ]   (5.40)
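As an illustration (an aside, assuming the AT 1 functions of Table 5.1 with a negligible residual η_b), w'(0) = 1, c'(0) = 2, b(0) = 1, b'(0) = −2 and k = 1/2, so that (5.40) reduces to

p̄ = σ_p (r_Y² − 1)/(2E),

which vanishes as r_Y → 1, consistently with the restriction r_Y > 1 made above for a plastic stage to exist before damage.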
The last stage is characterized by the evolution of the damage. For a given x_0, take L long enough to avoid any damage perturbation at the boundary, so that the damage remains equal to zero at the extremities of the bar, α(±L) = 0, and assume it is maximum at x_0, α(x_0) = β. Let α' ≥ 0 over [−L, x_0) with α'(−L) = 0; multiplying equation (5.31) by 2α' and integrating over [−L, x_0), we get,
−(2E/σ_p) ∫_{−L}^{x_0} b'(α_t(x)) α_t'(x) p̄_t(x) dx = −( c(β) − c(0) ) σ_t²/σ_p² + (r_Y²/k) ( w(β) − ℓ² α_t'(x_0)² )   (5.41)
A priori, the cumulated plastic strain evolves along the part of the bar [−L, x_0), but since the maximum damage value β is reached at x_0 and the stress is uniform in the bar, we have σ_t(x) ≤ b(β)σ_p. In other words, the plasticity no longer evolves in the bar except at x_0, and p̄ is equal to (5.40). We obtain a first integral of the form,
ℓ² α_t'(x_0)² = −(k/r_Y²) ( c(β) − c(0) ) b²(β) + w(β) + 2 ( b(β) − b(0) ) p̄ Ek/(σ_p r_Y²)   (5.42)
We know that on the jump set, we have,
b'(β) [[u]](x_0) σ_p − 2ℓ² (σ_c²/(kE)) α_t'(x_0) = 0   (5.43)
Since β is known, the stress in the bar and the displacement jump at x_0 can be computed. We define the energy release rate as the energy dissipated by the damage process,
G_t := ∫_{Ω\{x_0}} [ σ_c²/(2kE) ( w(α_t(x)) + ℓ² α_t'(x)² ) + b(α_t(x)) σ_p p̄ ] dx + b(α_t(x_0)) σ_p [[u]](x_0)
and the critical value is given for complete damage localization once σ = 0.
Let us recall some fundamental properties that a(α), b(α) and w(α) must satisfy. Naturally, the stiffness function must satisfy a'(α) < 0, a(0) = 1 and a(1) = 0, and the damage potential function w'(α) > 0, w(0) = 0 and, up to a rescaling, w(1) = 1. The required elastic phase is obtained if α ↦ −a²(α)w'(α)/a'(α) is strictly increasing. The coupling function satisfies b'(α) < 0, which ensures that the damage yield surface decreases with the cumulated plastic strain, and b(0) = 1. For numerical reasons (a, b, w) must be convex with respect to α, which is not the case for the closed-form solution provided in [START_REF] Alessi | Gradient damage models coupled with plasticity and nucleation of cohesive cracks[END_REF] for AT_k, see Table 5.1. Consequently, we prefer the model named AT_1, for which a computed 1d solution (dark lines) is compared with the numerical simulation (colored lines) in Figure 5.3. The numerical implementation is detailed in the following Section 5.2. For this 1d example, the three stages described above are visible in the stress-displacement plot; in particular, the stress softening leads to a localization of the damage, in which a cohesive response is obtained at the center.
Table 5.1:

Name    a(α)                                  w(α)            b(α)
AT_1    (1 − α)²                              α               a(α) + η_b
AT_k    (1 − w(α)) / (1 + (c_1 − 1) w(α))     1 − (1 − α)²    (1 − w(α))^{c_2}
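A minimal Python sketch of the AT 1 constitutive functions of Table 5.1 (together with the derived compliance) could read as follows; the residual value eta_b used here is illustrative only:

```python
def a(alpha):
    """Stiffness modulation a(alpha) = (1 - alpha)^2 (AT1, Table 5.1)."""
    return (1.0 - alpha) ** 2

def w(alpha):
    """Damage dissipation potential w(alpha) = alpha (AT1)."""
    return alpha

def b(alpha, eta_b=1e-6):
    """Plasticity-damage coupling b(alpha) = a(alpha) + eta_b (AT1)."""
    return a(alpha) + eta_b

def c(alpha):
    """Compliance function c(alpha) = 1 / a(alpha)."""
    return 1.0 / a(alpha)
```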
5.2 Numerical implementation of the gradient damage models coupled with perfect plasticity
To numerically implement the gradient damage model coupled with perfect plasticity, it is common to discretize in time and space. For the time-discrete evolution we refer to Definition 8. However, in the numerical implementation we do not enforce the energy balance condition, following the spirit of [START_REF] Bourdin | The variational formulation of brittle fracture: numerical implementation and extensions[END_REF][START_REF] Bourdin | The Variational Approach to Fracture[END_REF]. Function spaces are discretized through standard finite element methods over the domain. Both the damage and displacement fields are projected onto linear Lagrange elements, whereas the plastic strain tensor is approximated by piecewise constant elements. By doing so we probably use the simplest finite elements to approximate the evolution problem. Conversely, the chosen finite element spaces cannot describe the jump set of u and the localization of p; it might however be possible to account for such effects by using discontinuous Galerkin methods instead. Nevertheless, as will be seen in the numerical simulations performed, the plasticity concentrates in a strip of a few elements once the damage localizes. Numerically we are not restricted to the Von Mises plasticity criterion only, but can handle any associated plasticity. Since a(α), b(α) and w(α) are convex, the total energy is separately convex with respect to each of the variables (u, α, p), but it is not convex. The proposed algorithm to solve the evolution is alternate minimization, which guarantees a decrease of the energy along the iterations, but the solution might not be unique. At each time step t_i, the minimization with respect to each variable is performed as follows:
i. For a given (α, p), the minimization of E with respect to u is an elastic problem with the prescribed boundary conditions. To solve it we employ preconditioned conjugate gradient solvers.
ii. The minimization of E with respect to α for fixed (u, p) and subject to irreversibility (α ≥ α_{i−1}) is solved using the variational inequality solvers provided by PETSc [START_REF] Balay | Efficient management of parallelism in object oriented numerical software libraries[END_REF][START_REF] Balay | PETSc users manual[END_REF][START_REF] Balay | PETSc Web page[END_REF].
iii. For fixed (u, α), the minimization of E with respect to p is not straightforward in its raw formulation; reformulated as a constrained optimization problem, however, it turns out to be a projection of the plastic strain onto a convex set, which is solved using the SNLP solvers provided by the open source snlp 1. The boundary of the elastic stress domain is described by a series of yield functions defining the convex set, without having to deal with non-differentiability issues, typically at corners. (Illustrative sketches of such a projection in the Von Mises case and of the nested loops of Algorithm 4 below are given after this list.)
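As an aside, the two Python sketches below illustrate how steps i.–iii. and the nested strategy of Algorithm 4 below can be organized. The function and variable names are illustrative only and are not those of mef90 or snlp. The first sketch is a closed-form radial-return projection specialized to the Von Mises criterion with isotropic elasticity, given as a simple alternative to the general SNLP-based projection; the second is a driver for the nested alternate minimization, where the three solver callbacks are assumed to be provided by the user.

```python
import numpy as np

def dev(t):
    """Deviatoric part of a square tensor."""
    n = t.shape[0]
    return t - np.trace(t) / n * np.eye(n)

def project_plastic_strain(eps, p_old, alpha, mu, lam, sigma_p, eta_b=1e-6):
    """Radial-return sketch: bring the stress back onto the damaged Von Mises
    set b(alpha)*K and return the updated plastic strain (isotropic elasticity
    with Lame parameters lam, mu; AT1 coupling b(alpha) assumed)."""
    n = eps.shape[0]
    b_a = (1.0 - alpha) ** 2 + eta_b                  # coupling b(alpha)
    e_el = eps - p_old                                # trial elastic strain
    sig_tr = lam * np.trace(e_el) * np.eye(n) + 2.0 * mu * e_el
    s_tr = dev(sig_tr)
    norm_s = np.linalg.norm(s_tr)
    eq_tr = np.sqrt(n / (n - 1.0)) * norm_s           # ||sigma||_eq of the text
    f_trial = eq_tr - b_a * sigma_p
    if f_trial <= 0.0 or norm_s == 0.0:
        return p_old                                  # elastic step
    # plastic multiplier restoring ||sigma||_eq = b(alpha)*sigma_p
    dgamma = f_trial / (2.0 * mu * np.sqrt(n / (n - 1.0)))
    return p_old + dgamma * s_tr / norm_s             # deviatoric flow direction

def solve_time_step(u, alpha, p, alpha_prev, p_prev,
                    solve_equilibrium, project_plastic_field, solve_damage,
                    tol_alpha=1e-5, tol_p=1e-6, max_outer=200, max_inner=200):
    """One loading step of the nested alternate minimization (Algorithm 4).
    The three callbacks (hypothetical names) solve, respectively, the elastic
    problem at fixed (alpha, p), the plastic projection at fixed (u, alpha),
    and the bound-constrained damage problem at fixed (u, p)."""
    for _ in range(max_outer):
        alpha_old = alpha.copy()
        for _ in range(max_inner):                         # elasto-plastic loop
            p_old = p.copy()
            u = solve_equilibrium(alpha, p)                # step 5
            p = project_plastic_field(u, alpha, p_prev)    # step 6
            if np.max(np.abs(p - p_old)) <= tol_p:         # step 8
                break
        alpha = solve_damage(u, p, lower_bound=alpha_prev)  # step 10, alpha >= alpha_{i-1}
        if np.max(np.abs(alpha - alpha_old)) <= tol_alpha:  # step 12
            break
    return u, alpha, p
```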
The retained strategy to solve the evolution problem is to use nested loops. The inner loop solves the elasto-plastic problem by alternating i. and iii. until convergence. The outer loop is composed of the previous procedure and ii., and exits once the damage has converged. This leads to the following Algorithm 4, where δ_α and δ_p are fixed tolerances. An argument in favor of this strategy is that the elasto-plastic minimization is fast whereas computing ii. is slow, but changing the loop order has not been tested. All computations were performed using the open source mef90 2. Verifications of the numerical implementation have been performed on the elasto-damage problem and the elasto-plasticity problem separately, considering three- and two-dimensional cases. The plasticity is verified against the existence and uniqueness of the bi-axial test for elasto-plasticity in [START_REF] Gilles A Francfort | A case study for uniqueness of elasto-plastic evolutions: The bi-axial test[END_REF]. The implementation of the damage has been checked against the propagation of a fracture in the Griffith regime, the optimal damage profile in 2d, and many years of development by Bourdin. The verification of the coupling is done by comparison with the one-dimensional solution of Section 5.1.3.

Algorithm 4 Alternate minimization algorithm at the step i
1: Let j = 0 and α_0 := α_{i−1}, p_0 := p_{i−1}
2: repeat
3:   Let k = 0 and p_0 := p_j
4:   repeat
5:     Solve the equilibrium, u_{k+1} := argmin_{u ∈ C_i} E_{t_i}(u, α_j, p_k)
6:     Solve the plastic strain projection on each cell,
       p_{k+1} := argmin_{ p ∈ M^n_s : a(α_j)A(e(u_{k+1}) − p) ∈ b(α_j)K } ½ A(p − p_{i−1}) : (p − p_{i−1})
7:     k := k + 1
8:   until ||p_k − p_{k−1}||_{L∞} ≤ δ_p
9:   Set u_{j+1} := u_k and p_{j+1} := p_k
10:  Compute the damage, α_{j+1} := argmin_{α ∈ D_i, α ≥ α_{i−1}} E_{t_i}(u_{j+1}, α, p_{j+1})
11:  j := j + 1
12: until ||α_j − α_{j−1}||_{L∞} ≤ δ_α
13: Set u_i := u_j, α_i := α_j and p_i := p_j

5.3 Numerical simulations of ductile fractures
5.3.1 Plane-strain ductility effects on fracture path in rectangular specimens
The model offers a large variety of possible behaviors depending on the choice of the functions a(α), b(α), w(α) and of the plastic yield function f_Y(τ) considered. From now on, the presentation is limited to AT_1 in Table 5.1 and to Von Mises plasticity, such that,
f_Y(σ) = ||σ||_eq − σ_p

where ||σ||_eq = √( (n/(n−1)) dev(σ) : dev(σ) ) and dev(σ) denotes the deviatoric stress.
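In code, this equivalent stress and yield function can be evaluated directly (a small numpy sketch, with the dimension n inferred from the stress tensor; an illustration only):

```python
import numpy as np

def vonmises_yield(sigma, sigma_p):
    """Von Mises yield function f_Y(sigma) = ||sigma||_eq - sigma_p,
    with ||sigma||_eq = sqrt( n/(n-1) * dev(sigma):dev(sigma) )."""
    n = sigma.shape[0]
    s = sigma - np.trace(sigma) / n * np.eye(n)   # deviatoric stress
    eq = np.sqrt(n / (n - 1.0) * np.tensordot(s, s))
    return eq - sigma_p
```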
Considering an isotropic material, the set of parameters to calibrate is (E, ν, σ_p, σ_c, ℓ), where the Young's modulus E, the Poisson ratio ν and the plastic yield stress σ_p can easily be characterized by experiments. The identification of σ_c and ℓ is less clear, but for brittle fracture nucleation they are estimated by performing experiments on notched specimens, see [START_REF] Tanné | Crack nucleation in variational phase-field models of brittle fracture[END_REF]. Hence, a parametric analysis for our model is to study the influence of the ratio r_Y = σ_c/σ_p and of ℓ independently.
Consider a rectangular specimen of length (L = 2) and width (H = 1) in a plane strain setting, made of a sound material with the set-up E = 1, ν = . Let us first perform numerical simulations by varying the stress ratio of the initial yield surfaces r_Y [START_REF] Alessi | Gradient damage models coupled with plasticity: variational formulation and main properties[END_REF][START_REF] Alessi | Coupling damage and plasticity for a phase-field regularisation of brittle, cohesive and ductile fracture: One-dimensional examples[END_REF], with an internal length ℓ = .02 smaller than the geometric parameters (L, H), leaving the other parameters unchanged.
The damage fields obtained after failure of the samples are summarized in Figure 5.5. A transition from a straight to a slant fracture is observed for increasing r_Y, similarly to the Ti glass alloy of Figure 5.1. A higher initial yield stress ratio induces a larger plastic strain accumulation, leading to a thicker damage localization strip. The measure of the fracture angle reported in Figure 5.5 does not take into account the turning crack path profile near free surfaces caused by the damage condition ∇α · ν = 0. Clearly, for the case σ_c < σ_p the fracture is straight and there is almost no accumulation of plastic strain. When plasticity develops, however, damage is triggered along one of the shear bands, resulting in a slant fracture observed in either direction but never in both at the same time. Now, let us pick one of these stress ratios, r_Y = 5 for instance, and vary the internal length ℓ ∈ [0.02, 0.2]. The stress vs. displacement curves are plotted in Figure 5.6 and show stress jumps of various amplitudes during the damage localization, due to the intensity of the snap-back. This effect is well known in phase-field models of brittle fracture and was pointed out by [START_REF] Alessi | Variational approach to fracture mechanics with plasticity[END_REF][START_REF] Alessi | Coupling damage and plasticity for a phase-field regularisation of brittle, cohesive and ductile fracture: One-dimensional examples[END_REF][START_REF] Alessi | Gradient damage models coupled with plasticity and nucleation of cohesive cracks[END_REF][START_REF] Pham | Approche variationnelle de l'endommagement : I. les concepts fondamentaux[END_REF][START_REF] Pham | Approche variationnelle de l'endommagement : Ii. les modèles à gradient[END_REF]. A consequence of this brutal damage localization is a sudden drop of the stress; when this happens the energy balance is not satisfied. Continuous and discontinuous energy evolutions are observed for ℓ = 0.2 and ℓ = 0.02 respectively, as plotted in Figure 5.7.
The attentive reader may notice that the plastic energy decreases during the damage localization, which seems to contradict the irreversibility of the accumulated dissipated plastic energy. Actually, the plotted curve is not exactly the dissipated plastic energy but a combination of damage and plasticity, such that a part of this energy is transformed into a surface energy contribution. Snap-shots of damage, accumulated plastic strain and damage in the deformed configuration are illustrated in Figure 5.9 for the different loading times (a, b, c, d) shown in Figure 5.6. The cumulated plastic strain is concentrated in a few mesh elements across the surface of discontinuity (the fracture center). Because damage and plasticity evolve together along this strip, it is not possible to dissociate the mechanisms coming from pure plasticity or damage independently. It can be interpreted as a mixture of permanent deformation and void growth with a mutual cause-and-effect relationship.
5.3.2 Plane-strain simulations on two-dimensional mild notched specimens
In the sequel we restrict our scope to the study of fracture nucleation and propagation in the ductile regime (r_Y = σ_c/σ_p large enough) for a mild notched specimen. Experimentally, this sample shape favors fracture around the smallest cross-section. Necking is a well-known instability phenomenon occurring during large deformations of a ductile material.
A consequence of necking in a specimen is a cross-sectional reduction, which gives a curved profile to the deformed sample. Since we are in a small deformation setting, necking cannot be recovered; thus we artificially pre-notch the geometry (sketched in Figure 5.10 with the associated Table 5.2) to recover a plastic strain concentration. For more realistic numerical simulations and comparisons with the pictures of the experiments on the Aluminum alloy AA 5754 Al-Mg in Figure 5.2, we set material properties (see Table 5.3) such that the internal length is in the range of the grain size, σ_c is chosen to recover 7% elongation, and (E, ν, σ_p) are given. We assume that the material follows the Von Mises perfect plasticity criterion and that the elastic stress domain shrinks from σ_p to the lower limit of 15% of σ_p. The experiments are built such that displacements are controlled at the extremities of the plate and observations are made in the sheet thickness direction. Hence, 2d plane strain theory is adopted for the numerical simulations. We have also studied two types of boundary conditions, clamped and rollers, respectively named set-up A and set-up B.

Two different crack patterns are obtained for the two boundary conditions considered: set-up B provides a shear-dominated slant fracture, with nucleation at the center and propagation along one of the shear bands, while for set-up A the fracture nucleates at the center, propagates along the specimen cross-section and bifurcates following the shear bands. The final crack patterns are a pure shear configuration and a slant-flat-slant path. Again, snap-shots of damage, cumulated plastic strain and damage in the deformed configuration are presented in Figure 5.12 and Figure 5.13 for set-up A and B respectively. The loading times highlighted by letters are reported in the stress vs. strain plot of Figure 5.11. The main phenomena are: (a) during the pure plastic phase there is no damage and the cumulated plastic strain is the sum of two large shear bands whose maximum value is located at the center; (b) the damage is triggered in the middle and develops following the shear bands in an "X" shape; (c) a macro-fracture nucleates at the center but some stiffness remains and the material is not broken; (d) failure of the specimen with the final crack pattern. Close similarities between the pictures of ductile fracture nucleation from the simulations and the experimental observations can be drawn. However, we were not able to capture cup-cone fractures. To recover the desired effect we introduced a perturbation in the geometry such that the parabola-shaped notch is no longer symmetric about the shortest cross-section axis, i.e. an eccentricity is introduced by taking ρ < 1, see Figure 5.10. In a sense there is no reason for necking to induce a perfectly symmetric mild notch specimen. Leaving all parameters unchanged and taking ρ = .9, we observed two crack patterns: shear-dominated and cup-cone for set-up B and set-up A respectively, illustrated in Figure 5.14. This non-symmetric profile with respect to the shortest cross-section axis implies a different stress concentration between the right and left sides of the sample, which consequently unbalances the intensity of the plastic strain concentration on both parts.
Since damage is guided by the dissipated plastic energy, we recover the cup-cone fracture, again with a macro-fracture nucleating at the center. The set-up B with ρ = .9 is not perturbed enough to produce a new crack path and remains in the shear-dominated mode.
5.3.3 Ductile fracture in a round notched bar
A strength of the variational approach is that it requires no modification to perform numerical simulations in three dimensions. This part is devoted to recovering common observations made on ductile fracture in a round notched bar, such as cup-cone and shear-dominated fracture shapes. The ductile fracture phenomenology at low triaxiality (defined as the ratio of the hydrostatic over the deviatoric stresses) has been investigated by Benzerga [START_REF] Benzerga | Ductile fracture by void growth to coalescence[END_REF]; relevant pictures of crack nucleation and propagation inside a round bar obtained with non-destructive techniques are summarized in Figure 5.19. Since we focus on the fracture phenomenology, we do not attribute physical values to the material parameters but pay attention to the yield stress ratio r_Y and the internal length ℓ. The internal length governs the thickness of the localization, which has to be small enough compared to the specimen radius to observe a distinct fracture. On the other hand, ℓ drives the characteristic mesh size, typically ℓ ∼ 3h, which constrains the numerical cost. For clarity the cumulated plastic strain will not be shown anymore, since it does not provide further information on the fracture path than the damage. Based on the above results, boundary conditions play a fundamental role in our simulations, so we consider two cases: an eccentric mild notched shape (ρ = .7) specimen in set-up A and set-up B, respectively associated with clamped and rollers boundary conditions. Both geometries are solids of revolution (about the tensile axis) based on the sketch of Figure 5.10 and Table 5.2. Those simulations were performed with 48 cpus during 48 hours on a 370 000-node mesh for 100 time steps, with the high performance computing resources provided by Louisiana State University3. Results of the numerical simulations are shown in Figures 5.17 and 5.18, where the fracture patterns are similar to those observed in the literature, see the pictures in Figures 5.15 and 5.16. An overview of the fracture evolution in the round bar is exposed in Figure 5.19. The ductile fracture phenomenology presented by Benzerga-Leblond [START_REF] Benzerga | Ductile fracture by void growth to coalescence[END_REF] shows voids growing and coalescing during the early stage of stress softening; then a macro-fracture nucleates at the center and propagates following shear lip formations. Numerical simulations at the loading time (a) for set-up A and B show a diffuse damage in the middle of the specimen, which is exactly a loss of stiffness in the material. This can be interpreted as a homogenization of the void density. A sudden macro-crack appears around the loading time (b), which corresponds to the observations made. From (b) to (c) the crack follows the shear lip formation in a shear-dominated or cup-cone crack pattern depending on the prescribed boundary conditions, clamped (set-up A) or rollers (set-up B). These numerical examples suggest that variational phase-field models of ductile fracture are capable of predicting crack nucleation and propagation in low triaxiality configurations, for the 2d plane strain specimen and the round bar, with the simple model considered.
Conclusion
In contrast with most of the literature on ductile fracture, we proposed a variational model coupling gradient damage models and perfect plasticity, following the seminal papers [START_REF] Alessi | Variational approach to fracture mechanics with plasticity[END_REF][START_REF] Alessi | Gradient damage models coupled with plasticity and nucleation of cohesive cracks[END_REF][START_REF] Alessi | Gradient damage models coupled with plasticity: variational formulation and main properties[END_REF][START_REF] Alessi | Coupling damage and plasticity for a phase-field regularisation of brittle, cohesive and ductile fracture: One-dimensional examples[END_REF]. In this chapter, we have investigated crack nucleation and propagation in multiple geometries for a simple choice of functions and Von Mises perfect plasticity. We confirmed observations reported elsewhere in the literature that, in ductile materials and for low triaxiality configurations, the fracture nucleates at the center of the specimen and propagates following shear bands before reaching free surfaces. Our numerical simulations also highlight that the observed crack patterns strongly depend on the prescribed boundary conditions and geometry, which govern where the plastic dissipated energy concentrates. The strength of the proposed phase-field model is its ability to handle both ductile and brittle fracture, which have mostly been kept separated like oil and water. The key parameter to capture this transition is the ratio of the initial damage yield surface to the plastic one. We showed that variational phase-field models are capable of qualitative predictions of crack nucleation and propagation in a range of mild notch geometries in two and three dimensions; hence, this model is a good candidate to address the aforementioned issues. Also, the energy balance is preserved when the fracture evolution is smooth, driven by the internal length. Of course, there are still many investigations to perform before claiming the superiority of the model, for instance fracture nucleation at a notch of a specimen (high triaxiality): since the hydrostatic pressure is unbounded for plasticity criteria such as Von Mises, the damage yield surface is hit first and a brittle response is expected. To obtain a cohesive response, a possible choice of plastic yield surface is a cap model closing the elastic domain in the hydrostatic direction of the stress space.
Chapter 6
Concluding remarks and recommended future work
In this dissertation, we studied the phenomena of fracture in various structures using phase-field models.
The phase-field models have been derived from Francfort and Marigo's variational approach to fracture, which was conceived as an approximation of Griffith's theory. In Chapter 1 we exposed a complete overview and the main properties of the model. In Chapter 2, we applied the phase-field models to study fracture nucleation in V- and U-notched geometries. Supported by numerous validations, we have demonstrated the ability of the model to make quantitative predictions of crack nucleation in mode I. The model is based on a general energy minimization principle and does not require any ad hoc criterion, only the adjustment of the internal length. Moreover, the model properly accounts for size effects that cannot be recovered from Griffith-based theory. In Chapter 3 we have shown that the model extended to hydraulic fracturing satisfies Griffith's propagation criterion and handles multi-fracking scenarios without difficulty. The fracture path is dictated by the minimization principle of the total energy. A loss of crack symmetry is observed in the case of a pressurized network of parallel fractures. In Chapter 4, we focused solely on perfect elasto-plasticity models and started from the classical approach to their variational formulation. A verification of the alternate minimization technique was exposed. The last chapter was devoted to combining the models exposed in the first and fourth chapters to capture cohesive and ductile fracture. Our numerical simulations have shown the capability of the model to retrieve the main features of ductile fracture in a mild notch specimen, namely the nucleation and propagation phenomena. Also, we have observed that crack paths are sensitive to the geometry and to the boundary conditions applied to it.
In short, we have demonstrated that variational phase-field models address some of the vexing issues associated with brittle fracture: scale effects, nucleation, existence of a critical stress and path prediction. By a simple coupling with the well-known perfect plasticity theory, we recovered the phenomenology of ductile fracture patterns.
Of course, there are still remaining issues that need to be addressed. Our numerical simulations do not enforce energy balance, as indicated by a drop of the total energy upon crack nucleation without strong singularities illustrated in Chapter 2. Perhaps extensions to phase-field models of dynamic fracture will address this issue.
Also, fracture in compression remains an issue in variational phase-field models. It is not clear whether any of these models is capable of simultaneously accounting for nucleation under compression and self-contact.
A recommended future work is to study ductile fracture in the spirit of Chapter 2. The idea is, by varying the yield stress ratio, to first recover the brittle initiation criterion and then to study ductile fracture for different notch angles.
3.1, 5.3.2 and 5.3.3. 5.1 Phase-field models to fractures from brittle to ductile 5.1.1 Experimental observations of ductile fractures It is common to separate fractures into two categories; brittle and ductile fractures with different mechanisms. However relevant experiments [110] on Titanium alloys glass show a transition from brittle to ductile fractures response (see Figure 5.1) by varying only one parameter: the concentration of Vanadium. Depending on the Vanadium quantity, they observed a brutal formation of a straight crack, signature of brittle material response for low concentrations. Conversely a smooth stress softening plateau is measured before failure for higher concentrations. The post mortem samples show a shear dominating fracture characteristic of ductile behaviors. / Plot of uniaxial tension test data with optical images of dogbone specimens post-failure. (top eering stress as a function of engineering strain is plotted from tensile tests on dogbone samples o series composites. Samples were loaded until failure at a constant strain rate of 0.2 mm/min. Curv n the x-axis to highlight differences in plastic deformation behavior between alloys. (top right) h of complete V0 dogbone sample after failure in tension. (bottom) Optical microscope images at ilure in deformed alloys V2-V10 and DV1.
Figure 5 . 1 :
51 Figure 5.1: Pictures produced by [110] show post failure stretched specimens of Ti-based alloys V x --Ti 53 -x/2 Zr 27 -x/2 Cu 5 Be 15 V x . From left to right: transition from brittle to ductile with a concentration of Vanadium respectively equal to 2%, 6% and 12%.
α)A(e(u)p) Plasticity occurs in the material once the stress reaches a critical value defined by the plastic yield function f Y : M n×n s → R convex such that f Y (0) < 0. We proposed to couple the damage with the admissible stress set through the coupling function b(α) such that, the stress is constrained by σ ∈ b(α)K, where K := {τ ∈ M n×n s s.t. f Y (τ ) ≤ 0} is a non empty close convex set. The elastic stress domain is subject to isotropic transformations by b(α) a state function of the damage. Naturally to recover a stress-softening response, Chapter 5. Variational phase-field models of ductile fracture by coupling plasticity with damage the coupling function b(α) is continuous decreasing such that b(0) = 1 and b(1) = η b , where η b is a residual. By considering associated plasticity the plastic potential is equal to the yield function and the plastic flow occurs once the stress hits the yield surface, i.e. σ ∈ b(α)∂K. At this moment, the plastic evolution is driven by the normality rule such that the plastic flow lies in the subdifferential of the indicator function denoted I at σ, written as, ṗ ∈ ∂I b(α)K (σ)
.34) 6 . 7 . 2 c( 5 . 36 ) 8 .
6725368 Plastic flow rule in the bulk, b(α t (x))σ p | ṗt (x)|σ t ṗt (x) = 0(5.35) The damage consistency in the bulk, jump and boundary,f D (α t (x), p t (x), pt (x)) αt (x) = 0 b (α t (x 0 )) u(x 0 ) σ p -2 σ kE α t (x 0 ) αt (x) = 0 α t (±L) αt (±L) = 0The energy balance at the boundary, α t (±L) αt (±L) = 0 (5.37)9. The irreversibility which applies everywhere in Ω, 0 ≤ α t (x) ≤ 1, αt (x) ≥ 0 (5.38)
Chapter 5 .Figure 5 . 3 :
553 Figure 5.3: Comparisons of the computed solution (dark lines) for AT 1 see Table5.1 with the numerical simulation (colored lines) for parameters E = 1, σ p = 1, = 0.15, σ c = 1.58, L = .5 and η b = 0. The (top-left) picture shows the stress-displacement evolution, the (top-right) plot is the displacement jump vs. the stress during the softening behavior. The (bottom-left) figure shows the damage profile during the localization for three different loadings. The (bottom-right) is the evolution of the energy release vs. the displacement jump also known as the cohesive law(Barenblatt).
Chapter 5 .Algorithm 4 1 :: repeat 3 :
5413 Variational phase-field models of ductile fracture by coupling plasticity with damage Alternate minimization algorithm at the step i Let, j = 0 and α 0 := α i-1 , p 0 := p i-1 2Let, k = 0 and p 0 := p j
Figure 5.4: Rectangular specimen in tension with rollers boundary conditions on the left-right extremities and stress-free on the remainder. The characteristic mesh size is h = ℓ/5.
Figure 5.5: Fracture path angle vs. the initial yield stress ratio r_Y. Transition from straight to slant crack, characteristic of a brittle-to-ductile fracture transition.
Figure 5.6: Stress vs. displacement plot for σ_c/σ_p = 5, showing the influence of the internal length on the stress jump amplitude, a signature of the snap-back intensity. Letters on the curve ℓ = .1 refer to loading times at which snapshots of α and p are illustrated in Figure 5.9.
Figure 5.9: Rectangular stretched specimen with rollers boundary displacement for parameters σ_c/σ_p = 5 and ℓ = .1, showing snapshots of damage, cumulated plastic strain and damage in the deformed configuration (the displacement magnitude is 1%) at different loading times referred to in the plot of Figure 5.6 for (a, b, c, d). The cumulated plastic strain, defined as p̄ = ∫_0^t ||ṗ(s)|| ds, has a piecewise linear color table with two pieces, [0, 14] for the homogeneous state and [14, 600] for visibility during the localization process. Moreover the maximum value is saturated.
Figure 5.10: Specimen geometry with nominal dimensions; typical meshes are ten times finer than the one illustrated above. Note that meshes in the center area of the geometry are refined with a constant size h. Also, a linearly growing characteristic mesh size is employed from the refined area to a coarser mesh at the boundary.
Figure 5.11: Plot of the stress vs. strain (tensile axis component) for the mild notch specimen with clamped and rollers interface conditions, respectively set-up A and set-up B.
Figure 5.12: Zoom on the center of the mild notched stretched specimen with clamped boundary displacement (set-up A) showing snapshots of damage, cumulated plastic strain and damage in the deformed configuration (the displacement magnitude is 1) at different loading times referred to in Figure 5.11 for (a, b, c, d). The cumulated plastic strain color table is piecewise linear with two pieces, [0, .35] for the homogeneous state and [.35, 2.5] for visibility during the localization process. Moreover the maximum value is saturated. The pseudo color turns white when α ≥ 0.995 for the damage on the deformed configuration figure.
Figure 5.14: Zoom on the center of the eccentric mild notched stretched specimen (ρ = .9) showing snapshots of damage, cumulated plastic strain and damage in the deformed configuration (the displacement magnitude is 1) at the failure loading time, for set-ups A and B. The cumulated plastic strain color table is piecewise linear with two pieces, [0, .35] for the homogeneous state and [.35, 2.5] for visibility during the localization process. Moreover the maximum value is saturated. The pseudo color turns white when α ≥ 0.995 for the damage on the deformed configuration figure.
and 5.18, where fracture patterns are similar to the ones observed in the literature (see pictures 5.15 and 5.16). An overview of the fracture evolution in the round bar is given in Figure 5.19.
Figure 5.15: Photo produced by [107] showing a cup-cone fracture in a post-mortem round bar.
Figure 5.16: Photo produced by [107] showing a shear-dominated fracture in a post-mortem round bar.
Figure 5.17: Snapshot of the damage in the deformed configuration for set-up A after failure, the two pieces next to each other.
Figure 5.18: Snapshot of the damage in the deformed configuration for set-up B after failure, the two pieces next to each other.
Figure 5.19: Picture in Benzerga-Leblond[START_REF] Benzerga | Ductile fracture by void growth to coalescence[END_REF] showing the phenomenology of ductile fracture in round notched bars of high strength steel: damage accumulation, initiation of a macroscopic crack, crack growth and shear lip formation. Numerical simulations show the overlapping stress vs. displacement blue and orange curves for set-up A and set-up B respectively, and snapshots of damage slices in the deformed round bar. The hot color table illustrates the damage; the red color turns white for α ≥ 0.95, which corresponds to less than 0.25% of stiffness.
Figure 2.11: Critical load in the three- and four-point bending experiments of an Al2O3-7%ZrO2 sample (left) and four-point bending of a PMMA sample (right) from[71] compared with numerical simulations using the AT 1 model and undamaged notch boundary conditions. Due to significant variations in measurements in the first set of experiments, each data point reported in[71] is plotted. For the PMMA experiments, average values are plotted. See Tables 2.10 and 2.11 in Appendix B for raw data.
Table 2.3: Critical generalized stress intensity factor k for crack nucleation at a notch as a function of the notch opening angle ω from Figure 2.5. Results for the AT 1 and AT 2 models with damaged (D) and undamaged (U) notch lips conditions. The results are obtained with numerical simulations on the Pac-Man geometry with (K_Ic)_eff = 1 and ℓ = 0.01 so that σ_c = 10, under plane-strain conditions with a unit Young's modulus and a Poisson ratio ν = 0.3.
0.0°   0.500 1.292 1.084 1.349 1.284
10.0°  0.500 1.308 1.091 1.328 1.273
20.0°  0.503 1.281 1.121 1.376 1.275
30.0°  0.512 1.359 1.186 1.397 1.284
40.0°  0.530 1.432 1.306 1.506 1.402
50.0°  0.563 1.636 1.540 1.720 1.635
60.0°  0.616 2.088 1.956 2.177 2.123
70.0°  0.697 2.955 2.704 3.287 3.194
80.0°  0.819 4.878 4.391 5.629 5.531
85.0°  0.900 6.789 5.890 7.643 7.761
89.9°  0.998 9.853 8.501 9.936 9.934
Table 2.4: Generalized critical stress intensity factors as a function of the notch aperture in soft annealed tool steel (AISI O1 at -50 °C). Experimental measurements from[START_REF] Strandberg | Fracture at V-notches with contained plasticity[END_REF] using SENT and TPB compared with Pac-Man simulations with the AT 1 model.
Experiments Undamaged notch Damaged notch
2ω   Mat   k_c(exp)   stdev   k_c(num)   rel. error   k_c(num)   rel. error
0°   H80 0.14 0.01 0.18 22.91 % 0.15 5.81 %
H100 0.26 0.02 0.34 24.62 % 0.28 7.61 %
H130 0.34 0.01 0.44 29.34 % 0.36 5.09 %
H200 0.57 0.02 0.74 47.60 % 0.61 6.53 %
90°  H80 0.20 0.02 0.22 12.65 % 0.21 4.73 %
H100 0.36 0.02 0.41 12.29 % 0.38 4.10 %
H130 0.49 0.05 0.54 11.33 % 0.50 0.50 %
H200 0.81 0.08 0.91 20.54 % 0.83 2.21 %
140° H80 0.53 0.06 0.53 0.37 % 0.48 9.26 %
H100 0.89 0.04 0.92 3.43 % 0.84 5.91 %
H130 1.22 0.10 1.25 2.95 % 1.13 7.48 %
H200 2.02 0.14 2.07 4.92 % 1.89 6.80 %
155° H80 0.86 0.07 0.83 3.63 % 0.75 14.36 %
H100 1.42 0.08 1.42 0.14 % 1.29 10.63 %
H130 1.90 0.10 1.95 2.82 % 1.76 8.06 %
H200 3.24 0.15 3.23 0.89 % 2.92 11.02 %
Table 2.5: Generalized critical stress intensity factors as a function of the notch aper-
ture in Divinycell® PVC foam. Experimental measurements from [94] using four point
bending compared with Pac-Man simulations with the AT 1 model.
Table 2.6: Generalized critical stress intensity factors as a function of the notch aperture in Duraluminium. Experimental measurements from[START_REF] Seweryn | Brittle fracture criterion for structures with sharp notches[END_REF] using single edge notch tension compared with Pac-Man simulations with the AT 1 model.
Experiments Undamaged notch Damaged notch
ω   Type   k_c(exp)   stdev   k_c(num)   rel. error   k_c(num)   rel. error
10° DENT 1.87 0.03 2.50 25.29 % 2.07 10.03 %
20° DENT 1.85 0.03 2.53 26.89 % 2.13 12.97 %
30° DENT 2.17 0.03 2.65 18.17 % 2.33 6.92 %
40° DENT 2.44 0.02 3.07 20.65 % 2.73 10.70 %
50° DENT 3.06 0.05 3.94 22.31 % 3.54 13.63 %
60° DENT 4.35 0.18 5.95 26.97 % 5.41 19.69 %
70° DENT 8.86 0.18 11.18 20.74 % 10.10 12.26 %
80° DENT 28.62 0.68 27.73 3.20 % 24.55 16.56 %
90° DENT 104.85 10.82 96.99 8.11 % 85.37 22.82 %
Table 2.7: Generalized critical stress intensity factors as a function of the notch aper-
ture in PMMA. Experimental measurements from [165] using single edge notch tension
compared with Pac-Man simulations with the AT 1 model.
Table 2.8: Generalized critical stress intensity factors as a function of the notch aperture in Aluminium oxide ceramics. Experimental measurements from[START_REF] Yosibash | Failure criteria for brittle elastic materials[END_REF] using three and four point bending compared with Pac-Man simulations.
Experiments Undamaged notch Damaged notch
2ω   a/h   k_c(exp)   stdev   k_c(num)   rel. error   k_c(num)   rel. error
60°  0.1 1.41 0.02 1.47 4.5% 1.29 9.3%
0.2 1.47 0.04 1.47 0.4% 1.29 14.0%
0.3 1.28 0.03 1.47 13.0% 1.29 0.4%
0.4 1.39 0.04 1.47 5.8% 1.29 7.8%
90°  0.1 2.04 0.02 1.98 3.0% 1.81 12.9%
0.2 1.98 0.01 1.98 0.0% 1.81 9.6%
0.3 2.08 0.03 1.98 5.1% 1.81 15.2%
0.4 2.10 0.03 1.98 5.9% 1.81 16.1%
120° 0.1 4.15 0.02 3.87 7.3% 3.63 14.3%
0.2 4.03 0.06 3.87 4.2% 3.63 11.0%
0.3 3.92 0.18 3.87 1.4% 3.63 8.0%
0.4 3.36 0.09 3.87 13.0% 3.63 7.4%
Table 2.9: Generalized critical stress intensity factors as a function of the notch aperture
in PMMA. Experimental measurements from [71] using three and four point bending
compared with Pac-Man simulations. The value a/h refers to the ratio depth of the
notch over sample thickness. See Figure 2.9 for geometry and loading.
Table 2.10: Critical load reported in[START_REF] Yosibash | Failure criteria for brittle elastic materials[END_REF] using three- and four-point bending experiments of an Al2O3-7%ZrO2 sample compared with numerical simulations using the AT 1 model and undamaged notch boundary conditions. TPB and FPB refer respectively to three point bending and four point bending. See Figure 2.9 for geometry and loading.
2ω   a/h   P_c(exp) [N]   stdev   P_c(num) [N]   rel. error
60°  0.1 608.50 6.69 630.81 3.5%
0.2 455.75 12.48 451.51 0.9%
0.3 309.00 8.19 347.98 11.2%
0.4 258.75 6.61 268.69 3.7%
90°  0.1 687.33 5.19 668.69 2.8%
0.2 491.00 2.94 491.41 0.1%
0.3 404.33 5.44 383.33 5.5%
0.4 316.00 4.24 297.48 6.2%
120° 0.1 881.75 4.60 822.22 7.2%
0.2 657.25 9.36 632.32 3.9%
0.3 499.60 25.41 499.50 0.0%
0.4 336.25 9.09 386.87 13.1%
Table 2.11: Load at failure reported in
288 10.07 1310 0 0.218 365
Id 1 0.279 8.93 9775 1/8 0.2025 1462
Id 2 0.258 9.65 14907 1/8 0.2060 1954
Id 3 0.273 9.12 11282 1/6 0.2128 1023
Id 4 0.283 8.82 17357 1/6 0.1102 2550
Id 5 0.257 9.70 18258 1/6 0.2022 1508
Table 3.3: Rock specimen dimensions provided by the commercial laboratory and calculated fracture toughness.
where A is the Hooke's law tensor. The domain is subject to a time-dependent stress boundary condition σ·ν = g(t) on ∂_N Ω. A safe-load condition on g(t) is prescribed to prevent issues in plasticity theory. The total energy is formulated for every x ∈ Ω and every t by
\[
E_t(u, p) = \int_\Omega \Big[ \tfrac{1}{2} A(e(u) - p) : (e(u) - p) + \int_0^t \sup_{\tau \in K} \{ \tau : \dot p(s) \}\, ds \Big]\, dx
\]
4.3.1 Numerical implementation of perfect plasticity models
Consider the same problem with stress conditions at the boundary and a free energy of the form ψ(e(u), p) = (1/2) A(e(u) − p) : (e(u) − p).
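At the discrete time level, the minimization with respect to p at fixed displacement reduces to the classical closest-point projection of the trial stress onto the admissible set K. A minimal Python sketch of this return map is given below for a Von Mises set K = {τ : |dev τ| ≤ √(2/3) σ_p} with linear isotropic elasticity; it illustrates the projection structure only and is not the actual routine implemented in the mef90 code.

import numpy as np

def radial_return(eps, p_old, lam, mu, sig_p):
    # One-step Von Mises return map (perfect plasticity, small strain).
    # eps: total strain (3x3 symmetric), p_old: previous (deviatoric) plastic strain,
    # lam, mu: Lame coefficients, sig_p: uniaxial yield stress.
    eye = np.eye(3)
    eps_e = eps - p_old                                    # trial elastic strain
    sig_tr = lam * np.trace(eps_e) * eye + 2 * mu * eps_e  # trial stress
    dev = sig_tr - np.trace(sig_tr) / 3 * eye              # deviatoric part
    norm_dev = np.linalg.norm(dev)
    radius = np.sqrt(2.0 / 3.0) * sig_p                    # Von Mises radius
    if norm_dev <= radius:                                 # elastic step
        return sig_tr, p_old
    n = dev / norm_dev                                     # flow direction (normality)
    dgamma = (norm_dev - radius) / (2 * mu)                # plastic multiplier
    return sig_tr - 2 * mu * dgamma * n, p_old + dgamma * n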
σ(t) = … e₂⊗e₂ + ν g₂ e₃⊗e₃ + tE e₃⊗e₃,   e(t) = … ⊗ e₂ + t(−ν e₁⊗e₁ − ν e₂⊗e₂ + e₃⊗e₃),   u(t) = … x₂ e₂ + t(−ν x₁ e₁ − ν x₂ e₂ + x₃ e₃)
\[
e(t) = -\nu(1+\nu)\,\frac{g_2}{E}\, e_1\otimes e_1 + (1-\nu^2)\,\frac{g_2}{E}\, e_2\otimes e_2, \qquad
u(t) = -\frac{\nu(1+\nu)}{E}\, g_2\, x_1\, e_1 + \frac{1-\nu^2}{E}\, g_2\, x_2\, e_2 \tag{4.37}
\]
After the critical loading, permanent deformation takes place in the structure and the
solution is
Table 5.1: Variety of possible models, where c_1, c_2 are constants.
Table 5.2: Specimen dimensions. All measures are in [mm]. The internal length is specified in Table 5.3.
We observed two patterns of ductile fractures depending on the boundary condition
E [GPa]   ν   σ_p [MPa]   σ_c [GPa]   ℓ [µm]
70   .33   100   2   400
Table 5.3: Material parameters used for AA 5754 Al-Mg.
[START_REF] Alessi | Gradient damage models coupled with plasticity and nucleation of cohesive cracks[END_REF]. The smallest damageable plastic yield surface is given for 5% of σ_p.
Table 5.4: Specimen dimensions. For the internal length refer to Table 5.5.
L H W r D d l h
4.5 2.2 1.05 .5 1.09 0.98 0.82 /2.5
E ν σ p r Y
1 .3 1 12 .03
Table 5.5: Parameters used for 3d simulations.
Karush-Kuhn-Tucker
available at https://www.bitbucket.org/bourdin/mef90-sieve
available at https://bitbucket.org/cmaurini/gradient-damage
https://en.wikipedia.org/wiki/Pac-Man
available at http://abs-5.me.washington.edu/snlp/ and at https://bitbucket.org/bourdin/snlp
http://www.hpc.lsu.edu
problem. Perhaps extensions into phase field models of dynamic fracture will address this issue.
Fracture in compression remains an issue in variational phase-field models. Although several approaches have been proposed, typically consisting in splitting the strain energy into damage-inducing and non-damage-inducing terms, none of the proposed splits is fully satisfying (see [START_REF] Amor | Regularized formulation of the variational brittle fracture with unilateral contact: Numerical experiments[END_REF][START_REF] Lancioni | The variational approach to fracture: A practical application to the french Panthéon[END_REF][START_REF] Li | Gradient Damage Modeling of Dynamic Brittle Fracture[END_REF] for instance). In particular, it is not clear whether either of these models is capable of simultaneously accounting for nucleation under compression and self-contact.
Finally, even though a significant amount of work has already been invested in extending the scope of phase-field models of fracture beyond perfectly brittle materials, to our knowledge, none of the proposed extensions has demonstrated its predictive power yet.
Appendix C
Single fracture in an infinite domain. Line fracture (2d domain):
The volume of a line fracture in a 2d domain is
where E' = E/(1 − ν²) in plane strain and E' = E in plane stress theory. Before the start of propagation, l = l_0 and the fluid pressure in this regime is
Consider an existing line fracture with an initial length l_0. Prior to fracture propagation, the fracture length does not change, so that l = l_0. Since the fracture length at the onset of propagation is l_0, the critical fluid pressure [START_REF] Sneddon | The opening of a griffith crack under internal pressure[END_REF] is
The critical fracture volume at the critical fluid pressure is obtained by substituting (3.16) into (3.14)
During quasi-static propagation of the fracture, l ≥ l 0 and the fracture is always in a critical state so that (3.16) applies. Therefore, the fluid pressure and fracture length in this regime are
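These two regimes can be evaluated directly. The Python sketch below assumes the standard plane-strain Sneddon expressions for a line crack of half-length a under uniform pressure p, namely V = 2πpa²/E' and p_c = √(G_c E'/(πa)); the exact prefactors, and whether the length variable denotes the half-length or the full length, depend on the conventions of Chapter 3, so this is an illustrative assumption rather than a transcription of equations (3.14)-(3.16).

import numpy as np

def crack_response(V, a0, Ep=1.0, Gc=1.0):
    # Pressure and half-length of a fluid-filled line crack for a prescribed volume V.
    p_c = np.sqrt(Gc * Ep / (np.pi * a0))        # critical pressure at a = a0
    V_c = 2.0 * np.pi * p_c * a0**2 / Ep         # volume at the onset of propagation
    if V <= V_c:
        # pre-propagation regime: a = a0, pressure grows linearly with the volume
        return Ep * V / (2.0 * np.pi * a0**2), a0
    # propagation regime: the crack stays critical, V = 2 * a**1.5 * sqrt(pi*Gc/Ep)
    a = (V / (2.0 * np.sqrt(np.pi * Gc / Ep))) ** (2.0 / 3.0)
    return np.sqrt(Gc * Ep / (np.pi * a)), a

# example: once propagation starts, the pressure decreases while the crack grows
for V in [0.5, 1.0, 2.0, 4.0]:
    p, a = crack_response(V, a0=1.0)
    print(f"V={V:4.1f}  p={p:.3f}  a={a:.3f}")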
Chapter 4
Variational models of perfect plasticity
Elasto-plasticity is a branch of solid mechanics which deals with permanent deformation in a structure once the stress reached a critical value at a macroscopic level. This topic is a vast research area and it is impossible to cover all contributions. We will focus on recalling basic mathematical and numerical aspects of perfect elasto-plasticity in small strain theory under quasi-static evolution problems. The perfect elasto-plastic materials fall into the theory of generalized standard materials developed by [START_REF] Halphen | Sur les matériaux standard généralisés[END_REF][START_REF] Suquet | Sur les équations de la plasticité: existence et régularité des solutions[END_REF][START_REF] Salençon | Elasto-plasticité[END_REF][START_REF] Marigo | From clausius-duhem and drucker-ilyushin inequalities to standard materials[END_REF][START_REF] Mielke | A mathematical framework for generalized standard materials in the rate-independent case[END_REF].
Recently, a modern formalism of perfect plasticity arose [START_REF] Dal Maso | Quasistatic evolution problems for linearly elastic-perfectly plastic materials[END_REF][START_REF] Solombrino | Quasistatic evolution problems for nonhomogeneous elastic plastic materials[END_REF][START_REF] Babadjian | Quasi-static evolution in nonassociative plasticity: the cap model[END_REF][START_REF] Francfort | Small-strain heterogeneous elastoplasticity revisited[END_REF][START_REF] Gilles A Francfort | A case study for uniqueness of elasto-plastic evolutions: The bi-axial test[END_REF], the idea is to discretize in time and find local minimizers of the total energy. In this chapter we focus only on perfect elasto-plasticity materials and set aside the damage. We start with concepts of generalized standard materials in the section 4.1. Then using some convex analysis [START_REF] Ekeland | Convex analysis and variational problems[END_REF][START_REF] Temam | Mathematical problems in plasticity[END_REF] we show the equivalence with the variational formulation presented in the section 4.2. The last part 4.3 presents an algorithm to solve perfect elasto-plasticity materials evolution problems. A numerical verification example is detailed at the end of the chapter.
Ingredients for generalized standard plasticity models
For the moment we set aside the evolution problem and we focus on main ingredients to construct standard elasto-plasticity models [START_REF] Germain | Continuum thermodynamics[END_REF][START_REF] Quoc | Stability and nonlinear solid mechanics[END_REF][START_REF] Suquet | Sur les équations de la plasticité: existence et régularité des solutions[END_REF]. This theory requires a choice of internal variables, a recoverable and a dissipation potentials energies where both functionals are convex. The driving forces (conjugate variables) usually the stress and the thermodynamical force lie respectively in the elastic and dissipation potential energies. For smooth evolutions of the internal variables, the material response is dictated by the normality rule of the dissipation potential convex set (flow law rule). By doing so, it is equivalent to find global minimizers of the total energy sum of the elastic and dissipation potential energies.
Consider that our material has a perfectly elasto-plastic response and can be modeled by the generalized standard materials theory, which is based on two statements.
In all of these experiments, the main observations reported on fracture nucleation are: (i) formation of shear bands in an "X" shape intensified by necking effects, (ii) void growth and coalescence, (iii) macro-crack nucleation at the center of the specimen, (iv) propagation of the macro-crack, straight across the cross section or following the shear bands depending on the experiment, and (v) failure of the sample when the fracture reaches the external free surfaces, stepping behind the shear band path. Observed fracture shapes are mostly cup-cones or shear-dominated.
The aforementioned ductile fracture features will be investigated throughout this chapter by considering similar geometries: rectangular samples, round notched specimens in plane strain conditions and round bars.
The pioneers of ductile fracture modeling are Dugdale [START_REF] Dugdale | Yielding of steel sheets containing slits[END_REF] and Barenblatt [START_REF] Barenblatt | The mathematical theory of equilibrium of cracks in brittle fracture[END_REF], with their contributions on cohesive fractures following Griffith's idea. Later on, a modern branch focused on micro-void nucleation and coalescence as the driving mechanism of ductile fracture. Introduced by Gurson [START_REF] A L Gurson | Continuum Theory of Ductile Rupture by Void Nucleation and Growth: Part I -Yield Criteria and Flow Rules for Porous Ductile Media[END_REF], a yield surface criterion evolves with the micro-void porosity density. Then came different improved and modified versions of this criterion, Gurson-Tvergaard-Needleman (GTN) [START_REF] Tvergaard | Material failure by void growth to coalescence[END_REF][START_REF] Tvergaard | Analysis of the cup-cone fracture in a round tensile bar[END_REF][START_REF] Needleman | An analysis of ductile rupture in notched bars[END_REF], Rousselier [START_REF] Rousselier | Ductile fracture models and their potential in local approach of fracture[END_REF], Leblond [START_REF] Leblond | An improved gurson-type model for hardenable ductile metals[END_REF], to name a few. The idea of coupling phase-field models of brittle fracture with plasticity to recover cohesive fractures is not new and has been developed theoretically and numerically in [START_REF] Alessi | Variational approach to fracture mechanics with plasticity[END_REF][START_REF] Conti | Phase field approximation of cohesive fracture models[END_REF][START_REF] Ambati | A phase-field model for ductile fracture at finite strains and its experimental verification[END_REF][START_REF] Ambati | Phase-field modeling of ductile fracture[END_REF][START_REF] Crismale | Viscous approximation of quasistatic evolutions for a coupled elastoplastic-damage model[END_REF][START_REF] Miehe | Phase field modeling of ductile fracture at finite strains. a variational gradient-extended plasticity-damage theory[END_REF][START_REF] Wadier | Mécanique de la rupture fragile en présence de plasticité : modélisation de la fissure par une entaille[END_REF][START_REF] Miehe | Phase field modeling of ductile fracture at finite strains. a variational gradient-extended plasticity-damage theory[END_REF].
Gradient damage models coupled with perfect plasticity
Our model is built on perfect plasticity and gradient damage models, which have proved efficient to predict crack initiation and propagation in brittle materials. Both mature models have been developed separately and are expressed in the variational formulation in the spirit of [START_REF] Mielke | Evolution of rate-independent systems[END_REF][START_REF] Bourdin | The variational approach to fracture[END_REF][START_REF] Piero | Variational Analysis and Aerospace Engineering, volume 33 of Optimization and Its Applications[END_REF][START_REF] Pham | Approche variationnelle de l'endommagement : I. les concepts fondamentaux[END_REF][START_REF] Pham | Approche variationnelle de l'endommagement : Ii. les modèles à gradient[END_REF][START_REF] Dal Maso | Quasistatic evolution problems for linearly elastic-perfectly plastic materials[END_REF], which provides a fundamental
(5.11)
Passing E_{t_i}(u_i, α_i, p_i) to the left-hand side, dividing by h and letting h → 0, we obtain,
By integrating by parts the integral term in e(v) over Ω \ (J(ζ_i) ∪ J(v)), we get,
(5.13)
where σ_i = a(α_i)A(e(u_i) − p_i). Without plasticity there is no cohesive effect, hence σ_i ν = 0 and the non-interpenetration condition leads to u_i · ν ≥ 0 on J(ζ_i); however, for a general cohesive model we have no information on σ_i ν on J(ζ_i). To overcome this issue we restrict our study to materials with tr(p_i) = 0; consequently, on the jump set J(ζ_i) we have tr(
The material can only shear along J(ζ_i), which is commonly accepted for the Von Mises and Tresca plasticity criteria. Thus, we have v · ν = 0 on J(ζ_i) and naturally σ_i ν = 0 on J(v). The last term of (5.13) stands for,
Combining the above equation, (5.12) and (5.13), considering J(v) = ∅, and by a standard localization argument, i.e. taking v concentrated around H^{n−1} and zero almost everywhere, we obtain that all the following integrals must vanish,
which leads to the equilibrium and the prescribed boundary conditions,
Note that the normal stress σ i ν is continuous across J(ζ i ) but the tangential component might be discontinuous.
2. Plastic yield criteria on the jump set: Since the above equation (5.15) holds, for J(v) = ∅ in (5.12) we have,
Thus, on each point of the jump set
The right hand side of the above inequality,
Considering Von Mises criterion we get on the left hand side,
Taking the maximum for all ν = 1, and letting σ i = a(α i )ς i we obtain that (5.17) becomes,
This condition is automatically satisfied for Von Mises since a(α i )/b(α i ) ≤ 1. We refer the reader to [START_REF] Alessi | Gradient damage models coupled with plasticity: variational formulation and main properties[END_REF][START_REF] Francfort | The elastoplastic exquisite corpse: A suquet legacy[END_REF][START_REF] Gilles A Francfort | A case study for uniqueness of elasto-plastic evolutions: The bi-axial test[END_REF] for more details.
3. Damage yield criterion in the bulk: Taking v = 0 and q = 0, thus J(v) = ∅, in the optimality condition (5.9), such that E_{t_i}(u_i, α_i + hβ, p_i) ≥ E_{t_i}(u_i, α_i, p_i), then dividing by h and passing to the limit we get, after integrating by parts the ∇α·∇β term over Ω \ J(ζ_i),
The above inequality holds for any β ≥ 0; hence all contributions must be non-negative, so that in Ω \ J(ζ_i) we have,
The damage yield criterion is composed of the classical part coming from gradient damage models and a coupling part in b'(α). When the material remains undamaged and plasticity occurs, the accumulation of dissipated plastic energy, combined with the property b'(α) < 0, progressively reduces the margin in the criterion, which becomes an equality at a critical plastic dissipation. At this moment the damage is triggered.
4. Damage yield criteria in the jump set: From (5.18) we have,
The damage gradient is discontinuous across the jump set J(ζ_i) due to the plastic strain concentration, and vice versa.
5. Damage boundary condition: From (5.18) we have,
6. Plastic yield criteria in the bulk: Take v = 0 and β = 0 thus J(v) = ∅ in the optimality condition (5.9) such that
where
Since ψ is differentiable, letting h → 0 and applying the subgradient definition to (5.22), we get −∂ψ/∂p_i ∈ b(α_i)∂H(p_i − p_{i−1}).
We recover the admissible stress constraint provided by the plastic yield surface.
The damage state shrinks the plastic yield surface, leading to a stress-softening property.
7. Flow rule in the bulk: Applying the convex conjugate (Legendre-Fenchel) to the above equation we get,
which is the flow rule in a discrete setting; by letting max(t_i − t_{i−1}) → 0 we recover the time-continuous one.
Damage consistency:
The damage consistency is recovered using the energy balance condition which is not fully exposed here. However the conditions obtained are:
Damage irreversibility in the domain:
The damage irreversibility constraint is,
All of these conditions are governing laws of the problem. The evolution of the yield surfaces is given by equations (5.19) and (5.23).
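Schematically, and with normalizing constants that depend on the particular choice of a(α), b(α) and w(α) in Table 5.1 (the gradient coefficient is written here as w₁ℓ²), the bulk conditions collected above can be summarized as
\[
\begin{aligned}
&\text{damage criterion:} && -\tfrac{1}{2}\,a'(\alpha)\,A\big(e(u)-p\big):\big(e(u)-p\big) \;-\; b'(\alpha)\,\sigma_p\,\bar p \;\le\; w'(\alpha) \;-\; 2\,w_1\ell^2\,\Delta\alpha,\\
&\text{plastic criterion:} && \sigma \in b(\alpha)K,\\
&\text{flow rule:} && \dot p \in \partial I_{b(\alpha)K}(\sigma).
\end{aligned}
\]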
Chapter 5. Variational phase-field models of ductile fracture by coupling plasticity with damage
Application to a 1d setting
The goal of this section is to apply the gradient damage model coupled with perfect plasticity in a 1d setting by considering a bar in traction. Relevant results are obtained through this example, such as the evolution of the two yield functions, the damage localization process and the role of the gradient damage jump term which governs the displacement jump set. We refer the reader to Alessi-Marigo [START_REF] Alessi | Variational approach to fracture mechanics with plasticity[END_REF][START_REF] Alessi | Gradient damage models coupled with plasticity: variational formulation and main properties[END_REF][START_REF] Alessi | Gradient damage models coupled with plasticity and nucleation of cohesive cracks[END_REF][START_REF] Alessi | Coupling damage and plasticity for a phase-field regularisation of brittle, cohesive and ductile fracture: One-dimensional examples[END_REF] for a complete exposition of this 1d application.
In the sequel, we consider a one-dimensional evolution problem for a homogeneous elasto-plastic-damageable bar Ω = [−L, L] stretched by time-controlled displacements at the boundaries, where the damage remains equal to zero. Assume that a unique displacement jump may occur on the bar, located at the coordinate x_0; thus the admissible displacement, damage and plastic strain sets are, respectively,
The state variables of the sound material are at the initial condition (u_0, α_0, p_0) = (0, 0, 0). In the one-dimensional setting the plastic yield criterion is |τ| ≤ σ_p, thus the plastic potential power is given by,
By integrating over the process, the dissipated plastic energy density is σ_p p̄, where the cumulated plastic strain is p̄ = ∫_0^t |ṗ_s| ds. Since no external force is applied, the total energy of the bar is given by
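In the notation above, and writing the gradient term with a coefficient w₁ℓ² (a normalization that depends on the specific model of Table 5.1), the energy takes schematically the form
\[
\mathcal{E}_t(u,\alpha,p,\bar p)=\int_{-L}^{L}\Big[\tfrac12\, a(\alpha)\,E\,(u'-p)^2+b(\alpha)\,\sigma_p\,\bar p+w(\alpha)+w_1\ell^2\,(\alpha')^2\Big]\,dx
\;+\; b(\alpha(x_0))\,\sigma_p\,\big|\llbracket u(x_0)\rrbracket\big|
\]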
where E is the Young's modulus and (•)' = ∂(•)/∂x. The quadruple of state variables (u_t, α_t, p_t, p̄_t) ∈ C_t × D × Q × M(Ω, R) is a solution of the evolution problem if the following conditions hold:
1. The equilibrium,
The stress is constant along the bar, hence it is a function of time only.
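In the homogeneous regime (no localization, hence no gradient or jump terms), these conditions can be integrated directly by a small strain-driven loop: elastic loading up to the plastic yield, a plastic plateau while the cumulated plastic strain grows, and damage-induced softening once the coupled damage criterion is reached. The Python sketch below assumes, for illustration only, a(α) = b(α) = (1 − α)² and w(α) = w₁α; the actual functions are those of Table 5.1.

import numpy as np

def homogeneous_response(eps_max=0.3, n_steps=400, E=1.0, sig_p=0.1, w1=0.02):
    # Strain-driven homogeneous response of the coupled 1d model.
    p, pbar, alpha = 0.0, 0.0, 0.0
    history = []
    for eps in np.linspace(0.0, eps_max, n_steps):
        for _ in range(50):                     # staggered fixed point at this strain
            # plastic update at fixed damage (1d return map)
            sig_tr = (1 - alpha) ** 2 * E * (eps - p)
            sig_y = (1 - alpha) ** 2 * sig_p
            if abs(sig_tr) > sig_y:
                dp = np.sign(sig_tr) * (abs(sig_tr) - sig_y) / ((1 - alpha) ** 2 * E)
                p += dp
                pbar += abs(dp)
            # damage update at fixed (eps, p): criterion
            # (1 - alpha) * (E*(eps - p)**2 + 2*sig_p*pbar) <= w1, with irreversibility
            drive = E * (eps - p) ** 2 + 2 * sig_p * pbar
            alpha_new = max(alpha, 1.0 - w1 / drive) if drive > w1 else alpha
            if abs(alpha_new - alpha) < 1e-10:
                break
            alpha = alpha_new
        history.append((eps, (1 - alpha) ** 2 * E * (eps - p), alpha, pbar))
    return history

With these illustrative parameters the stress grows elastically up to σ_p, stays on the plastic plateau while p̄ accumulates, and then softens once the damage criterion is met, which reproduces the qualitative behavior discussed above.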
Title: Variational phase-field models of brittle and ductile fracture: nucleation and propagation
Keywords: phase-field models of fracture, crack nucleation, size effects in brittle materials, gradient damage models, hydraulic fracturing, crack stability, plasticity models, variational approach, ductile fracture.
Abstract:
Numerical simulations of brittle cracks with gradient damage models are now becoming widespread. Theoretical and numerical results show that, in the presence of a pre-existing crack, propagation follows Griffith's criterion. For the one-dimensional problem, crack nucleation occurs at the critical stress, and this last property sets the internal length parameter.
In this work, we focus on the crack nucleation phenomenon for commonly encountered geometries that do not admit closed-form solutions. We show that for U- and V-notches the crack initiation load varies continuously between the value predicted by a critical stress criterion and the one given by the material toughness. A series of verification and validation exercises on different materials is carried out for the two geometries considered. We then consider an elliptic defect in an infinite or elongated domain to illustrate the ability of the model to account for material and structural size effects.
In a second step, this model is extended to hydraulic fracturing. A first verification of the model is performed by stimulating a single pre-existing crack through the injection of a given amount of fluid. We then study the simulation of a parallel network of cracks. The results obtained show that only one crack of the network is activated and that this type of configuration satisfies the principle of least energy. The last example focuses on crack stability in a burst experiment at imposed pressure for the oil industry. This rock burst experiment is carried out in the laboratory in order to reproduce the confinement conditions encountered downhole.
The last part of this work focuses on ductile fracture by coupling the phase-field model with perfect plasticity models. Thanks to the variational structure of the problem, we describe the numerical implementation retained for parallel computing. The simulations carried out show that, for a slightly notched geometry, the phenomenology of ductile cracks, for instance nucleation and propagation, is in agreement with that reported in the literature.
Title : Variational phase-field models from brittle to ductile fracture: nucleation and propagation Keywords: Phase-field models of fracture, crack nucleation, size effects in brittle materials, validation & verification, gradient damage models, hydraulic fracturing, crack stability, plasticity model, variational approach, ductile fracture Abstract : Phase-field models, sometimes referred to as gradient damage, are widely used methods for the numerical simulation of crack propagation in brittle materials. Theoretical results and numerical evidences show that they can predict the propagation of a pre-existing crack according to Griffith's criterion. For a one-dimensional problem, it has been shown that they can predict nucleation upon a critical stress, provided that the regularization parameter is identified with the material's internal characteristic length.
In this work, we draw on numerical simulations to study crack nucleation in commonly encountered geometries for which closed-form solutions are not available. We use U-and V-notches to show that the nucleation load varies smoothly from the one predicted by a strength criterion to the one of a toughness criterion when the strength of the stress concentration or singularity varies. We present validation and verification of numerical simulations for both types of geometries. We consider the problem of an elliptic cavity in an infinite or elongated domain to show that variational phase field models properly account for structural and material size effects.
In a second part, this model is extended to hydraulic fracturing. We present a validation of the model by simulating a single fracture in a large domain subject to a controlled amount of fluid. Then we study an infinite network of pressurized parallel cracks. Results show that the stimulation of a single fracture is the best energy minimizer compared to the multi-fracking case. The last example focuses on fracture stability regimes, using linear elastic fracture mechanics for pressure-driven fractures in an experimental geometry used in the petroleum industry which replicates a situation encountered downhole with a borehole, called the burst experiment.
The last part of this work focuses on ductile fracture by coupling phase-field models with perfect plasticity. Based on the variational structure of the problem, we give a numerical implementation of the coupled model for parallel computing. Simulation results for mildly notched specimens are in agreement with the phenomenology of ductile fracture, such as the nucleation and propagation patterns commonly reported in the literature.
01758434 | en | [ "info" ] | 2024/03/05 22:32:10 | 2015 | https://inria.hal.science/hal-01758434/file/371182_1_En_22_Chapter.pdf
Edirlei Soares De Lima
Antonio L Furtado
email: furtado@inf.puc-rio.br
Bruno Feijó
email: bfeijo@inf.puc-rio.br
Storytelling Variants: The Case of Little Red Riding Hood
Keywords: Folktales, Variants, Types and Motifs, Semiotic Relations, Digital Storytelling, Plan Recognition
A small number of variants of a widely disseminated folktale is surveyed, and then analyzed in an attempt to determine how such variants can emerge while staying within the conventions of the genre. The study follows the classification of types and motifs contained in the Index of Antti Aarne and Stith Thompson. The paper's main contribution is the characterization of four kinds of type interactions in terms of semiotic relations. Our objective is to provide the conceptual basis for the development of semi-automatic methods to help users compose their own narrative plots.
Introduction
When trying to learn about storytelling, in order to formulate and implement methods usable in a computer environment, two highly influential approaches come immediately to mind, both dealing specifically with folktales: Propp's functions [START_REF] Propp | Morphology of the Folktale[END_REF] and the comprehensive classification of types and motifs proposed by Antti Aarne and Stith Thompson, known as the Aarne-Thompson Index (hereafter simply Index) [START_REF] Aarne | The Types of the Folktale[END_REF][START_REF] Thompson | The Folktale[END_REF][START_REF] Uther | The Types of International Folktales[END_REF].
In previous work, as part of our Logtell project [START_REF] Ciarlini | Modeling interactive storytelling genres as application domains[END_REF][START_REF] Ciarlini | A logic-based tool for interactive generation and dramatization of stories[END_REF], we developed prototypes to compose narrative plots interactively, employing a plan-generation algorithm based on Propp's functions. Starting from different initial states, and giving to users the power to intervene in the generation process, within the limits of the conventions of the genre on hand, we were able to obtain in most cases a fair number of different plots, thereby achieving an encouraging level of variety in plot composition.
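For readers unfamiliar with this planning view of Proppian functions, a schematic illustration is given below in Python. The operator shown (a STRIPS-like encoding of the Villainy function) and its predicate names are hypothetical simplifications for illustration, not the actual representation used in the Logtell prototypes.

from dataclasses import dataclass, field

@dataclass
class Operator:
    # A Proppian function encoded as a STRIPS-like plan operator.
    name: str
    preconditions: set = field(default_factory=set)
    add_effects: set = field(default_factory=set)
    del_effects: set = field(default_factory=set)

    def applicable(self, state):
        return self.preconditions <= state

    def apply(self, state):
        return (state - self.del_effects) | self.add_effects

# Proppian "Villainy": the villain abducts the victim (hypothetical encoding).
villainy = Operator(
    name="villainy(wolf, girl)",
    preconditions={"at(girl, woods)", "at(wolf, woods)", "vulnerable(girl)"},
    add_effects={"abducted(girl, wolf)"},
    del_effects={"vulnerable(girl)"},
)

state = {"at(girl, woods)", "at(wolf, woods)", "vulnerable(girl)"}
if villainy.applicable(state):
    state = villainy.apply(state)   # the event is added to the plot under composition

Chaining operators of this kind, from an initial state to a goal state, is what the plan-generation algorithm does; different initial states and user interventions then yield different plots.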
We now propose to invest on a strategy that is based instead on the analysis of already existing stories. Though we shall focus on folktales, an analogous conceptual formulation applies to any genre strictly regulated by conventions and definable in terms of fixed sets of personages and characteristic events. In all such genres one should be able to pinpoint the equivalent of Proppian functions, as well as of ubiquitous types and motifs, thus opening the way to the reuse of previously identified narrative patterns as an authoring resource. Indeed it is a well-established fact that new stories often emerge as creative adaptations and combinations of old stories: this is a most common practice among even the best professional authors, though surely not easy to trace in its complex ramifications, as eloquently expressed by the late poststructuralist theoretician Roland Barthes [3,p. 39]:
Any text is a new tissue of past citations. Bits of code, formulae, rhythmic models, fragments of social languages, etc., pass into the text and are redistributed within it, for there is always language before and around the text. Intertextuality, the condition of any text whatsoever, cannot, of course, be reduced to a problem of sources or influences; the intertext is a general field of anonymous formulae whose origin can scarcely ever be located; of unconscious or automatic quotations, given without quotation marks.
The present study utilizes types and motifs of the Aarne-Thompson's Index, under whose guidance we explore what the ingenuity of supposedly unschooled narrators has legated. We chose to concentrate on folktale type AT 333, centered on The Little Red Riding Hood and spanning some 58 variants (according to [START_REF] Tehrani | The Philogeny of Little Red Riding Hood[END_REF]) from which we took a small sample. The main thrust of the paper is to investigate how such rich diversities of variants of traditional folktales came to be produced, as they were told and retold by successive generations of oral storytellers, hoping that some of their tactics are amenable to semi-automatic processing. An added incentive to work with folktale variants is the movie industry's current interest in adaptations of folktales for adult audiences, in contrast to early Disney classic productions.
Related work is found in the literature of computational narratology [START_REF] Cavazza | Narratology for Interactive Storytelling: A Critical Introduction[END_REF][START_REF] Mani | Computational Narratology[END_REF], a new field that examines narratology from the viewpoint of computation and information processing, which offers models and systems based on tale types/motifs that can be used in story generation and/or story comparison. Karsdorp et al. [START_REF] Karsdorp | In Search of an Appropriate Abstraction Level for Motif Annotations[END_REF] believe that oral transmission of folktales happens through the replication of sequences of motifs. Darányi et al. [START_REF] Darányi | Toward Sequencing 'Narrative DNA': Tale Types, Motif Strings and Memetic Pathways[END_REF] handle motif strings like chromosome mutations in genetics. Kawakami et al. [START_REF] Kawakami | On Modeling Conceptual and Narrative Structure of Fairytales[END_REF] cover 23 Japanese texts of Cinderella tales, whilst Swartjes et al. use Little Red Riding Hood as one of their examples [START_REF] Swartjes | Iterative authoring using story generation feedback: debugging or co-creation?[END_REF].
Our text is organized as follows. Section 2 presents the two classic variants of AT 333. Section 3 summarizes additional variants. Section 4 has our analysis of the variant-formation phenomenon, with special attention to the interaction among types, explained in terms of semiotic relations. Section 5 describes a simple plan-recognition prototype working over variant libraries. Section 6 contains concluding remarks. The full texts of the variants cited in the text are available in a separate document. 1
2 The two classic variants
In the Index, the type of interest, AT 333, characteristically named The Glutton, is basically described as follows, noting that two major episodes are listed [1, p. 125]:
The wolf or other monster devours human beings until all of them are rescued alive from his belly. I. Wolf's Feast. By masking as mother or grandmother the wolf deceives and devours a little girl whom he meets on his way to her grandmother's.
II. Rescue. The wolf is cut open and his victims rescued alive; his belly is sewed full of stones and he drowns, or he jumps to his death.
The first classic variant, Le Petit Chaperon Rouge (Little Red Riding Hood), was composed in France in 1697, by Charles Perrault [START_REF] Perrault | Little Red Riding Hood[END_REF], during the reign of Louis XIV. It consists of the first episode alone, so that there is no happy ending, contrary to what children normally expect from nursery fairy tales. The little girl, going through the woods to see her grandmother, is accosted by the wolf who reaches the grandmother's house ahead of her. The wolf kills the grandmother and takes her place in bed. When the girl arrives, she is astonished at the "grandmother"'s large ears, large eyes, etc., until she asks about her huge teeth, whereat the wolf gobbles her up. Following a convention of the genre of admonitory fables, a "moralité" is appended, to the effect that well-bred girls should not listen to strangers, particularly when they pose as "gentle wolves".
The second and more influential classic variant is that of the brothers Grimm (Jacob and Wilhelm), written in German, entitled Rotkäppchen (Little Red Cap) [START_REF] Grimm | The Complete Grimm's FairyTales[END_REF], first published in 1812. The girl's question about the wolf's teeth is replaced by: "But, grandmother, what a dreadful big mouth you have!" This is a vital change: not being bitten, the victims are gobbled up alive, and so the Grimm variant can encompass the two episodes prescribed for the AT 333 type. Rescue is effected by a hunter, who finds the wolf sleeping and cuts his belly, allowing girl and grandmother to escape. The wolf, his belly filled with heavy stones fetched by the girl, wakes up, tries to run away and falls dead, unable to carry the weight. As a moral addendum to the happy ending, the girl promises to never again deviate from the path when so ordered by her mother. Having collected the story from two distinct sources, the brothers wrote a single text with a second finale, wherein both female characters show that they had learned from their experience with the villain. A second wolf comes in with similar proposals. The girl warns her grandmother who manages to keep the animal outside, and eventually they cause him to fall from the roof into a trough and be drowned.
Some other variants
In [START_REF] Tehrani | The Philogeny of Little Red Riding Hood[END_REF] no less than 58 folktales were examined as belonging to type AT 333 (and AT 123). Here we shall merely add seven tales to the classic ones of the previous section. Since several variants do not mention a red hood or a similar piece of clothing as attribute of the protagonist, the conjecture was raised that this was Perrault's invention, later imitated by the Grimms. However a tale written in Latin by Egbert de Liège in the 11 th century, De puella a lupellis seruata (About a Girl Saved from Wolf Cubs) [START_REF] Ziolkowski | A Fairy Tale from before Fairy Tales: Egbert of Liège's 'De puella a lupellis seruata' and the Medieval Background of 'Little Red Riding Hood[END_REF], arguably prefiguring some characteristics of AT 333, features a red tunic which is not merely ornamental but plays a role in the events. The girl had received it as a baptismal gift from her godfather. When she was once captured by a wolf and delivered to its cubs to be eaten, she suffered no harm. The virtue of baptism, visually represented by the red tunic, gave her protection. The cubs, their natural ferocity sub-dued, gently caressed her head covered by the tunic. The moral lesson, in this case, is consonant with the teaching of the Bible (Daniel VI, 27).
Whilst in the variants considered so far the girl is presented as naive, in contrast to the clever villain, the situation is reversed in the Conte de la Mère-grand (The Story of Grandmother), collected by folklorist Achille Millien in the French province of Nivernais, circa 1870, and later published by Paul Delarue [START_REF] Delarue | The Story of Grandmother[END_REF]. In this variant, which some scholars believe to be closer to the primitive oral tradition, the villain is a "bzou", a werewolf. After killing and partly devouring the grandmother's body, he stores some of her flesh and fills a bottle with her blood. When the girl comes in, he directs her to eat and drink from these ghastly remains. Then he tells her to undress and lie down on the bed. Whenever the girl asks where to put each piece of clothing, the answer is always: "Throw it in the fire, my child; you don't need it anymore." In the ensuing dialogue about the peculiar physical attributes of the fake grandmother, when the question about her "big mouth" is asked the bzou gives the conventional reply: "All the better to eat you with, my child!", but this time the action does not follow the words. What happens instead is that the girl asks permission to go out to relieve herself, which is a ruse whereby she ends up outsmarting the villain and safely going back home (cf. http://expositions.bnf.fr/contes/gros/chaperon/nivers.htm).
An Italian variant published by Italo Calvino, entitled Il Lupo e le Tre Ragazze (The Wolf and the Three Girls) [START_REF] Calvino | Italian Folktales[END_REF], adopts the trebling device [START_REF] Propp | Morphology of the Folktale[END_REF] so common in folktales, making three sisters, one by one, repeat the action of taking victuals to their sick mother. The wolf intercepts each girl but merely demands the food and drink that they carry. The youngest girl, who is the protagonist, throws at the wolf a portion that she had filled with nails. This infuriates the wolf, who hurries to the mother's house to devour her and lay in wait for the girl. After the customary dialogue with the wolf posing as the mother, the animal also swallows the girl. The townspeople observe the wolf coming out, kill him and extract mother and girl alive from his belly. But that is not all, as Calvino admits in an endnote. Having found the text as initially collected by Giambattista Basile, he had deliberately omitted what he thought to be a too gruesome detail ("una progressione troppo truculenta"): after killing the mother, the wolf had made "a doorlatch cord out of her tendons, a meat pie out of her flesh, and wine out of her blood". Repeating the strange above-described episode of the Conte de la Mère-grand, the girl is induced to eat and drink from these remains, with the aggravating circumstance that they belonged to her mother, rather than to a more remotely related grandparent.
Turning to China, one encounters the tale Lon Po Po (Grammie Wolf), translated by Ed Young [START_REF] Young | Lon Po Po: A Red-Riding Hood Story from China[END_REF], which again features three sisters but, unlike the Western folktale cliché, shows the eldest as protagonist, more experienced and also more resourceful than the others. The mother, here explicitly declared to be a young widow, goes to visit the grandmother on her birthday, and warns Shang, the eldest, not to let anyone inside during her absence. A wolf overhears her words, disguises as an old woman and knocks at the door claiming to be the grandmother. After some hesitation, the girls allow him to enter and, in the dark, since the wolf claims that light hurts his eyes, they go to bed together. Shang, however, lighting a candle for a moment catches a glimpse of the wolf's hairy face. She convinces him to permit her two sisters to go outside under the pretext that one of them is thirsty. And herself is also allowed to go out, promising to fetch some special nuts for "Grammie". Tired of waiting for their return, the wolf leaves the house and finds the three sisters up in a tree. They persuade him to fetch a basket mounted on which they propose to bring him up, in order to pluck with his own hands the delicious nuts. They pull on the rope attached to the basket, but let it go so that the wolf is seriously bruised. And he finally dies when the false attempt is repeated for the third time.
Another Chinese variant features a bear as the villain: Hsiung chia P`o (Goldflower and the Bear) [START_REF] Mi | Goldflower and the Bear[END_REF], translated by Chiang Mi. The crafty protagonist, Goldflower, is once again an elder sister, living with her mother and a brother. The mother leaves them for one day to visit their sick aunt, asking the girl to take care of her brother and call their grandmother to keep them company during the night. The bear knocks at the door, posing as the grandmother. Shortly after he comes in, the girl, in spite of the darkness, ends up disclosing his identity. She manages to lock the boy in another room, and then obeys the bear's request to go to bed at his side. The villain's plan is to eat her at midnight, but she asks to go out to relieve her tummy. As distrustful as the werewolf in the before-mentioned French variant, the bear ties one end of a belt to her hand, an equally useless precaution. Safely outside on top of a tree, Goldflower asks if he would wish to eat some pears, to be plucked with a spear, which the famished beast obligingly goes to fetch in the house. The girl begins with one fruit, but the next thing to be thrown into his widely open gullet is the spear itself. Coming back in the morning, the mother praises the brave little Goldflower.
One variant, published in Portugal by Guerra Junqueiro, entitled O Chapelinho Encarnado [START_REF] Guerra Junqueiro | Contos para a Infância[END_REF], basically follows the Grimm brothers pattern. A curious twist is introduced: instead of luring the girl to pick up wild flowers, the wolf points to her a number of medicinal herbs, all poisonous plants in reality, and she mistakes him for a doctor. At the end, the initiative of filling the belly of the wolf with stones is attributed not to the girl, but to the hunter, who, after skinning the animal, merrily shares the food and drink brought by the girl with her and her grandmother.
The highly reputed Brazilian folklorist Camara Cascudo included in his collection [START_REF] Camara Cascudo | Contos Tradicionais do Brasil[END_REF] a variant, O Chapelinho Vermelho, which also follows the Grimm brothers pattern. The mother is introduced as a widow and the name of the girl is spelled out: Laura. Although she is known, as the conventional title goes, by a nickname translatable as "Little Red Hat", what she wears every day is a red parasol, given by her mother. One more particularity is that, upon entering her grandmother's house, the girl forgets to close the door, so that finding the door open is what strikes the hunter as suspicious when he approaches the house. The hunter bleeds the wolf with a knife and, noticing his distended belly, proceeds to open it thus saving the two victims. Nothing is said about filling the wolf's belly with stones, the wounds inflicted by the hunter's knife having been enough to kill him. Two prudent lessons are learned: (1) Laura would not forget her mother's recommendation to never deviate from the path, the specific reason being given here that there existed evil beasts in the wood; (2) living alone should no longer be an option for the old woman, who from then on would dwell with her daughter and granddaughter.
Comments on the formation of variants
It is a truism that people tend to introduce personal contributions when retelling a story. There are also cultural time and place circumstances that require adaptations; for example, in the Arab world the prince would in no way be allowed to meet Cinderella in a ballroom: he falls in love without having ever seen her (cf. "Le Bracelet de Cheville" in the Mardrus translation of One Thousand and One Nights [START_REF] Mardrus | Les Mille et une Nuits[END_REF]). Other differences among variants may result from the level of education of the oral storytellers affecting how spontaneous they are, and the attitude of the collectors who may either prefer to reproduce exactly what they hear or introduce corrections and rational explanations while omitting indecorous or gruesome scenes. On the storyteller's part, however, this tendency is often attenuated by an instinctive pact with the audience, with children in particular, in favour of faithful repetition, preferably employing the very same words. Indeed the genre of folktales is strongly marked by conventions which, to a remarkable extent, remain the same in different times and places. The folklorist Albert Lord called tension of essences the compulsion that drives all singers (i.e. traditional oral storytellers) to strictly enforce such conventions [29, p. 98]:
In our investigation of composition by theme this hidden tension of essences must be taken into consideration. We are apparently dealing here with a strong force that keeps certain themes together. It is deeply imbedded in the tradition; the singer probably imbibes it intuitively at a very early stage of his career. It pervades his material and the tradition. He avoids violating the group of themes by omitting any of its members. [We shall see] that he will even go so far as to substitute something similar if he finds that for one reason or another he cannot use one of the elements in its usual form.
The notion of tension of essences may perhaps help explain not only the total permanence of some variants within the frontiers of a type, but also the emergence of transgressive variants, which absorb features pertaining to other types, sometimes even provoking a sensation of strangeness. When an oral storyteller feels the urge "to substitute something similar" in a story, the chosen "something" should, as an effect of the tension-of-essences forceful compulsion, still belong to the folktale genre, but what if the storyteller's repertoire comprises more than one folktale type? As happens with many classifications, the frontiers between the types in the Index are often blurred, to the point that one or more motifs can be shared and some stories may well be classified in more than one type. So a viable hypothesis can be advanced that some variants did originate through, so to speak, a type-contamination phenomenon.
Accordingly we propose to study type interactions as a possible factor in the genesis of variants. We shall characterize the interactions that may occur among types, also involving motifs, by way of semiotic relations, taking an approach we applied before to the conceptual modelling of both literary genres and business information systems [START_REF] Ciarlini | Event relations in plot-based plot composition[END_REF][START_REF] Karlsson | Conceptual Model and System for Genre-Focused Interactive Storytelling[END_REF][START_REF] Furtado | Constructing Libraries of Typical Plans[END_REF]. We distinguish four kinds of semiotic relations, associated with the so-called four master tropes [START_REF] Burke | A Grammar of Motives[END_REF][START_REF] Chandler | Semiotics: the Basics[END_REF], whose significance has been cogently stressed by a literary theory scholar, Jonathan Culler, who regards them "as a system, indeed the system, by which the mind comes to grasp the world conceptually in language" [15, p. 72]. For the ideas and for the nomenclature in the table below, we are mainly indebted to the pioneering semiotic studies of Ferdinand de Saussure [START_REF] Saussure | Cours de Linguistique Générale[END_REF]: The itemized discussion below explores the meaning of each of the four semiotic relations, as applied to the derivation of folktale type variants stemming from AT 333.
(1) Syntagmatic relation with type AT 123. As mentioned at the beginning of section 2, the Index describes type AT 333 as comprising two episodes, namely Wolf's Feast and Rescue, but the classic Perrault variant does not proceed beyond the end of the first episode. As a consequence, one is led to assume that the Rescue episode is not essential to characterize AT 333. On the other hand the situation created by Wolf's Feast is a long distance away from the happy-ending that is commonly expected in nursery fairy tales. A continuation in consonance with the Rescue episode, exactly as described in the Index, is suggested by AT 123: The Wolf and the Kids, a type pertaining to the group of Animal Tales, which contains the key motif F913: Victims rescued from swallower's belly.
The connection (syntagmatic relation) whereby AT 123 complements AT 333 is explicitly declared in the Index by "cf." cross-references [1, p. 50, p. 125]. Moreover the Grimm brothers variant, which has the two episodes, is often put side by side with another story equally collected by them, The Wolf and the Seven Little Kids [START_REF] Grimm | The Complete Grimm's FairyTales[END_REF], clearly of type AT 123.
Still it must be noted that several of the variants reported here do not follow the Grimm pattern in the Rescue episode. They diverge with respect to the outcome, which, as seen, may involve the death of the girl, or her rescue after being devoured, or even her being totally preserved from the villain's attempts either by miraculous protection or by her successful ruses.
(2) Paradigmatic relation with type AT 311B*. For the Grimm variant, as also for those that follow its pattern (e.g. the Italian and the two Portuguese variants in section 3), certain correspondences or analogies can be traced with variants of type AT 311B*: The Singing Bag, a striking example being another story collected in Brazil by Camara Cascudo [START_REF] Camara Cascudo | Contos Tradicionais do Brasil[END_REF], A Menina dos Brincos de Ouro (The Girl with Golden Earrings). Here the villain is neither an animal nor a werewolf; he is a very ugly old man, still with a fearsome aspect but no more than human. The golden earrings, a gift from her mother, serve as the girl's characteristic attribute and have a function in the plot. As will be noted in the summary below, the villain's bag becomes the wolf's belly of the Grimm variant, and what is done to the bag mirrors the act of cutting the belly and filling it with stones. In this sense, the AT 311B* variant replaces the Grimm variant.
One day the girl went out to bring water from a fountain. Having removed her earrings to wash herself, she forgot to pick them up before returning. Afraid to be reprimanded by her mother, she walked again to the fountain, where she was caught by the villain and sewn inside a bag. The man intended to use her to make a living. At each house that he visited, he advertised the magic bag, which would sing when he threatened to strike it with his staff. Everywhere people gave him money, until he came inadvertently to the girl's house, where her voice was recognized. He was invited to eat and drink, which he did in excess and fell asleep, whereat the bag was opened to free the girl and then filled with excrement. At the next house visited, the singing bag failed to work; beaten with the staff, it ruptured, spilling its contents.
(3) Meronymic relation with type AT 437. In The Story of Grandmother the paths taken by the girl and the werewolf to reach the old lady's house are called, respectively, the Needles Road and the Pins Road. And, strangely enough, while walking along her chosen path, the little girl "enjoyed herself picking up needles" [START_REF] Delarue | The Story of Grandmother[END_REF]. Except for this brief and puzzling mention, these objects remain as meaningless details, having no participation in the story.
And yet, browsing through the Index, we see that needles and pins are often treated as wondrous objects (motifs D1181: Magic Needle and D1182: Magic Pin). And traversing the Index hierarchy upwards, from motifs to types, we find them playing a fundamental role in type AT 437: The Needle Prince (also named The Supplanted Bride), described as follows [1, p. 140]: "The maiden finds a seemingly dead prince whose body is covered with pins and needles and begins to remove them ... ". Those motifs are thus expanded into a full narrative in AT 437.
Especially relevant to the present discussion is a variant from Afghanistan, entitled The Seventy-Year-Old Corpse, reported by Dorson [START_REF] Dorson | Folktales Told Around the World[END_REF], which has several elements in common with the AT 333 variants. An important difference, though, also deserves mention: the girl lives alone with her old father, who takes her to visit her aunt. We are told that, instead of meeting the aunt, the girl finds a seventy-year-old corpse covered with needles, destined to revive if someone would pick the needles from his body. At the end the girl marries the "corpse", whereas no further news is heard about her old father, whom she had left waiting for a drink of water. One is tempted to say that Bruno Bettelheim would regard this participation of two old males, the father and the daunting corpse, as an uncannily explicit confirmation of the presence, in two different forms, of the paternal figure, in an "externalization of overwhelming oedipal feelings, and ... in his protective and rescuing function" [4, p. 178].
(4) Antithetic relation with type AT 449. Again in The Story of Grandmother we watch the strange scene of the girl eating and drinking from her grandmother's remains, punctuated by the acid comment of a little cat: "A slut is she who eats the flesh and drinks the blood of her grandmother!" The scene has no consequence in the plot, and in fact it is clearly inconsistent with the role of the girl in type AT 333. It would sound natural, however, in a type in opposition to AT 333, such as AT 449: The Tsar's Dog, wherein the roles of victim and villain are totally reversed. The cannibalistic scene in The Story of Grandmother has the effect of assimilating the girl to a ghoul (motif G20 in the Index), and the female villain of the most often cited variant of type AT 449, namely The Story of Sidi Nouman (cf. Andrew Lang's translation in Arabian Nights Entertainment) happens to be a ghoul.
No less intriguing in The Story of Grandmother are the repartees in the ensuing undressing scene, with the villain (a werewolf, as we may recall) telling the girl to destroy each piece of clothing: "Throw it in the fire, my child; you don't need it anymore." This, too, turns out to be inconsequential in the plot, but was a major concern in the werewolf historical chronicles and fictions of the Middle Ages [START_REF] Baring-Gould | The Book of Were-Wolves[END_REF][START_REF] Sconduto | Metamorphoses of the Werewolf: A Literary Study from Antiquity Through the Renaissance[END_REF]. In 1521, the Inquisitor-General for the diocese of Besançon heard a case involving a certain Pierre Bourget [START_REF] Baring-Gould | The Book of Were-Wolves[END_REF]. He confessed under duress that, by smearing his body with a salve given by a demon, he became a wolf, but "the metamorphosis could not take place with him unless he were stark naked". And to recover his form he would "beat a retreat to his clothes, and smear himself again". Did the werewolf in The Story of Grandmother intend to transform the girl into a being of his species? Surely the anonymous author did not mean that, but leaving aside the norms of AT 333 the idea would not appear to be so farfetched.
In this regard, also illustrating type AT 449, there are two medieval lays (short narrative poems) that deserve our attention. They are both about noble knights with the ability to transform themselves into wolves. In the two narratives, they are betrayed by their villainous wives, intent on permanently preventing their resuming the human form. In Marie de France's lay of Bisclavret [START_REF] De | The Lais of Marie de France[END_REF] (an old Breton word signifying "werewolf"), the woman accomplishes this effect by stealing, from a secret hiding place, the man's clothes, which he needed to put on again to undo the transformation. In the other example, the anonymous lay of Melion [START_REF] Burgess | Eleven Old French Narrative Lays[END_REF], after a magic ring is applied to break the enchantment, the man feels tempted to punish the woman by inflicting upon her the same metamorphosis.
In the preceding discussion we purported to show how types can be semiotically related, and argued that such relations constitute a factor to be accounted for in the emergence of variants. We should add that types may be combined in various ways to yield more complex types, whose attractiveness is heightened by the occurrence of unexpected changes. Indeed Aristotle's Poetics distinguishes simple and complex plots, characterizing the latter by recognition (anagnorisis) and reversal (peripeteia). Differently from reversal, recognition does not imply that the world changed, but that the beliefs of the characters about themselves and the current facts were altered.
In particular, could a legitimate folktale promote the union of monster and girl? Could we conciliate type AT 333 (where the werewolf is a villain) with the antithetically related medieval lays of type AT 449 (where the werewolf is the victim)? Such conciliations of opposites are treated under the topic of blending [START_REF] Fauconnier | Conceptual projection and middle spaces[END_REF], often requiring creative adaptations. A solution is given by type AT 425C: Beauty and the Beast. At first the Beast is shown as the villain, claiming the life of the merchant or else of one of his daughters: "Go and see if there's one among them who has enough courage and love for you to sacrifice herself to save your life" [41, p. 159], but then proves to be the victim of an enchantment. Later, coming to sense his true inner nature (an event of recognition, as in Aristotle), Belle makes him human again by manifesting her love (motif D735-1: Disenchanting of animal by being kissed by woman). So, it is as human beings that they join.
Alternatively, we might combine AT 333 and AT 449 by pursuing until some sort of outcome the anomalous passages of The Story of Grandmother, allowing the protagonists to join in a non-human form. The werewolf feeds human flesh of his victim to the girl, expecting that she would transform herself like he did (as Melion for a moment thought to cast the curse upon his wife), thereby assuming a shape that she would keep forever once her clothes were destroyed (recall the concern of Pierre Bourget to "beat a retreat to his clothes", and the knight's need to get back his clothes in Bisclavret). At the end the two werewolves would marry and live happily forever after, as a variant of an admittedly misbegotten new type (of, perhaps, a modern appeal, since it would also include among its variants the story of the happy vampires Edward and Bella in the Twilight Saga: http://twilightthemovie.com/).
First steps towards variants in computer-generated stories
To explore in a computer environment the variants of folktale types, kept in a library of typical plans, we developed a system in C# that does plan-recognition over the variants of the type indicated (e.g. AT 333), with links to pages of semiotically related types (e.g. AT 123, AT 311B*, AT 437, AT 449). Plan-recognition involves matching a number of actions against a pre-assembled repertoire of plot patterns (cf. [START_REF] Furtado | Constructing Libraries of Typical Plans[END_REF][START_REF] Karlsson | Conceptual Model and System for Genre-Focused Interactive Storytelling[END_REF]).
Let P be a set of m variants of a specific tale type that are represented by complete plans, P = {P_1, P_2, ..., P_m}, where each plan is a sequence of events, i.e. P_i = ⟨e_1^i, e_2^i, ..., e_{n_i}^i⟩. These events are actions with ground arguments that are story elements (specific names, places, and objects). For instance, P_k = go(Abel, Beach), meet(Abel, Cain), kill(Cain, Abel). The library of typical plans is defined by associating each plan P_i with the following elements: (1) the story title; (2) a set of parameterized terms, akin to those we use in Logtell [START_REF] Ciarlini | A logic-based tool for interactive generation and dramatization of stories[END_REF] to formalize Proppian functions, describing the story events; (3) the specification of the characters' roles (e.g. villain, victim, hero) and objects' functions (e.g. wolf's feast place, basket contents); (4) the semiotic relations of the story with other variants of same or different types (Section 4); (5) a text template used to display the story as text, wherein certain phrases are treated as variables (written in the format #VAR1#); and (6) the comics resources used for dramatization, indicating the path to the folder that contains the images representing the characters and objects of the narrative and a set of event templates to describe the events textually. The library is specified in an XML file.
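As a rough illustration of this data model, the sketch below shows one way such a library entry could be represented in code. It is only an illustration of the elements listed above, not the system's actual C# implementation; all class and field names are our own.

import java.util.List;
import java.util.Map;

// Illustrative data model only; names are ours, not the system's actual classes.
class Event {
    String action;          // e.g. "go", "meet", "kill"
    List<String> arguments; // ground or variable arguments, e.g. ["Abel", "Beach"]
}

class TypicalPlan {
    String title;                      // (1) story title
    List<Event> events;                // (2) parameterized event terms
    Map<String, String> roles;         // (3) characters' roles and objects' functions
    Map<String, String> semioticLinks; // (4) links to variants of same or other types
    String textTemplate;               // (5) template with #VAR1#-style variables
    String comicsResourcePath;         // (6) path to dramatization resources
}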
Let T be a partial plan expressed as a sequence of events given by the user. The system finds plans in P that are consistent with T. During the searching process, the arguments of the events in P are instantiated. For example, with the input T = {give(Anne, ring, Little Ring Girl), ask_to_take(Marie, Little Ring Girl, tea, Anne), eat(Joe, Little Ring Girl)}, the following stories are generated: Story 1: give(Anne, ring, Little Ring Girl), ask_to_take(Marie, Little Ring Girl, tea, Anne), go(Little Ring Girl, the woods), meet(Little Ring Girl, Joe), go(Joe, Grandmother's house), eat(Joe, Anne), disguise(Joe, Anne), lay_down(Joe, Grandmother's bed), go(Little Ring Girl, Grandmother's house), delivery(Little Ring Girl, tea), question(Little Ring Girl, Joe), eat(Joe, Little Ring Girl), sleep(Joe), go(Hunter, Grandmother's house), cut(Hunter, Joe, axe), jump_out_of(Little Ring Girl, Joe), jump_out_of(Anne, Joe), die(Joe).
Story 2: give(Anne, ring, Little Ring Girl), ask_to_take(Marie, Little Ring Girl, tea, Anne), go(Little Ring Girl, the woods), meet(Little Ring Girl, Joe), go(Joe, Grandmother's house), eat(Joe, Anne), disguise(Joe, Anne), lay_down(Joe, Grandmother's bed), go(Little Ring Girl, Grandmother's house), lay_down(Little Ring Girl, Grandmother's bed), delivery(Little Ring Girl, tea), question(Little Ring Girl, Joe), eat(Joe, Little Ring Girl).
which correspond, respectively, to the Grimm and Perrault AT 333 variants, rephrased to display the names of characters and objects given by the user.
Our plan recognition algorithm employs a tree structure, which we call a generalized plan suffix tree. Based on the suffix tree commonly used for string pattern matching [START_REF] Gusfield | Algorithms on Strings, Trees, and Sequences[END_REF], this trie-like data structure contains all suffixes p_k of each plan in P. If a plan P_i has a sequence of events p = e_1 e_2 ... e_k ... e_N, then p_k = e_k e_{k+1} ... e_N is the suffix of p that starts at position k (we have dropped the index i of the expressions p and p_k for the sake of simplicity). In a generalized plan suffix tree S, edges are labeled with the parameterized plan events that belong to each suffix p_k, and the leaves point to the complete plans ending in p_k. Each suffix is padded with a terminal symbol $i that uniquely signals the complete plan in the leaf node. Figure 1 shows an example of a generalized plan suffix tree generated for the plan sequences P_1 = {go(A, B), meet(A, C), kill(C, A)} and P_2 = {tell(A, B, C), meet(A, C), go(A, D)}. The process of searching for plans that match a given partial plan T, expressed as a sequence of input terms, is straightforward: starting from the root node, the algorithm sequentially matches T against the parameterized plan events on the edges of the tree, in chronological but not necessarily consecutive order, instantiating the event variables and proceeding until all input terms are matched and a leaf node is reached. If more solutions are requested, a backtracking procedure tries to find alternative paths matching T. The search process produces a set of complete plans G, with the event variables instantiated with the values appearing in the input partial plan or, for events not present in the partial plan, with the default values defined in the library.
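To convey the core of the search without the full tree machinery, the sketch below checks whether the input terms of a partial plan occur in a candidate complete plan in chronological but not necessarily consecutive order, which is the test the suffix-tree traversal performs along a root-to-leaf path. It is a simplified illustration, not the original implementation: variable instantiation is reduced to exact string comparison, so no backtracking over alternative bindings is needed.

import java.util.ArrayList;
import java.util.List;

// Simplified plan-recognition sketch: a complete plan matches a partial plan T
// if T's events appear in it in order (not necessarily consecutively).
class PlanMatcher {
    static boolean matches(List<String> partial, List<String> complete) {
        int next = 0;
        for (String event : complete) {
            if (next < partial.size() && event.equals(partial.get(next))) {
                next++;
            }
        }
        return next == partial.size();
    }

    static List<List<String>> recognize(List<String> partial, List<List<String>> library) {
        List<List<String>> result = new ArrayList<>();
        for (List<String> plan : library) {
            if (matches(partial, plan)) {
                result.add(plan); // a candidate for the set G of complete plans
            }
        }
        return result;
    }
}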
After generating G through plan-recognition, the system allows users to apply the semiotic relations (involving connection, similarity, unfolding, and opposition) and explore other variants of same or different types. The process of searching for variants uses the semiotic relations specified in the library of typical plans to create a link between a g i in G and its semiotically related variants. When instantiating one such variant v i , the event variables of v i are instantiated according to the characters and objects that play important roles in the baseline story g i . Characters playing roles in g i that also exist in v i , assume the same role in the variant. For roles that only exist in v i , the user is asked to name the characters who would fulfil such roles.
Following the g_i → v_i links taken from the examples of section 4, the user gains a chance to reinterpret the g_i AT 333 variant, in view of aspects highlighted in the semiotically related v_i: 1. the wolf's villainy complemented by a rescue act (AT 123); 2. his belly replaced by the ugly man and his bag (AT 311B*); 3. the girl's gesture of picking needles expanded to the wider scope of a disenchantment ritual (AT 437); 4. girl and werewolf with reversed roles of villain and victim (AT 449).
As illustrated in Figure 2, our system supports two dramatization modalities: text and comics. The former uses the original literary rendition of the matched typical plan as a template and represents the generated stories in text format. The latter offers a storyboard-like comic strip representation, where each story event gains a graphical illustration and a short sentence description. In the illustrations, the automatic scene compositing process takes into account the specific object carried by each character and the correct movement directions. More details on the generation of comic strips can be found in our previous work on interactive comics [START_REF] Lima | Non-Branching Interactive Comics[END_REF].
With the variants of a type or of a similarly predefined genre collected in such a library, readers have a fair chance to find a given story in a treatment as congenial as possible to their tastes and personality profile. Moreover, prospective amateur authors may feel inspired to put together new variants of their own after seeing how variants can derive from the type and motif interactions that we associate with semiotic relations. They would learn how new stories can arise from episodes of existing stories, through a process, respectively, of concatenation, analogous substitution, expansion into finer grained actions, or radical reversal.
Computer-based libraries, such as we described, should then constitute a vital first step in this direction. In special, by also representing the stories as plans, in the form of sequences of terms denoting the story events (cf. the second paragraph of section 5), we effectively started to combine the two approaches mentioned in the Introduction, namely Aarne-Thompson's types and motifs and Proppian functions, and provided a bridge to our previously developed Logtell prototypes [START_REF] Ciarlini | Modeling interactive storytelling genres as application domains[END_REF][START_REF] Ciarlini | A logic-based tool for interactive generation and dramatization of stories[END_REF][START_REF] Furtado | Constructing Libraries of Typical Plans[END_REF][START_REF] Karlsson | Conceptual Model and System for Genre-Focused Interactive Storytelling[END_REF].
We expect that our analysis of variants, stimulated by further research efforts in the line of computational narratology, may contribute to the design of semi-automatic methods for supporting interactive plot composition, to be usefully incorporated into digital storytelling systems.
Fig. 1. Generalized plan suffix tree for P_1 = {go(A, B), meet(A, C), kill(C, A)} and P_2 = {tell(A, B, C), meet(A, C), go(A, D)}.
Fig. 2. Plan recognition system: (a) main user interface; (b) comics dramatization; (c) a variant for story 1; and (d) text dramatization.
http://www-di.inf.puc-rio.br/~furtado/LRRH_texts.pdf
http://www.gutenberg.org/files/1974/1974-h/1974-h.htm
Acknowledgements
This work was partially supported by CNPq (National Council for Scientific and Technological Development, linked to the Ministry of Science, Technology, and Innovation), CAPES (Coordination for the Improvement of Higher Education Personnel), FINEP (Brazilian Innovation Agency), ICAD/VisionLab (PUC-Rio), and Oi Futuro Institute. |
01758437 | en | [
"info"
] | 2024/03/05 22:32:10 | 2015 | https://inria.hal.science/hal-01758437/file/371182_1_En_20_Chapter.pdf | Vojtech Cerny
Filip Dechterenko
email: filip.dechterenko@gmail.com
Rogue-like Games as a Playground for Artificial Intelligence -Evolutionary Approach
Keywords: artificial intelligence, computer games, evolutionary algorithms, rogue-like
Rogue-likes are difficult computer RPG games set in a procedurally generated environment. Attempts have been made at playing these algorithmically, but few of them succeeded. In this paper, we present a platform for developing artificial intelligence (AI) and creating procedural content generators (PCGs) for a rogue-like game Desktop Dungeons. As an example, we employ evolutionary algorithms to recombine greedy strategies for the game. The resulting AI plays the game better than a hand-designed greedy strategy and similarly well to a mediocre player -winning the game 72% of the time. The platform may be used for additional research leading to improving rogue-like games and general PCGs.
Introduction
Rogue-like games, as a branch of the RPG genre, have existed for a long time. They descend from the 1980 game "Rogue" and some old examples, such as NetHack (1987), are played even to this day. Many more of these games are made every year, and their popularity is apparent.
A rogue-like is a single-player, turn-based, highly difficult RPG game, featuring a randomized environment and permanent death 1 . The player takes the role of a hero, who enters the game's environment (often a dungeon) with a very difficult goal. Achieving the goal requires a lot of skill, game experience and perhaps a little bit of luck.
Such a game, bordering between RPG and puzzle genres, is challenging for artificial intelligence (AI) to play. One often needs to balance between being reactive (dealing with current problems) and proactive (planning towards the main goal). Attempts at solving rogue-likes by AI have been previously made [START_REF] Mauldin | ROG-O-MATIC: a belligerent expert system[END_REF][START_REF]Tactical Amulet Extraction Bot (TAEB) -Other Bots[END_REF][START_REF] Krajíček | NetHack Bot Framework. Master's thesis[END_REF], usually using a set of hand-coded rules as basic reasoning, and being to some extent successful.
On the other hand, the quality of a rogue-like can heavily depend on its procedural content generator (PCG), which usually creates the whole environment.
Procedural generation [START_REF] Shaker | Procedural Content Generation in Games: A Textbook and an Overview of Current Research[END_REF] has been used in many kinds of games [START_REF] Togelius | Search-based procedural content generation: A taxonomy and survey[END_REF][START_REF] Hendrikx | Procedural content generation for games: A survey[END_REF], and thus, the call for high-quality PCG is clear [START_REF] Liapis | Towards a Generic Method of Evaluating Game Levels[END_REF]. However, evaluating the PCG brings issues [START_REF] Dahlskog | A Comparative Evaluation of Procedural Level Generators in the Mario AI Framework[END_REF][START_REF] Smith | The Seven Deadly Sins of PCG Research[END_REF], such as how to balance between the criteria of high quality and high variability.
But a connection can be made to the former -we could conveniently use the PCG to evaluate the artificial player and similarly, use the AI to evaluate the content generator. The latter may also lead to personalized PCGs (creating content for a specific kind of players) [START_REF] Shaker | Towards Automatic Personalized Content Generation for Platform Games[END_REF].
In this paper, we present a platform for developing AI and PCG for the rogue-like game Desktop Dungeons [11]. It is intended as an alternative to other AI or PCG platforms in use, such as the Super Mario AI Benchmark [START_REF] Karakovskiy | The Mario AI Benchmark and Competitions[END_REF] or SpelunkBots [START_REF] Scales | SpelunkBots API -An AI Toolset for Spelunky[END_REF]. AI platforms have even been created for a few rogue-like games, most notably NetHack [START_REF]Tactical Amulet Extraction Bot (TAEB) -Other Bots[END_REF][START_REF] Krajíček | NetHack Bot Framework. Master's thesis[END_REF]. However, Desktop Dungeons has some characteristics making it easier to use than the others. Deterministic actions and short play times help the AI, while the small dungeon size simplifies the work of a PCG.
And as such, more experimental and resource demanding approaches may be tried. The platform could also aid other kinds of research or teaching AI, as some people create their own example games for this purpose [START_REF] Russell | Artificial Intelligence: A Modern Approach[END_REF]Chapter 21.2], where Desktop Dungeons could be used instead.
The outline of this paper is as follows. First, we introduce the game to the reader, then we proceed to describe our platform, and finally, we will show how to use it to create a good artificial rogue-like player using evolutionary algorithms.
Desktop Dungeons Description
Desktop Dungeons by QCF Design [11] is a single-player computer RPG game that exhibits typical rogue-like features. The player is tasked with entering a dungeon full of monsters and, through careful manipulation and experience gain, slaying the boss (the biggest monster).
Disclaimer: The following explanation is slightly simplified. More thorough and complete rules can be found at the Desktop Dungeons wiki page [START_REF]Desktop Dungeons -DDwiki[END_REF].
Dungeon
The dungeon is a 20 × 20 grid viewed from the top. The grid cells may contain monsters, items, glyphs, or the hero (player). Every such object, except for the hero, is static -does not move2 . Only a 3 × 3 square around the hero is revealed in the beginning, and the rest must be explored by moving the hero next to it. Screenshot of the dungeon early in the game can be seen in Fig. 1.
Hero
The hero is the player-controlled character in the dungeon and holds a set of values. Namely: health, mana, attack power, the number of health/mana potions, and his spell glyphs. The hero can also perform a variety of actions. He can attack a monster, explore unrevealed parts of the dungeon, pick up items and glyphs, cast spells or convert glyphs into bonuses.
Exploring
Unrevealed grid cells can be explored by moving the hero next to them (at least diagonally). Not only does exploration reveal what lies underneath for the rest of the game, but it also serves one additional purpose - restoring health and mana. Every square explored will restore health equal to the hero's level and 1 mana. This means that the dungeon itself is a scarce resource that has to be managed wisely. It should be noted, though, that monsters also heal when the hero explores, so this cannot be used to gain an edge over damaged monsters.
Combat
Whenever the hero bumps into a monster, a combat exchange happens. The higher level combatant strikes first (monster strikes first when tied). The first attacker reduces his opponent's health by exactly his attack power. The other attacker, if alive, then does the same. No other action causes any monster to attack the hero.
Items
Several kinds of items can be found lying on the ground. These comprise of a Health Powerup, Mana Powerup, Attack Powerup, Health Potion and a Mana Potion. These increase the hero's health, mana, attack power, and amount of health and mana potions respectively.
Glyphs
Spell glyphs are special items that each allow the hero to cast one kind of spell for its mana cost. The hero starts with no glyphs, and can find them lying in the dungeon. Common spells include a Fireball spell, which directly deals damage to a monster (without it retaliating), and a Kill Protect spell, which saves the hero from the next killing blow.
Additionally, a spell glyph can be converted to a racial bonus -a specific bonus depending on the hero's race. These are generally small stat increases or an extra potion. The spell cannot be cast anymore, so the hero should only convert glyphs he has little use for.
Hero Races and Classes
Before entering the dungeon, the player chooses a race (Human, Elf, etc.) and a class (Warrior, Wizard, etc.) of his hero. The race determines only the reward for converting a glyph, but classes can modify the game in a completely unique way.
Other
The game has a few other unmentioned mechanics. The player can enter special "challenge" dungeons, he can find altars and shops in the dungeon, but all that is far beyond the basics we'll need for our demonstration. As mentioned, more can be found at the Desktop Dungeons wiki [START_REF]Desktop Dungeons -DDwiki[END_REF].
AI Platform
Desktop Dungeons has two parameters rarely seen in other similar games. Every action in the game is deterministic3 (the only unknown is the unrevealed part of the dungeon) and the game is limited to 20 × 20 grid cells and never extends beyond. These may allow for better and more efficient AI solutions, and may be advantageously utilized when using search techniques, planning, evaluating fitness functions, etc.
On the other hand, Desktop Dungeons is a very interesting environment for AI. It is complex, difficult, and as such can show usefulness of various approaches. Achieving short-term and long-term goals must be balanced, and thus, simple approaches tend to not do well, and must be specifically adjusted for the task. Not much research has been done on solving rogue-like games altogether, only recently was a famous, classic title of this genre -NetHack -beaten by AI [START_REF] Krajíček | NetHack Bot Framework. Master's thesis[END_REF].
From the perspective of a PCG, Desktop Dungeons is similarly interesting. The size of the dungeon is very limited, so attention to detail should be paid. If one has an artificial player, the PCG could use him as a measure of quality, even at runtime, to produce only the levels the artificial player found enjoyable or challenging. This is why we created a programming interface (API) to Desktop Dungeons, together with a Java framework for easy AI and PCG prototyping and implementation. We used the alpha version of Desktop Dungeons, because it is more direct, contains less story content and player progress features, runs in a browser, and the main gameplay is essentially the same as in the full version.
The API is a modified part of the game code that can connect to another application, such as our framework, via a WebSocket (TCP) protocol and provide access to the game by sending and receiving messages. A diagram of the API usage is portrayed in Fig. 2. The framework allows the user to focus on high-level programming, and have the technical details hidden from him. It efficiently keeps track of the dungeon elements, and provides full game simulation, assisting any search techniques and heuristics that might be desired. The developed artificial players can be tested against the default PCG of the game, which has the advantage of being designed to provide challenging levels for human players, or one can generate the dungeon on his own and submit it to the game. Intermediate ways can also be employed, such as editing the dungeons generated by the game's PCG to e.g. adjust the difficulty or reduce the complexity of the game.
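For illustration, a client connecting to such an API from Java could be sketched as below. The endpoint, port and message payloads are placeholders of our own; the actual message format and addresses are defined by the framework and the modified game code.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.WebSocket;
import java.util.concurrent.CompletionStage;

// Hedged sketch of an external bot talking to the game over WebSockets.
// "ws://localhost:8080/game" and the JSON command are placeholders, not the real protocol.
class GameConnection implements WebSocket.Listener {
    public static void main(String[] args) {
        HttpClient client = HttpClient.newHttpClient();
        WebSocket socket = client.newWebSocketBuilder()
                .buildAsync(URI.create("ws://localhost:8080/game"), new GameConnection())
                .join();
        socket.sendText("{\"action\":\"explore\"}", true); // placeholder command
    }

    @Override
    public CompletionStage<?> onText(WebSocket webSocket, CharSequence data, boolean last) {
        System.out.println("Game state update: " + data); // handle replies here
        webSocket.request(1);
        return null;
    }
}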
The framework is completely open-source and its repository can be found at https://bitbucket.org/woitee/desktopdungeons-java-framework.
Evolutionary Approach
To demonstrate the possibilities of the Desktop Dungeons API, we have implemented an evolutionary algorithm (EA) [START_REF] Mitchell | An Introduction to Genetic Algorithms[END_REF] to fine-tune greedy AI. A general explanation of EAs is, however, out of the scope of this paper.
Simple Greedy Algorithm
The original greedy algorithm was a simple strategy for each moment of the game. It is best described by a list of actions, ordered by priority.
1. Try picking up an item.
2. Try killing a monster (prefer strongest).
3. Explore.
The hero tries to perform the highest rated applicable action, and when none exists, the run ends. Killing the monster was attempted by just simulating attacks, fireballs and drinking potions until one of the participants died. If successful, the sequence of actions was acted out. This can be modeled as a similar list of priority actions:
1. Try casting the Fireball spell.
2. Try attacking.
3. Try drinking a potion.
Some actions have parameters, e.g. how many potions the hero is allowed to use against a certain level of monster. These were set intuitively and tuned by trial and error.
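As a rough sketch of how such priority lists can be organized in code, the fragment below tries strategies in order and executes the first applicable one, restarting from the top after every action. The interfaces and class names are ours, not the framework's. Reordering this list and adjusting per-strategy parameters is precisely what the evolutionary algorithm described later manipulates.

import java.util.List;

// Illustrative sketch of the priority-list idea; names are ours, not the actual framework API.
interface Strategy {
    boolean applicable(GameState state); // precondition check
    void execute(GameState state);       // perform the action(s)
}

class GameState {
    boolean itemInReach, monsterVisible, unexploredLeft;
}

class GreedyAgent {
    private final List<Strategy> priorities; // ordered: first applicable wins

    GreedyAgent(List<Strategy> priorities) { this.priorities = priorities; }

    /** Plays until no strategy is applicable (the run ends). */
    void play(GameState state) {
        boolean acted = true;
        while (acted) {
            acted = false;
            for (Strategy s : priorities) {
                if (s.applicable(state)) {
                    s.execute(state);
                    acted = true;
                    break; // restart from the highest priority
                }
            }
        }
    }
}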
This algorithm has yielded good results. Given enough time (weeks, tens of thousands of runs), this simple AI actually managed to luck out and kill the boss. This was very surprising; we thought the game would be much harder to beat, even with chance on our side. It was probably caused by the AI always calculating how to kill every monster it sees, which is tedious and error-prone for human players to do.
Design of the Evolution
We used two ordered lists of elementary strategies in the greedy approach, but we hand-designed them and probably have not done that optimally. This would become increasingly more difficult, had we added more strategies to the list. We'll solve this by using evolutionary algorithms.
We'll call the strategies used to select actions in the game maingame strategies and the strategies used when trying to kill monsters attack strategies. Each strategy has preconditions (e.g. places to explore exist) and may have parameters. We used as many strategies as we could think of, which resulted in a total of 7 maingame strategies and 13 attack strategies.
The evolutionary algorithm was tasked with ordering both lists of strategies, and setting their parameters. It should be emphasized, that this is far from an easy task. Small imperfections in the strategy settings accumulate over the run, and thus only the very refined individuals have some chance of slaying the final boss.
However, the design makes the AI ignore some features of the game. It doesn't buy items in shops nor does it worship any gods. These mechanics are nevertheless quite advanced, and should not be needed to win the basic setting of the game. Using them can have back-biting effects if done improperly, so we just decided to ignore them to keep the complexity low.
On a side note, this design is to a certain extent similar to linear genetic programming [START_REF] Brameier | Linear Genetic Programming[END_REF].
Fitness Function
Several criteria could be considered when designing the fitness function. An easy solution would be to use the game's score, which is awarded after every run. However, the score takes into account some attributes that do not directly contribute towards winning the game, e.g. awarding bonuses for low completion time, or never dropping below 20% of health.
We inspired ourselves by the game's scoring, but simplified it. Our basic fitness function evaluates the game's state at the end of the run and looks like this:
fitness = 10 · xp + 150 · healthpotions + 75 · manapotions + health
The main contributor is the total gained XP (experience points, good runs get awarded over a hundred), and additionally, we slightly reward leftover health and potions. We take these values from three runs and add them together. Three runs are too few to have low variance on subsequent evaluations, but it yields far better results than evaluating only one run, and more runs than three would just take too much time to complete.
If the AI manages to kill the boss in any of the runs, we triple the fitness value of that run. This may look a little over the top, but slaying the final monster is very difficult, and if one of the individuals is capable of doing so, we want to spread its genes in the population. Note that we don't expect our AI to kill the boss reliably; a 5-10% chance is more what we are aiming for.
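A minimal sketch of this fitness computation, assuming a simple record of the end-of-run state, could look as follows; the class and field names are ours, not the framework's.

// Hedged sketch of the fitness evaluation described above.
class RunResult {
    int xp, healthPotions, manaPotions, health;
    boolean bossKilled;
}

class FitnessEvaluator {
    /** Fitness of a single run: 10*xp + 150*healthPotions + 75*manaPotions + health,
     *  tripled if the boss was killed in that run. */
    static double runFitness(RunResult r) {
        double f = 10.0 * r.xp + 150.0 * r.healthPotions + 75.0 * r.manaPotions + r.health;
        return r.bossKilled ? 3.0 * f : f;
    }

    /** An individual's fitness is the sum over its three evaluation runs. */
    static double individualFitness(RunResult[] threeRuns) {
        double total = 0.0;
        for (RunResult r : threeRuns) total += runFitness(r);
        return total;
    }
}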
We have tried a variety of fitness functions, taking into account other properties of the game state and with different weights. For a very long time, the performance of the bots was similar to that of the hand-designed greedy strategy. But, by analyzing more of the game, we have constructed roughly the fitness function above and the performance has hugely improved.
The improvement lies in the observation of how the bots can improve during the course of evolution. Strong bots in the early stage will probably just use objectively good strategies, and not make complete blunders in strategy priorities, such as exploring the whole level before trying to kill anything. This should already make them capable of killing quite a few monsters. Then, the bots can improve and fine-tune their settings, to use fewer and fewer resources (mainly potions) to kill as many monsters as possible. And towards the late stage of evolution, the bots can play the game so effectively that they may still have enough potions and other resources to kill the final boss and beat the game. The current fitness function supports this improvement, because the fitness values of the hypothetical bots in subsequent stages of evolution continuously rise.
After implementation, this was exactly the course the bots have evolved through. Note, that saving at least a few potions for the final boss fight is basically a necessary condition for success.
Genetic Operators
Priorities of the strategies are represented by floating point numbers in the [0, 1] interval. Together with the strategy's parameter values, we can encode each strategy as just a few floating point numbers, integers and booleans.
This representation allows us to use classical operators like one-/two-point crossovers and small change mutations. They make good sense and work, but they are not necessarily optimal, and after some trial and error, we have started using a weighted average operator to crossover the priorities for better performance.
Fig. 3. Graphs describing the fitnesses of the evolution for each of our class-race settings. The three curves describe the total best fitness ever encountered, the best fitnesses averaged over all runs and the mean fitnesses averaged over all runs. The vertical line indicates the point where the AI has killed the boss and won the game at least once in three attempts. This fitness value is different for each setting, since some race-class combinations can gain more hitpoints or health potions than others, both of which directly increase their fitness (see Section 4.3).
The AIs evolved with these settings were just a little too greedy, often using all their potions in the early game, and even though they advanced far, they basically had no chance of beating the final boss. These strategies found quite a strong local optimum of the fitness, and we wanted to slightly punish them for it. We did so in two ways. Firstly, we rewarded leftover potions in our fitness value calculation, and secondly, a smart mutation was added that modifies a few individuals from the population to not use potions to kill monsters of level lower than 5. After some balancing, this has shown itself to be effective.
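The sketch below illustrates the two operators under a simplified genome layout (a vector of priorities in [0, 1] plus integer parameters); it is our own reading of the description above, not the actual implementation, and all names are ours.

import java.util.Random;

// Hedged sketch of the crossover and mutation operators described above.
class Operators {
    static final Random RNG = new Random();

    /** Weighted-average crossover of two priority vectors. */
    static double[] crossoverPriorities(double[] a, double[] b) {
        double w = RNG.nextDouble();              // mixing weight
        double[] child = new double[a.length];
        for (int i = 0; i < a.length; i++) {
            child[i] = w * a[i] + (1.0 - w) * b[i];
        }
        return child;
    }

    /** Small-change mutation: perturb one priority, clamped to [0, 1]. */
    static void mutatePriority(double[] genome) {
        int i = RNG.nextInt(genome.length);
        genome[i] = Math.min(1.0, Math.max(0.0, genome[i] + 0.1 * RNG.nextGaussian()));
    }

    /** "Smart" mutation: forbid spending potions on monsters below level 5
     *  by setting the corresponding strategy parameter. */
    static void smartPotionMutation(int[] parameters, int potionLevelParamIndex) {
        parameters[potionLevelParamIndex] = 5; // minimum monster level to use potions on
    }
}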
Mating and natural selection was done by simple roulette, i.e. individuals were chosen with probability proportional to their fitness. This creates a rather low selection pressure, and together with a large enough number of individuals in a generation, the evolution should explore a large portion of the candidate space and tune the strategies finely.
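A minimal roulette-wheel (fitness-proportional) selection routine, assuming strictly positive fitness values as produced by the fitness above, could look as follows; it is a sketch, not the exact code used.

import java.util.Random;

class RouletteSelection {
    /** Returns the index of an individual chosen with probability proportional to its fitness. */
    static int select(double[] fitness, Random rng) {
        double total = 0.0;
        for (double f : fitness) total += f;
        double r = rng.nextDouble() * total;
        double acc = 0.0;
        for (int i = 0; i < fitness.length; i++) {
            acc += fitness[i];
            if (r <= acc) return i;
        }
        return fitness.length - 1; // fallback for rounding errors
    }
}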
Results
After experimentation, we settled to do final runs with a population of 100 individuals, evolving through 30 generations. The population seemed large enough to be exploring the field well, and the generations sufficient for the population to converge. We ran the EA on 4 computers for a week, with a different combination of hero class and race on each computer. The result was a total of 62 runs, every hero class and race setting completed a minimum of 12 full runs. A single evaluation of an individual takes about 2 seconds, and a single whole run finishes in about 14 hours (intel i5-3470 at 3.2GHz, 4GB RAM, two instances in parallel).
The resulting data contain a lot of good strategies; their qualities can be seen in Fig. 3. Every combination of hero race and class managed to beat the boss at least once, and the strongest evolved individual kills the boss 72% of the time (averaged over 10000 runs). This is definitely more than we expected. Note that no AI can slay the boss 100% of the time, since the game's default PCG sometimes creates an obviously unbeatable level (e.g. all exits from the starting room surrounded by high level monsters).
The evolved strategies also vary from each other. Different race and class combinations employ different strategies, but variance occurs even among runs of the same configuration. This shows that Desktop Dungeons can be played in several ways, and that different initial settings require different approaches to be used, which makes the game more interesting for a human. The different success rates of the configurations can also be used as a hint which race-class combinations are more difficult to play than others, either to balance them in the game design, or to recommend the easier ones to a beginner.
Conclusion
We present a platform for creating AI and PCG for the rogue-like game Desktop Dungeons. As a demonstration, we created an artificial player by an EA adjusting greedy algorithms. This AI functioned better than the hand-made greedy algorithm, winning the game roughly three quarters of the time, compared to a winrate of much less than 1%, and being as successful as an average human player.
This shows that the game's original PCG worked quite well, not generating a great abundance of impossible levels, yet still providing a good challenge.
A lot of research is possible with this platform. AI could be improved by using more complex EAs, or created from scratch using any techniques, such as search, planning and others. The PCG may be improved to e.g. create more various challenges for the player, adjust difficulty for stronger/weaker players or reduce the number of levels that are impossible to win. For evaluating the PCG, we could advantageously utilize the AI, and note some statistics, such as winrate, how often are different strategies employed or number of steps to solve a level. A combination of these would then create a rating function.
Also, it would be very interesting to keep improving both the artificial player and the PCG iteratively by each other.
Fig. 1. Screenshot of the dungeon, showing the hero, monsters, and an item (a health potion). The dark areas are the unexplored parts of the dungeon.
Fig. 2. The API, as a part of the game, connects to an application using a WebSockets protocol and provides access to the game by receiving and sending messages.
The game offers no save/load features, it is always replayed from beginning to end.
Some spells and effects move monsters, but that is quite uncommon and can be ignored for our purpose.
Some rare effects have probabilistic outcomes, but with a proper game setting, this may be completely ignored. |
01758442 | en | [
"info"
] | 2024/03/05 22:32:10 | 2015 | https://inria.hal.science/hal-01758442/file/371182_1_En_4_Chapter.pdf | Augusto Baffa
email: abaffa@inf.puc-rio.br
Marcus Poggi
email: poggi@inf.puc-rio.br
Bruno Feijó
email: bfeijo@inf.puc-rio.br
Adaptive Automated Storytelling based on Audience Response
Keywords: Social Interaction, Group decision making, Model of Emotions, Automated Storytelling, Audience model, Optimization application End
To tell a story, the storyteller uses all his/her skills to entertain an audience. This task not only relies on the act of telling a story, but also on the ability to understand reactions of the audience during the telling of the story. A well-trained storyteller knows whether the audience is bored or enjoying the show just by observing the spectators and adapts the story to please the audience. In this work, we propose a methodology to create tailored stories to an audience based on personality traits and preferences of each individual. As an audience may be composed of individuals with similar or mixed preferences, it is necessary to consider a middle ground solution based on the individual options. In addition, individuals may have some kind of relationship with others that influence their decisions. The proposed model addresses all steps in the quest to please the audience. It infers what the preferences are, computes the scenes reward for all individuals, estimates their choices independently and in group, and allows Interactive Storytelling systems to find the story that maximizes the expected audience reward.
Introduction
Selecting the best events of a story to please the audience is a difficult task. It requires continued observation of the spectators. It is also necessary to understand the preferences of each individual in order to ensure that the story is able to entertain and engage as many spectators as possible.
Whereas an interactive story is non-linear, because it has several possible branches until the end, the objective of a storyteller is to find out the best ones considering an audience profile, the dramatic tension and the emotions aroused on the individuals.
Empathy is the psychological ability to feel what another person would feel if you were experiencing the same situation. It is a way to understand feelings and emotions, looking in an objective and rational way what another person feels [START_REF] Davis | A multidimensional approach to individual differences in empathy[END_REF]. Based on the empathy, it is possible to learn what the audience likes. This allows selecting similar future events along the story and, therefore, to maximize the audience rating.
The proposed method aims to select the best sequence of scenes to a given audience, trying to maximize the acceptance of the story and reduce drop outs. The idea behind this approach is to identify whether the audience is really in tune with the story that is being shown. A well-trained storyteller can realize if the audience is bored or enjoying the story (or presentation) just looking at the spectators.
During story writing, an author can define dramatic curves to describe emotions of each scene. These dramatic curves define how the scene should be played, its screenshot, lighting and soundtrack. After the current scene, each new one has a new dramatic curve which adds to the context of the story [START_REF] Araujo | Verification of temporal constraints in continuous time on nondeterministic stories[END_REF].
The reactions of the audience are related to the dramatic curves of the scene. If the audience readings of the emotions are similar to the emotions defined by the dramatic context, then there is a connection (empathy) between audience and what is being watched [START_REF] Jones | The actor and the observer: Divergent perceptions of the causes of behavior[END_REF][START_REF] Kallias | Individual Differences and the Psychology of Film Preferences[END_REF].
In this work, we propose a methodology to create tailored stories to an audience based on personality traits and preferences of each individual. The global objective is to maximize the expected audience reward. This involves considering a middle ground solution based on the individual options of the audience group. In addition, individuals may have some kind of relationship with others, characterizing an interaction among the audience and ultimately influencing their decisions.
The proposed model addresses all steps in the quest to please the audience. It infers what the preferences are, computes the scenes reward for all individuals, estimates their choices independently and in group, and allows Interactive Storytelling systems to find the story that maximizes the expected audience reward.
This paper is organized as follows. Section 2 discusses on emotion modeling and on its application to audience characterization and behavior expectation. The following section presents the main aspects of automated storytelling. Section 4 is dedicated to modeling the expected audience reward maximization. The interaction of individuals in the audience is the object of section 5. Section 6 proposes a heuristic to solve the optimization model in section 5. Analysis and conclusions are drawn in the last section.
Emotions and Audience
During film screening, the audience gets emotionally involved with the story. Individuals in the audience reacts according to their preferences. When an individual enjoys what is staged, he/she tends to reflect the same emotions that are proposed by the story. The greater the identification between the individual and the story, the greater are the emotions experienced.
As an audience can be composed of individuals who have very different preferences, it is important that the storyteller identifies a middle ground to please as many as possible. Knowing some personality traits of each individual helps to get the story closer to the audience.
Model of Emotions
The emotional notation used to describe the scenes of a story is based on the model of "basic emotions" proposed by Robert Plutchik [START_REF] Plutchik | The emotions: Facts, theories, and a new model[END_REF][START_REF] Plutchik | A general psychoevolutionary theory of emotions[END_REF]. Plutchik's model is based on Psychoevolutionary theory. It assumes that emotions are biologically primitive and that they evolved in order to improve animal reproductive capacity. Each of the basic emotions demonstrates a high survival behavior, such as the fear that inspires the fight-or-flight. In Plutchik's approach, the basic emotions are represented by a three-dimensional circumplex model where emotional words were plotted based on similarity [START_REF] Plutchik | The nature of emotions[END_REF]. Plutchik's model is often used in computer science in different versions, for tasks such as affective human-computer interaction or sentiment analysis. It is one of the most influential approaches for classifying emotional responses in general [START_REF] Ellsworth | Appraisal processes in emotion[END_REF].
Each sector of the circle represents an intensity level for each basic emotion: the first intensity is low, the second is normal and the third intensity is high. In each level, there are specific names according to the intensity of the emotion, for example: serenity at low intensity is similar to joy and ecstasy in a higher intensity of the instance.
Plutchik defines that basic emotions can be combined in pairs to produce complex emotions. These combinations are classified in four groups: Primary Dyads (experienced often), Secondary Dyads (sometimes perceived), Tertiary Dyads (rare) and opposite Dyads (cannot be combined).
Primary Dyads are obtained by combining adjacent emotions, e.g., Joy + Trust = Love. The Secondary Dyads are obtained by combining emotions that are two axes distant, for example, Joy + Fear = Excitement. The Tertiary Dyads are obtained by combining emotions that are three axes distant, for example, Joy + Surprise = Doom. The opposite Dyads are on the same axis but on opposite sides, for example, Joy and Sorrow cannot be combined, or cannot occur simultaneously [START_REF] Plutchik | The nature of emotions[END_REF]. Complex Emotions -Primaries Dyads:
anticipation + joy = optimism
joy + trust = love
trust + fear = submission
fear + surprise = awe
surprise + sadness = disappointment
sadness + disgust = remorse
disgust + anger = contempt
anger + anticipation = aggression
This model assumes that there are eight primary emotions: Joy, Anticipation, Trust, Fear, Disgust, Anger, Surprise and Sadness. It is possible to adapt Plutchik's model within a 4-axis structure of emotions [START_REF] Rodrigues | Um Sistema de Geração de Expressões Faciais Dinâmicas em Animações Faciais 3D com Processamento de Fala[END_REF][START_REF] Araujo | Verification of temporal constraints in continuous time on nondeterministic stories[END_REF] as shown in Figure 1.
The Plutchik's model describes a punctual emotion and it is used to represent an individual or a scene in a specific moment. In order to describe the emotions of a scene, the Plutchik's model is converted to a time series of emotions called "dramatic curve". The dramatic curve describes the sequence of emotions in a scene in an interval of one second per point. It follows the structure of 4-axis based on Plutchik's wheel and maps the variation of events in a story.
Audience Model
In Psychology, there are many models to map and define an individual's personality traits. One of the most used is called Big Five or Five Factor Model, developed by Ernest Tupes and Raymond Christal in 1961 [START_REF] Tupes | Recurrent personality factors based on trait ratings[END_REF]. This model was forgotten until achieving notoriety in the early 1980s [START_REF] Rich | User modeling via stereotypes[END_REF] and defines a personality through the five factors based on a linguistic analysis. It is also known by the acronym O.C.E.A.N. that refers to five personality traits.
The personality of an individual is analyzed and defined throughout answers to a questionnaire that must be completed and verified by factor analysis. Responses are converted to values that define one of the factors on a scale of 0 to 100. In this work only two traits are used to create the individual profile: Openness to experience O ∈ [0, 1] and Agreeableness (Sociability) A ∈ [0, 1]. Each personality trait is described as follows:
Openness to experience The openness reflects how much an individual likes and seeks for new experiences. Individuals high in openness are motivated to seek new experiences and to engage in self-examination. In a different way, closed individuals are more comfortable with familiar and traditional experiences. They generally do not depart from the comfort zone. [START_REF] John | The big-five trait taxonomy: History, measurement, and theoretical perspectives[END_REF] Agreeableness (Sociability) Agreeableness reflects how much an individual like and try to please others. Individuals high on agreeableness are perceived as kind, warm and cooperative. They tend to demonstrate higher empathy levels and believe that most people are decent, honest and reliable. On the other hand, individuals low on agreeableness are generally less concerned with others' wellbeing and demonstrate less empathy. They tend to be manipulative in their social relationships and more likely to compete than to cooperate. [START_REF] John | The big-five trait taxonomy: History, measurement, and theoretical perspectives[END_REF]
Concept of Empathy
According to Davis [START_REF] Davis | A multidimensional approach to individual differences in empathy[END_REF], "empathy" is defined by spontaneous attempts to adopt the perspectives of other people and to see things from their point of view. Individuals who share higher empathy levels tend to have similar preferences and do things together. In this work, he proposes a scale of "empathy" to measure the tendency of an individual to identify himself with characters in movies, novels, plays and other fictional situations. Also, the emotional influence of a movie to the viewer can be considered "empathy". It is possible to identify a personality based on the relationship between an individual and his favorite movies and books. Furthermore, it is possible to suggest new books or movies just knowing the personality of an individual [START_REF] Kallias | Individual Differences and the Psychology of Film Preferences[END_REF]. Following these ideas, it is possible to relate empathy to a rating index. During an exhibition, if the viewer is enjoying what he is watching, there is an empathy between the show and the spectator. This information is used to predict what the spectator likes and dislikes.
Interactive Storytelling
In recent years, there have been some efforts to build storytelling systems in which authors and audience engage in a collaborative experience of creating the story. Furthermore, the convergence between video games and film-making can give freedom to the player's experience and generate tailored stories to a spectator. Interactive Storytelling are applications which simulates a digital storyteller. It transforms the narrative from a linear to a dialectical form, creating new stories based on audience by monitoring their reactions, interactions or suggestions for new events to the story. [START_REF] Karlsson | Applying a planrecognition/plan-generation paradigm to interactive storytelling[END_REF] The proposed approach of a storytelling system should be able to generate different stories adapted to each audience, based on previously computed sequence of events and knowledge of preferences of each individual on the audience.
Story Model
A story is a single sequence of connected events which represents a narrative. The narrative context may be organized as a decision tree to define different possibilities of endings. During the story writing, the author can define many different endings or sequences for each event (or scene). Each ending option forwards to a new scene and then to new ending options, until the story ends. Figure 2 shows an example of such a tree of possible endings.
The sequence of scenes tells the story, describes a complete emotional curve and "tags" the story with a genre.
Modeling Emotions to Events
During the story writing, the scenes are described as a tree of events. Each event in the tree is associated to a dramatic curve and must be modeled containing the following information:
- Name: a unique name for the event (each event has a unique name);
- Text: describes what happens during the event;
- Dramatic Curves: emotional time series presented in Figure 1: Joy/Sadness (axis x), Fear/Anger (axis y), Surprise/Anticipation (axis w) and Trust/Disgust (axis z).
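To make this event model concrete, the sketch below shows one possible in-memory representation of such an event node in Python. The field names, the four-axis encoding of the dramatic curve and the sample events are illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DramaticPoint:
    """One sample of the dramatic curve on the four opposed-emotion axes."""
    joy_sadness: float            # axis x: positive = Joy, negative = Sadness
    fear_anger: float             # axis y: positive = Fear, negative = Anger
    surprise_anticipation: float  # axis w
    trust_disgust: float          # axis z

@dataclass
class Event:
    """A scene in the tree of events."""
    name: str                                   # unique name, e.g. "EV4"
    text: str                                   # what happens during the event
    dramatic_curve: List[DramaticPoint] = field(default_factory=list)
    children: List["Event"] = field(default_factory=list)  # possible next scenes

# Hypothetical fragment of the Little Red-Cap tree:
ev12 = Event("EV12", "Wolf devours Girl")
ev11 = Event("EV11", "Girl escapes")
ev10 = Event("EV10", "Girl speaks with Wolf", children=[ev11, ev12])
```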
The tree of events has different paths, connecting to different future events, until the end of the story. When the story is told, a single branch is selected at each event. The dramatic curves representing the original story sequence of events are shown in Figure 4. Table 2 lists the emotions involved in each of the 20 events present in the story. Considering the personality traits, individuals who score high in "openness" like a greater variety of genres (often opposed ones) in comparison to others. Individuals low in "openness" generally prefer the same things and choose the same genres. In this case, there are fewer options to please individuals low in "openness".
The task of selecting a scene that pleases a person is to find which of the possible options best approaches their preferences. The selection task becomes difficult when we try to find the best option that would please the most people in a group. In this case, it is necessary to consider other information about individuals, such as "agreeableness" and the empathy between individuals. Individuals who score high in "agreeableness" quickly approach the choices of others and have more patience than others. They sometimes prefer to accept others' preferences and decisions just to please everyone.
The empathy indicates the level of a relationship between individuals. For example, when two people like each other, they may want to do things together, which indicates a higher level of empathy. On the other hand, people who want to avoid each other have a low level of empathy in their relationship. Generally, individuals in a relationship with high empathy choose the same options or a middle ground.
Maximizing the Audience
Given a tree of events, each ending (a leaf of the tree) uniquely determines a sequence of scenes, that is, the story to tell. This corresponds to the path from the root of the tree to the ending leaf. Finding the most rewarding path for a given audience amounts to evaluating a utility function that captures how the audience feels rewarded by the scenes and also the choices the audience makes at each branch. The tree of events can be represented as follows:
Let S be the finite set of events (scenes) of a story. Let also Γ + (s) be the subset of S containing the child nodes of node s. The utility function is given by E(s, i), which determines the expected value of state s for individual i and represents a measure of similarity between 0 and 1. Finally, let Prob(s_{l-1}, s_l, i) be the probability with which individual i chooses state s_l to follow state s_{l-1}. Note that the probabilities Prob(s_{l-1}, s_l, i) must sum to one over each branch and for each individual, since one branch must be selected at each state.
Consider now a sequence of states π = {s_0, . . . , s_k} that represents a path from the root to a leaf. The proposed model evaluates the path by computing its expected utility, which is given by the expression:
f(\pi) = \sum_{l=1}^{k} \sum_{i \in I} E(s_l, i) \cdot Prob(s_{l-1}, s_l, i) \qquad (1)
Let R(s) be the maximum expected utility that can be obtained starting from state s. The following recursion determines R(s), where p(s) denotes the predecessor of s:

R(s) = \begin{cases} \sum_{i \in I} E(s, i) \cdot Prob(p(s), s, i) + \max_{s' \in \Gamma^+(s)} R(s'), & \text{for } s \text{ an internal node} \\ \sum_{i \in I} E(s, i) \cdot Prob(p(s), s, i), & \text{for } s \text{ a leaf} \\ \sum_{i \in I} E(s, i) + \max_{s' \in \Gamma^+(s)} R(s'), & \text{for } s \text{ the root} \end{cases} \qquad (2)

By computing R(s_0), the root's reward, an optimal sequence π*, with maximum expected reward, can be retrieved in a straightforward way. We conclude the model by proposing an evaluation of the individual probabilities of choice at each story branch. This is done by assuming that this probability is proportional to the expected individual reward of the branches, which leads to the expression:

Prob(s, s', i) = \frac{IR(s', i)}{\sum_{s'' \in \Gamma^+(s)} IR(s'', i)} \qquad (3)

where IR(s, i) is the expected reward at state s for individual i, which is given by:

IR(s, i) = E(s, i) + \max_{s' \in \Gamma^+(s)} IR(s', i) \cdot Prob(s, s', i) \qquad (4)
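A minimal sketch of how equations (2)-(4) could be evaluated over the tree of events is given below. It assumes the Event structure sketched earlier and a caller-supplied expectation function E(s, i); the function and variable names are ours, not the authors', and every non-leaf, non-root node is handled with the general case of equation (2).

```python
def compute_ir(s, E, audience, ir_cache):
    """IR(s, i) of Eq. (4), computed bottom-up for the subtree rooted at s."""
    for c in s.children:
        compute_ir(c, E, audience, ir_cache)
    ir = {}
    for i in audience:
        if not s.children:
            ir[i] = E(s, i)
        else:
            total = sum(ir_cache[id(c)][i] for c in s.children) or 1.0
            # Eq. (3): the choice probability is proportional to the child's reward
            ir[i] = E(s, i) + max(ir_cache[id(c)][i] * (ir_cache[id(c)][i] / total)
                                  for c in s.children)
    ir_cache[id(s)] = ir

def best_story(s, E, audience, ir_cache, prob_in=None):
    """R(s) of Eq. (2); returns the expected reward and the best scene sequence."""
    if prob_in is None:                      # s is the root: no incoming branch
        gain = sum(E(s, i) for i in audience)
    else:
        gain = sum(E(s, i) * prob_in[i] for i in audience)
    if not s.children:                       # s is a leaf
        return gain, [s.name]
    totals = {i: sum(ir_cache[id(c)][i] for c in s.children) or 1.0 for i in audience}
    best_val, best_seq = float("-inf"), []
    for c in s.children:
        prob_c = {i: ir_cache[id(c)][i] / totals[i] for i in audience}
        val, seq = best_story(c, E, audience, ir_cache, prob_c)
        if val > best_val:
            best_val, best_seq = val, seq
    return gain + best_val, [s.name] + best_seq

# Typical use, with E(s, i) returning a similarity in [0, 1]:
#   ir_cache = {}
#   compute_ir(root, E, audience, ir_cache)
#   reward, scenes = best_story(root, E, audience, ir_cache)
```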
This model allows determining the best sequence of scenes for an audience, provided there is no interaction within the audience. We address this case in the following section.
To create tailored stories for an individual, it is just necessary to check what he/she likes most, based on his/her own probabilities; but when an individual is part of a group, he/she needs to settle on a middle ground. The dynamic of choosing the best story for an audience is based on the fact that the individuals will watch the same story, share a minimal intimacy and want to spend some time together. In a similar way, it is possible to say that they are trying to watch television and need to choose a television program that pleases the entire group. During this interaction, each individual tries to convince the others about his preferences. Some individuals may agree with these suggestions based on the relationship they share, but others may introduce some limits. After some rounds, some individuals give in and accept to move towards other preferences [START_REF] Tortosa | Interpersonal effects of emotion in a multi-round trust game[END_REF]. The decision to accept others' preferences does not eliminate personal preferences but introduces a new aspect to the options. According to the proposed model, some options that originally are not attractive will be chosen because of the induced social reward imposed by the probability function of choosing them. This means that, for some individuals, it is better to keep the group together than to take advantage of their own preference. Furthermore, as explained in section 2.2, individuals high in "openness" do not care so much about their own preferences because they like to experiment with new possibilities. They may be convinced by friends or relatives and will tend to support their preferences.
In order to model the audience behavior, we propose an algorithm based on a spring-mass system. Consider that all preferences are modeled in the real coordinate space (ℝ²) and that each individual of the audience is represented by a point positioned at his preferences. Each point (individual) is connected to n springs (where n is the number of individuals): one spring is connected to its original position and the other n-1 springs are connected to the other points. We therefore have a total of n(n+1)/2 springs. The objective function aims to bring the points closer to each other, considering the constraints of the springs. Each spring is modeled based on the individual personality traits and the relationship levels between individuals.
Let K_ii = (1 - O_i) be the openness level of each individual i and K_ij = A_ij be the agreeableness level for each pair of individuals i and j. In this model, we assume that "agreeableness" may also be influenced by the relationship between i and j, and it is possible to describe an individual's resistance to others' preferences. After some experiments, we realized that it is possible to start an audience from A_ij = A_i and fine-tune A_ij after some rounds.
Given e_ij ∈ [-1, 1], the empathy level between each pair of individuals, and x^0_i, the original position in space of each individual i, let d^0_ij = ||x^0_i - x^0_j|| be the original distance between individuals and let L_ij = (1 - e_ij) · d^0_ij be a weighted empathy level. The objective of the following model is to find the final positions x_i minimizing the distances d_ij between the individuals, weighted by their agreeableness levels K_ij and considering L_ij:

\min \sum_{i \in A} \sum_{j \in A,\, j \neq i} K_{ij} (d_{ij} - L_{ij})^2 + \sum_{i \in A} K_{ii}\, d_{ii}^2 \qquad (5)

subject to

d_{ij} = \| x_i - x_j \| \quad \forall i, j \in A,\ i \neq j \qquad (6)

d_{ii} = \| x_i - x_i^0 \| \quad \forall i \in A \qquad (7)
Constraints (6) link the distance variables d_ij with the coordinate variables x_i and x_j when individuals i and j are different. Constraints (7) are used to obtain the distance d_ii that each individual has moved from its original position. Figure 5 illustrates the operation of the spring-mass system with 2 individuals.
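As an illustration, the objective of equations (5)-(7) can be evaluated directly from candidate positions. The sketch below uses NumPy and assumes that the stiffness matrix K, the rest lengths L and the original positions x0 have already been built from the openness, agreeableness and empathy values described above; the variable names are ours.

```python
import numpy as np

def audience_energy(x, x0, K, L):
    """Objective (5): x and x0 are (n, 2) arrays of current and original positions,
    K is (n, n) with K[i][i] = 1 - O_i and K[i][j] = A_ij, and L holds the
    weighted empathy lengths L_ij."""
    n = len(x)
    energy = 0.0
    for i in range(n):
        for j in range(n):
            if i == j:
                d_ii = np.linalg.norm(x[i] - x0[i])      # constraint (7)
                energy += K[i, i] * d_ii ** 2
            else:
                d_ij = np.linalg.norm(x[i] - x[j])        # constraint (6)
                energy += K[i, j] * (d_ij - L[i, j]) ** 2
    return energy
```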
Since this model is not linear, it is not possible to use a linear solver to obtain the optimal solution. Therefore, we use a meta-heuristic approach based on simulated annealing to obtain a good approximate solution. The simulated annealing algorithm is presented in Section 6.

6 Solving the audience interaction model

Simulated annealing is a meta-heuristic for optimization problems inspired by thermodynamics. Given a large solution space, it solves the optimization problem by finding a good solution near the global optimum [START_REF] Dréo | Metaheuristics for Hard Optimization: Methods and Case Studies[END_REF]. At each iteration, the algorithm changes the current solution within a neighborhood and accepts the new solution as the current one if it improves the objective function or, if there is no improvement, it may still accept it based on a randomized criterion.
The neighborhood used for the audience problem is defined by all possible movements of each individual in order to minimize the distances between all individuals according to the spring constraints. Let \vec{a} be the current position of individual a, \vec{b} be the reference position based on all relationships between the individuals, and s_a be the "agreeableness" level of the personality of individual a. It is possible to calculate the step of a movement δ_x for individual a using equations (8)-(10).
\vec{b} = \left( \frac{\sum_{j=1}^{n} a_x^j \, e_{ij}}{\sum_{j=1}^{n} e_{ij}},\ \frac{\sum_{j=1}^{n} a_y^j \, e_{ij}}{\sum_{j=1}^{n} e_{ij}} \right) \qquad (8)

\alpha = (b_y - a_y) / (b_x - a_x) \qquad (9)

\delta_x = \begin{cases} -\, s_a^2 / (\alpha^2 + 1), & \text{if } a_x > b_x \\ \phantom{-\,} s_a^2 / (\alpha^2 + 1), & \text{otherwise} \end{cases} \qquad (10)
The final position after moving individual a is given by \vec{a}_{final} as follows:

\vec{a}_{final} = (\delta_x + a_x,\ \delta_x \cdot \alpha + a_y) \qquad (11)
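A small Python sketch of this neighborhood move is shown below. The array layout, the guard against a vertical direction (b_x = a_x) and the interpretation of equation (8) as an empathy-weighted average of the (x, y) positions are our assumptions.

```python
import numpy as np

def move_individual(a_idx, positions, empathy, agreeableness):
    """One neighborhood move (Eqs. 8-11): pull individual a_idx one step towards
    the empathy-weighted reference point b."""
    a = positions[a_idx]
    e = empathy[a_idx]                                   # row of empathy levels e_ij
    w = e.sum() or 1.0
    # Eq. (8): reference point as the empathy-weighted average of all positions
    b_x = float((positions[:, 0] * e).sum() / w)
    b_y = float((positions[:, 1] * e).sum() / w)
    if np.isclose(b_x, a[0]):                            # avoid division by zero in Eq. (9)
        return a.copy()
    alpha = (b_y - a[1]) / (b_x - a[0])                  # Eq. (9)
    s_a = agreeableness[a_idx]
    magnitude = s_a ** 2 / (alpha ** 2 + 1)              # Eq. (10)
    delta_x = -magnitude if a[0] > b_x else magnitude
    return np.array([a[0] + delta_x, a[1] + alpha * delta_x])   # Eq. (11)
```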
The simulated annealing method is shown in Algorithm 1. The algorithm receives as input an initial solution S0, a limit M on the number of iterations, a limit P on the number of solution movements per iteration and a limit L on the number of solution improvements per iteration. During its initialization, the iteration counter starts at 1, the best solution S starts equal to S0 and the current temperature T is obtained from the function InitialTemp(), which returns a value based on the instance being solved. At each iteration, the best solution is changed within the neighborhood by the function Change(S) and the improvement is calculated as ∆Fi. This solution is then accepted or not as the new best solution and, at the end of the iteration, the temperature is updated by the factor α.
Algorithm 1 Simulated Annealing
procedure SA(S0, M, P, L)
    j ← 1
    S ← S0
    T ← InitialTemp()
    repeat
        i ← 1
        nSuccess ← 0
        repeat
            Si ← Change(S)
            ∆Fi ← f(Si) - f(S)
            if (∆Fi ≤ 0) or (exp(-∆Fi/T) > Rand()) then
                S ← Si
                nSuccess ← nSuccess + 1
            end if
            i ← i + 1
        until (nSuccess = L) or (i > P)
        T ← α·T
        j ← j + 1
    until (nSuccess = 0) or (j > M)
    Print(S)
end procedure
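For completeness, a compact Python transcription of Algorithm 1 applied to the audience layout could look like the sketch below; `change` is expected to apply the neighborhood move sketched above to a randomly chosen individual, `f` is the objective of equation (5), and the initial temperature and cooling factor are illustrative defaults, not values from the paper.

```python
import math
import random

def simulated_annealing(s0, f, change, M=100, P=200, L=20, alpha=0.9, t0=1.0):
    """Algorithm 1: minimise the objective f starting from solution s0."""
    s, t, j = s0, t0, 1
    while True:
        i, n_success = 1, 0
        while True:
            s_i = change(s)                      # perturb within the neighborhood
            delta = f(s_i) - f(s)
            # accept improvements, or worse solutions with Boltzmann probability
            if delta <= 0 or math.exp(-delta / t) > random.random():
                s, n_success = s_i, n_success + 1
            i += 1
            if n_success == L or i > P:
                break
        t *= alpha                               # cooling schedule
        j += 1
        if n_success == 0 or j > M:
            return s
```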
The proposed methodology was initially applied to students of our graduate program in order to evaluate the emotional characteristics of the individuals. This allowed a positive view of the techniques and validated the initial hypothesis. Then, the final experiments were conducted using 20 generated audience¹ instances with 20 individuals each, divided into three groups: 8 entirely mixed audiences with 60% of individuals supporting an emotion, 8 audiences with similar individuals, and 4 mixed audiences with one opinion leader. The opinion leader instances were generated by describing an individual who is influential to the others². This starting point permitted a qualitative evaluation of the application of the whole methodology, based on discussion among the ones involved in the experience.
The resulting story endings for each audience are presented in Table 3. Stories generated for mixed audiences before interaction considered average preferences, while stories generated after interaction (SA) tend to select the majority preference. The proposed Red Cap story has a natural tendency towards Sadness + Angry endings (EV12, EV18, EV19, EV20), since there are more final events with these emotional features than Joy + Angry endings (EV15 only). However, the proposed method was able to select the expected story endings according to the audience preferences. Also, the preliminary evaluation of an opinion leader suggested there is a sound basis for results that may effectively converge to the choice of audience-rewarding paths. The next step amounts to carrying out more thorough and relevant experiments, which requires not only larger groups but also stories that truly draw in the audience. In this preliminary analysis, an evaluation of the model parameters also allowed us to conclude that their determination may lead to conditions which can represent a wide range of groups, thus leading to a representative model.
Our evaluation is that the proposed methodology can still incorporate more factors of emotional behavior, group interaction and storytelling aspects. The goal is to experiment thoroughly on a wide spectrum of stories and audiences.
Fig. 1. Simplified 4-axis structure - families of emotions
Fig. 2. Story as a Decision Tree
Fig. 3. Little Red-Cap story as a decision tree
Fig. 4. Dramatic Curves of Little Red-Cap original sequence of Scenes
Fig. 5. Spring-mass example with 2 individuals
Table 1. Little Red-Cap story events
Event Description Event Description
EV1 Mother warns the girl EV11 Girl escapes
EV2 Girl leaves her home EV12 Wolf devours Girl
EV3 Girl is in the forest EV13 Girl finds the Hunter
EV4 Girl finds the wolf in the forest EV14 Wolf gets the girl
EV5 Wolf cheats the girl EV15 Hunter kills the wolf and saves Grandma
EV6 Wolf attacks the girl EV16 Wolf kills the Hunter
EV7 Wolf goes to Grandma's house EV17 Wolf attacks the Girl at Grandma's house
EV8 Wolf swallows Grandma EV18 Wolf eats the Girl after his escape
EV9 Girl arrives at Grandma's house EV19 Wolf devours the Girl in Grandma's house
EV10 Girl speaks with Wolf EV20 Wolf devours the Girl in the Forest
Every time an individual likes a scene or story, he/she indicates what he/she likes and what he/she does not. This information is then used to analyze and determine the individual's preferences. The information from the dramatic curve indicates which emotion has been liked and is used to classify genres. The favorite scenes of an individual are used to ascertain which emotions stand out. The genres of the stories are set primarily by the main emotions of the scenes. Through readings of the emotions that stand out, it is possible to know which genres the individual prefers and which scenes of a new story are emotionally similar.

Table 2. Dramatic Curves for Little Red-Cap events
Event Emotion Event Emotion
EV1 Joy + Surprise EV11 Joy + Anticipation
EV2 Joy + Anticipation EV12 Sadness + Angry
EV3 Trust + Surprise EV13 Joy + Anticipation
EV4 Fear + Surprise EV14 Angry + Disgust
EV5 Fear + Trust EV15 Joy + Angry
EV6 Angry + Anticipation EV16 Sadness + Surprise
EV7 Sadness + Anticipation EV17 Angry + Anticipation
EV8 Angry + Surprise EV18 Sadness + Angry
EV9 Joy + Fear EV19 Sadness + Angry
EV10 Trust + Surprise EV20 Sadness + Angry
3.3 Uncovering audience preferences
Table 3. Selected Story Endings for Audiences
Emotion Mixed SA Similar Opinion Leader
Trust EV 12 EV 15 EV 15 short -
Surprise EV 12 EV 12 EV 20 short -
Joy EV 12 EV 15 EV 15 EV 15
Sadness EV 12 EV 12 EV 12 short EV 12
Disgust EV 12 EV 12 EV 20 short -
Anger EV 12 EV 18 EV 18 EV 18
Fear EV 12 EV 12 EV 12 EV 12
Anticipation EV 12 EV 15 EV 15 short -
¹ An audience is a set of individuals.
² We considered that the empathy from others to an opinion leader is near to 1 but his/her empathy to others is low.
Acknowledgements
This work was partially supported by CNPq (National Council for Scientific and Technological Development, linked to the Ministry of Science, Technology, and Innovation), CAPES (Coordination for the Improvement of Higher Education Personnel, linked to the Ministry of Education), FINEP (Brazilian Innovation Agency), and ICAD/VisionLab (PUC-Rio). |
01713511 | en | [
"chim",
"chim.mate"
] | 2024/03/05 22:32:10 | 2018 | https://univ-rennes.hal.science/hal-01713511/file/Deunf%20et%20al_Anodic%20oxidation%20of%20p-phenylenediamines%20in%20battery%20grade%20electrolytes.pdf | Elise Deunf
Franck Dolhem
Dominique Guyomard
Jacques Simonet
Philippe Poizot
email: philippe.poizot@cnrs-imn.fr
Anodic oxidation of p-phenylenediamines in battery grade electrolytes
Keywords: phenylenediamine, cyclic voltammetry, redox-active amine, organic batteries, PF 6 decomposition, lithium hexafluorophosphate
The use of anion-inserting organic electrode materials represents an interesting opportunity for developing 'metal-free' rechargeable batteries. Recently, crystallized conjugated diamines have emerged as new host materials able to accommodate anions upon oxidation at potentials higher than 3 V vs. Li + /Li 0 in carbonate-based battery electrolytes. To further investigate the electrochemical behavior of such promising systems, comparison with electroanalytical data of soluble forms of conjugated diamines measured in battery grade electrolytes appeared quite useful. However, the literature on the topic is generally poor since such electrolyte media are not common in molecular electrochemistry. This contribution aims at providing relevant data on the characterization by cyclic voltammetry of unsubstituted, diphenyl-substituted and tetramethyl-substituted p-phenylenediamines. Basically, these three molecules revealed two reversible one-electron reactions upon oxidation, corresponding to the electrogenerated radical cation and dication, respectively, combined with the association of electrolyte anions (i.e.,
Introduction
Global warming, fossil fuels depletion and rapid population growth are confronting our technology-oriented society with significant challenges notably in the field of power engineering. One of the priorities in this domain is to promote reliable, safe but also lowpolluting electrochemical storage devices for various practical applications from mWh to MWh range. Since the invention of the first rechargeable battery in 1859 by G. Planté (leadacid cell), the current manufacturing of batteries is still dominated by the use of redox-active inorganic species but the organic counterparts appear today as a promising alternative displaying several advantages such as low cost, environmental friendliness and the structural designability [START_REF] Poizot | Clean energy new deal for a sustainable world: from non-CO 2 generating energy sources to greener electrochemical storage devices[END_REF][START_REF] Liang | Organic electrode materials for rechargeable lithium batteries[END_REF][START_REF] Song | Towards sustainable and versatile energy storage devices: an overview of organic electrode materials[END_REF][START_REF] Haeupler | Carbonyls: Powerful organic materials for secondary batteries[END_REF][START_REF] Zhao | Rechargeable lithium batteries with electrodes of small organic carbonyl salts and advanced electrolytes[END_REF][START_REF] Schon | The rise of organic electrode materials for energy storage[END_REF][START_REF] Muench | Polymer-based organic batteries[END_REF][START_REF] Zhao | Advanced organic electrode materials for rechargeable sodiumion batteries[END_REF][START_REF] Winsberg | Redox-flow batteries: from metals to organic redox-active materials[END_REF]. For instance, the operating redox potential of organic electrodes can be widely tuned by the choice of (i) the electroactive functional group (both n-or p-type1 [START_REF] Song | Towards sustainable and versatile energy storage devices: an overview of organic electrode materials[END_REF][START_REF] Gottis | Voltage gain in lithiated enolate-based organic cathode materials by isomeric effect[END_REF]), (ii) the molecular skeleton, and (iii) the substituent groups. Organic structures based on conjugated carbonyl/enolate redox-active moiety represent probably the most studied family of n-type organic electrode materials especially for developing Li/Na-based rechargeable systems. Conversely, p-type organic electrodes which involve an ionic compensation with anions [START_REF] Song | Towards sustainable and versatile energy storage devices: an overview of organic electrode materials[END_REF][START_REF] Muench | Polymer-based organic batteries[END_REF][START_REF] Gottis | Voltage gain in lithiated enolate-based organic cathode materials by isomeric effect[END_REF] makes development of 'molecular' ion batteries possible [START_REF] Yao | Molecular ion battery: a rechargeable system without using any elemental ions as a charge carrier[END_REF] since numerous metal-free anions do exist. In this regard, our group has recently reported (for the first time) that crystallized conjugated diamines can accommodate anions at E > 3 V vs. 
Li + /Li 0 in the solid state with an overall reversible two-electron reaction making them interesting for positive electrode applications [START_REF] Deunf | Reversible anion intercalation in a layered aromatic amine: A High-voltage host structure for organic batteries[END_REF][START_REF] Deunf | Solvation, exchange and electrochemical intercalation properties of disodium 2,5-(dianilino)terephthalate[END_REF][START_REF] Deunf | A dual-ion battery using diamino-rubicene as anion-inserting positive electrode material[END_REF].
In light of the opportunity offered by this new family of insertion compounds, and to go further in the understanding of electrochemical processes, we revisited the anodic oxidation of p-phenylenediamine derivatives by cyclic voltammetry, but measured in typical (aprotic) battery grade electrolytes for which less than 0.1 ppm of both O 2 and H 2 O are guaranteed. More specifically, we report herein the electrochemical features of three selected phenylenediamines
Experimental
Chemicals
The different electrolyte formulations were prepared in an Ar-filled glovebox (MBRAUN) containing less than 0.1 ppm of both O 2 and H 2 O from lithium perchlorate (LiClO 4 ), lithium hexafluorophosphate (LiPF 6 ), propylene carbonate (PC), ethylene carbonate (EC) and dimethylcarbonate (DMC) purchased from BASF (battery grade) and used as received.
Lithium trifluoromethanesulfonate (LiOTf, 99.995%, Aldrich) was dried at 100 °C under vacuum for 15 h prior to use. The common "LP30" battery grade electrolyte (i.e., LiPF 6 1 M in EC:DMC 1:1 vol./vol.) was directly employed as received from Novolyte. Amines were purchased from Aldrich with the following purities: N,N'-p-phenylenediamine PD (99%), N,N,N',N'-tetramethyl-p-phenylenediamine TMPD (≥ 97%), N,N'-diphenyl-p-phenylenediamine DPPD (98%), and triethylamine (Et 3 N, ≥99%).
Electrochemical procedures
Cyclic voltammetric (CV) experiments were recorded on an SP-150 potentiostat/galvanostat (Bio-Logic S.A., Claix, France). All electrochemical experiments were systematically conducted with freshly prepared electrolyte solutions (except the "LP30" battery grade electrolyte, Novolyte) in a conventional three-electrode setup (V = 10 mL) placed inside an Ar-filled glovebox (MBRAUN) containing less than 0.1 ppm of both O 2 and H 2 O. The working electrode was a commercial platinum disk microelectrode with a diameter of 1.6 mm (ALS Japan). Facing the working electrode, a large platinum wire was
used as the counter electrode. An Ag + /Ag 0 reference electrode, made of a fritted glass tube filled with a 10 mM AgNO 3 solution in acetonitrile [START_REF] Bard | Electrochemical Methods: Fundamentals and Applications[END_REF], was systematically used. However, reported potentials were also given against the Li + /Li 0 reference electrode for a better appreciation by the battery community. This second reference electrode, made of metallic lithium attached to a Pt wire, was experimentally checked versus the Ag + /Ag 0 reference electrode in each studied electrolyte, giving a correction of +3.6 V to the measured potentials.
Results and discussion
The typical electrochemical activity of the simple PD molecule is reported first, as it is a representative member of this family of redox-active compounds. In addition, the PC/LiClO 4 1 M electrolyte was first employed in order to be aligned with our former battery cycling tests performed on crystallized p-phenylenediamine derivatives [START_REF] Deunf | Reversible anion intercalation in a layered aromatic amine: A High-voltage host structure for organic batteries[END_REF][START_REF] Deunf | Solvation, exchange and electrochemical intercalation properties of disodium 2,5-(dianilino)terephthalate[END_REF]. Basically, the oxidation of PD in such an electrolyte shows two anodic peaks (I, II) located at 3.58 and 4.07 V vs.
Li + /Li 0 , respectively (Figure 1b). When reversing the scan, two corresponding cathodic peaks are observed. The peak-to-peak separation values for both steps (I/I', II/II') are equal to 60 mV, which indicates the occurrence of two fully reversible one-electron processes. The anodic events are assigned to the electrogeneration of the radical cation PD •+ at peak I, further followed by the dicationic form (PD 2+ ) at peak II. The ratio between the peak currents and the square root of the scan rate at both anodic and cathodic waves shows linearity (Figure 1c), which confirms that the two electrochemical processes are under diffusion control, as expected for reversible systems [START_REF] Bard | Electrochemical Methods: Fundamentals and Applications[END_REF][START_REF] Batchelor-Mcauley | Voltammetry of multi-electron electrode processes of organic species[END_REF].
At this point it should be recalled that the typical mechanisms for the electrochemical oxidation of phenylenediamines have already been established in common aprotic solvents for molecular electrochemistry [START_REF]The solvent effect on the electro-oxidation of 1,4-phenylenediamine. The influence of the solvent reorientation dynamics on the one-electron transfer rate[END_REF][START_REF] Bewick | Anodic oxidation of aromatic nitrogen compounds: Spectroelectrochemical studies of EE and EECr processes with a coupled redox reaction[END_REF][START_REF] Fernández | Determination of the kinetic and activation parameters for the electro-oxidation of N,N,N',N'-tetramethyl-p-phenylenediamine (TMPD) in acetonitrile (ACN) by chronocoulometry and other electrochemical techniques[END_REF][START_REF] Santana | In situ UV-vis and Raman spectroscopic studies of the electrochemical behavior of N,N'-diphenyl-1,4phenylenediamine[END_REF][START_REF] Maleki | Mechanism diversity in anodic oxidation of N,N-dimethyl-pphenylenediamine by varying pH[END_REF] and very recently in the battery grade PC/LiBF 4 1 M electrolyte for non-aqueous redox flow batteries [START_REF] Kim | A comparative study on the solubility and stability of pphenylenediamine-based organic redox couples for nonaqueous flow batteries[END_REF]. Their oxidation proceeds through
two reversible one-electron transfers, with the formation of successive stable radical cation and dication species. However, in the case of primary and secondary amines - meaning the existence of labile protons - this behavior can be impacted by the presence of acid-base species in the electrolyte [START_REF] Santana | In situ UV-vis and Raman spectroscopic studies of the electrochemical behavior of N,N'-diphenyl-1,4phenylenediamine[END_REF][START_REF] Maleki | Mechanism diversity in anodic oxidation of N,N-dimethyl-pphenylenediamine by varying pH[END_REF]. Indeed, these labile protons are easily involved in an acid-base reaction competing with the electrochemical process, sometimes even leading to the loss of the reversible character. Interestingly, in PC/LiClO 4 1 M electrolyte one can observe that
PD •+ remains stable on the voltammetry time-scale and can undergo a second electrochemical oxidation at higher potentials, producing PD 2+ . Similarly, this dicationic form is also stable enough towards chemical side-reactions and is reduced back on the reverse scan. For comparison, two other common lithiated salts used in Li-batteries (LiPF 6 and LiOTf) were also evaluated as supporting electrolytes, again using PC as the solvent. The resulting CV curves of PD show quite similar electrochemical steps (Figure 2a), with fully reversible peaks obtained at anodic potentials of 3.6 and 4.1 V vs. Li + /Li 0 , respectively. This result attests that neither the thermodynamics nor the kinetics of the stepwise one-electron oxidation reactions are impacted by the counter anions of the supporting electrolyte, although these exhibit very different donor numbers (DN) in PC and van der Waals volumes [START_REF] Ue | Mobility and ionic association of lithium and quaternary ammonium salts in propylene carbonate and γ-butyrolactone[END_REF][START_REF] Ue | Ionic radius of (CF 3 SO 2 ) 3 C and applicability of stokes law to its propylene carbonate solution[END_REF][START_REF] Linert | Anions of low Lewis basicity for ionic solid state electrolytes[END_REF]. These results are in agreement with a dominance of the solvation process in high polarity aprotic solvents such as PC (ε r ~ 66), in which low ion-pairing is expected [START_REF] Barrière | Use of weakly coordinating anions to develop an integrated approach to the tuning of ∆E 1/2 values by medium effects[END_REF].
One possible explanation could be related to the peculiar chemistry of LiPF 6 in high polarity aprotic solvents (denoted :S). Indeed, it has long been known in the field of Li-ion batteries [START_REF] Linert | Anions of low Lewis basicity for ionic solid state electrolytes[END_REF][START_REF] Sloop | Chemical reactivity of PF 5 and LiPF 6 in ethylene carbonate/dimethyl carbonate solutions[END_REF][START_REF] Tasaki | Decomposition of LiPF 6 and stability of PF 5 in Li-ion battery electrolytes[END_REF][30][31]] that undissociated LiPF 6 do exist at relatively high electrolyte concentrations (in the range of 1 mol.L -1 ) in equilibrium with F -and the strong Lewis acid, PF 5 :
[Li + PF 6 - ] (ion pair) + :S ⇌ [S:PF 5 ] (sol) + LiF (1)
The dilemma is that in high polarity aprotic solvents the dissociation of [Li + PF 6 - ] ion pairs is facilitated, but so is the stabilization of PF 5 . In addition, it has been shown that the presence of H + ions in the medium, which also form ion pairs with PF 6 - , catalyzes its decomposition according to the following equilibria, due to the strong H … F interactions [30]:
H + + PF 6 - (sol) ⇌ [H + PF 6 - ] (ion pair) ⇌ H … F-PF 5 ⇌ HF + PF 5 (2)
In the case of PD and DPPD, the resulting radical cations electrogenerated at peak I exhibit more polarized N-H bonds in comparison with the pristine state. In the presence of PF 6 -(and potentially F -in the vicinity of the electrode), the acidic proton can be neutralized according to Eq. 2 for producing the corresponding radical, which is a more readily oxidizable species.
This hypothesis is further supported by the fact that no pre-peak is observed with TMPD, for which no labile protons exist. However, supplementary experiments were also conducted by adding a base to LiPF 6 -free electrolyte media in order to verify the deprotonation assumption. In practice, triethylamine (Et 3 N) was used as a common base in organic chemistry. Figure 4 summarizes the as-obtained results, selecting DPPD and TMPD as representative cases with and without labile protons, respectively. As expected, in the presence of triethylamine (0.5 mM), the pre-peak appeared with DPPD (III) while the electrochemical behavior of TMPD was not affected. Note that a pre-peak (IV) was also observed prior to the first oxidation step (I), which can be attributed to the deprotonation reaction of DPPD itself at this concentration of base. The proposed overall mechanism is finally depicted in Figure 5.
Table 1. Diffusion coefficients for the oxidation of phenylenediamines in the different electrolytes, calculated with the Randles-Sevcik equation from the slope of the experimental curves i p = f(υ 1/2 ).
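As an aside, the conversion from the slope of i p versus υ 1/2 to a diffusion coefficient via the Randles-Sevcik equation can be scripted as in the sketch below; the 2.69 × 10^5 prefactor is the usual 25 °C form for the equation, and the electrode area and concentration values are placeholders, not the ones used in this work.

```python
def diffusion_coefficient(slope, n=1, area_cm2=0.0201, conc_mol_cm3=1e-6):
    """Randles-Sevcik at 25 °C: i_p = 2.69e5 * n**1.5 * A * C * D**0.5 * v**0.5,
    with A in cm^2, C in mol/cm^3, v in V/s and i_p in A.
    slope is i_p / v**0.5 in A (V/s)^-0.5; returns D in cm^2/s."""
    return (slope / (2.69e5 * n ** 1.5 * area_cm2 * conc_mol_cm3)) ** 2

# Example with hypothetical numbers: a 1.6 mm diameter Pt disk (A ~ 0.0201 cm^2)
# and a 1 mM solution (1e-6 mol/cm^3).
print(diffusion_coefficient(slope=2.4e-6))
```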
(i.e., N,N'-p-phenylenediamine, PD; N,N'-diphenyl-p-phenylenediamine, DPPD; N,N,N',N'-tetramethyl-p-phenylenediamine, TMPD; Figure 1a) solubilized at millimolar concentrations in different electrolyte formulations, including LiPF 6 as the most popular supporting salt used in Li-ion batteries by manufacturers and researchers.
However, the use of LiPF 6 supporting salt shows some slight differences, with the appearance of a new contribution between the two regular steps and a peak-to-peak separation of the second anodic wave shifted from reversibility (∆E 1/2 = 90 mV), suggesting a quasi-reversible process. When excluding LiPF 6 , the solvent change does not affect the reversibility of the two electrochemical steps involved with PD. Figure 2b shows for instance a comparison of CVs recorded in PC, DMC and EC-DMC, respectively, using a concentration of 1 mol.L -1 .
(DPPD) and tetramethyl-substituted p-phenylenediamines (TMPD)
in the presence of PF 6 -
or by adding Et 3 N in PF 6 - -free electrolyte. Interestingly, this particular electrochemical investigation, focused on both substituted and unsubstituted p-phenylenediamines, supports well the few other reports that pointed out the decomposition issues of LiPF 6 in aprotic media when labile protons are present. This study aimed at emphasizing the potentiality of p-phenylenediamines, which can offer high potential and multi-electronic behavior as p-type materials for battery applications. A specific cyclic voltammetry study was then conducted to evaluate the electrochemical behavior of three selected p-phenylenediamine derivatives (PD, DPPD and TMPD) dissolved in several battery grade (carbonate) electrolyte media. Among the various electrolytes tested, a chemical instability of the electrogenerated radical cation appeared in the presence of LiPF 6 when labile protons exist on the nitrogen atoms, due to the propensity of PF 6 - to be decomposed in high-polarity solvents such as PC- or EC-based battery electrolytes; this phenomenon is catalyzed by labile protons. This electrochemical study also provides the Li battery community with a supplementary proof concerning the high reactivity of the most popular supporting salt versus any labile proton potentially present in a battery electrolyte.
Figure 1. (a) Structural formula of studied p-phenylenediamines denoted PD, DPPD and TMPD.
Figure 2.
Figure 3. Comparison of typical CV curves recorded on Pt disk microelectrode at a scan rate of 200 mV.s -1 using a concentration of 1 mM in PC/LiClO 4 1 M or EC-DMC/LiPF 6 1 M.
Figure 4. Typical CV curves recorded on Pt disk microelectrode at a scan rate of 200 mV.s -1 .
Figure 5.
Note that n-type structures involve upon oxidation an ionic compensation with cation release whereas p-type structures imply an anion.
Acknowledgments
This work was partially funded by a public grant overseen by the French National Research Agency as part of the program "Investissements d'Avenir" [grant number ANR-13-PRGE-0012] also labeled by the Pôle de Compétitivité S2E2. |
01197401 | en | [
"info",
"info.info-lg",
"info.info-mm"
] | 2024/03/05 22:32:10 | 2015 | https://hal.science/hal-01197401/file/371182_1_En_34_Chapter.pdf | Thomas Constant
email: thomas.constant@cnam.fr
Axel Buendia
email: axel.buendia@cnam.fr
Catherine Rolland
email: catherine.rolland@ktm-advance.com
Stéphane Natkin
email: stephane.natkin@cnam.fr
A switching-role mechanic for reflective decision-making game
Keywords: serious game, game design, decision-making, overconfidence
This paper introduces issues about a methodology for the design of serious games that help players/learners understand their decision-making process. First, we discuss the development of a video game system based on a switching-role mechanic where the player becomes the game leader of the experience. Then, we introduce game mechanics designed to induce a specific behavior, overconfidence, that helps us understand the players' decision-making processes. Finally, we describe tools for measuring the players' self-reflection regarding their judgment process.
Introduction
Serious games for decision-making play an important role in management training [START_REF] Barth | Former les futurs managers à des compétences qui n'existent pas: les jeux de simulation de gestion comme vecteur d'apprentissage[END_REF]. But their use is too often limited to the training of a specific behavior, or to learning good habits. Video games offer the possibility of teaching a more reflexive experience [START_REF] Constant | Enjeux et problématiques de conception d'un jeu sérieux pour la prise de décision[END_REF]. They can be designed as decision-driven systems [START_REF] Schell | The Art of Game Design A Book of Lenses[END_REF], tools created to help learners reflect on how they play [START_REF] Gee | Surmise the Possibilities: Portal to a Game-Based Theory of Learning for the 21st Century[END_REF], how they interact with the system [START_REF] Papert | Mindstorms: Children, Computers, And Powerful Ideas[END_REF] and, thus, how they make a decision [START_REF] Shaffer | Epistemic Games[END_REF]. This paper presents issues about a game design methodology for serious games whose goal is to help learners gain a better understanding of their decision-making process, and to encourage players' reflexivity towards their own decision-making. The design is based on an asymmetrical gameplay: after the player has performed a judgment task and taken a decision, s/he can become the "game leader", able to influence the other player. By switching roles, s/he may gain a better understanding of his/her own decision process. Our proposal to validate the mechanic's efficiency is to build a video game designed to develop and maintain an excessively confident behavior in the players' judgment, in order to promote the emergence of a reflexive stance of the player towards their decision processes. The first section of this paper introduces our model and its working conditions. The second section explains game mechanics useful for inducing overconfident behavior. These mechanics are, in effect, a translation of cognitive science principles regarding overconfidence into game variables. The third section proposes measurement tools for evaluating the game's efficiency.
2 Main issue: enlighten the player's decision-making

2.1 Switching-role mechanic and operating conditions

Our main hypothesis is that a switching-role mechanic can help players develop a better understanding of their decision-making processes. However, we make the assumption that switching roles is not enough: the player can be good at playing but may not necessarily understand how. To help players adopt a reflexive position about their ability to make a decision, we introduce three conditions to support the switching-role mechanic:
- A main condition: when switching roles, the player must become the game leader of the game. In this role, s/he can use variables to impact the game experience. The game leader is the one who plays with the mechanics in order to alter the other player's judgment. S/he can achieve an optimal point of view of how the game works, and how it can alter the player's behavior.
- A pre-condition: before becoming the game leader, it is necessary that the player has been in the position of taking a decision about which s/he is confident. The confidence must be assumed even if the decision was made in an uncertain situation, and may be biased by the context of the game. Players' judgment about their decision must be unequivocal if we want to help them understand how it can be affected.
- A post-condition: after playing the game leader, it is necessary that the player is able to play his/her first role again, in order to measure the impact of the switching-role mechanic on his/her behavior.

For a serious purpose, we need to help the player achieve this state of self-reflection. His/her way of making a decision has to be easier to understand and, as a consequence, the decision mechanisms have to be underlined by the system. Our proposal is to use cognitive fallacies in order to highlight judgment processes and explain why the player's decision is biased.
Heuristic judgment and decision making processes
Heuristics and biases research allows us to understand human judgment under uncertainty more precisely. Confronted with a complex question, decision-makers sometimes unwittingly substitute the question with an easier one. This process, called "attribute substitution", is an example of a heuristic operating [START_REF] Kahneman | A model of heuristic judgment[END_REF]. A heuristic represents a shortcut in the judgment process compared with a rational approach to decision-making. Heuristics are "rules of thumb": simpler and faster ways to solve a problem, based on knowledge, former experiences, skills, and cognitive abilities (such as memory or computational ability) [START_REF] Kahneman | Judgment under Uncertainty: Heuristics and Biases[END_REF][START_REF] Gigerenzer | Heuristic decision making[END_REF]. While heuristic strategies are efficient most of the time, they can occasionally lead to failure compared with a rational resolution of the full problem. These errors are called biases: markers of the use of a judgment heuristic. Identifying these markers allows researchers to better understand decision-making processes and reveal the heuristic at work. Based on this approach, our methodology entails focusing on a single behavior in order to underline the player's decision-making process, chosen specifically because it frequently manifests itself in the behavior of game players: overconfidence.
Serious game concept and context of use
Before introducing specific game mechanics, we present the key elements of a gameplay chosen to illustrate the use of our methodology. The game is played by two players on two different computers. Players cannot see each other and cannot communicate directly, but they are aware of each other's presence and role in the game. They play a narrative adventure game whose apparent goal is to solve a sequence of criminal cases. Each player has a specific role. One of the players adopts the role of an investigator, gathering information to build a hypothesis for a given problem. S/he is confronted with various forms of influence, which are going to have an impact on his/her judgment. The other player personifies the game leader, who is going to control the investigator's access to information. S/he has access to multiple game variables useful to induce overconfidence in the other player's judgment (see below). After playing a sufficient number of levels in the same role (to be sure that the evaluation of the player's behavior is correct), the players exchange their roles: the game leader becomes the investigator, and vice versa. By experimenting with these two gameplays, the player puts his/her own actions into perspective in order to understand how s/he made a decision.
3 Pre-condition: guiding the player's judgment
Variables to orient the player's confidence
The overconfidence effect has been studied in economic and financial fields as a critical behavior of decision-makers [START_REF] Bessière | Excès de confiance des dirigeants et décisions financières: une synthèse[END_REF]. It impacts our judgment of both our own knowledge and skills and those of others [START_REF] Johnson | The evolution of overconfidence[END_REF][START_REF] Russo | Managing overconfidence[END_REF]. Overconfidence can be explained as a consequence of a person's use of heuristics such as availability and anchoring (defined in Section 3) [START_REF] Griffin | The weighing of evidence and the determinants of confidence[END_REF][START_REF] Russo | Managing overconfidence[END_REF]. Overconfidence is also commonly observed in player behaviors. In a card game, for example, beginners as well as experts can be overconfident with regard both to performance and play outcomes [START_REF] Keren | Facing uncertainty in the game of bridge: A calibration study[END_REF]. If we want to induce this behavior, the player's judgment has to be driven in a given direction. As a consequence, game mechanics must be related to expressions or sources of overconfidence in human behavior [START_REF] Moore | The Trouble with Overconfidence[END_REF]. Then, based on game design methods for directing the behavior of the player [START_REF] Schell | The Art of Game Design A Book of Lenses[END_REF][START_REF] Adams | Fundamentals of Game Design[END_REF], we derived game mechanics that can be used to produce the overconfidence effect. Figure 1 presents some mechanics examples according to three major expressions of the overconfidence effect.
Core gameplay
At the beginning of the level, the game leader introduces a case to the other player, the investigator. The investigator's mission is to find the culprit: s/he is driven through the level to a sequence of places where s/he is able to get new clues about the case, mainly by questioning non-playable characters. But the investigator is allowed to perform a limited number of actions during a level, losing one each time s/he gets a new clue. Thus, the investigator is pushed to solve the case as fast as possible. The game leader is presented as the assistant of the investigator, but his/her real role is ambiguous: maybe s/he is trying to help the investigator, or maybe s/he has to push the investigator onto the wrong track. This doubt is required to avoid biasing the investigator's judgment about the nature of the influence which targets him/her. The investigator should not easily guess what the game leader is really doing, and should stay in a context of judgment under uncertainty. If this is not the case, the measure of the player's confidence may be distorted. After several levels (several cases), the investigator
becomes the new game leader, and vice versa. To win, the investigator must find the probable solution of a case, depending on the clues s/he might have seen, associated with a realistic measure of his/her confidence. Conversely, the game leader wins if s/he has induced overconfidence in the investigator's judgment, and if the latter did not discover the game leader's real role.

Figure 1 organizes the variables around three expressions of overconfidence:

Difficulty. Definition: a decision-maker can be overconfident if s/he thinks that the task is too easy or too difficult [START_REF] Griffin | The weighing of evidence and the determinants of confidence[END_REF][START_REF] Lichtenstein | Do those who know more also know more about how much they know?[END_REF]. Mechanic example 1: setting up sensitive difficulty by restricting the player's exploration in time and space. Mechanic example 2: setting up logical difficulty using puzzle game design, the intrinsic formal complexity of which can be controlled via given patterns and parameters.

Anchoring. Definition: estimations are based on an anchor, a specific value decision-makers will easily memorize; the adjustments are then narrowed down too far towards this value to give an appropriate estimation. The anchoring bias can induce overconfidence when evaluating an item or a hypothesis [START_REF] Kahneman | Intuitive prediction: Biases and corrective procedures[END_REF][START_REF] Russo | Managing overconfidence[END_REF]. Mechanic example 1: the game designer chooses a specific piece of information to use as an anchor; in order for it to be clear to the player that s/he has to use it, the information must be important to the case. Mechanic example 2: in order to compare its impact on player judgment, the game leader sets the anchor at different points and times in the game.

Confirmation. Definition: the confirmation bias reveals the fact that decision-makers often seek evidence that confirms their hypothesis, denying other evidence that may refute it [START_REF] Koriat | Reasons for confidence[END_REF][START_REF] Russo | Managing overconfidence[END_REF]. Mechanic example 1: the game designer classifies each piece of evidence according to how it supports the investigation's solution and each of the red herrings. Mechanic example 2: during the game, when giving evidence to the player, the game leader must give priority to evidence that favors a specific red herring.
4 Post-condition: measuring the player's behavior
Evaluation of the player's confidence
Two kinds of evaluations are used to assess the effectiveness of a serious game based on our switching-role model. The first focuses on the player's judgment through the evaluation of his/her confidence. Measurements of the investigator's overconfidence are based on credence calculation, which is used in overconfidence measurement studies [START_REF] Lichtenstein | Do those who know more also know more about how much they know?[END_REF]. This score assesses the players' ability to evaluate the quality of their decision rather than assessing the value of the decision itself. Variations of this score from one game session to another can show the evolution of the players' confidence regarding their decision-making process. After playing, players must fill out a questionnaire survey in order to give a more precise evaluation of their progression and confidence [START_REF] Stankov | Confidence and cognitive test performance[END_REF].
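As a sketch of how such a credence-based score could be computed from logged play data, the snippet below contrasts the player's mean stated confidence with his/her actual hit rate, a standard over/underconfidence measure; the data format is a hypothetical one, not taken from the authors' prototype.

```python
def overconfidence_score(trials):
    """trials: list of (stated_confidence in [0, 1], was_correct as bool).
    Returns mean confidence minus hit rate: > 0 suggests overconfidence,
    < 0 underconfidence, close to 0 good calibration."""
    if not trials:
        return 0.0
    mean_conf = sum(c for c, _ in trials) / len(trials)
    hit_rate = sum(1 for _, ok in trials if ok) / len(trials)
    return mean_conf - hit_rate

# Hypothetical session: the investigator was very sure three times but right only once.
print(overconfidence_score([(0.9, False), (0.8, True), (0.85, False)]))
```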
Evaluation of the player's reflexivity
The second evaluation highlights the players' ability to assess their self-efficacy in terms of problem solving. Judgment calibration may engage the decision-maker in a reflexive posture on his/her ability to judge the quality of his/her decision, which the overconfidence effect may bias [START_REF] Stone | Overconfidence in Initial Self-Efficacy Judgments: Effects on Decision Processes and Performance[END_REF]. But it is not enough for a long-lasting understanding of the behavior [START_REF] Stankov | Realism of confidence judgments[END_REF]. Therefore, in order to extend its effects, we design a re-playable game which can be experienced repeatedly within one or several training sessions. The switching-role mechanic allows the player to engage in a self-monitoring activity, by observing the behavior of other players and by experimenting on them. After several levels from this perspective, the player discerns how the investigator develops overconfidence, or tries to reduce it. Then the player resumes his/her first role and starts by giving new self-evaluations. This time, the player should give a more realistic assessment of his/her ability to solve the case. The variation of the players' calibration score can give us a precise measure of the evolution of their behavior, and by extension, of their understanding of how they make a decision in the game. Figure 2 presents the range of possible player behaviors that we can expect.
- The solution given by the player is improbable and the player is not confident: the player is aware of the weakness of his/her reasoning. Well calibrated: score multiplied.
- The solution given by the player is improbable and the player is very confident: the player was too quick in his/her reasoning (and failed to see its limits), or made a mistake in his/her reasoning. Uncalibrated: the player loses points.
- The solution given by the player is probable and the player is not confident: the player was too quick in his/her reasoning (and realizes this); s/he is correct, but has no confidence in his/her reasoning. Uncalibrated: the player loses points.
- The solution given by the player is probable and the player is very confident: the player is correct as well as confident in his/her reasoning. Well calibrated: score multiplied.

Fig. 2. Player behavior matrix
Conclusion and future works
This paper proposed a game design methodology for building serious games, and a way of using them to let players gain a better appreciation of how they make a decision. This methodology is based on the heuristic approach to the analysis of human judgment as well as game design research that relates to decision-making and reflexivity. We then proposed rules and game mechanics designed to induce and control the overconfidence effect and to encourage the players' reflexivity regarding their decision-making. Finally, we introduced the idea of tools for measuring both the players' reflexivity and the effectiveness of the game itself. This methodology is currently being used to develop a prototype of the serious game, which will be evaluated in training courses at the Management & Society School of the National Conservatory of Arts and Crafts¹. The prototype will be used to verify the proper functioning of the switching-role mechanic, its impact on the player's behavior and the durability of that impact.
Fig. 1. Variables and game mechanics to orient the player's behavior
For more informations about the School and the Conservatory: http://the.cnam.eu |
01758572 | en | [
"sdv"
] | 2024/03/05 22:32:10 | 2015 | https://hal.science/hal-01758572/file/article.pdf | B G M Van Dijk
E Potier
Maarten Van Dijk
Marloes Langelaan
Nicole Papen-Botterhuis
Keita Ito
Reduced tonicity stimulates an inflammatory response in nucleus pulposus tissue that can be limited by a COX-2-specific inhibitor
Keywords: explant culture, disc herniation, inflammation, regenerative therapy, sustained release
In intervertebral disc herniation with nucleus pulposus (NP) extrusion, the elicited inflammatory response is considered a key pain mechanism. However, inflammatory cytokines are reported in extruded herniated tissue, even before monocyte infiltration, suggesting that the tissue itself initiates the inflammation. Since herniated tissue swells, we investigated whether this simple mechanobiological stimulus alone could provoke an inflammatory response that could cause pain. Furthermore, we investigated whether sustained-release cyclooxygenase-2 (COX2) inhibitor would be beneficial in such conditions. Healthy bovine NP explants were allowed to swell freely or confined. The swelling explants were treated with Celecoxib, applied either as a bolus or in sustained-release. Swelling explants produced elevated levels of interleukin-6 (IL-6) and prostaglandin E2 (PGE2) for 28 days, while confined explants did not. Both a high concentration bolus and 10 times lower concentration in sustained release completely inhibited PGE2 production, but did not affect IL-6 production. Swelling of NP tissue, without the inflammatory system response, can trigger cytokine production and Celecoxib, even in bolus form, may be useful for pain control in extruded disc herniation.
Introduction
Low back-related leg pain, or radicular pain, is a common variation of low back pain. Its main cause is extruded lumbar disc herniation [START_REF] Gibson | Surgical interventions for lumbar disc prolapse: updated Cochrane Review[END_REF], during which the central core of the intervertebral disc, the nucleus pulposus (NP), pushes through the outer ring, the annulus fibrosus (AF). This extruded NP tissue can cause inflammation of nerve roots, which has been recognized as a key factor in painful extruded herniated discs [START_REF] Takada | Intervertebral disc and macrophage interaction induces mechanical hyperalgesia and cytokine production in a herniated disc model in rats[END_REF].
Presence of inflammatory factors, interleukin-1 (3,4) and 6 (5,6) (IL-1 and IL-6), tumor necrosis factor alpha [START_REF] Takahashi | Inflammatory cytokines in the herniated disc of the lumbar spine[END_REF][START_REF] Yoshida | Intervertebral disc cells produce tumor necrosis factor alpha, interleukin-1beta, and monocyte chemoattractant protein-1 immediately after herniation: an experimental study using a new hernia model[END_REF][START_REF] Andrade | Tumor necrosis factor-alpha levels correlate with postoperative pain severity in lumbar disc hernia patients: opposite clinical effects between tumor necrosis factor receptor 1 and 2[END_REF] (TNF), and prostaglandin-E 2 (5,6) (PGE 2 ), is reported in herniated tissue. This is mainly attributed to the body's immune response: monocyte infiltration [START_REF] Doita | Immunohistologic study of the ruptured intervertebral disc of the lumbar spine[END_REF], macrophage maturation, and resorption of extruded tissue [START_REF] Komori | The natural history of herniated nucleus pulposus with radiculopathy[END_REF]. But, production of several cytokines have been measured, albeit in lower quantities, in protruded NP tissue, which is not exposed to the body's immune response [START_REF] Kang | Herniated lumbar intervertebral discs spontaneously produce matrix metalloproteinases, nitric oxide, interleukin-6, and prostaglandin E2[END_REF]. Furthermore, Yoshida et al. [START_REF] Yoshida | Intervertebral disc cells produce tumor necrosis factor alpha, interleukin-1beta, and monocyte chemoattractant protein-1 immediately after herniation: an experimental study using a new hernia model[END_REF] have observed IL-1 and TNF positive cells, before infiltration of monocytes. Thus, NP tissue itself may initiate the inflammatory response through an unknown mechanism. We hypothesize that when NP tissue is extruded, it is exposed to a lower osmolarity and this osmotic shock in turn stimulates the native NP cells to produce inflammatory factors.
Radicular pain caused by herniation is generally treated conservatively, for example, oral analgesics, and 50% of patients recover spontaneously [START_REF] Frymoyer | Back pain and sciatica[END_REF]. However, in many patients, analgesia is insufficient [START_REF] Croft | Outcome of low back pain in general practice: a prospective study[END_REF], and they are treated with epidural steroid injections or surgery. Long-term results of surgery are not different from conservative treatment for radicular pain [START_REF] Jacobs | Surgery versus conservative management of sciatica due to a lumbar herniated disc: a systematic review[END_REF], and although epidural injections can be effective in 80% of cases, patients often need four to five injections within a year [START_REF] Manchikanti | Evaluation of the effectiveness of lumbar interlaminar epidural injections in managing chronic pain of lumbar disc herniation or radiculitis: a randomized, double-blind, controlled trial[END_REF]. Moreover, steroids might slow down the natural resorption of extruded NP tissue as shown in a rabbit tissue model [START_REF] Minamide | Effects of steroid and lipopolysaccharide on spontaneous resorption of herniated intervertebral discs. An experimental study in the rabbit[END_REF]. Reducing pain, while not inhibiting resorption, would be a promising approach to treat herniation.
One of the inflammatory factors in herniation is PGE 2 which can sensitize nerves and induce pain [START_REF] Samad | Interleukin-1betamediated induction of Cox-2 in the CNS contributes to inflammatory pain hypersensitivity[END_REF]. Two enzymes are involved in PGE 2 production, cyclooxygenase 1 and 2 (COX1 and 2). Contrary to COX1, COX2 is inducible and, therefore, COX2 inhibitors are used in pain management, and have been shown to reduce pain in rat models of disc herniation [START_REF] Kawakami | Epidural injection of cyclooxygenase-2 inhibitor attenuates pain-related behavior following application of nucleus pulposus to the nerve root in the rat[END_REF]. Celecoxib (Cxb) is a COX2-specific inhibitor and a candidate for treating herniated discs whose cells can produce PGE 2 , when stimulated by macrophages [START_REF] Takada | Intervertebral disc and macrophage interaction induces mechanical hyperalgesia and cytokine production in a herniated disc model in rats[END_REF]. However, because the half-life of Cxb is only 7.8 h (17), a biodegradable Cxb sustained release option is likely to be more successful than a single Cxb injection.
We have previously cultured bovine NP tissue explants in an artificial annulus, to prevent swelling and provide a near in vivo environment for the NP cells up to 6 weeks [START_REF] Van Dijk | Long-term culture of bovine nucleus pulposus explants in a native environment[END_REF]. In this study, we allow the tissue to swell, as in extruded herniation, and investigate if this simple mechanobiological stimulus can stimulate the tissue to produce cytokines, in the absence of the inflammatory system response. Two injectable sustained release biomaterials loaded with Cxb were then tested in this model and compared to a single Cxb injection.
Materials and Methods
Tissue
NP explants (150-350 mg) were isolated with an 8 mm biopsy punch (Kruuse, Sherburn, UK) from the center of fresh caudal discs of 24-month-old cows, obtained from a local abattoir in accordance with local regulations. After weighing, free swelling samples were immediately placed in 6 ml of culture medium (DMEM; Gibco Invitrogen, Carlsbad, CA), with 4 mM L-glutamine (Lonza, Basel, Switzerland), 1% penicillin/streptomycin (Lonza), 50 mg/l ascorbic acid (Sigma), and 5% fetal bovine serum (Gibco). Control samples (non-swelling) were cultured in an artificial annulus system (18) in 6 ml of medium (Fig. 1). In this system, a jacket knitted from UHMWPE fibers (Dyneema, DSM, Heerlen, the Netherlands) lined with a 100 kDa molecular weight cut off (MWCO) semi-permeable membrane (Spectrum Laboratories, Breda, the Netherlands) prevents swelling.
Injectable gel
A hybrid thermo-reversible biodegradable hydrogel (Fig. 2) was used as one controlled release platform (TNO, Eindhoven, the Netherlands) [START_REF] Craenmehr | Liquid composition comprising polymer chains and particles of an inorganic material in a liquid[END_REF]. The hydrogel consists of a network of biodegradable nanoparticles linked to Lower Critical Solution Temperature (LCST) polymers. At room temperature, they are dispersed in water and injectable through a 32-gauge needle. However, at 37°C, they gel through crosslinks arising from hydrophobic interactions of the LCST polymers. The hydrogel was mixed with Cxb (LC Laboratories, Woburn, MA) in three different concentrations (12, 120, and 1200 µg/ml gel). Previous in vitro experiments showed that 0.5 ml gel loaded with these concentrations resulted in average releases of 0.1, 1, and 10 µM, respectively, when added to 6 ml of medium.
Microspheres
Biodegradable microspheres (DSM, Fig. 2) were prepared with an average particle diameter of 40 µm [START_REF] Yang | Applicability of a newly developed bioassay for determining bioactivity of anti-inflammatory compounds in release studies-celecoxib and triamcinolone acetonide released from novel PLGA based microspheres[END_REF]. During production, microspheres were loaded with 8.5% w/w Cxb. Previous in vitro experiments showed that 2.7 mg of microspheres (loaded with 22.9 µg Cxb) in 6 ml of medium resulted in an average release of 10 µM. Therefore 2.7, 0.27, and 0.027 mg of microspheres were added to a well to release 10, 1, and 0.1 µM Cxb.
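The conversion between a target medium concentration and a drug or carrier mass used above can be reproduced with a short calculation. The sketch below is illustrative only: the molar mass of celecoxib (~381.4 g/mol) is an assumed value, and the amounts actually used in this study were based on measured release rather than on this nominal arithmetic.

```python
# Illustrative dosing arithmetic (celecoxib molar mass is an assumed value).
CXB_MOLAR_MASS = 381.4      # g/mol (assumption)
MEDIUM_VOLUME_ML = 6.0      # medium volume per well (from Methods)
NOMINAL_LOADING = 0.085     # 8.5% w/w Cxb in the microspheres

def cxb_mass_ug(target_conc_um, volume_ml=MEDIUM_VOLUME_ML):
    """Mass of celecoxib (ug) needed to reach target_conc_um (uM) in volume_ml of medium."""
    moles = target_conc_um * 1e-6 * volume_ml * 1e-3   # mol
    return moles * CXB_MOLAR_MASS * 1e6                 # g -> ug

for target_um in (10.0, 1.0, 0.1):
    mass = cxb_mass_ug(target_um)
    # lower-bound microsphere mass if the whole drug load were released at once
    spheres_mg = (mass / 1000.0) / NOMINAL_LOADING
    print(f"{target_um:5.1f} uM -> {mass:6.1f} ug Cxb "
          f"(>= {spheres_mg:.3f} mg microspheres at nominal loading)")
```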
Treatment conditions
Six NP tissue samples from independent donors were cultured in every experimental group (n = 6/group). Control artificial annulus samples remained untreated. Samples in free swelling condition were (1) not treated; (2) treated with a bolus (10 µM Cxb in medium for 3 days); (3) treated with a sustained release control (1 µM Cxb in medium for 28 days); and (4-11) treated with the microspheres or gel, loaded for an aimed release of 0, 0.1, 1, or 10 µM Cxb (Table 1). Explants were cultured in 12-well deep-well insert plates (Greiner) with cell culture inserts (0.4 µm pore size, Greiner) to hold the sustained release biomaterials. Samples were cultured for 28 days at 37°C, 5% O2, and 5% CO2 and medium was changed twice a week. During medium changes, temperature was kept at 37°C to prevent hydrogel liquefaction.
Biochemical content
Samples were weighed at the beginning and end of culture, and the percent increase in sample wet weight (ww) was calculated. Subsequently, they were stored frozen at -30°C, lyophilized overnight (Freezone 2.5; Labconco) and the dry weight (dw) was measured. The water content was calculated as (ww - dw)/ww. The samples were then processed as described earlier and used to determine their content of sulfated glycosaminoglycans (sGAG), hydroxyproline (HYP), and DNA, as well as the fixed charge density (FCD) [START_REF] Van Dijk | Long-term culture of bovine nucleus pulposus explants in a native environment[END_REF]. The amounts of sGAG, HYP, and DNA were expressed as percentage of the sample dw.
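A minimal sketch of this weight-based bookkeeping is given below, using made-up sample values; it only restates the formulas described above (swelling as percent wet-weight change, water content as (ww - dw)/ww, and analytes expressed per dry weight).

```python
# Minimal sketch of the weight-based calculations described above (hypothetical values).
def swelling_percent(ww_start_mg, ww_end_mg):
    """Percent increase in sample wet weight over the culture period."""
    return 100.0 * (ww_end_mg - ww_start_mg) / ww_start_mg

def water_content(ww_mg, dw_mg):
    """Water content as a fraction of wet weight: (ww - dw) / ww."""
    return (ww_mg - dw_mg) / ww_mg

def percent_of_dry_weight(analyte_ug, dw_mg):
    """Express an analyte amount (sGAG, HYP or DNA, in ug) as percent of dry weight."""
    return 100.0 * (analyte_ug / 1000.0) / dw_mg

# Hypothetical explant: 200 mg at day 0, 450 mg at day 28, 40 mg after lyophilization
print(swelling_percent(200.0, 450.0))        # 125.0  (% increase in ww)
print(round(water_content(450.0, 40.0), 2))  # 0.91
print(percent_of_dry_weight(8000.0, 40.0))   # 20.0   (% of dw)
```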
Cytokine release into the media
At every medium change, samples were collected and stored at -80°C. Release of cytokines into the medium was measured with ELISAs specific for bovine IL-1β and IL-6 (both Thermo Fisher Scientific, Waltham, MA), specific for bovine TNFα (R&D Biosystems, Minneapolis, MN), and PGE2 (Enzo Life Sciences, Farmingdale, NY) following the manufacturer's instructions, and normalized to the original sample ww. All standard curves were dissolved in fresh culture medium, to account for any effects of the media on the analysis. Release was measured for all groups at days 3 and 28 as well as on days 14 and 21 for the best performing concentration of both loaded biomaterials and control groups.
Cxb concentration
At every medium change, samples were stored at -80°C, before Cxb concentration analysis by InGell Pharma (Groningen, the Netherlands). Samples were pre-treated as described before with slight modifications, that is concentration using liquid-liquid extraction with ethyl acetate, and mefenamic acid (Sigma) used as internal standard [START_REF] Zarghi | Simple and rapid highperformance liquid chromatographic method for determination of celecoxib in plasma using UV detection: application in pharmacokinetic studies[END_REF]. The samples were analyzed with UPLC (1290 Infinity, Agilent, Santa Clara, CA) where the detection-limit was 10 ng/ml Cxb. The medium concentration of Cxb was measured for bolus treatment at days 14 and 28 and the best performing concentration of both loaded biomaterials at days 3, 14, 21, and 28.
Statistics
Matlab (Mathworks, Inc., Natick, MA) was used for statistical analysis. For all biochemical data, one-way analysis of variance (ANOVA) was performed, followed by Dunnett's test for differences compared to artificial annulus. For the cytokine release data of the selected groups, Kruskal-Wallis analysis was performed at each time point, followed by Bonferroni corrected Mann-Whitney post hoc tests for differences compared to artificial annulus. To investigate the difference in Cxb release between the two biomaterials, two-way ANOVA was performed, followed by Bonferroni corrected post hoc t-tests. Statistical significance in all cases was assumed for p<0.05.
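As an illustration, the non-parametric comparison described here could be scripted as follows; the code is a sketch on hypothetical data (the analysis itself was run in Matlab, whereas the example below uses Python/SciPy) and is not the analysis actually performed.

```python
# Sketch of the cytokine comparison: Kruskal-Wallis across groups, then
# Bonferroni-corrected Mann-Whitney tests versus the artificial annulus control.
# Data are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
groups = {
    "artificial_annulus": rng.normal(5, 2, 6),    # n = 6 explants per group
    "free_swelling":      rng.normal(60, 25, 6),
    "bolus":              rng.normal(6, 3, 6),
    "gel_1uM":            rng.normal(7, 3, 6),
}

h_stat, p_kw = stats.kruskal(*groups.values())
print(f"Kruskal-Wallis: H = {h_stat:.2f}, p = {p_kw:.4f}")

reference = groups["artificial_annulus"]
comparisons = [g for g in groups if g != "artificial_annulus"]
alpha = 0.05 / len(comparisons)                   # Bonferroni correction
for name in comparisons:
    _, p = stats.mannwhitneyu(groups[name], reference, alternative="two-sided")
    verdict = "significant" if p < alpha else "n.s."
    print(f"{name:20s} vs control: p = {p:.4f} ({verdict} at alpha = {alpha:.4f})")
```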
Results
The ww of the artificial annulus group decreased 10% during culture, while free swelling groups increased between 100 and 150% ww (Fig. 3A). In all free swelling groups, the water content increased significantly (Fig. 3B), and the sGAG content (Fig. 3C) and FCD (Fig. 3D) decreased significantly compared to the artificial annulus group. In addition, the DNA content increased five-fold compared to the artificial annulus group (Fig. 3E). There were no differences in hydroxyproline content (Fig. 3F).
With both biomaterials, 1 µM aimed release was the lowest dosage that completely inhibited PGE2 at day 28 (Fig. S1). These were analyzed at the intermediate time points (days 14 and 21), together with the artificial annulus, free swelling, sustained control, and bolus groups (Fig. 4). In artificial annulus samples, high levels of PGE2 were produced during the first 3 days, but production had abated from day 14 onwards (Fig. 4A). Free swelling samples produced PGE2 throughout culture, with maximal levels from days 14 to 28. Due to the large variance in response, these levels were only significantly different from artificial annulus at day 28. Bolus, sustained control and both biomaterial samples were significantly different from the artificial annulus at day 3, with a complete inhibition of PGE2 production. From day 14 onwards, these groups were not different from the artificial annulus group, indicating continuous inhibition of PGE2 production. In artificial annulus samples, very low or undetectable levels of IL-6 were produced throughout culture (Fig. 4B). In free swelling explants, significantly higher levels of IL-6 were produced from day 14 onwards, which were maximal at days 14 and 21. In all treated groups, high levels of IL-6 were produced throughout the culture, except for gel 1 µM, where IL-6 production was partially reduced at days 14 and 21 and not significantly different from artificial annulus. Very low or undetectable levels of IL-1β and TNFα were produced in all groups and at all time points, and there were no differences between groups (Fig. S2).
Both biomaterials, loaded for an aimed release of 1 µM, released Cxb for 28 days (Fig. 5). The average release, determined from these four time points, was 0.84 and 0.68 µM for the microspheres and gels, respectively, that is, close to the aimed concentration of 1 µM. Cxb release from the gel was stable at around 0.7 µM for 28 days. A burst release was observed in microspheres during the first 3 days, which was significantly higher compared to the gel. Thereafter, the release decreased in the microspheres and was significantly lower at each subsequent time point. Cxb release at days 21 and 28 was significantly lower in the microspheres compared to the gel. Cxb was still measured in the culture medium of the bolus group at day 14 and even at day 28, and was significantly larger than 0.
Discussion
The artificial annulus prevented swelling, kept NP tissue stable [START_REF] Van Dijk | Long-term culture of bovine nucleus pulposus explants in a native environment[END_REF][START_REF] Van Dijk | Culturing bovine nucleus pulposus explants by balancing medium osmolarity[END_REF] and showed no long-term inflammatory response. Free swelling of NP tissue induced a sustained inflammatory response, of the inflammatory cytokine IL-6 and the nociceptive stimulus PGE 2 . This supports our hypothesis that even without interaction with the immune system, the extruded NP may alone initiate and/or contribute to the inflammation seen in herniation. With its high tonicity, extruded NP tissue will absorb water and swell [START_REF] Van Dijk | Culturing bovine nucleus pulposus explants by balancing medium osmolarity[END_REF], causing a hypo-osmotic stress to the cells. Such a stimulus has been shown to also stimulate amnion derived cells in a COX2 dependent manner resulting in production of PGE 2 [START_REF] Lundgren | Hypotonic stress increases cyclooxygenase-2 expression and prostaglandin release from amnion-derived WISH cells[END_REF]. NP cells regulate daily changes in osmolarity through NFAT [START_REF] Burke | Human nucleus pulposis can respond to a pro-inflammatory stimulus[END_REF][START_REF] Tsai | TonEBP/ OREBP is a regulator of nucleus pulposus cell function and survival in the intervertebral disc[END_REF] which is produced more in hypertonic conditions. Although NP cells in monolayer stimulated with hypo-osmotic stress demonstrated MEK/ERK and AKT involvement [START_REF] Mavrogonatou | Effect of varying osmotic conditions on the response of bovine nucleus pulposus cells to growth factors and the activation of the ERK and Akt pathways[END_REF], further research is needed, to clearly understand the involved biological pathway(s), which may be exploited for alternative treatment strategies. Nevertheless, this simple mechanobiological stimulus, similar to the initial phase of extruded herniation, did consistently elicit an inflammatory response although the variance was large between samples. This could have been due to the differences in osmotic pressure between samples, but this will be even more variable in patients with their different states of disc degeneration. PGE 2 is involved in painful herniated discs [START_REF] Samad | Interleukin-1betamediated induction of Cox-2 in the CNS contributes to inflammatory pain hypersensitivity[END_REF]; thus, the observation that NP tissue itself is able to produce PGE 2 shows the promise of COX2-specific treatment for radicular pain. The role of IL-6 in extruded herniation is not clear. It may be beneficial to alleviate the negative effects of herniation by contributing to resorption of herniated tissue via upregulation of matrix metalloproteases [START_REF] Studer | Human nucleus pulposus cells react to IL-6: independent actions and amplification of response to IL-1 and TNF-alpha[END_REF], inhibiting proteoglycan production [START_REF] Studer | Human nucleus pulposus cells react to IL-6: independent actions and amplification of response to IL-1 and TNF-alpha[END_REF], and stimulating macrophage maturation [START_REF] Mitani | Activity of interleukin 6 in the differentiation of monocytes to macrophages and dendritic cells[END_REF]. 
On the other hand, as it can induce hyperalgesia in rats [START_REF] Deleo | Interleukin-6mediated hyperalgesia/allodynia and increased spinal IL-6 expression in a rat mononeuropathy model[END_REF], it may be detrimental as well. However, IL-6 can induce PGE 2 (29) so, it is possible that hyperalgesic effects of IL-6 were not direct but due to increased PGE 2 production. This ambiguous role of IL-6 in herniated disc disease should be further investigated, but if it is mostly beneficial, treating painful herniated discs with Cxb could be a promising alternative to currently used steroids that have been reported to slow down the natural resorption of extruded NP tissue in a rabbit tissue model [START_REF] Minamide | Effects of steroid and lipopolysaccharide on spontaneous resorption of herniated intervertebral discs. An experimental study in the rabbit[END_REF].
Interestingly, swelling healthy bovine NP tissue did not directly produce effective doses of TNF and IL-1, although both have been detected in herniated human tissue [START_REF] Takahashi | Inflammatory cytokines in the herniated disc of the lumbar spine[END_REF][START_REF] Andrade | Tumor necrosis factor-alpha levels correlate with postoperative pain severity in lumbar disc hernia patients: opposite clinical effects between tumor necrosis factor receptor 1 and 2[END_REF] and rabbits (4). However, similar to our results, culture of such specimens did not lead to presence of either cytokine in the medium, even after lipopolysaccharide stimulation [START_REF] Burke | Human nucleus pulposis can respond to a pro-inflammatory stimulus[END_REF]. This could be because TNF detected in homogenates of human herniated NP tissue was only membrane bound and no soluble TNF was found [START_REF] Andrade | Tumor necrosis factor-alpha levels correlate with postoperative pain severity in lumbar disc hernia patients: opposite clinical effects between tumor necrosis factor receptor 1 and 2[END_REF]. Furthermore, expression of these cytokines is associated with increasing degree of disc degeneration [START_REF] Maitre | The role of interleukin-1 in the pathogenesis of human intervertebral disc degeneration[END_REF], but can also be produced by activated macrophages (32) neither of which were present in our system. Low levels of TNF, though, can have already strong effects in NP-like tissues [START_REF] Seguin | Tumor necrosis factor-alpha modulates matrix production and catabolism in nucleus pulposus tissue[END_REF] and TNF inhibitors have the potential to stop radicular pain in patients [START_REF] Korhonen | Efficacy of in¯iximab for disc herniation-induced sciatica: one-year follow-up[END_REF]. Thus, the roles of TNF and IL-1 in painful herniation remain pertinent.
The DNA content in free swelling samples increased five-fold compared to artificial annulus. However, PGE2 and IL-6 production at day 28 increased 30- and 90-fold, respectively, and thus cannot be explained by the increased cell number alone. In a previous study [START_REF] Van Dijk | Culturing bovine nucleus pulposus explants by balancing medium osmolarity[END_REF], the increase in DNA was attributed to cell cloning and formation of a fibrous layer at the sample periphery. In the present study, we did not investigate the contribution of this fibrous layer to the production of IL-6 and PGE2. As this fibrous layer most likely contains active dedifferentiated cells, its production of inflammatory factors might be considerable. However, the increased PGE2 and IL-6 production in the first 3 days, during absence of the fibrous layer, indicates that NP cells within the tissue produce inflammatory cytokines as well. Furthermore, in vivo this layer will probably also be formed [START_REF] Specchia | Cytokines and growth factors in the protruded intervertebral disc of the lumbar spine[END_REF] and may initiate the formation of the granulation tissue seen in sequestered and extruded surgical samples [START_REF] Koike | Angiogenesis and inflammatory cell infiltration in lumbar disc herniation[END_REF]. Any contribution of this layer to the production of cytokines in this study will occur in vivo as well.
Artificial annulus samples also showed increased PGE2 production, but only for the first 3 days. This is most likely a result of the shrinkage step needed before placing samples in the system. In this step, samples received a strong hyperosmotic stimulus (50% PEG in PBS (w/v) for 100 minutes), which has been shown to induce COX2 in renal cells [START_REF] Moeckel | COX2 activity promotes organic osmolyte accumulation and adaptation of renal medullary interstitial cells to hypertonic stress[END_REF]. Besides this transient production of PGE2, no other cytokines were produced, showing that healthy bovine NP explants do not produce inflammatory cytokines unless they are triggered.
Cxb was able to completely inhibit elevated PGE2 levels for 28 days when delivered continuously at a concentration of 1 µM in the sustained control group. This result was expected, as this is of the order of the therapeutic plasma concentration of Cxb (17). Both biomaterials were able to release Cxb continuously for 28 days and, thus, inhibit PGE2. However, the release kinetics from the two biomaterials were significantly different, that is, microspheres showed a burst release and a slowly declining release afterwards, as observed earlier [START_REF] Yang | Applicability of a newly developed bioassay for determining bioactivity of anti-inflammatory compounds in release studies-celecoxib and triamcinolone acetonide released from novel PLGA based microspheres[END_REF], while the gels showed constant release for 28 days.
To our surprise, the 10 µM bolus was also able to inhibit PGE2 production of swelling NP explants for 28 days. This dosage is 10 times higher than the biomaterials and 3 times higher than maximum serum levels (17), but as this is a local deposit it will not lead to high serum concentrations in patients. We did not test a lower bolus; thus, we do not know if the bolus' success is because of the higher dose. What was most surprising is that we still measured Cxb in the medium at days 14 and 28, although all initial culture medium was removed at day 3. If Cxb was not degraded and distributed evenly throughout the fluid in tissue and medium, only approximately 70% of Cxb was removed at each medium change. Therefore, the bolus is likely to provide an effective dose longer than 3 days, possibly even until day 14, but not until day 28. Cxb only decreased 50% between days 14 and 28, where a 99% decrease was expected, so it is probable that Cxb is binding to the tissue. In blood serum samples, 97% of Cxb is bound to serum albumin (17), which is also a component of our culture medium. Furthermore, albumin is detected in osteoarthritic cartilage tissue, bound to keratan sulfate, one of the main sGAGs in the disc, through disulfide bonds [START_REF] Mannik | Immunoglobulin-G and Serumalbumin isolated from the articular-cartilage of patients with rheumatoidarthritis or osteoarthritis contain covalent heteropolymers with proteoglycans[END_REF]. If this proposed mechanism of albumin binding to the tissue is correct, this will have clinical implications. In 80% of extruded herniation samples, blood vessels are observed (38), so Cxb injected adjacent to extruded NP tissue can diffuse in and bind to pre-bound albumin, lengthening the therapeutic effect. Possibly, this might explain the relative success of epidural steroid injections, as the NP tissue can prolong the effect of a single drug injection. Nevertheless, more research needs to be done to determine the validity of this mechanism. It is also possible that Cxb is binding to other proteins in the NP. This interesting finding shows the value of using our tissue model for preclinical evaluation of therapies for disc herniation, as in cell culture this property of Cxb would be missed. However, a limitation of this model is the absence of the body immune response. Repeating this study in a suitable animal model will truly show if a single injection of Cxb will be as successful as a sustained release biomaterial. Nevertheless, treatment with a COX2 inhibitor like Cxb has clinical potential, as not only the swelling NP tissues in this study, but also infiltrated samples, produce PGE2.
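The expected wash-out referred to above can be reproduced with a simple dilution model. The sketch below assumes no degradation or tissue binding and uses the ~70% removal per medium change estimated above; it is only meant to show why a ~99% decrease would be expected between days 14 and 28 with twice-weekly changes.

```python
# Dilution model for residual celecoxib across medium changes, assuming no
# degradation, no tissue binding, and ~70% of the remaining drug removed per change.
def residual_fraction(n_changes, fraction_removed=0.7):
    """Fraction of the initial Cxb remaining after n medium changes."""
    return (1.0 - fraction_removed) ** n_changes

# Two medium changes per week -> roughly 4 changes between day 14 and day 28
for n in range(1, 5):
    print(f"after {n} change(s): {100 * residual_fraction(n):5.1f}% remaining")
# With 4 changes, <1% of the drug should remain (a ~99% decrease), whereas only
# a ~50% decrease was measured -- the discrepancy that points to tissue binding.
```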
Another interesting finding is that the gel alone was able to reduce IL-6 production to some extent, especially at day 14. This reduction was also observed at day 14 with the 10 µM loaded gels but also with the empty gels (Fig. S3), indicating that the biomaterial itself affected IL-6 production. As we used Transwell systems (pore size 0.4 µm) in this experiment, there was no direct contact between tissue and biomaterials, but degradation products could have reached the tissue. When the gels degrade, magnesium ions (Mg2+) leach out, which can affect IL-6 production [START_REF] Nowacki | Highmagnesium concentration and cytokine production in human whole blood model[END_REF]. Furthermore, a relatively large amount of biomaterial was used in the gel groups (500 µl per sample in all concentrations). Nevertheless, any direct effect of the gel on IL-6 production is only partial and transient, and any benefits thereof remain to be investigated.
Conclusion
Exposing NP tissue explants to lower tonicity conditions, without the inflammatory system response, increased PGE2 and IL-6 production. These cytokines are interesting candidates when treating acute herniation, as they are produced before infiltration of macrophages and are involved in pain and inflammation. Cxb could successfully stop PGE2, but not IL-6, production for 28 days when supplied by sustained release biomaterials. Interestingly, a bolus was able to achieve the same result, revealing that NP tissue itself can function as a carrier for sustained release of Cxb.
Figure S3. IL-6 release into the media.
Release at day 14, in pg/ml, during 3 days, normalized to original sample ww. Values are means ± standard deviation, n = 6.
Figure 1. Tissue NP explant culture system. Image of nucleus pulposus (NP) explants in artificial annulus culture. Artificial annulus samples (right) were cultured in a deep-well Transwell plate with 6 ml of culture medium. As the artificial annulus samples might be buoyant, a stainless steel cylinder was added to the culture insert (left) to keep samples submerged (drawn by Anthal Smits).
Figure 2. Schematic image of NP explants in free swelling culture. Explants were cultured in a deep-well insert plate (bottom). In some groups, celecoxib (Cxb) was added to the medium directly. Both the injectable gel (top left) and the microspheres (top right) were added using a culture insert. LDH, layered double hydroxide; pNIPAAM, poly(N-isopropylacrylamide); RT, room temperature (drawn by Anthal Smits).
Figure 3. Biochemical content of NP explants after 28 days. Weight change (A), water content (B), sulfated glycosaminoglycan (sGAG) content expressed per dry weight (dw) (C), fixed charge density in mEq/g (D), DNA content expressed per dw (E) and hydroxyproline content expressed per dw (F). Values are means ± standard deviation, n = 6. * Different from artificial annulus; p < 0.05.
Figure 4. Cytokine release into the medium over time. Release of prostaglandin E2 (PGE2, A) and interleukin-6 (IL-6, B), in pg/ml, during 3 days, normalized to original sample wet weight (ww). Values are means ± standard deviation, n = 6. * Different from artificial annulus at same time point, p < 0.05.
Figure 5. Cxb concentration measured in the medium over time. Concentration in µM at days 3, 14, 21, and 28 for both biomaterials aimed for 1 µM release, and at days 14 and 28 for bolus. Values are means + standard deviation, n = 6. * Different from gel at same time point, # different from previous time point of the same biomaterial, $ different from 0 µM, p < 0.05.
Figure S1. PGE2 release into the media. Release at day 28, in pg/ml, during 3 days, normalized to original sample ww. Values are means ± standard deviation, n = 6.
Figure S2. Cytokine release into the medium over time. Release of interleukin-1β (IL-1β, a) and tumor necrosis factor α (TNFα, b), in pg/ml, during 3 days, normalized to original sample ww. Values are means ± standard deviation, n = 6.
Table 1. Overview of the culture groups.
Acknowledgments
This research forms part of the Project P2.01 IDiDAS of the research program of the BioMedical Materials institute, a Dutch public-private partnership. We would like to acknowledge Paul van Midwoud at InGell Pharma for celecoxib release measurements, Irene Arkesteijn at TU/e for the valuable contribution to the artificial annulus model, Renz van Ee and Klaas Timmer at TNO, and Detlef Schumann at DSM for supplying us with the biomaterials.
Disclosure Statement
The academic partners did not receive direct funding from the commercial partners and only non-commercial authors interpreted the data. |
01758625 | en | [
"sdv.bc.bc",
"sdv.mhep.rsoa"
] | 2024/03/05 22:32:10 | 2007 | https://hal.science/hal-01758625/file/Potier07_HypOsteo.pdf | E Potier
E Ferreira
R Andriamanalijaona
J P Pujol
K Oudina
D Logeart-Avramoglou
H Petite
Hypoxia affects mesenchymal stromal cell osteogenic differentiation and angiogenic factor expression
Keywords: Mesenchymal stromal cells, Hypoxia, Osteogenic differentiation, Angiogenic factor, Cell survival
Mesenchymal stromal cells (MSCs) seeded onto biocompatible scaffolds have been proposed for repairing bone defects. When transplanted in vivo, MSCs (expanded in vitro in 21% O 2 ) undergo temporary oxygen deprivation due to the lack of pre-existing blood vessels within these scaffolds. In the present study, the effects of temporary (48-hour) exposure to hypoxia (1% O 2 ) on primary human MSC survival and osteogenic potential were investigated. Temporary exposure of MSCs to hypoxia had no effect on MSC survival, but resulted in (i) persistent (up to 14 days post exposure) down-regulation of cbfa-1/Runx2, osteocalcin and type I collagen and (ii) permanent (up to 28 days post exposure) up-regulation of osteopontin mRNA expressions. Since angiogenesis is known to contribute crucially to alleviating hypoxia, the effects of temporary hypoxia on angiogenic factor expression by MSCs were also assessed. Temporary hypoxia led to a 2-fold increase in VEGF expression at both the mRNA and protein levels. Other growth factors and cytokines secreted by MSCs under control conditions (namely bFGF, TGF1 and IL-8) were not affected by temporary exposure to hypoxia. All in all, these results indicate that temporary exposure of MSCs to hypoxia leads to limited stimulation of angiogenic factor secretion but to persistent down-regulation of several osteoblastic markers, which suggests that exposure of MSCs transplanted in vivo to hypoxia may affect their bone forming potential. These findings prompt for the development of appropriate cell culture or in vivo transplantation conditions preserving the full osteogenic potential of MSCs.
Introduction
Mesenchymal stromal cells (MSCs) loaded onto biocompatible scaffolds have been proposed for restoring function of lost or injured connective tissue, including bone [START_REF] Cancedda | Cell therapy for bone disease: a review of current status[END_REF][START_REF] Logeart-Avramoglou | Engineering bone: challenges and obstacles[END_REF][START_REF] Petite | Tissueengineered bone regeneration[END_REF]. Physiological oxygen tensions in bone are about 12.5% O2 [START_REF] Heppenstall | Tissue gas tensions and oxygen consumption in healing bone defects[END_REF] but fall to 1% O2 in fracture hematoma [START_REF] Brighton | Oxygen tension of healing fractures in the rabbit[END_REF][START_REF] Heppenstall | Tissue gas tensions and oxygen consumption in healing bone defects[END_REF]. In tissue engineering applications, implanted MSCs undergo temporary oxygen deprivation, which may be considered as similar to fracture hematoma (i.e., 1% O2) due to the disruption of the host vascular system (as the result of injury and/or surgery) and the lack of preexisting vascular networks within these scaffolds.
These drastic conditions of transplantation can lead to the death or functional impairment of MSCs, which can affect their ultimate bone forming potential. The exact effects of hypoxia on osteoprogenitor or osteoblast-like cells have not been clearly established, however, as several studies demonstrated a negative impact on cell growth [START_REF] Park | Hypoxia decreases Runx2/Cbfa1 expression in human osteoblast-like cells[END_REF][START_REF] Steinbrech | Hypoxia regulates VEGF expression and cellular proliferation by osteoblasts in vitro[END_REF] and differentiation [START_REF] Park | Hypoxia decreases Runx2/Cbfa1 expression in human osteoblast-like cells[END_REF][START_REF] Salim | Transient changes in oxygen tension inhibit osteogenic differentiation and Runx2 expression in osteoblasts[END_REF][START_REF] Tuncay | Oxygen tension regulates osteoblast function[END_REF], whereas others have shown that hypoxia has positive effects on cell proliferation [START_REF] Tuncay | Oxygen tension regulates osteoblast function[END_REF] and osteoblastic differentiation [START_REF] Warren | Hypoxia regulates osteoblast gene expression[END_REF]. These discrepancies may be due to the differences between the cell types (primary [START_REF] Salim | Transient changes in oxygen tension inhibit osteogenic differentiation and Runx2 expression in osteoblasts[END_REF][START_REF] Tuncay | Oxygen tension regulates osteoblast function[END_REF][START_REF] Warren | Hypoxia regulates osteoblast gene expression[END_REF] and cell lines [START_REF] Park | Hypoxia decreases Runx2/Cbfa1 expression in human osteoblast-like cells[END_REF][START_REF] Salim | Transient changes in oxygen tension inhibit osteogenic differentiation and Runx2 expression in osteoblasts[END_REF][START_REF] Steinbrech | Hypoxia regulates VEGF expression and cellular proliferation by osteoblasts in vitro[END_REF]), species (rat [START_REF] Tuncay | Oxygen tension regulates osteoblast function[END_REF][START_REF] Warren | Hypoxia regulates osteoblast gene expression[END_REF], human [START_REF] Park | Hypoxia decreases Runx2/Cbfa1 expression in human osteoblast-like cells[END_REF][START_REF] Salim | Transient changes in oxygen tension inhibit osteogenic differentiation and Runx2 expression in osteoblasts[END_REF] and mouse [START_REF] Salim | Transient changes in oxygen tension inhibit osteogenic differentiation and Runx2 expression in osteoblasts[END_REF][START_REF] Steinbrech | Hypoxia regulates VEGF expression and cellular proliferation by osteoblasts in vitro[END_REF]) and hypoxic conditions (from 0.02% to 5% O2) used. Since the success of bone reconstruction methods based on the use of engineered constructs depends on the maintenance of viable and functional MSCs, it is of particular interest to elucidate the effects of temporary hypoxia on primary human MSC survival and osteogenic potential.
MSCs secrete a wide variety of angiogenic factors (including vascular endothelial growth factor (VEGF) [START_REF] Kinnaird | Local delivery of marrow-derived stromal cells augments collateral perfusion through paracrine mechanisms[END_REF], transforming growth factor β1 (TGFβ1) [START_REF] Han | Potential of human bone marrow stromal cells to accelerate wound healing in vitro[END_REF][START_REF] Sensebe | Cytokines active on granulomonopoiesis: release and consumption by human marrow myoid [corrected] stromal cells[END_REF], and basic fibroblast growth factor (bFGF) [START_REF] Han | Potential of human bone marrow stromal cells to accelerate wound healing in vitro[END_REF][START_REF] Kinnaird | Local delivery of marrow-derived stromal cells augments collateral perfusion through paracrine mechanisms[END_REF]) and may therefore modulate angiogenic processes and participate in the vascular invasion of engineered constructs. Since effective neo-vascularization is crucial for shortening the hypoxic episodes to which transplanted MSCs are exposed, it seemed to be worth investigating the stimulatory effects of hypoxia on angiogenic factor expression by MSCs.
The aim of the present study was therefore to investigate the effects of temporary hypoxia on primary human MSC (hMSC) proliferation, osteogenic potential and angiogenic factor expression. In this study, O2 tensions ≤ 4% are termed hypoxic conditions (as these conditions represent the hypoxia to which hMSCs transplanted in vivo are subjected) and 21% O2 tensions are termed control conditions (as these conditions represent standard cell culture conditions). Cell viability was assessed after exposing hMSCs to hypoxic conditions during various periods of time. Osteogenic differentiation was assessed after temporary (48-hour) exposure of hMSCs to either control or hypoxic conditions followed by different periods of osteogenic cell culture. Expression by hMSCs of several angiogenic factors involved in new blood vessel formation (VEGF, bFGF and TGFβ) and maturation (platelet derived growth factor BB (PDGF-BB)) was assessed after temporary (48-hour) exposure of hMSCs to either control or hypoxic conditions.
Materials and Methods
Hypoxia
Hypoxia was obtained using a sealed jar (Oxoid Ltd, Basingstoke, United Kingdom) containing an oxygen chelator (AnaeroGen, Oxoid Ltd) [START_REF] Grosfeld | Transcriptional effect of hypoxia on placental leptin[END_REF]. Twice a day, the pO2 was measured by dipping an oxygen electrode directly into the cell culture medium (pH 7.2) and using an Oxylab pO2™ monitor (Oxford Optronix; Oxford, United Kingdom). The hypoxic system was left closed throughout the period of experimentation.
Cell culture
Human mesenchymal stromal cells (hMSCs) were isolated from tibia bone marrow specimens obtained as discarded tissue during routine bone surgery (spinal fusion) in keeping with local regulations. Bone marrows were obtained from 3 donors (2 males and 1 female; 14-16 years old). hMSCs were isolated using a procedure previously described in the literature [START_REF] Friedenstein | The development of fibroblast colonies in monolayer cultures of guinea-pig bone marrow and spleen cells[END_REF][START_REF] Pittenger | Multilineage potential of adult human mesenchymal stem cells[END_REF]. Briefly, cells were harvested by gently flushing bone marrow samples with alpha Minimum Essential Medium (MEM, Sigma) containing 10% fetal bovine serum (FBS, PAA Laboratories) and 1% antibiotic and anti-mycotic solution (PAA Laboratories). When the hMSCs reached 60-70% confluence, they were detached and cryopreserved at P1 (90% FBS, 10% DMSO). For each experiment, a new batch of hMSCs was thawed and cultured. Cells from each donor were cultured separately. Human endothelial cells (EC, kindly provided by Dr Le Ricousse-Roussanne) were cultured in Medium 199 (Sigma) containing 20% FBS supplemented with 15 mM HEPES (Sigma) and 10 ng/ml rhVEGF165 (R&D Systems) [START_REF] Ricousse-Roussanne | Ex vivo differentiated endothelial and smooth muscle cells from human cord blood progenitors home to the angiogenic tumor vasculature[END_REF].
Multipotency of hMSCs
Induction of osteogenic differentiation. hMSCs (passage P7) were cultured in osteogenic medium consisting of MEM containing 10% FBS, 10⁻⁷ M dexamethasone, 0.15 mM ascorbate-2-phosphate (Sigma), and 2 mM β-glycerophosphate (Sigma) [START_REF] Dennis | The STRO-1+ marrow cell population is multipotential[END_REF]. After 10 and 20 days of culture, the cells were fixed in PBS containing 1% PFA and stained with an NBT/BCIP kit (Molecular Probes) to evaluate the alkaline phosphatase (ALP) activity. Calcium deposition was assayed using the von Kossa staining method [START_REF] Bruder | Growth kinetics, self-renewal, and the osteogenic potential of purified human mesenchymal stem cells during extensive subcultivation and following cryopreservation[END_REF]. After 10 and 20 days of culture, mRNA extraction, cDNA synthesis and RT-PCR were performed as described in the "RT-PCR assays" section to assess the transcription levels of osteogenic markers (osteocalcin and osterix).
Induction of chondrogenic differentiation. hMSCs (passage P7; 2×10⁵ cells) suspended in 0.5 ml of chondrogenic medium were centrifuged for 2 min at 500 g. The chondrogenic medium used contained MEM supplemented with 6.25 µg/ml insulin, 6.26 µg/ml transferrin (Sigma), 6.25 µg/ml selenious acid (Sigma), 5.35 µg/ml linoleic acid (Sigma), 1.25 µg/ml bovine serum albumin (Sigma), 1 mM pyruvate (Sigma), and 37.5 ng/ml ascorbate-2-phosphate [START_REF] Dennis | The STRO-1+ marrow cell population is multipotential[END_REF]. After centrifugation, pellets of hMSCs were cultured in chondrogenic medium supplemented with 10 ng/ml TGFβ1 (R&D Systems) and 10⁻⁷ M dexamethasone [START_REF] Dennis | The STRO-1+ marrow cell population is multipotential[END_REF]. After 20 and 30 days of cell culture, hMSC pellets were cryo-preserved (-80°C) until immuno-histological analysis to detect the presence of human type II collagen. Human type II collagen protein was detected using a goat polyclonal IgG anti-human type II collagen antibody (200 µg/ml; Santa Cruz Biotechnology). Peroxidase-conjugated anti-goat IgG antibody (1:200; Vectastain ABC kit; Vector) was used as the secondary antibody. Peroxidase activity was monitored using a Vectastain ABC kit. Sections were counterstained using haematoxylin.
Induction of adipogenic differentiation. hMSCs (passage P7) were cultured in adipogenic medium consisting of MEM containing 10% FBS, 5 µg/ml insulin (Boehringer Manheim), 10⁻⁷ M dexamethasone (Sigma), 0.5 mM isobutylmethylxanthine (Sigma), and 60 µM indomethacin (Sigma) [START_REF] Dennis | The STRO-1+ marrow cell population is multipotential[END_REF]. After 10 and 20 days of culture, the cells were fixed in PBS containing 1% paraformaldehyde (PFA, Sigma) and stained with Oil Red O (Sigma) [START_REF] Diascro | High fatty acid content in rabbit serum is responsible for the differentiation of osteoblasts into adipocyte-like cells[END_REF]. After 10 and 20 days of cell culture, mRNA extraction, cDNA synthesis and RT-PCR were performed as described in the "RT-PCR assays" section to assess the transcription levels of adipogenic markers (fatty acid binding protein 4 (aP2) and peroxisome proliferator-activated receptor γ (PPARγ)).
Cell death assays
hMSCs (passage P5) were plated at 5,000 cells/cm 2 and allowed to adhere overnight. Cells were subsequently exposed to hypoxic conditions (without medium change) for different periods of time. Cell death was assessed by image analysis (Leica Qwin software) after staining with the Live/Dead viability/cytotoxicity kit (Molecular Probes).
hMSC osteogenic differentiation after exposure to temporary hypoxia
hMSCs (passage P5) were plated at 5,000 cells/cm2 and allowed to adhere overnight. After exposure of hMSCs either to hypoxic or control conditions for 48 hours, the cell culture supernatant medium was replaced by osteogenic medium and hMSCs were cultured in control conditions for 0, 14 and 28 days. mRNA extraction, cDNA synthesis and RT-PCR were then performed as described in the "RT-PCR assays" section to assess the transcription levels of osteogenic markers (osteocalcin, ALP, type I collagen, osteopontin, bone sialoprotein (BSP), core binding factor alpha sub-unit 1 (cbfa-1/Runx2) and bone morphogenetic protein-2 (BMP-2)).
RT-PCR assays
Cytoplasmic mRNA was extracted from cell layers using an RNeasy mini kit (Qiagen) and digested with RNase-free DNase (Qiagen) in line with the manufacturer's instructions. cDNA synthesis was performed using a Thermoscript kit (Invitrogen) and oligo(dT) primers (50 µM). PCRs were performed on an iCycler using a Multiplex PCR kit (Qiagen) with 15 ng of cDNA and 0.2 µM of each of the primers (for primer sequences see Table 1, supplemental data). After a 10-min denaturation step at 95°C, cDNA was amplified in PCR cycles consisting of a three-step PCR: a 30-sec denaturation step at 95°C, a 90-sec annealing step at 60°C, and a 90-sec elongation step at 72°C. An additional 10-min elongation cycle was conducted at 72°C. PCR products were analyzed by performing agarose gel electrophoresis and ethidium bromide staining. In each PCR, ribosomal protein L13a (RPL13a) was used as the endogenous reference gene (for primer sequences see Table 1). RPL13a was chosen among the 5 housekeeping genes tested (RPL13a, β-actin, glyceraldehyde-3-phosphate dehydrogenase, 18S ribosomal RNA, and hypoxanthine phosphoribosyltransferase 1) as the most "stable" housekeeping gene in hMSCs exposed to hypoxic conditions. cDNA from ECs was used as the positive control in the angiogenic growth factor mRNA expression assays. Semi-quantitation of the PCR products was performed using Quantity One software (BioRad). Expression of target genes was normalized taking the respective RPL13a expression levels.
Real Time PCR assays
mRNA extraction and reverse transcription were conducted as described in the "RT-PCR assays" section. Real Time PCR assays were performed on the ABI Prism 7000 SDS (Applied Biosystems) using the SYBR Green Mastermix Plus (Eurogentec) with 1.5 ng of cDNA (1/50 diluted) and 400-600 nM of each of the primers (for primer sequences see Table 2, supplemental data). After a 10-min denaturation step at 95°C, cDNA was amplified by performing two-step PCR cycles: a 15-sec step at 95°C, followed by a 1-min step at 60°C. In each Real Time PCR assay, one of the cDNAs used was diluted (1/2; 1/4; 1/8) in order to establish a standard curve and define the exact number of cycles corresponding to 100% efficiency of polymerization. Reactions were performed in triplicate and expression of target genes was normalized taking the respective RPL13a expression levels. Relative quantities of cDNA were calculated from the number of cycles corresponding to 100% efficiency of polymerization, using the 2^(-ΔΔCT) method [START_REF] Livak | Analysis of relative gene expression data using real-time quantitative PCR and the 2(-Delta Delta C(T)) Method[END_REF].
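The relative quantification step can be written out explicitly. The sketch below uses hypothetical Ct values and simply restates the 2^(-ΔΔCT) calculation with RPL13a as the reference gene; it is not the software pipeline used in the study.

```python
# Relative expression by the 2^(-DeltaDeltaCt) method, with RPL13a as reference.
# Ct values below are hypothetical.
def fold_change(ct_target_sample, ct_ref_sample, ct_target_control, ct_ref_control):
    """Fold change of a target gene in a sample relative to a control condition."""
    delta_ct_sample = ct_target_sample - ct_ref_sample      # normalize to RPL13a
    delta_ct_control = ct_target_control - ct_ref_control
    delta_delta_ct = delta_ct_sample - delta_ct_control
    return 2.0 ** (-delta_delta_ct)

# Example: a target gene whose normalized Ct drops by one cycle under hypoxia
print(fold_change(ct_target_sample=22.0, ct_ref_sample=18.0,
                  ct_target_control=23.0, ct_ref_control=18.0))   # 2.0
```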
ELISA assays
After exposing hMSCs to either hypoxic or control conditions for 48 hours, the supernatant media were collected, centrifuged at 13,000 g at 4°C for 10 min, collected, and kept at -80°C until ELISA assays were performed. VEGF, bFGF, and Interleukin-8 (IL-8) expressions were assayed using ELISA kits from R&D Systems (Quantikine) in line with the manufacturer's instructions. TGFβ1 expression was assayed using an ELISA assay developed at our laboratory [START_REF] Maire | Retention of transforming growth factor beta1 using functionalized dextran-based hydrogels[END_REF], after activating TGFβ1 by acidifying the cell culture supernatant media [START_REF] Van Waarde | Quantification of transforming growth factor-beta in biological material using cells transfected with a plasminogen activator inhibitor-1 promoter-luciferase construct[END_REF].
Angiogenesis antibody array assays
The levels of expression of 20 growth factors and cytokines were determined using the RayBio® human angiogenesis antibody array (Ray Biotech, Inc, Norcross, GA, USA). After exposing hMSCs to either hypoxic or control conditions for 48 hours, the supernatant media were collected and stored as described in the "ELISA assays" section. Protein-antibody complexes were revealed by chemoluminescence in line with the manufacturer's instructions and the results were photographed on Xomat AM film (Kodak). The following growth factors and cytokines were detected by the RayBio® angiogenesis antibody arrays: angiogenin, RANTES, leptin, thrombopoeitin, epidermal growth factor, epithelial neutrophil-activating protein 78, bFGF, growth regulated oncogene, interferon γ, VEGF, VEGF-D, insulin-like growth factor-1, interleukin 6, interleukin 8, monocyte chemoattractant protein 1 (MCP-1), PDGF, placenta growth factor, TGFβ1, tissue inhibitors of metalloproteinases 1 (TIMP-1), and tissue inhibitors of metalloproteinases-2 (TIMP-2).
Statistical analysis
Data are expressed as means ± standard deviations. Statistical analysis was performed using an ANOVA with a Fisher post hoc test. The results were taken to be significant at a probability level of P < 0.05.
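For illustration, the parametric analysis could be sketched as below on hypothetical data; Fisher's LSD is approximated here by unadjusted pairwise t-tests run only when the overall ANOVA is significant, which is not necessarily the exact routine used in the study.

```python
# One-way ANOVA followed by Fisher's LSD-style pairwise comparisons
# (sketch on hypothetical cell-death data, not the study's actual analysis).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
control      = rng.normal(3, 2, 6)     # % dead cells, hypothetical
hypoxia_72h  = rng.normal(5, 3, 6)
hypoxia_120h = rng.normal(35, 18, 6)

f_stat, p_anova = stats.f_oneway(control, hypoxia_72h, hypoxia_120h)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

if p_anova < 0.05:   # protected (LSD-style) pairwise tests, no further correction
    for name, data in (("72 h", hypoxia_72h), ("120 h", hypoxia_120h)):
        t, p = stats.ttest_ind(data, control)
        print(f"hypoxia {name} vs control: t = {t:.2f}, p = {p:.4f}")
```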
Results
Multipotency of hMSCs
In order to determine the multipotency of the human mesenchymal stromal cells (hMSCs) used in this study, hMSCs were cultured in either osteogenic, chondrogenic, or adipogenic differentiation medium.
Culture of hMSCs in osteogenic medium for 10 and 20 days increased the levels of alkaline phosphatase (ALP) activity (Fig. 1A). Osteogenic differentiation of hMSCs was confirmed by the expression of the osteogenic differentiation markers osterix and osteocalcin (Fig. 1A).
Culture of hMSCs in chondrogenic medium for 30 days resulted in the expression of the type II collagen (marker of the chondrogenic differentiation) in the cell cytoplasm and extracellular matrix (Fig. 1B). Control sections incubated with secondary antibody alone showed negative staining patterns (Fig. 1B).
Culture of hMSCs in adipogenic medium for 20 days resulted in the development of several clusters of adipocytes containing intracellular lipid vacuoles, which stained positive with Oil Red O (Fig. 1C). Expression of fatty acid binding protein 4 (aP2) and peroxisome proliferator activated receptor (PPAR) (markers of the adipogenic differentiation) by hMSCs (Fig. 1C) confirmed the ability of these cells to differentiate along the adipogenic lineage.
All these results confirm that the hMSCs used in this study are multipotent cells, since they are capable of differentiating along the osteogenic, adipogenic and chondrogenic lineages as previously demonstrated by numerous studies (for review: [START_REF] Barry | Mesenchymal stem cells: clinical applications and biological characterization[END_REF][START_REF] Jorgensen | Tissue engineering through autologous mesenchymal stem cells[END_REF][START_REF] Krampera | and Franchini Mesenchymal stem cells for bone, cartilage, tendon and skeletal muscle repair[END_REF][START_REF] Prockop | Marrow stromal cells as stem cells for nonhematopoietic tissues[END_REF]). But, even when hMSCs were committed to the osteoblastic lineage, the extracellular matrix did not mineralize after 30 days of cell culture in osteogenic medium. These results suggest that the culture conditions used in this study were suboptimal to preserve full biological function of hMSCs.
Hypoxic model
In order to check the validity of the model for hypoxia used in this study, the pO2 levels were monitored in the sealed jar for 5 days without exposure to atmospheric oxygen tensions. Moderate hypoxic conditions (pO2 = 4% O2) may be said to have been reached within 24 hours. Severe hypoxic conditions (pO2 < 1% O2) may be considered as reached after 48 hours. The pO2 levels in the cell culture medium gradually decreased, reaching a plateau corresponding to values of around 0.25% O2 after 72 hours (Fig. 2).
Effects of prolonged hypoxia on hMSC survival
To investigate the effects of hypoxia on cell survival, hMSCs were exposed to hypoxic conditions for 48, 72 and 120 hours. Exposure of hMSCs to prolonged (120 hours) hypoxic conditions resulted in limited rates of cell death (Fig. 3; 35.5 ± 18.5%), whereas temporary hypoxia did not affect hMSC survival.
Effects of temporary hypoxia on the osteogenic potential of hMSCs
Having established that temporary hypoxia has no effect on hMSC survival, its effects on hMSC osteogenic potential were assessed. After 48-hour exposure to hypoxic or control conditions, hMSCs were transferred to osteogenic medium and osteogenic differentiation was assessed by performing RT-PCR assays to detect the expression of several osteogenic markers. The levels of cbfa-1/Runx2, osteocalcin and type I collagen expression were checked by performing quantitative real-time PCR assays.
Similar levels of ALP, bone morphogenetic protein 2 (BMP2) and bone sialoprotein (BSP) expression were observed in hMSCs exposed to either hypoxic or control conditions at all time periods of osteogenic culture tested (Fig. 4).
Osteopontin expression increased after exposure of hMSCs to hypoxic conditions at all osteogenic culture times tested (0 days: 2.6-fold; 14 days: 12-fold; 28 days: 8-fold) (Fig. 4).
The levels of expression of cbfa-1/Runx2 and osteocalcin were slightly down-regulated after 0 and 14 days of osteogenic culture by temporary exposure to hypoxic conditions (0.5-fold with cbfa-1/Runx2; 0.7-fold with osteocalcin), as assessed by quantitative real time PCR assays (Fig. 5). After 28 days of osteogenic culture, however, the levels of cbfa-1/Runx2 and osteocalcin expressed by hMSCs exposed to hypoxic conditions were similar to those exposed to control conditions.
Type I collagen expression was permanently down-regulated after 48-hour exposure of hMSCs to hypoxic conditions (approximately 0.4-fold at all the osteogenic culture times tested), but this decrease was statistically significant only on days 0 and 28 of osteogenic culture (Fig. 5).
Effects of temporary hypoxia on the mRNA expression of angiogenic factors by hMSCs
Effects of temporary hypoxia on angiogenic factor expression by hMSCs were investigated. mRNA expression of angiogenic factors was assessed by performing RT-PCR assays after exposing hMSCs to either hypoxic or control conditions for 48 hours. Expression levels of key angiogenic factors (namely vascular endothelial growth factor (VEGF), basic fibroblast growth factor (bFGF), and transforming growth factors β1, β2 and β3 (TGFβ1, TGFβ2, and TGFβ3)) and those of VEGF receptor 1 and receptor 2 were studied.
No expression of PDGF-BB, VEGF receptor 1 or VEGF receptor 2 was detected under any of the conditions tested with hMSCs. However, the RT-PCR conditions used were suitable for the detection of PDGF-BB, VEGF receptor 1 and VEGF receptor 2, as these factors were detected with endothelial cells (EC) (data not shown).
Similar levels of TGFβ1 and TGFβ2 expression were detected after exposing hMSCs to either hypoxic or control conditions for 48 hours (Fig. 6). The levels of TGFβ3 expression decreased after exposure to hypoxic conditions for 48 hours (TGFβ3/RPL13a ratio: 0.21 ± 0.05), in comparison with TGFβ3 expression obtained under control conditions (0.89 ± 0.8) (Fig. 6).
Conversely, expression levels of bFGF and VEGF increased when hMSCs were exposed to hypoxic conditions for 48 hours (bFGF/RPL13a ratio: 0.71 ± 0.13; VEGF/RPL13a ratio: 1.51 ± 0.05), in comparison with the results obtained under control conditions (0.14 ± 0.01 and 0.25 ± 0.24 respectively) (Fig. 6).
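For reference, the normalised ratios reported above follow the usual form of relative quantification against a housekeeping gene (RPL13a here); the formulation below is a generic sketch of that normalisation rather than the exact quantification model of the study:

$$ R_{\text{gene}} = \frac{E_{\text{gene}}}{E_{\text{RPL13a}}}, \qquad \text{fold change (hypoxia vs. control)} = \frac{R_{\text{gene}}^{\text{hypoxia}}}{R_{\text{gene}}^{\text{control}}} $$

Applied to the mean values above, VEGF gives 1.51/0.25 ≈ 6-fold and bFGF gives 0.71/0.14 ≈ 5-fold up-regulation at the mRNA level.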
Effects of temporary hypoxia on the protein secretion levels of three major regulators of angiogenesis by hMSCs
Since the secretion of angiogenic factors is required to induce angiogenesis, the levels of protein secretion of three major regulators of angiogenesis (namely VEGF, TGF-β1, and bFGF, which were previously detected at the mRNA level) were assessed by performing ELISA assays after exposing hMSCs to either hypoxic or control conditions for 48 hours.
To measure the TGF-β1 content of the cell culture supernatant media, acid activation of the samples was required; without this activation, no TGF-β1 secretion was detectable (data not shown). TGF-β1 secretion by hMSCs exposed to hypoxic conditions (270 ± 70 pg/ml) was down-regulated in comparison with TGF-β1 secretion obtained under control conditions (570 ± 270 pg/ml), but this difference did not reach statistical significance (Fig. 7A).
bFGF secretion decreased, but not significantly, in response to exposure of hMSCs to hypoxic conditions (0.4 ± 0.3 pg/ml) in comparison with control conditions (1.2 ± 0.5 pg/ml) (Fig. 7B). Even under control conditions, however, hMSCs were found to secrete small quantities of bFGF.
Contrary to what occurred with TGF-β1 and bFGF, VEGF secretion by hMSCs exposed to hypoxic conditions (1640 ± 260 pg/ml) increased 2-fold in comparison with the results obtained under control conditions (880 ± 100 pg/ml) (Fig. 7C).
Neither TGF-β1, bFGF nor VEGF was detected in control medium alone (data not shown).
Effects of temporary hypoxia on the protein secretion of various growth factors and cytokines by hMSCs
To further investigate the effects of temporary and moderate hypoxia on hMSCs, the secretion levels of various growth factors and cytokines involved in angiogenic processes were monitored using angiogenesis antibody arrays after exposing hMSCs to either hypoxic or control conditions for 48 hours. Any changes in the growth factor and cytokine secretion levels were checked by performing conventional ELISA assays.
Similar levels of secretion of Interleukin-6 (IL-6), Monocyte Chemoattractant Protein-1 (MCP-1), Tissue inhibitor Metallo-Proteinases 1 and 2 (TIMP-1 and TIMP-2) were observed in hMSCs, whether they were exposed to hypoxic or control conditions.
Interleukin-8 (IL-8) secretion was up-regulated by exposure of hMSCs to hypoxic conditions in two out of the three donors tested. These results were confirmed by ELISA assays, which showed that IL-8 secretion by hMSCs exposed to hypoxic conditions increased (780 ± 390 pg/ml) in comparison with control conditions (440 ± 230 pg/ml). This up-regulation was not statistically significant, however, owing to the great variability existing between donors.
Other growth factors and cytokines tested using angiogenesis antibody arrays were not detected in hMSCs exposed to control or hypoxic conditions (data not shown). Neither cytokines nor growth factors were detected by angiogenesis antibody arrays incubated in control medium alone (data not shown).
Discussion
The first step in the present study consisted in evaluating the effects of reduced oxygen tensions on hMSC survival. Our results showed that 120-hour exposure to hypoxia resulted in increased cell death rates, whereas 48- or 72-hour exposure did not, but those cell death rates may have been underestimated as the method used in the present study did not take floating dead cells into account. The mechanisms underlying hMSC death upon oxygen deprivation are unclear at present. A previous study conducted on rat MSCs, however, offers some clues, as it reported the induction of caspase-dependent apoptosis under brief (24 hours) oxygen and serum deprivation (nuclear shrinkage, chromatin condensation, decrease in cell size, and loss of membrane integrity) [START_REF] Zhu | Hypoxia and serum deprivation-induced apoptosis in mesenchymal stem cells[END_REF]. hMSC viability does not seem to be affected by short-term (<72-hour) hypoxia, which is in agreement with previously published data [START_REF] Salim | Transient changes in oxygen tension inhibit osteogenic differentiation and Runx2 expression in osteoblasts[END_REF][START_REF] Utting | Hypoxia inhibits the growth, differentiation and bone-forming capacity of rat osteoblasts[END_REF]. Grayson et al. reported that long-term culture of hMSCs under hypoxic conditions (2% O 2 ) resulted in decreased cell proliferation but not in increased apoptosis after 9, 16 or 24 days of cell culture [START_REF] Grayson | Effects of hypoxia on human mesenchymal stem cell expansion and plasticity in 3D constructs[END_REF]. These findings, combined with our own, suggest that hypoxia leads only to moderate cell death and that the surviving hMSCs are still able to proliferate. The ultimate bone-forming ability of engineered constructs relies, however, on the survival of "functional" hMSCs. The second step in the present study was therefore to assess the effects of temporary hypoxia on hMSC osteogenic potential by drawing up transcriptional profiles of osteoblast membranous and extra-cellular matrix molecules (ALP, osteocalcin, osteopontin and type I collagen), of a growth factor stimulating osteoblast differentiation (BMP2) and of a transcription factor regulating bone formation (cbfa1/Runx2).
Our results show that a slight down-regulation of cbfa-1/Runx2 expression occurs after temporary exposure to hypoxia, persisting for 14 days after the end of the hypoxic episode. Cbfa-1/Runx2 transcription factor plays an essential role in controlling osteoblastic differentiation (for a review: [START_REF] Ducy | Cbfa1: a molecular switch in osteoblast biology[END_REF][START_REF] Komori | Regulation of skeletal development by the Runx family of transcription factors[END_REF]) and its inhibition is associated with a large decrease in the rate of bone formation [START_REF] Ducy | A Cbfa1-dependent genetic pathway controls bone formation beyond embryonic development[END_REF]. Similar long-lasting inhibition of osteocalcin, a late osteogenic differentiation marker, confirmed the inhibition of osteoblastic maturation of hMSCs resulting from temporary exposure to hypoxia. As occurred with type I collagen, its level of expression was durably and strongly inhibited by temporary exposure to hypoxia. Type I collagen is the main component of bone matrix and plays a central role in the mineralization process. Long-term inhibition of cbfa-1/Runx2, osteocalcin and type I collagen expressions strongly suggest that temporary exposure to hypoxia may inhibit the osteoblastic differentiation of hMSCs. Literature conducted on other cell types (human [START_REF] Matsuda | Proliferation and Differentiation of Human Osteoblastic Cells Associated with Differential Activation of MAP Kinases in Response to Epidermal Growth Factor, Hypoxia, and Mechanical Stress in Vitro[END_REF][START_REF] Park | Hypoxia decreases Runx2/Cbfa1 expression in human osteoblast-like cells[END_REF] and rat (48) osteoblasts) reports that their osteogenic differentiation is impaired by temporary exposure to hypoxia (decreased ALP activity, collagen type I, osteocalcin and cbfa-1/Runx2 expressions). Conversely, Salim et al reported that exposure of hMSCs to hypoxic (2% O 2 ) conditions did not affect their terminal differentiation [START_REF] Salim | Transient changes in oxygen tension inhibit osteogenic differentiation and Runx2 expression in osteoblasts[END_REF]. The discrepancies observed between this study and our results may be explained by different time of exposure to hypoxic conditions (24 hours and 48 hours respectively), suggesting that hMSCs are able to face hypoxia for a short period of time (< 48 hours) without losing their osteogenic potential.
Surprisingly, neither the expression of BSP, which is regulated by cbfa-1/Runx2 at both mRNA [START_REF] Ducy | A Cbfa1-dependent genetic pathway controls bone formation beyond embryonic development[END_REF] and protein levels [START_REF] Hoshi | Morphological characterization of skeletal cells in Cbfa1-deficient mice[END_REF], nor that of ALP, the enzymatic activity of which has been previously reported to be down-regulated under hypoxic conditions [START_REF] Park | Hypoxia decreases Runx2/Cbfa1 expression in human osteoblast-like cells[END_REF][START_REF] Tuncay | Oxygen tension regulates osteoblast function[END_REF][START_REF] Utting | Hypoxia inhibits the growth, differentiation and bone-forming capacity of rat osteoblasts[END_REF], were found here to be affected by temporary exposure to hypoxia. In the case of BSP expression, the down-regulation of cbfa-1/Runx2 observed in the present study may be too weak to significantly inhibit BSP expression. Moreover, Park et al. [START_REF] Park | Hypoxia decreases Runx2/Cbfa1 expression in human osteoblast-like cells[END_REF] have reported that the inhibitory effect of hypoxia on the osteoblastic differentiation of a human osteosarcoma cell line is time-dependent: the longer the hypoxic exposure time, the higher the down-regulation of osteoblastic marker expression. These results suggest that exposure times longer than those used in the present study (48-hour) may nonetheless induce a down-regulation of mRNA expression of BSP or ALP.
Osteopontin expression by hMSCs was permanently increased, on the contrary, by temporary exposure to hypoxia. Up-regulation of osteopontin induced by hypoxia has been previously observed in many other cell types, including mouse osteocytes [START_REF] Gross | Upregulation of osteopontin by osteocytes deprived of mechanical loading or oxygen[END_REF], rat aortic vascular smooth muscle cells [START_REF] Sodhi | Hypoxia stimulates osteopontin expression and proliferation of cultured vascular smooth muscle cells: potentiation by high glucose[END_REF], and human renal proximal tubular epithelial cells [START_REF] Hampel | Osteopontin traffic in hypoxic renal epithelial cells[END_REF]. In bone, osteopontin mediates the attachment of several cell types, including osteoblasts, endothelial cells and osteoclasts (for a review: [START_REF] Denhardt | Role of osteopontin in cellular signaling and toxicant injury[END_REF]). This molecule plays an important role in bone remodelling and osteoclast recruitment processes, as its absence (in knock-out mice) led to impaired bone loss after ovariectomy [START_REF] Yoshitake | Osteopontin-deficient mice are resistant to ovariectomy-induced bone resorption[END_REF] and decreased resorption of subcutaneously implanted bone discs [START_REF] Asou | Osteopontin facilitates angiogenesis, accumulation of osteoclasts, and resorption in ectopic bone[END_REF]. As far as the effects of its up-regulation are concerned, however, the results of previous studies are confusing as positive effects on rat osteoblast maturation [START_REF] Kojima | In vitro and in vivo effects of the overexpression of osteopontin on osteoblast differentiation using a recombinant adenoviral vector[END_REF] as well as negative effects on osteoblastic differentiation of the MC3T3 cell line ( 24) have been reported. But the most striking property of osteopontin may be its ability to promote macrophage infiltration (for a review: ( 9)). Increased osteopontin expression by transplanted hMSCs may therefore culminate in attracting macrophages to the bone defect site and exacerbating the inflammatory process. The exact effects of increased osteopontin expression on bone formation by hMSCs, i.e., whether it stimulates bone formation processes or attracts osteoclasts and macrophages to bone defect site, still remain to be determined.
Angiogenesis, a crucial process for oxygen supply to cells, is modulated by several proangiogenic factors (for a review: (7, 47)), whose expression is stimulated by HIF-1 (hypoxia inducible factor 1), a transcription factor activated by hypoxia (for a review see: [START_REF] Pugh | Regulation of angiogenesis by hypoxia: role of the HIF system[END_REF][START_REF] Wenger | Cellular adaptation to hypoxia: O2-sensing protein hydroxylases, hypoxia-inducible transcription factors, and O2regulated gene expression[END_REF]). The third step in the present study was therefore to assess the effects of temporary exposure to hypoxia on angiogenic factor expression by hMSCs. Our results showed that a 2-fold up-regulation of VEGF expression by hMSCs occurs under hypoxic conditions at both the mRNA and protein levels. These findings are in agreement with previous reports that hypoxia increases VEGF expression in the MC3T3 cell line [START_REF] Steinbrech | Hypoxia regulates VEGF expression and cellular proliferation by osteoblasts in vitro[END_REF]. The expression of the other growth factors and cytokines studied here, although regulated at the mRNA level, was not affected at the protein level by temporary exposure to hypoxia. bFGF expression, indeed, was up-regulated by exposure to hypoxia at the mRNA level but not at the protein level. The discrepancies between mRNA and protein may be explained by a shorter half-life of bFGF, a lower translation efficiency or the absence of post-translational modification under hypoxia. Moreover, several studies comparing genomic and proteomic analyses report moderate or no correlation between RNA and protein expression [START_REF] Chen | Discordant Protein and mRNA Expression in Lung Adenocarcinomas[END_REF][START_REF] Huber | Comparison of Proteomic and Genomic Analyses of the Human Breast Cancer Cell Line T47D and the Antiestrogen-resistant Derivative T47D_r*[END_REF].
Even so, MSCs are able to durably enhance (for up to 28 days) tissue reperfusion when transplanted into ischemic myocardium [START_REF] Fazel | Cell transplantation preserves cardiac function after infarction by infarct stabilization: augmentation by stem cell factor[END_REF][START_REF] Shyu | Mesenchymal stem cells are superior to angiogenic growth factor genes for improving myocardial performance in the mouse model of acute myocardial infarction[END_REF]. Stimulation of VEGF alone does not suffice, however, to trigger the formation of functional vascular networks, as attempts to accelerate vascularization by over-expressing VEGF (using a genetic system) resulted in the formation of immature, leaky blood vessels in mice [START_REF] Ash | Lens-specific VEGF-A expression induces angioblast migration and proliferation and stimulates angiogenic remodeling[END_REF][START_REF] Dor | Conditional switching of VEGF provides new insights into adult neovascularization and pro-angiogenic therapy[END_REF][START_REF] Ozawa | Microenvironmental VEGF concentration, not total dose, determines a threshold between normal and aberrant angiogenesis[END_REF]. These findings suggest either that the secretion levels of multiple angiogenic factors by MSCs, even if they are not up-regulated by hypoxia, suffice to promote vascular invasion of ischemic tissues; that MSCs secrete other growth factors and cytokines involved in angiogenesis, the expression levels of which have not been studied here; or that MSCs may indirectly promote angiogenesis in vivo by stimulating the secretion of angiogenic factors by other cell types.
The present study shows that exposure of primary hMSCs to temporary hypoxia results in persistent down-regulation of cbfa-1/Runx2, osteocalcin and type I collagen levels, but in the upregulation of osteopontin expression, which may therefore limit in vivo bone forming potential of hMSCs. This study, however, only addressed the effects of a transient 48-hour exposure to hypoxia with osteogenic differentiation conducted in hyperoxic conditions (21% O 2 ). When transplanted in vivo, MSCs undergo temporary oxygen deprivation but will never come back to hyperoxic conditions as the maximum oxygen tensions reported either in blood [START_REF] Heppenstall | Tissue gas tensions and oxygen consumption in healing bone defects[END_REF] or in diaphyseal bone (4) do not exceed 12.5% O 2 . One may then expect more disastrous effects on hMSC osteoblastic differentiation when cells are transplanted in vivo than when they are exposed to in vitro 48-hour hypoxia. It may be therefore of great interest to determine what in vitro hMSC culture conditions are most appropriate for preserving their osteogenic potential after their in vivo implantation.
Table 1. Primer sequences for target and housekeeping genes used in RT-PCR assays.
* The accession number is the GenBank™ accession number. GAPDH: glyceraldehyde-3-phosphate dehydrogenase; bFGF: basic fibroblast growth factor; PDGF-BB: platelet derived growth factor-BB; VEGF-R1: VEGF receptor 1 (Flt-1); VEGF-R2: VEGF receptor 2 (KDR); Type I coll.: type I collagen; cbfa-1/Runx2: core binding factor alpha 1 subunit/Runx2; BMP2: bone morphogenetic protein 2; BSP: bone sialoprotein; ALP: alkaline phosphatase; RPL13a: ribosomal protein L13a.
Figure 1. Multipotency of hMSCs.
A. Induction of osteogenic differentiation. After 10 and 20 days of cell culture in osteogenic medium, osteogenic differentiation was assessed by determining the ALP activity and by performing RT-PCR analysis of osterix and osteocalcin expression (n=1 donor). H2O was used as the negative control for RT-PCR. B. Induction of chondrogenic differentiation. After 20 and 30 days of cell culture in chondrogenic medium, chondrogenic differentiation was assessed by performing immuno-histological analysis of human type II collagen expression. Sections were counter-stained using haematoxylin. Incubation with secondary antibody alone was used as the negative control. Scale bar = 10 µm. (n=1 donor). C. Induction of adipogenic differentiation. After 10 and 20 days of cell culture in adipogenic medium, adipogenic differentiation of hMSCs was assessed by Oil Red O staining, and the levels of aP2 and PPARγ expression were determined by performing RT-PCR analysis. H2O was used as the negative control for RT-PCR. Scale bar = 50 µm. (n=1 donor).

Figure 2. pO2 levels with time in the hypoxic system. Cell culture medium was placed in a sealed jar containing an oxygen chelator. Twice a day and during 5 days, pO2 levels were measured with a pO2 oxygen sensor without opening the hypoxic system. Values are means ± SD; in triplicate.

Figure 3. hMSC death rate under hypoxic conditions. hMSCs were exposed to hypoxic conditions for 48, 72 and 120 hours. Cell death rates were assessed by Live/Dead staining followed by image analysis. Values are means ± SD; n=3 donors.

Figure 4. Effects of temporary hypoxia on the osteogenic potential of hMSCs. hMSCs were exposed to either control (21% O2) or hypoxic (1% O2) conditions for 48 hours. After exposure, the media were replaced by osteogenic medium and hMSCs were cultured in control conditions for 0, 14 and 28 days. At the end of these time periods, osteoblastic differentiation was evaluated by performing RT-PCR analysis on osteoblastic markers. RPL13a was used as the endogenous reference gene. Results presented here were obtained on one donor representative of the three donors studied.

Figure 5. Effects of temporary hypoxia on the cbfa-1/Runx2, osteocalcin and type I collagen expression by hMSCs. hMSCs were exposed to either control (21% O2; white bars) or hypoxic (1% O2; grey bars) conditions for 48 hours. After exposure, the media were replaced by osteogenic medium and hMSCs were cultured in control conditions for 0, 14 and 28 days. At the end of these time periods, mRNA expression levels of cbfa-1/Runx2, osteocalcin and type I collagen were determined by performing Real-Time PCR. RPL13a was used as the endogenous reference gene. Values are means ± SD; n=3 donors; the assays performed on each donor were carried out in triplicate.

Figure 6. Effects of temporary hypoxia on the mRNA expression of angiogenic factors by hMSCs. hMSCs were exposed to either control (21% O2; white bars) or hypoxic (1% O2; grey bars) conditions for 48 hours. Expression levels of TGF-β1, TGF-β2, TGF-β3, bFGF, and VEGF were normalized using the respective expression levels of RPL13a. Values are means ± SD; n=3 donors.

Figure 7. Effects of temporary hypoxia on the protein secretion of three major regulators of angiogenesis by hMSCs. hMSCs were exposed to either control (21% O2; white bars) or hypoxic (1% O2; grey bars) conditions for 48 hours. The secretion levels of TGF-β1 (A), bFGF (B) and VEGF (C) were then determined using ELISA assays. Values are means ± SD; n=3 donors.
Table 2 . Primer sequences for target and housekeeping genes used in real time PCR assays.
* The accession number is the GenBank™ accession number. cbfa-1/Runx2: core binding factor alpha 1 subunit/Runx2; Type I coll.: type I collagen; RPL13a: ribosomal protein L13a.
Acknowledgments
We thank Dr. Michele Guerre-Millo for providing the sealed jar for hypoxic cell culture conditions, Dr. Sylviane Dennler and Dr. Alain Mauviel for their expert assistance with the RT-PCR assays, and Dr. Sophie Le Ricousse-Roussanne for providing endothelial cells. We would also like to express special thanks to Professor Christophe Glorion and Dr. Jean-Sebastien Sylvestre for their help.
Disclosure Statement
The authors declare no competing financial interests.
Guilhem Grimaud
email: guilhem.grimaud@ensam.eu
Nicolas Perry
Bertrand Laratte
Aluminium cables recycling process: Environmental impacts identification and reduction
The life cycle impacts of European generic primary and secondary aluminium are well documented. However, specific end-of-life scenarios for aluminium products are not available in the literature. In this study, the environmental performance of a cable recycling pathway is examined using the Life Cycle Assessment (LCA) methodology. The data come from a recycling plant (MTB Recycling) in France. The MTB Recycling process relies only on mechanical separation and optical sorting steps applied to shredded cables to obtain high-purity aluminium (above 99.6%). The life cycle assessment results confirm the large environmental benefits of recycled aluminium in comparison with primary aluminium. In addition, our study demonstrates the gains of product-centric recycling pathways for cables. Mechanical separation is a relevant alternative to metal smelting recycling.
This work was carried out firstly to document the specific environmental impact of the MTB Recycling processes in comparison with traditional aluminium recycling by smelting, and secondly to provide an environmental overview of the process steps in order to reduce the environmental impact of this recycling pathway. The environmental hotspots identified by the LCA for the MTB recycling pathway help designers to keep reducing its environmental impact.
Introduction
General context
The European demand for aluminium has been growing over the past few decades at a rate of 2.4% per annum [START_REF] Bertram | Aluminium Recycling in Europe[END_REF]. The aluminium mineable reserves are large but finite; an average value for the ultimately recoverable reserve is about 20-25 billion tons of aluminium. Nowadays, aluminium production is about 50 million tons per year [START_REF] Sverdrup | Aluminium for the future: modelling the global production, market supply, demand, price and long term development of the global reserves[END_REF]. The increase in aluminium demand in Europe is mainly supported by the rise of recycling, which grew over the same period at about 5% per annum [START_REF] Bertram | Aluminium Recycling in Europe[END_REF][START_REF] Blomberg | The economics of secondary aluminium supply: an econometric analysis based on European data[END_REF]. The abundance and the versatility of aluminium in various applications have made it one of the top solutions for lightweight metal strategies in various industries such as automotive [START_REF] Liu | Addressing sustainability in the aluminum industry: a critical review of life cycle assessments[END_REF]. In the cable industry, substituting aluminium for copper can considerably reduce the linear weight without degrading the electrical properties too much [START_REF] Bruzek | Eco-Friendly Innovation in Electricity Transmission and Distribution Networks[END_REF]. To obtain optimal electrical conductivity, the aluminium used for cables has a purity above 99.7% [START_REF] Goodwin | Metals[END_REF]. Because secondary aluminium does not meet the quality requirements of aluminium cable manufacturers, only primary aluminium is used in the aluminium cable supply chain. Nevertheless, improvements in recycling could help reach these quality targets by using new sorting technologies.
Aluminium properties are not deteriorated by recycling. However, in most cases aluminium parts are mixed together at the end-of-life step without considering their provenance and use. As a result, the seven series of aluminium are mixed together in waste treatment plants. Not all aluminium series have the same purity, and alloying elements contaminate the aluminium. When aluminium series are mixed together, the cost-effective refining solution uses furnaces. As the metal is molten, the separation is done by exploiting differences in density and buoyancy (decantation, centrifugation, filtration, flotation, etc.) [START_REF] Rombach | Future potential and limits of aluminium recycling[END_REF]. Despite technology optimisations, some alloying elements are lost in the process [START_REF] Paraskevas | Sustainable metal management and recycling loops: life cycle assessment for aluminium recycling strategies[END_REF] and a fraction of the metal is not recycled [START_REF] Ohno | Toward the efficient recycling of alloying elements from end of life vehicle steel scrap[END_REF]. This leads to a drop in metal quality, which amounts to down-cycling [START_REF] Allwood | Squaring the circular economy: the role of recycling within a hierarchy of material management strategies[END_REF].
When all the aluminium waste streams are mixed, it becomes very difficult to maintain a high level of purity in the recycled aluminium. Streams of material available for recycling become increasingly impure as they move further along the materials processing chain, and refining the stream for future high-quality use therefore becomes more difficult. Recycling materials from mixed-material products discarded in mixed waste streams is the most difficult case [START_REF] Allwood | Material efficiency: a white paper[END_REF]. To take a step toward the circular economy, it is essential to achieve a recycling industry that preserves material quality [START_REF] Sauvé | L'économie circulaire: Une transition incontournable[END_REF].
Upstream, the solution lies in a better separation of aluminium products to steer each flow to a specific recycling chain. This strategy should enable products to be guided through the best recycling pathway and maintain the quality of alloys. This strategy makes it possible for manufacturing companies to take back their own products and secure their material resources [START_REF] Singh | Resource recovery from post-consumer waste: important lessons for the upcoming circular economy[END_REF]. Increasing the quality of recycled materials should allow recycling company to integrate close loop End-of-Life (EoL) strategy.
Morphology of aluminium cables
Cables are composed of numerous materials. As illustrated in Fig. 1, they are made of an aluminium core (a) covered with a thick polymer layer (b). Additional metallic materials (c) are coaxially integrated into the cable matrix. These cables are manufactured by extruding together all the materials that compose them.
Table 1 shows the mass proportion of the materials contained in cables. The mass proportions are extracted from MTB monitoring data on cables recycled at the plant between 2011 and 2014. Aluminium represents between 35 and 55% of the total cable weight. The other metals are mainly steel, lead, copper and zinc. The variety of plastics contained in the sheath is even greater than that of the metals: silicone rubber, polyethylene (PE), cross-linked PE (xPE), polypropylene, polychloroprene, vulcanised rubber, ethylene vinyl acetate, ethylene propylene rubber, flexible polyvinyl chloride (PVC), etc. (Union Technique de l'Électricité (UTE), 1990). Although aluminium cables represent about 8% of aluminium products in Western Europe (European Aluminium Association (EAA), 2003), the inherent purity of the aluminium used for cables justifies differentiated recycling channels to optimise processing steps and improve cost efficiency. At the end of life, the challenge lies in separating the materials from each other. The most economical way to separate the different materials relies on smelting purification (European Aluminium Association (EAA), 2006).
Presentation of MTB recycling process for aluminium cables
An alternative process for EoL cables uses only mechanical steps instead of thermal and wet separation, as developed over several years by MTB Recycling, a recycling plant located in the south-east of France. The specific processes were developed by MTB engineers and the system is sold worldwide as a cable recycling solution. It reaches a standard aluminium purity of up to 99.6% for qualities A and B (Table 2). This performance is obtained using only mechanical separation and optical sorting processes on shredded cables. Aluminium quality D production mainly comes from flexible aluminium; our study does not consider this production.
Each batch of aluminium (25 t) produced by MTB is analysed by laboratory spectrometry. Table 2 presents the averaged analysis results for the chemical elements present in the aluminium batches. Between 2012 and 2014, more than 400 lots were analysed, and during this period only 40 batches were below the average. The aluminium obtained from recycled cables is particularly appreciated by smelters: its high purity makes it easy to produce a wide variety of aluminium alloys. Recycled aluminium can then be used in many aluminium products and not only in applications requiring highly alloyed aluminium.
Issues of the study
The initial motivation for our study was to rank the environmental performance of the MTB recycling pathway against other aluminium recycling solutions, and to identify the main processes contributing to its global environmental impact. What are the environmental gains of moving beyond aluminium recycling by smelting? This article first presents the environmental assessment results that enabled the comparison of the three aluminium production scenarios. On the one hand, the study demonstrates the large environmental benefits of recycled aluminium in comparison with primary aluminium; on the other hand, the results show the harmful environmental influence of heat refining compared with the mechanical sorting processes used at the MTB plant. The study demonstrates the interest of recycling waste streams separately from each other.
Although the starting point of the study was to assess and document the environmental impact of a specific recycling pathway, the results also made it possible to identify several environmental hotspots of the MTB Recycling process, which in turn led to effective implementations for reducing the environmental impacts of MTB recycled aluminium. This article presents how the Life Cycle Assessment methodology allowed the engineering team to improve the environmental efficiency of the MTB Recycling processes.
Methodological considerations
Environmental assessment of aluminium recycling
To evaluate the environmental performance of the MTB cable recycling pathway, we chose to use the Life Cycle Assessment (LCA) methodology. Existing LCA models of aluminium recycling in Europe [START_REF] Bertram | Aluminium Recycling in Europe[END_REF], however, always relate to the standard melting solution for recycled aluminium. That is why this study focuses on the environmental assessment of cable recycling with the MTB-specific processes, which have never been documented using LCA. The environmental impact assessment is done using the ILCD Handbook recommendations (JRC -Institute for Environment and Sustainability, 2012a). Two systems are compared to the MTB cable recycling pathway (scenario 3):
• Scenario 1: European primary aluminium
• Scenario 2: secondary aluminium from European smelters
The primary aluminium production (scenario 1) is used as a reference for guidance on the quality of production. Comparison with scenario 1 should help to translate the environmental benefits of recycling. Foremost, our analysis is intended to compare possible recycling pathways for the aluminium wastes. With this in mind, scenario 2 (secondary aluminium) is used as a baseline to evaluate the MTB alternative recycling pathway (scenario 3).
Sources of data for the life cycle inventory
The evaluation is designed by modelling input and output flows that describe different systems of aluminium recycling with the software SimaPro 8.04 [START_REF] Goedkoop | Introduction to LCA with SimaPro[END_REF][START_REF] Herrmann | Does it matter which Life Cycle Assessment (LCA) tool you choose? A comparative assessment of SimaPro and GaBi[END_REF]. All the flows are based on processes from Ecoinvent 3.1 library [START_REF] Wernet | Introduction to the ecoinvent version 3.1 database. Ecoinvent User Meeting[END_REF]. The systems are developed according to the local context of Western Europe. To allow comparison, all the inventory elements are compiled based on the Ecoinvent database boundaries and data quality is checked [START_REF] Weidema | Ecoinvent Database Version 3?the Practical Implications of the Choice of System Model[END_REF][START_REF] Weidema | Overview and methodology[END_REF]. Once modelling was done, the characterisation is conducted according to International Reference Life Cycle Data System (ILCD) Handbook (JRC -Institute for Environment and Sustainability, 2012a) recommendations.
This study compares two different modelling approaches: scenarios 1 and 2 use available foreground data from the Ecoinvent library without any modification, whereas scenario 3 uses Ecoinvent data to model the MTB Recycling pathway, with an inventory dataset built following the recommendations of the European Joint Research Centre (JRC -Institute for Environment and Sustainability, 2010). For scenario 1 (European primary aluminium) and scenario 2 (secondary aluminium from European smelters), the data were collected by the European Aluminium Association (EAA) and aggregated in Ecoinvent 3.1 (Ruiz Moreno et al., 2014, 2013). The MTB scenario was modelled using specific data from the MTB Recycling plant. The data collection method does not allow the results to be used for other cable recycling pathways; the results are only representative of the cable recycling solutions developed by MTB. Nevertheless, the three models rely on the same system boundaries.
Life cycle impact assessment methodology
Table 3 presents the indicator models selected for the life cycle impact assessment method. The two models shown in italics in Table 3 are the ones that do not follow the recommended ILCD 2011 impact assessment methodology (JRC -Institute for Environment and Sustainability, 2012b), which was used throughout the study. For the human toxicity indicators, the USEtox (recommended and interim) v1.04 (2010) [START_REF] Huijbregts | Global Life Cycle Inventory Data for the Primary Aluminium Industry -2010 Data[END_REF] model was implemented to improve our characterisation method with the latest calculation factors, as recommended by UNEP and SETAC [START_REF] Rosenbaum | USEtox-the UNEP-SETAC toxicity model: recommended characterisation factors for human toxicity and freshwater ecotoxicity in life cycle impact assessment[END_REF]. First results on water resource depletion with the default calculation factor from Ecoscarcity [START_REF] Frischknecht | The ecological scarcity method -ecofactors 2006. A method for impact assessment in LCA[END_REF] showed anomalies. These anomalies are all related to the Ecoinvent transportation modelling, which involves the electricity mix of Saudi Arabia. For the water resource depletion indicator, the Pfister water scarcity v1.01 (2009) [START_REF] Pfister | Assessing the environental impact of freshwater consumption in life cycle assessment[END_REF] calculation factor was therefore implemented in our characterisation method. It does not completely remove the anomalies in the characterisation, but it significantly reduces the contribution of transport to the water scarcity indicator. A sensitivity analysis on the characterisation method was conducted using two other characterisation methods, ReCiPe Midpoint v1.1 and CML IA Baseline v3.01; these calculations did not show any divergence in the scenario ranking on the set of indicators.
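As a reminder, each midpoint indicator in Table 3 is obtained by the generic LCIA characterisation step, i.e. by weighting the life cycle inventory flows with the characterisation factors of the selected model (a generic formulation, not an equation specific to this study):

$$ IS_c = \sum_i CF_{c,i}\, m_i $$

where $IS_c$ is the impact score for impact category $c$, $m_i$ the amount of elementary flow $i$ in the life cycle inventory, and $CF_{c,i}$ the characterisation factor of flow $i$ for category $c$ (e.g. kg CO2-eq per kg of emitted substance for climate change).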
Life cycle assessment study scope
This study is based on a life cycle approach, in accordance with the International Organisation for Standardisation standards (ISO 14040/44) (International Standard Organization, 2006a,b). Fig. 2 shows the representation of a standard product life cycle, including the product life cycle stage and the end-of-life stage. As shown in Fig. 2, the product life cycle stage of aluminium is not included in our study scope.
Functional unit proposal
As part of this study, the functional unit used is as follows: producing one ton of aluminium intended for end-user applications, with the purity higher than 97% using current industrial technologies (annual inbound processing higher than 10,000 t) located in Europe.
The products compared are of matching quality and can fulfil the same function, since high-purity aluminium can be used to produce many alloys without refining. We selected three scenarios that meet all the conditions of the functional unit:
• Scenario 1 or primary: primary aluminium, resulting from mining.
• Scenario 2 or secondary: secondary aluminium from recycling by smelter.
• Scenario 3 or MTB: MTB aluminium, from recycling using the MTB solution.
Presentation of the system boundaries
Fig. 3 presents the main steps considered in each scenario of the comparison. The study focuses on the transformation steps of aluminium, which is why the chosen system boundary is a cradle-to-exit-gate model [START_REF] Grisel | L'analyse du cycle de vie d'un produit ou d'un service: Applications et mises en pratique[END_REF][START_REF] Jolliet | Analyse du cycle de vie: Comprendre et réaliser un écobilan[END_REF]. For scenarios 1 and 2 the final product is aluminium ingots, while for scenario 3 the final product is aluminium shot. In any case, the three scenarios meet the functional unit: in both forms, the aluminium can be used to produce semi-finished products.
Scenario development
The baseline scenarios (scenarios 1 and 2) refer to the Western European average consumption of aluminium. The scenario 1 and scenario 2 are based on Ecoinvent unit processes modelling. Ecoinvent database uses the EAA Life Cycle Inventory (LCI) [START_REF] Althaus | Life Cycle Inventories of Metals -Final Report Ecoinvent Data v2.1[END_REF]. For Ecoinvent 3.1 (Ruiz Moreno et al., 2014, 2013), the Aluminium processes are built with data collected by EAA in 2013 (European Aluminium Association (EAA), 2013; International Aluminium Institute (IAI), 2014). The Ecoinvent modelling uses data from the average technology available on the market for Western Europe [START_REF] Weidema | Overview and methodology[END_REF].
Scenario 1: primary aluminium production
The Fig. 4 presents the different steps required (and included in the modelling) for the primary aluminium dataset. The figure adds more details about the intermediate steps required to obtain ingots of primary aluminium. The scenario for primary aluminium comes from Ecoinvent data. The data used for the study is aluminium production, primary, ingot. This data meets the purity requirements established in the functional unit. At this stage of the production process, the aluminium contains only 1.08% silicon and the overall purity is 98.9%. The modelling of primary aluminium is based on the average of primary aluminium production for the European market. The technology considered corresponds to the up-to-date technologies used in Europe. The electricity mix used by the primary aluminium industry is a specific electricity mix. Modelling this mix relies on the compilation of specific data for all European primary aluminium producers. This mix is made up with over 80% from hydroelectric power, 8.7% of electricity from nuclear and the remaining part, 11.3% comes from fossil fuel. For the unit process data used, the downstream transport to the market is not considered, but all the upstream logistic for the transformation steps are included in the boundaries. As processing operations, shown on Fig. 4, are conducted in multiple locations, the total distance travelled is 11,413 km (11,056 km by sea, 336 km by road and 21 km by train).
Scenario 2: conventional aluminium recycling
Scenario 2 models the traditional aluminium recycling solution. This scenario is based on shredding steps and a melting purification step performed by refiners. Like scenario 1, scenario 2 is based on average values of European smelters. The data were compiled by the EAA and provided in the Ecoinvent database. The collection of waste is not included in this second scenario, but the transport between the massification point and the waste treatment plant is included in the modelling. Aluminium wastes travel 322 km (20 km on water, 109 km by train and 193 km by road). The electricity mix used in the modelling is equivalent to the electricity mix provided by the European Network of Transmission System Operators for Electricity (ENTSO-E). It is mainly fossil fuel (48.3%), nuclear power (28.1%) and renewable energy (23.6%) (ENTSO-E, 2015). Fig. 5 presents aluminium recycling as modelled in the Ecoinvent dataset. The modelling is divided into five steps: four mechanical separation steps and one thermal step. The shredding step reduces the material to a size of around 15-30 mm. The mechanical separations carried out as part of scenario 2 are coarse separations: the recycling plants have equipment to handle large quantities of waste without guarantees of quality, designed only to prepare for melting and not to purify the aluminium. The objective is therefore to reduce the amount of plastic and ferrous elements, not to fully eliminate such pollution from the waste stream.
In Ecoinvent, two data collections are available: one for production scrap (new scrap) and one for post-consumer scrap (old scrap). The processes used for recycling new and old scrap are not the same; new scrap needs fewer operations than old scrap. The inbound logistics also differ because some of the waste is recycled directly at production plants. For the study, the ratio between old and new scrap is based on the European aluminium mix (International Aluminium Institute (IAI), 2014): in 2013, old scrap represented 46.3% of the aluminium recycled in Europe and new scrap 53.7%. After the recycling process, two outlets are possible: wrought or cast aluminium. For the study, the choice falls on wrought aluminium because it has the purity required by the functional unit (97%). The dataset chosen for the study is Aluminium, wrought alloy {RER} | Secondary, production mix (Ruiz Moreno, 2014). The Ecoinvent modelling does not show the co-products separated during the recycling process. Six by-products are included in the environmental impact calculation, but no benefit from by-product recycling is integrated into the study, to remain consistent with the Ecoinvent modelling. For the MTB scenario (scenario 3), the distribution between post-consumer cables (54%) and new scrap (46%) is inverted relative to scenario 2. However, the breakdown between old and new scrap has no influence on the recycling steps used at the MTB plant. All the transport steps are made by road; the transport distances considered are 540 km for old scrap and 510 km for new scrap from various cable manufacturers. As shown in Table 2, the recycled aluminium reaches at least 99.6% aluminium purity.
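In both recycling scenarios the inbound stream is therefore a mix of old and new scrap. Under the usual production-mix assumption (a generic formulation, consistent with the Ecoinvent production-mix datasets but not an explicit equation of this study), the impact per tonne of secondary aluminium is simply the share-weighted combination of the two routes:

$$ I_{\text{mix}} = w_{\text{old}}\, I_{\text{old}} + w_{\text{new}}\, I_{\text{new}} $$

with $w_{\text{old}} = 0.463$ and $w_{\text{new}} = 0.537$ for scenario 2, and 0.54/0.46 for the MTB inbound stream, where, as noted above, the split does not change the processing steps themselves.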
MTB Recycling has an environmentally friendly strategy at the top management level. One of the company's commitments was to source exclusively renewable energy for the recycling plant. Therefore, they subcontracted with an energy provider that ensures an electricity mix from renewable energy source called EDF Equilibria. Electricity comes almost exclusively from hydroelectric power (96.92% from alpine reservoirs and 2.4% from run of the river). The remaining electricity comes from waste to energy plants (0.51%) and from cogeneration plants (0.17%) [START_REF] Powernext | Lots of Certified Energy Supplies to MTB Plant -Period[END_REF].
To present both the advantages of mechanical refining and the specific results at the Trept MTB Recycling plant, we have divided Scenario 3 into two. Scenario 3a corresponds to the modelling using the same electrical mix as scenario 2: ENTSO-E electricity mix. For scenario 3a, the recycling processes are rigorously compared with the same scope. The scenario 3b corresponds to the modelling using the specific green electricity mix used by the MTB Recycling plant. For scenario 3b case, the MTB recycled aluminium is compared to the other aluminium produced considering the MTB plant specific context.
During MTB cables recycling steps, the various separation steps produce co-products, mainly plastics and other metals. Except for plastics which are considered as waste, other co-products are not included in the study: their environmental impact is considered as zero. Although these by-products are recycled, the full impact of separation steps is transferred to the production of recycled aluminium. A sensitivity analysis was conducted on the allocation method and the results show that the boundaries used for scenario 3 maximise the impact of the aluminium produced by the MTB Recycling pathway [START_REF] Stamp | Limitations of applying life cycle assessment to complex co-product systems: the case of an integrated precious metals smelter-refinery[END_REF]. Fig. 7 presents aluminium recycling steps considered in the modelling of scenario 3. The main difference in the scenarios 2 and 3 pathways is concentrated in the second half of the chart on Fig. 7. Aluminium cables recycling starts with shredding. At the MTB Recycling plant, the shredding is done to obtain homogenous particles of size between 5 and 7 mm. The size reduction is done in four steps: two heavy duty shredding steps and two granulation steps. Between each shredding step, magnets are positioned to capture ferrous elements. After shredding, the mechanical and optical separation steps are used to get the best purity of aluminium. The recycled aluminium D is out of scope for this study but the mixture of plastic and aluminium is considered in the LCA study as a waste.
Life cycle inventory summary
To facilitate the reading of the results, Table 4 gives the main information of the life cycle inventory of each scenario.
Comparison of the life cycle assessment results
Comparison of the 3 scenarios
In this section, we are interested in the three scenarios comparison. The Fig. 8 draws the comparison for the three scenarios, the values used for the characterisation are given on the figure. As expected the scenario 1 emerges as far more significant on all indicators except for freshwater eutrophication where recycling aluminium (scenario 2) takes the lead. On freshwater eutrophication impact category, the scenario 2 (secondary aluminium) has the highest impact, even higher than primary aluminium (scenario1) due to the addition of alloying metals during the aluminium recycling. The alloying elements are required to supply the market with aluminium alloys that meet the market constraints. The copper is the main alloying element contributing to the impact on the freshwater eutrophication. Indeed the copper production chain requires sulphuric tailing [START_REF] Norgate | Assessing the environmental impact of metal production processes[END_REF] and this step represents 96.4% on the impact category. This result seems to be a modelling error into Ecoinvent 3.1. Our team do not consider the results of the freshwater eutrophication impact category from LCA to draw any conclusion.
Average secondary aluminium reaches approximately 10% of the primary aluminium environmental impacts. These results match previous evaluations and are consistent with the values given by the Bureau of International Recycling (BIR) for the benefits of aluminium recycling: in its report, BIR estimates that the energy saving of recycling aluminium is 94% compared with the production of primary aluminium (Bureau of International Recycling, 2010). It should be noted that the use of a high-carbon electricity mix (ENTSO-E) for recycling tends to reduce the gains once translated into environmental impact categories.
As explained in Life Cycle Performance of Aluminium Applications [START_REF] Huber | Life cycle performance of aluminium applications[END_REF] only the European Aluminium Association has conducted an LCA study to provide generic LCI data about aluminium production and transformation processes which are based on robust data inventory. This work, although focusing primarily on European aluminium production, also provides results for the rest of the world whose production can be imported into Europe. Moreover, the International Aluminium Institute (IAI) concentrates mainly on the production of primary aluminium and omits the scope of secondary aluminium which is only addressed by EAA (International Aluminium Institute (IAI), 2013).
The new contribution of this study concerns the environmental comparison of the mechanical recycling of aluminium cables with smelting recycling and primary aluminium production. Across the whole set of indicators, MTB aluminium (scenario 3b) is between 2.5% and 5% of the scenario 1 environmental impacts.
Recycling scenarios comparison
In this section, we are interested in the comparison of the aluminium recycling scenarios. In the previous characterisation, the differences between scenarios 2 and 3 are not clearly visible on the graphical representation. Fig. 9 gives the opportunity to compare the two recycling pathways more specifically; the values used for the histogram representation in Fig. 9 are given on the figure.
The environmental impacts of scenario 3a represent between 5% and 82% of the scenario 2 environmental impacts, except for the ionising radiation impact category. The results for scenario 3a on the ionising radiation impact category are related to the high electricity consumption during the shredding steps: with the ENTSO-E mix, which contains a large proportion of nuclear energy (28.1%), electricity consumption contributes 70% of the ionising radiation impact category and transport contributes 21%. The high consumption of electricity from nuclear power also contributes significantly to the ozone depletion impact category.
Using only mechanical separation steps can halve the environmental impact. When the aluminium is produced using the specific electricity mix (scenario 3b), the environmental impact never exceeds that of scenario 2; the environmental impact of scenario 3b represents between 2% and 46% of that of recycling by melting over the set of impact categories. Thanks to the MTB Recycling pathway (scenario 3b), the environmental impact of recycled aluminium is divided by four on the set of indicators.
The results from Fig. 9 allow us to establish an environmental hierarchy between the different recycling solutions for aluminium cables. Whatever the electricity mix used by the recycling plant, the MTB mechanical recycling process is the most environmentally friendly pathway. It also demonstrates that recycling, when driven without loss of quality, is a relevant alternative to mining. These results also show the environmental relevance of the product-centric recycling approach for cable recycling. The LCA study revealed that the closed-loop recycling option (considering aluminium cables) has a lower environmental impact than the other recycling scenarios using mixed streams of aluminium wastes. This performance has already been demonstrated for aluminium cans [START_REF] Lacarrière | Emergy assessment of the benefits of closed-loop recycling accounting for material losses[END_REF][START_REF] Niero | Circular economy: to be or not to be in a closed product loop? A life cycle assessment of aluminium cans with inclusion of alloying elements[END_REF].
Uncertainty analysis for recycling scenarios
An uncertainty analysis was conducted between the three scenarios, using a Monte Carlo approach with 10,000 iterations and a 95% confidence interval. With the equivalent ENTSO-E electricity mix, the uncertainty between scenario 2 and scenario 3a does not exceed 5% on the whole set of indicators, except for the human toxicity (8%) and water resource depletion (45%) indicators. With the specific electricity mix, the results of the uncertainty analysis between scenarios 2 and 3b are presented in Fig. 10. The uncertainty exceeds 5% on three indicators: ozone depletion (11%), human toxicity, non-cancer effects (9%) and water resource depletion (45%). The results for these three indicators therefore require further investigation before conclusions can be drawn, especially the water resource depletion indicator, which has a very high uncertainty. Nevertheless, the results of the uncertainty analysis demonstrate the robustness of our modelling and allow us to confirm the conclusions of the characterisation.
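For illustration, the kind of pairwise Monte Carlo comparison performed by the LCA software can be sketched as follows; this is a minimal example with hypothetical scores and lognormal uncertainty factors, not the actual SimaPro implementation or the study's data:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 10_000  # number of Monte Carlo iterations, as in the study

def sample_impact(median, gsd, n):
    """Sample an impact score assuming a lognormal distribution
    defined by its median and geometric standard deviation (GSD)."""
    return rng.lognormal(mean=np.log(median), sigma=np.log(gsd), size=n)

# Hypothetical climate-change scores (kg CO2-eq per t Al) and GSDs
scenario_2 = sample_impact(median=600.0, gsd=1.15, n=N)   # smelting route
scenario_3a = sample_impact(median=330.0, gsd=1.20, n=N)  # MTB, ENTSO-E mix

diff = scenario_2 - scenario_3a
low, high = np.percentile(diff, [2.5, 97.5])   # 95% confidence interval of the difference
p_better = np.mean(scenario_3a < scenario_2)   # share of runs where the MTB route is lower

print(f"95% CI of (scenario 2 - scenario 3a): [{low:.0f}; {high:.0f}] kg CO2-eq")
print(f"Scenario 3a lower than scenario 2 in {p_better:.1%} of runs")
```

A difference whose 95% interval excludes zero, or a scenario that is lower in almost all runs, supports the ranking drawn from the characterisation.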
Sensitivity analysis
As seen previously, the electricity supply mix has a huge influence on the overall environmental impact of recycling pathway. A sensitivity analysis was performed on the electricity mix influence for scenario 3. The results are presented on Fig. 11. For this sensitivity analysis two additional electricity mixes were used in the comparison for scenario 3. The electricity sources distribution for each electricity mix is presented in Table 5. For German and French electricity mix, the Ecoinvent 3.1 data used are listed below:
• French electricity: Electricity, medium voltage {FR}| production mix | Alloc Rec, U
• German electricity: Electricity, medium voltage {DE}| production mix | Alloc Rec, U
The comparison in Fig. 11 shows the results for the MTB cable recycling pathway using the different electricity mixes. The gains from renewable electricity (scenario 3b) are obvious across the whole set of indicators. Similarly, the differences between the two national mixes (scenario 3 with French and German electricity) are quite pronounced. On climate change and freshwater eutrophication, these differences are largely due to the predominance of fossil fuels in the German electricity mix (62.1%), whereas the share of fossil fuels in the French electricity mix is below 10%. The French electricity mix, in contrast, consists mainly of nuclear energy, which makes it dominant on the ionising radiation and ozone depletion categories. Overall, the European ENTSO-E electricity mix is the most harmful on our set of indicators; its environmental performance is close to that obtained with the German electricity mix, Germany being the leading electricity producer in the ENTSO-E network. Using the European ENTSO-E electricity mix in the scenario comparison is therefore the worst case for modelling aluminium cable recycling at the MTB Recycling plant. It is important to note that, whatever the electricity mix used, scenario 3 remains the most relevant from an environmental point of view with respect to the other scenarios.
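The sensitivity calculation itself amounts to re-weighting the electricity-related share of the scenario 3 impacts by the source composition of each mix. The sketch below illustrates the principle only: the characterisation factors, electricity demand and non-electricity impact are hypothetical placeholders, and the mix shares are illustrative (apart from the 62.1% fossil share of the German mix quoted above), whereas the study itself uses the Ecoinvent 3.1 datasets listed in this section.

```python
# Hypothetical climate-change factors per kWh of electricity, by source
CF_KG_CO2_PER_KWH = {"hydro": 0.01, "nuclear": 0.01, "fossil": 0.80,
                     "other_renewable": 0.05}

# Source shares of the electricity mixes compared in Fig. 11 (illustrative values)
MIXES = {
    "scenario 3b (green)": {"hydro": 0.99, "other_renewable": 0.01},
    "French mix":          {"nuclear": 0.77, "hydro": 0.12, "fossil": 0.09,
                            "other_renewable": 0.02},
    "German mix":          {"fossil": 0.62, "nuclear": 0.16, "hydro": 0.04,
                            "other_renewable": 0.18},
}

ELECTRICITY_KWH_PER_T = 400.0   # hypothetical electricity demand of scenario 3
NON_ELECTRIC_IMPACT = 150.0     # hypothetical non-electricity impact, kg CO2-eq per t

def scenario_impact(mix):
    """Impact of scenario 3 = non-electricity share + electricity demand
    weighted by the source composition of the supplied mix."""
    cf_mix = sum(share * CF_KG_CO2_PER_KWH[src] for src, share in mix.items())
    return NON_ELECTRIC_IMPACT + ELECTRICITY_KWH_PER_T * cf_mix

for name, mix in MIXES.items():
    print(f"{name}: {scenario_impact(mix):.0f} kg CO2-eq per t of aluminium")
```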
Scenario 3b environmental hotspots identification from LCA
The LCA results allow us to establish an environmental hierarchy between the recycling solutions for aluminium cables: whatever the electricity mix used by the recycling plant, the MTB mechanical recycling process is the most environmentally friendly. In this section, the characterisation of the MTB recycling pathway is described. Fig. 12 shows the characterisation results for scenario 3b. This characterisation uses data from the MTB recycling pathway supplied with green electricity and without any optimisation. The values used for the graphical representation are given on the figure.
Across the set of indicators, the MTB recycling steps represent between 11.4% and 79.7% of the total impact; the remaining share is related to upstream logistics, i.e. the transport from massification points to the recycling plant. The average contribution of the recycling steps over the 11 indicators is 36.1% and the median is 33.0%. The results thus show a very strong contribution of upstream transport for waste collection to the overall impact of scenario 3b.
The shredding step is the largest contributor to the overall impact of scenario 3b across the set of impact categories. Although this step is highly energy-intensive, the use of a hydroelectric power supply strongly limits its contribution: electricity consumption accounts on average for 10% of the environmental impact of the shredding steps. The water resource depletion category is singular in that hydroelectricity production has a very strong influence on its final score, owing to the water depletion factor applied to hydroelectric production processes.
Since electricity consumption is not the main contributor to the environmental impacts, the predominance of the shredding step must be explained by a deeper analysis of its sub-processes. It turns out that the shredding consumables used for the grinding equipment are predominant: the specific alloys used for the blades and screens require numerous alloying elements that weigh on the environmental impact, especially on the freshwater eutrophication indicator (as explained for scenario 2 in paragraph 5.1).
The environmental impact of the second part of the MTB recycling pathway, the mechanical sorting stage, is significantly lower than that of the shredding steps. This stage uses fewer consumables and also consumes less electricity than shredding; electricity consumption is nevertheless its main contributor, with the air separation tables among the most electricity-intensive processes. This stage also produces plastic waste, which is currently buried in landfill: all types of plastic are mixed and no industrial process is available to separate them effectively. In addition, vigilance is required regarding a number of polymer resins that are banned in new plastic products. The impact of this waste is not negligible, between 5 and 10% of the final impact across the set of impact categories.
For the environmental evaluation of scenario 3a, i.e. when the ENTSO-E electricity mix is used for the characterisation, the MTB recycling steps represent on average half of the total impact across the set of indicators. The shift in energy mix naturally modifies the distribution and hierarchy of the stages in the environmental impacts: the relative contribution of upstream logistic transport decreases on all indicators. For the shredding and mechanical sorting stages, the distribution of contributions is not distorted, but electricity consumption becomes the main source of environmental impact, with ENTSO-E electricity consumption representing 80% of the overall environmental impacts of the recycling processes.
Discussion
Recycling process optimisation using LCA results
Although the LCA tool is primarily an assessment tool, it is also intended to support an eco-design approach, and using LCA results to improve environmental performance has proven effective for industrial processes [START_REF] Pommier | LCA (Life Cycle Assessment) of EVPengineering veneer product: plywood glued using a vacuum moulding technology from green veneers[END_REF]. In this section, we focus on the optimisation options available to reduce the environmental impacts of scenario 3b.

Fig. 12. Characterisation of MTB aluminium shot with purity of 99.6% using green electricity mix.
The impact of transportation is primarily due to the distance travelled by lorry to the recycling plant. Upstream logistic transport is inevitable, and it is difficult to reduce the distances because the deposits are very diffuse over the collection territory. However, cables are voluminous waste, and lorry loading is limited by the volume of waste rather than by its mass. To improve upstream logistics, the volume of waste cables could therefore be reduced to facilitate transport; this logistic optimisation is underway in France. Nevertheless, since our influence on logistics flows is limited, we have focused the eco-design efforts on the processes carried out within the MTB plant.
For shredding consumables, research has been carried out in collaboration with subcontractors to identify more durable steel alloys for the shredding blades. The new alloys are being tested to quantify their gain in longevity: tests carried out with the new blades show a 30-60% increase in lifespan, while the environmental impact of producing these alloys is similar. Modifying the steel used for consumables thus provides a low-cost way of reducing the environmental impacts. Work on the energy efficiency of the shredder is also necessary to reduce the electricity consumption of the shredding steps. Work on energy recovery has not yet led to the implementation of innovative solutions; nevertheless, energy recovery options and new electric motors are being studied.
For the mechanical sorting stage, a thorough reflection was conducted on the electricity consumption of the equipment, and more specifically on the air separation tables. The MTB engineering team improved the design of the new air separation table models, in particular the airflow within the equipment, so that power consumption could be reduced by using smaller electric motors.
The treatment of plastic waste from the cable sheaths does not appear as a major contributor to the overall environmental impacts in our LCA study: this step represents about 5-10% of the overall impacts of scenario 3b. It comprises two parts: the transport of waste by lorry to the storage site (25 km) and the landfill process itself. However, as a manufacturer of recycling solutions, it is MTB's responsibility to provide a technological response to this problem. None of the plastic polymers from the cable sheaths are currently recycled, because the mixture of plastic resins and the presence of aluminium dust greatly complicate their recovery. According to the study results, reducing the impacts of plastic waste management is therefore the main lever for MTB to further reduce the overall environmental impact of scenario 3b.
To do so, MTB has initiated work on sorting and recycling the plastic resin mixture. A first prototype was developed in late 2015; the synoptic of the plastic processing method is shown in Fig. 13. The separation is still based on simple mechanical steps that achieve a uniform separation. The results of this pilot recycling unit are encouraging, as it reduces landfill by 80%. Further developments are underway to valorise the remaining fraction as solid recovered fuel; for this, the separation must be perfect to meet the regulatory requirements.
New recycling pathway design using LCA results
To further reduce the environmental impact of transport, the MTB engineering team had to review the overall recycling pathway and not just the industrial processes. The challenge was to design a transportable recycling solution capable of achieving the same level of purity as the existing plant, but with a lower throughput: instead of transporting the waste to the recycling plant, the plant moves closer to the deposits. The use of standard international container sizes ensures maximum transportability by all modes of transport (road, rail, maritime). In addition, the containers offer modularity, with upstream and downstream processes that can easily be implemented before or after the CABLEBOX system. The recycling solution is not autonomous; it requires an external power source, and the energy mix used for the local supply depends on the location. There are no direct local emissions, only indirect emissions due to energy consumption. The CABLEBOX system includes all the separation steps presented in Fig. 7. A transportable recycling solution can effectively reduce the environmental and financial impact of upstream logistics. It is especially relevant when waste production is seasonal and/or geographically concentrated, and it is a step toward the circular economy by offering an industrial solution for closed-loop recycling.
Conclusions
As shown in this paper, different pathways are available to recycle the same products. The life cycle assessment results demonstrate that recycling, when carried out without loss of quality, is a relevant alternative to mining. Recycling pathways can be seen as assemblies of elementary technologies, and designers choose the layout that meets the specifications. The indicators that currently guide these design choices are exclusively economic [START_REF] Allwood | Material efficiency: a white paper[END_REF]; environmental considerations are not taken into account in the layout choice. Both MTB and some of its customers have expressed the need for a better understanding of the environmental impacts of recycling pathways.
Moreover, optimising recycling pathway systems is a long process that demands powerful assessment tools such as Material Flow Analysis (MFA) and Life Cycle Assessment (LCA) [START_REF] Grimaud | Reducing environmental impacts of aluminium recycling process using life cycle assessment. 12th Bienn[END_REF][START_REF] Peças | Life cycle Engineering-taxonomy and state-of-the-art. 23rd[END_REF][START_REF] Pommier | LCA (Life Cycle Assessment) of EVPengineering veneer product: plywood glued using a vacuum moulding technology from green veneers[END_REF]. A first limitation is that the assessment results are only obtained once the industrial solution has been implemented; as the financial investment has already been made, manufacturers are reluctant to improve efficiency ([START_REF] Hauschild | Better-but is it good enough? on the need to consider both ecoefficiency and eco-effectiveness to gauge industrial sustainability[END_REF]; Herrmann et al., 2015). A second limitation is that the approach used is empirical and not based on guidelines. While tools and methods are available for product eco-design [START_REF] Donnelly | Eco-design implemented through a product-based environmental management system[END_REF][START_REF] Kulak | Eco-efficiency improvement by using integrative design and life cycle assessment. The case study of alternative bread supply chains in France[END_REF][START_REF] Leroy | Ecodesign: tools and methods[END_REF], methodologies for process eco-design are rare: product eco-design methodologies are largely based on guidelines provided by standards [START_REF] Jørgensen | Integrated management systemsthree different levels of integration[END_REF][START_REF] Kengpol | The decision support framework for developing ecodesign at conceptual phase based upon ISO/TR 14062[END_REF], whereas no equivalent standard is available for processes.
Therefore, it seems necessary to develop an effective methodology to evaluate and guide process design choices so as to ensure economic, environmental and social efficiency [START_REF] Allwood | Squaring the circular economy: the role of recycling within a hierarchy of material management strategies[END_REF]. Offering the design team such an assessment tool will help optimise the eco-efficiency of recycling pathways. Using the Environmental Technology Verification (ETV) certification guidelines, we have started to build a decision-support methodology: the emergence of the ETV programme appears to be a relevant medium for building a process-oriented methodology that will allow designers to assess and guide their choices to ensure economic, environmental and social efficiency.
Fig. 2. Representation of a standard product life cycle showing the study scope boundaries. Adapted from Zhang, 2014.
Fig. 3. Main steps of the production processes for the three scenarios.
4.3. Scenario 3: MTB cables recycling pathway
An intensive inventory analysis was developed during an internal survey conducted in collaboration with the EVEA consulting firm at the MTB Recycling plant during fall 2014. Foreground data are based on measurements and on stakeholder interviews. Background data come from Ecoinvent 3.1 or the relevant literature. Fig. 6 presents the detailed system boundaries used for the life cycle modelling of the aluminium cables recycling pathway at the MTB plant. The boundaries used for the MTB scenarios are based on the boundaries of the Ecoinvent modelling. As shown on Fig.
Fig. 4. System boundaries of the primary aluminium production from bauxite mining. Adapted from Capral Aluminium, 2012.
Fig. 5. System boundaries of the smelting recycling scenario for end-of-life aluminium cables. Adapted from [START_REF] Bertram | Aluminium Recycling in Europe[END_REF].
Fig. 7. System boundaries of the MTB end-of-life recycling pathway for aluminium cables.
Fig. 8. Environmental characterisation comparison of the 3 scenarios using specific electricity mix.
Fig. 10. Uncertainty analysis between recycling scenarios 2 and 3a (European ENTSO-E electricity mix).
Fig. 11. Sensitivity analysis on the influence of electricity mix supply for scenario 3.
The engineering team launched in 2015 a new transportable cable recycling solution called CABLEBOX, presented in Fig. 14. The solution fits in two 40-foot containers, one 20-foot container and one 10-foot container. The flow rate reached with the CBR 2000 version is 2 t/h; compared to the MTB centralised plant, the throughput is halved. A first CABLEBOX unit has been in operation in the United States since December 2016 and another in France since January 2017 (MTB Recycling, 2016).
Fig. 13. Presentation of processes added to the MTB pathway to separate the plastic mixture.
Nomenclature and list of acronyms
EoL      End-of-Life
ETV      Environmental technology verification
IAI      International aluminium institute
ILCD     International life cycle data
BIR      Bureau of international recycling
JRC      Joint research centre
EAA      European aluminium association
LCA      Life cycle assessment
ENTSO-E  European network of transmission system operators for electricity
LCI      Life cycle inventory
PE       Polyethylene

Today, the environmental LCA of European generic primary and secondary aluminium productions are well defined through the work of the European Aluminium Association (EAA) (European Aluminium Association (EAA), 2008). Numerous studies were conducted concerning the sustainability of aluminium recycled by smelters in comparison with primary aluminium from mining. Outcomes about global and local environmental impacts show a decrease up to 90% by using recycled aluminium (European Aluminium Association (EAA), 2010;

Fig. 1. Section of a cable with multiple aluminium beams.
Table 1. Composition of recycled cables at the MTB plant (average for the period 2011-14).

Material                                      Proportion
Rigid aluminium (a)                           48.5%
Plastics and rubber (b)                       40.5%
Non-ferrous metals (c)                        4.5%
Ferrous metals (steel and stainless steel)    4.0%
Flexible aluminium                            2.5%
Table 2. Chemical composition of recycled aluminium produced by the MTB plant (average for the period 2012-14).

Chemical elements            Al      Fe      Si      Cu      Pb      Mg
Aluminium quality A and B    99.67   0.145   0.090   0.022   0.003   0.026
Aluminium quality C          99.50   0.154   0.064   0.205   0.019   0.010
Aluminium quality D          97.25   0.524   0.791   0.524   0.014   0.427
Table 3. List of indicators selected for the life cycle impact assessment (JRC - Institute for Environment and Sustainability, 2011).

Indicator                                     Model
Climate change                                Baseline model of 100 years of the IPCC
Ozone depletion                               Steady-state ODPs 1999 as in WMO assessment
Human toxicity, non-cancer effects            USEtox model v1.04 (Rosenbaum et al., 2008)
Particulate matter                            RiskPoll model
Ionising radiation HH                         Human health effects model as developed by Dreicer
Photochemical ozone formation                 LOTOS-EUROS
Acidification                                 Accumulated Exceedance
Freshwater eutrophication                     EUTREND model
Freshwater ecotoxicity                        USEtox model
Water resource depletion                      Pfister water scarcity v1.01 (Frischknecht et al., 2009)
Mineral, fossil & ren resource depletion      CML 2002
Table 4. Summary of the main Life Cycle Inventory information.

Scenario           1                     2                     3a                        3b
Name               Primary aluminium     Secondary aluminium   MTB aluminium ENTSO-E     MTB aluminium green electricity
Process            Mining                Smelting              MTB recycling pathway     MTB recycling pathway
Al purity          98.9%                 97%                   99.6%                     99.6%
Old scraps         -                     46.3%                 54%                       54%
Electricity mix    EAA electricity mix   ENTSO-E               ENTSO-E                   EDF Equilibria
- Nuclear power    8.7%                  28.1%                 28.1%                     -
- Fossil fuel      11.3%                 48.3%                 48.3%                     -
- Renewable        80%                   23.6%                 23.6%                     100%
Transport          11,413 km             322 km                ≈526 km                   ≈526 km
- Road             336 km                193 km                526 km                    526 km
- Train            21 km                 109 km
- Sea              11,056 km             20 km
Table 5. Electricity source distribution for electricity mix used in the sensitivity analysis.

Electricity mix            3a: ENTSO-E     3b: Green electricity   French electricity   German electricity
Source of data             Ecoinvent 3.1   Powernext, 2014         Ecoinvent 3.1        Ecoinvent 3.1
Nuclear                    28.1%           -                       77.2%                16.8%
Fossil fuel                48.3%           -                       8.9%                 62.1%
- Coal                     12.7%                                   4.2%                 19.7%
- Lignite                  8.0%                                    0%                   26.8%
- Natural gas              16.5%                                   3.2%                 14.3%
- Oil                      11.1%                                   1.5%                 1.3%
Renewable energy           21.9%           100%                    13.4%                21.0%
- Hydropower               11.9%           99.3%                   11.9%                4.9%
- Wind & solar and other   10.0%           0.7%                    1.5%                 16.1%
Other                      1.7%            -                       0.5%                 0.1%
Acknowledgements
This work was performed with the help of Marie Vuaillat from the EVEA consultancy firm and with financial support from the French Agency for Environment and Energy Efficiency (ADEME). We also want to thank MTB Recycling and the French National Association for Technical Research (ANRT) for the funding of the PhD study (CIFRE Convention N °2015/0226) of the first author.
Human genetic variants and age are the strongest predictors of humoral immune responses to common pathogens and vaccines

Petar Scepanovic, Cécile Alanio, Christian Hammer, Flavia Hodel, Jacob Bergstedt, Etienne Patin, Christian W. Thorball, Nimisha Chaturvedi, Bruno Charbit, Laurent Abel, Lluis Quintana-Murci, Darragh Duffy, Matthew L. Albert, Jacques Fellay (EPFL, Lausanne; Hôpital Necker), for the Milieu Intérieur Consortium

Milieu Intérieur Consortium members: Andres Alcover, Hugues Aschard, Kalla Astrom, Philippe Bousso, Pierre Bruhns, Ana Cumano, Caroline Demangel, Ludovic Deriano, James Di Santo, Françoise Dromer, Gérard Eberl, Jost Enninga, Magnus Fontes, Antonio Freitas, Odile Gelpi, Ivo Gomperts-Boneca, Serge Hercberg, Olivier Lantz, Claude Leclerc, Hugo Mouquet, Sandra Pellegrini, Stanislas Pol, Olivier Schwartz, Benno Schwikowski, Spencer Shorte, Vassili Soumelis, Marie-Noëlle Ungeheuer
Introduction
Humans are regularly exposed to infectious agents, including common viruses such as cytomegalovirus (CMV), Epstein-Barr virus (EBV) or herpes simplex virus-1 (HSV-1), that have the ability to persist as latent infections throughout life -with possible reactivation events depending on extrinsic and intrinsic factors [START_REF] Traylen | Virus reactivation: a panoramic view in human infections[END_REF]. Humans also receive multiple vaccinations, which in many cases are expected to achieve lifelong immunity in the form of neutralizing antibodies. In response to each of these stimulations, the immune system mounts a humoral response, triggering the production of specific antibodies that play an essential role in limiting infection and providing long-term protection. Although the intensity of the humoral response to a given stimulation has been shown to be highly variable [START_REF] Grundbacher | Heritability estimates and genetic and environmental correlations for the human immunoglobulins G, M, and A[END_REF][START_REF] Tsang | Global analyses of human immune variation reveal baseline predictors of postvaccination responses[END_REF][START_REF] Rubicz | Genetic factors influence serological measures of common infections[END_REF], the genetic and non-genetic determinants of this variability are still largely unknown. The identification of such factors may lead to improved vaccination strategies by optimizing vaccine-induced immunoglobulin G (IgG) protection, or to new understanding of autoimmune diseases, where immunoglobulin levels can correlate with disease severity [START_REF] Almohmeed | Systematic review and metaanalysis of the sero-epidemiological association between Epstein Barr virus and multiple sclerosis[END_REF].
Several genetic variants have been identified that account for inter-individual differences in susceptibility to pathogens [START_REF] Timmann | Genome-wide association study indicates two novel resistance loci for severe malaria[END_REF][START_REF] Mclaren | Association study of common genetic variants and HIV-1 acquisition in 6,300 infected cases and 7,200 controls[END_REF][START_REF] Casanova | The genetic theory of infectious diseases: a brief history and selected illustrations[END_REF], and in infectious [START_REF] Mclaren | Polymorphisms of large effect explain the majority of the host genetic contribution to variation of HIV-1 virus load[END_REF] or therapeutic [START_REF] Ge | Genetic variation in IL28B predicts hepatitis C treatment-induced viral clearance[END_REF] phenotypes. By contrast, relatively few studies have investigated the variability of humoral responses in healthy humans [START_REF] Rubicz | Genetic factors influence serological measures of common infections[END_REF][START_REF] Hammer | Amino Acid Variation in HLA Class II Proteins Is a Major Determinant of Humoral Response to Common Viruses[END_REF][START_REF] Jonsson | Identification of sequence variants influencing immunoglobulin levels[END_REF]. In particular, Hammer et al. examined the contribution of genetics to variability in human antibody responses to common viral antigens, and fine-mapped variants at the HLA class II locus that associated with IgG responses. To replicate and extend these findings, we measured IgG responses to 15 antigens from common infectious agents or vaccines as well as total IgG, IgM, IgE and IgA levels in 1,000 well-characterized healthy donors. We used an integrative approach to study the impact of age, sex, non-genetic and genetic factors on humoral responses in healthy humans.
Methods
Study participants
The Milieu Intérieur cohort consists of 1,000 healthy individuals that were recruited by BioTrial (Rennes, France). The cohort is stratified by sex (500 men, 500 women) and age (200 individuals from each decade of life, between 20 and 70 years of age). Donors were selected based on stringent inclusion and exclusion criteria, previously described [START_REF] Thomas | Milieu Intérieur Consortium. The Milieu Intérieur study -an integrative approach for study of human immunological variance[END_REF]. Briefly, recruited individuals had no evidence of any severe/chronic/recurrent medical conditions. The main exclusion criteria were: seropositivity for human immunodeficiency virus (HIV) or hepatitis C virus (HCV); ongoing infection with the hepatitis B virus (HBV) -as evidenced by detectable HBs antigen levels; travel to (sub-)tropical countries within the previous 6 months; recent vaccine administration; and alcohol abuse. To avoid the influence of hormonal fluctuations in women during the peri-menopausal phase, only pre-or postmenopausal women were included. To minimize the importance of population substructure on genomic analyses, the study was restricted to self-reported Metropolitan French origin for three generations (i.e., with parents and grandparents born in continental France). Whole blood samples were collected from the 1,000 fasting healthy donors on lithium heparin tubes, from September 2012 to August 2013.
Serologies
Total IgG, IgM, IgE, and IgA levels were measured using a clinical-grade turbidimetric test on an AU 400 Olympus analyzer at BioTrial (Rennes, France). Antigen-specific serological tests were performed using clinical-grade assays measuring IgG levels, according to the manufacturer's instructions. A list and description of the assays is provided in Table S1. Briefly, anti-HBs and anti-HBc IgGs were measured on the Architect automate (CMIA assay, Abbott). Anti-CMV IgGs were measured by CMIA using the CMV IgG kit from Beckman Coulter on the Unicel Dxl 800 Access automate (Beckman Coulter). Anti-Measles, anti-Mumps and anti-Rubella IgGs were measured using the BioPlex 2200 MMRV IgG kit on the BioPlex 2200 analyzer (Bio-Rad). Anti-Toxoplasma gondii and anti-CMV IgGs were measured using the BioPlex 2200 ToRC IgG kit on the BioPlex 2200 analyzer (Bio-Rad). Anti-HSV1 and anti-HSV2 IgGs were measured using the BioPlex 2200 HSV-1 & HSV-2 IgG kit on the BioPlex 2200 analyzer (Bio-Rad). IgGs against Helicobacter pylori were measured by EIA using the PLATELIA H. Pylori IgG kit (BioRad) on the VIDAS automate (Biomérieux). Anti-influenza A IgGs were measured by ELISA using the NovaLisa IgG kit from NovaTec (Biomérieux). In all cases, the criteria for serostatus definition (positive, negative or indeterminate) were established by the manufacturer, and are indicated in Table S2. Donors with an unclear result were retested, and assigned a negative result if borderline levels were confirmed with repeat testing.
Non-genetic variables
A large number of demographical and clinical variables are available in the Milieu Intérieur cohort as a description of the environment of the healthy donors [START_REF] Thomas | Milieu Intérieur Consortium. The Milieu Intérieur study -an integrative approach for study of human immunological variance[END_REF]. These include infection and vaccination history, childhood diseases, health-related habits, and socio-demographical variables. Of these, 53 were chosen for subsequent analysis of their impact on serostatus. This selection is based on the one used in [START_REF] Patin | Natural variation in innate immune cell parameters is preferentially driven by genetic factors[END_REF], with a few variables added, such as measures of lipids and CRP.
Testing of non-genetic variables
Using serostatus variables as the response, and non-genetic variables as treatment variables, we fitted a logistic regression model for each response and treatment variable pair. A total of 14 * 53 = 742 models were therefore fitted. Age and sex were included as controls in all models, except when that variable was itself the treatment variable. We tested the impact of the clinical and demographical variables using a likelihood ratio test. All 742 tests were treated as a single multiple-testing family, with the false discovery rate (FDR) as the error rate.
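A minimal sketch of one such likelihood-ratio test is given below. The original analysis was run in R with mixed models; here plain logistic regressions are used instead, the column names (age, sex, and the serostatus and treatment columns) are hypothetical, and the treatment variable is assumed to be a single numeric or binary column.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import chi2
from statsmodels.stats.multitest import multipletests

def lr_test(df, serostatus_col, treatment_col):
    """Likelihood-ratio test of one treatment variable, adjusting for age and sex."""
    covars = ["age", "sex"] if treatment_col not in ("age", "sex") else []
    x_full = sm.add_constant(df[[treatment_col] + covars])
    x_null = sm.add_constant(df[covars]) if covars else np.ones((len(df), 1))
    full = sm.Logit(df[serostatus_col], x_full).fit(disp=0)
    null = sm.Logit(df[serostatus_col], x_null).fit(disp=0)
    lr_stat = 2 * (full.llf - null.llf)
    df_diff = x_full.shape[1] - (x_null.shape[1] if covars else 1)
    return chi2.sf(lr_stat, df=df_diff)

# Collect one p-value per (serostatus, treatment) pair, then apply FDR control:
# p_values = [lr_test(df, s, t) for s in serostatus_cols for t in treatment_cols]
# rejected, p_fdr, _, _ = multipletests(p_values, alpha=0.05, method="fdr_bh")
```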
Age and sex testing
To examine the impact of age and sex we performed logistic and linear regression analyses for serostatus and IgG levels, respectively. All continuous traits (i.e. quantitative measurements of antibody levels) were log10-transformed in donors assigned as positive using a clinical cutoff. We used false discovery rate (FDR) correction for the number of serologies tested (associations with P < 0.05 were considered significant). To calculate odds ratios in the age analyses, we separated the cohort into equal numbers of young (<45 years old) and old (>=45 years old) individuals, and used the epitools R package (v0.5-10).
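The sketch below shows the corresponding models in Python form. The published analysis was done in R (including the epitools package); the data-frame columns assumed here (age, sex encoded as 0/1, serostatus as 0/1, and an antibody-level column) are illustrative.

```python
import numpy as np
import statsmodels.api as sm

def age_sex_models(df, serostatus_col, igg_col):
    """Logistic model for serostatus and linear model for log10 IgG in seropositives."""
    logit_res = sm.Logit(df[serostatus_col],
                         sm.add_constant(df[["age", "sex"]])).fit(disp=0)
    pos = df[df[serostatus_col] == 1]
    ols_res = sm.OLS(np.log10(pos[igg_col]),
                     sm.add_constant(pos[["age", "sex"]])).fit()
    return logit_res, ols_res

def age_odds_ratio(df, serostatus_col):
    """Crude odds ratio for age dichotomised at 45 years (>=45 vs <45)."""
    old = df["age"] >= 45
    a = int((old & (df[serostatus_col] == 1)).sum())    # old, seropositive
    b = int((old & (df[serostatus_col] == 0)).sum())    # old, seronegative
    c = int((~old & (df[serostatus_col] == 1)).sum())   # young, seropositive
    d = int((~old & (df[serostatus_col] == 0)).sum())   # young, seronegative
    return (a * d) / (b * c)

# FDR correction across the 15 serologies can then be applied to the collected
# p-values, e.g. with statsmodels.stats.multitest.multipletests(method="fdr_bh").
```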
DNA genotyping
Blood was collected in 5mL sodium EDTA tubes and was kept at room temperature (18-25°) until processing. DNA was extracted from human whole blood and genotyped at 719,665 single nucleotide polymorphisms (SNPs) using the HumanOmniExpress-24 BeadChip (Illumina). The SNP call rate was higher than 97% in all donors. To increase coverage of rare and potentially functional variation, 966 of the 1,000 donors were also genotyped at 245,766 exonic variants using the HumanExome-12 BeadChip. The HumanExome variant call rate was lower than 97% in 11 donors, which were thus removed from this dataset. We filtered out from both datasets genetic variants that: (i) were unmapped on dbSNP138, (ii) were duplicated, (iii) had a low genotype clustering quality (GenTrain score < 0.35), (iv) had a call rate < 99%, (v) were monomorphic, (vi) were on sex chromosomes, or (vii) diverged significantly from Hardy-Weinberg equilibrium (HWE P < 10 -7 ). These quality-control filters yielded a total of 661,332 and 87,960 variants for the HumanOmniExpress and HumanExome BeadChips, respectively. Average concordance rate for the 16,753 SNPs shared between the two genotyping platforms was 99.9925%, and individual concordance rates ranged from 99.8% to 100%.
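The variant-level filters can be summarised in a short helper. The per-variant field names are hypothetical (whatever summary table the genotype-calling pipeline produces); the thresholds follow the text.

```python
def pass_variant_qc(v):
    """v: dict-like per-variant summary; returns True if the variant is kept."""
    return (
        v["mapped_on_dbsnp138"]                  # (i) mapped on dbSNP138
        and not v["is_duplicate"]                # (ii) not duplicated
        and v["gentrain_score"] >= 0.35          # (iii) genotype clustering quality
        and v["call_rate"] >= 0.99               # (iv) call rate
        and v["minor_allele_count"] > 0          # (v) not monomorphic
        and v["chromosome"] not in ("X", "Y")    # (vi) autosomes only
        and v["hwe_p"] >= 1e-7                   # (vii) Hardy-Weinberg equilibrium
    )

# kept_variants = [v for v in variants if pass_variant_qc(v)]
```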
Genetic relatedness and structure
As detailed elsewhere [START_REF] Patin | Natural variation in innate immune cell parameters is preferentially driven by genetic factors[END_REF], relatedness was detected using KING [START_REF] Manichaikul | Robust relationship inference in genome-wide association studies[END_REF]. Six pairs of related participants (parent-child, first and second degree siblings) were detected and one individual from each pair, randomly selected, was removed from the genetic analyses. The genetic structure of the study population was estimated using principal component analysis (PCA), implemented in EIGENSTRAT (v6.1.3) [START_REF] Patterson | Population structure and eigenanalysis[END_REF].
Genotype imputation
We used Positional Burrows-Wheeler Transform for genotype imputation, starting with the 661,332 quality-controlled SNPs genotyped on the HumanOmniExpress array. Phasing was performed using EAGLE2 (v2.0.5) [START_REF] Loh | Reference-based phasing using the Haplotype Reference Consortium panel[END_REF]. As reference panel, we used the haplotypes from the Haplotype Reference Consortium (release 1.1) [START_REF] Mccarthy | A reference panel of 64,976 haplotypes for genotype imputation[END_REF]. After removing SNPs that had an imputation info score < 0.8 we obtained 22,235,661 variants. We then merged the imputed dataset with 87,960 variants directly genotyped on the HumanExome BeadChips array and removed variants that were monomorphic or diverged significantly from Hardy-Weinberg equilibrium (P < 10 -7 ). We obtained a total of 12,058,650 genetic variants to be used in association analyses. We used SNP2HLA (v1.03) [START_REF] Jia | Imputing amino acid polymorphisms in human leukocyte antigens[END_REF] to impute 104 4-digit HLA alleles and 738 amino acid residues (at 315 variable amino acid positions of the HLA class I and II proteins) with a minor allele frequency (MAF) of >1%. We used KIR*IMP [START_REF] Vukcevic | Imputation of KIR Types from SNP Variation Data[END_REF] to impute KIR alleles, after haplotype inference on chromosome 19 with SHAPEIT2 (v2.r790) [START_REF] O'connell | A general approach for haplotype phasing across the full spectrum of relatedness[END_REF]. A total of 19 KIR types were imputed: 17 loci plus two extended haplotype classifications (A vs. B and KIR haplotype). A MAF threshold of 1% was applied, leaving 16 KIR alleles for association analysis.
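The post-imputation filtering and merge steps could look roughly as follows. The actual pipeline relied on PBWT/EAGLE2, SNP2HLA and KIR*IMP outputs; the data-frame columns used here (info_score, maf, hwe_p) are assumptions about how those outputs were tabulated.

```python
import pandas as pd

def filter_and_merge(imputed: pd.DataFrame, exome: pd.DataFrame) -> pd.DataFrame:
    kept = imputed[imputed["info_score"] >= 0.8]       # imputation quality filter
    merged = pd.concat([kept, exome], ignore_index=True)
    merged = merged[merged["maf"] > 0.0]               # drop monomorphic variants
    merged = merged[merged["hwe_p"] >= 1e-7]           # Hardy-Weinberg filter
    return merged

# For imputed HLA alleles, amino-acid residues and KIR types, only alleles with a
# minor allele frequency above 1% were carried into the association analyses:
# common_hla = hla_alleles[hla_alleles["maf"] > 0.01]
```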
Genetic association analyses
For single variant association analyses, we only considered SNPs with a MAF of >5% (N=5,699,237). We used PLINK (v1.9) [START_REF] Chang | Second-generation PLINK: rising to the challenge of larger and richer datasets[END_REF] to perform logistic regression for binary phenotypes (serostatus: antibody positive versus negative) and linear regression for continuous traits (log10-transformed quantitative measurements of antibody levels in donors assigned as positive using a clinical cutoff). The first two principal components of a PCA based on genetic data, age and sex were used as covariates in all tests. In order to correct for baseline differences in IgG production between individuals, total IgG levels were included as covariates when examining associations with antigen-specific antibody levels and with total IgM, IgE and IgA levels. From a total of 53 additional variables, covariates selected using elastic net [START_REF] Zhou | Efficient multivariate linear mixed model algorithms for genome-wide association studies[END_REF] and stability selection [START_REF] Meinshausen | Stability selection[END_REF], as detailed elsewhere [START_REF] Patin | Natural variation in innate immune cell parameters is preferentially driven by genetic factors[END_REF], were included in some analyses (Table S3). For all antigen-specific genome-wide association studies, we used a genome-wide significance threshold (Pthreshold < 3.3 x 10 -9 ) corrected for the number of antigens tested (N=15). For genome-wide association tests with total Ig levels we set the threshold at Pthreshold < 1.3 x 10 -8 , correcting for the four immunoglobulin classes tested. For specific HLA analyses, we used PLINK (v1.07) [START_REF] Purcell | PLINK: a tool set for whole-genome association and population-based linkage analyses[END_REF] to perform conditional haplotype-based association tests and multivariate omnibus tests at multi-allelic amino acid positions.
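A single-variant test of this kind reduces to a regression of the phenotype on allele dosage plus covariates. The study itself used PLINK; the sketch below is a Python stand-in with illustrative variable names, and the thresholds are simply the usual genome-wide level divided by the number of phenotypes tested.

```python
import numpy as np
import statsmodels.api as sm

GENOME_WIDE = 5e-8
P_ANTIGEN_SPECIFIC = GENOME_WIDE / 15   # ~3.3e-9, corrected for 15 antigens
P_TOTAL_IG = GENOME_WIDE / 4            # ~1.3e-8, corrected for 4 Ig classes

def test_snp(dosage, phenotype, covariates, binary=True):
    """dosage: 0/1/2 allele counts; covariates: PC1, PC2, age, sex (+ total IgG)."""
    X = sm.add_constant(np.column_stack([dosage, covariates]))
    if binary:
        res = sm.Logit(phenotype, X).fit(disp=0)
    else:
        res = sm.OLS(phenotype, X).fit()
    return res.params[1], res.pvalues[1]   # effect size and p-value of the SNP term
```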
Variant annotation and gene burden testing
We used SnpEff (v4.3g) [START_REF] Cingolani | A program for annotating and predicting the effects of single nucleotide polymorphisms, SnpEff: SNPs in the genome of Drosophila melanogaster strain w[END_REF] to annotate all 12,058,650 variants. A total of 84,748 variants were annotated as having (potentially) moderate (e.g. missense variant, inframe deletion, etc.) or high impact (e.g. stop gained, frameshift variant, etc.) and were included in the analysis. We used bedtools v2.26.0 [START_REF] Quinlan | BEDTools: a flexible suite of utilities for comparing genomic features[END_REF] to intersect variant genomic location with gene boundaries, thus obtaining sets of variants per gene. By performing kernel-regression-based association tests with SKAT_CommonRare (testing the combined effect of common and rare variants) and SKATBinary implemented in the SKAT v1.2.1 [START_REF] Ionita-Laza | Sequence kernel association tests for the combined effect of rare and common variants[END_REF], we tested 16,628 gene sets for association with continuous and binary phenotypes, respectively. Following the SKAT default parameters, variants with MAF ≤ 1/√(2N) were considered rare, whereas variants with MAF > 1/√(2N) were considered common, where N is the sample size. We used Bonferroni correction for multiple testing, accounting for the number of gene sets and phenotypes tested (Pthreshold < 2 x 10 -7 for antigen-specific tests and Pthreshold < 7.5 x 10 -7 for tests with total Ig levels).
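The grouping of variants per gene and the rare/common split used by SKAT can be sketched as below. The actual tests were performed with the SKAT R package (v1.2.1); the dosage-matrix layout is an assumption made for illustration.

```python
import numpy as np

def rare_common_split(genotypes, n_samples):
    """genotypes: samples x variants dosage matrix (0/1/2) for one gene set."""
    maf = genotypes.mean(axis=0) / 2.0
    maf = np.minimum(maf, 1.0 - maf)                 # fold to minor allele frequency
    cutoff = 1.0 / np.sqrt(2.0 * n_samples)          # SKAT default rare/common boundary
    return genotypes[:, maf < cutoff], genotypes[:, maf >= cutoff]

# Only gene sets with at least 5 qualifying (missense or putative loss-of-function)
# variants were tested; the Bonferroni thresholds follow from the number of gene
# sets and phenotypes, e.g. 0.05 / (16628 * 15) ~ 2e-7 for the antigen-specific tests.
```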
Results
Characterization of humoral immune responses in the 1,000 study participants
To characterize the variability in humoral immune responses between healthy individuals, we measured total IgG, IgM, IgA and IgE levels in the plasma of the 1,000 donors of the Milieu Interieur (MI) cohort. After log10 transformation, total IgG, IgM, IgA and IgE levels showed normal distributions, with a median ± sd of 1.02 ±0.08 g/l, 0.01 ±0.2 g/l, 0.31 ±0.18 g/l and 1.51 ±0.62 UI/ml, respectively (Figure S1A).
We then evaluated specific IgG responses to multiple antigens from the following infections and vaccines: (i) 7 common persistent pathogens, including five viruses: CMV, EBV (EA, EBNA, and VCA antigens), herpes simplex virus 1 & 2 (HSV-1 & 2), varicella zoster virus (VZV), one bacterium: Helicobacter pylori (H. Pylori), and one parasite: Toxoplasma gondii (T. Gondii); (ii) one recurrent virus: influenza A virus (IAV); and (iii) four viruses for which most donors received vaccination: measles, mumps, rubella, and HBV (HBs and HBc antigens) (Figure 1). The distributions of log10-transformed antigen-specific IgG levels in the 1,000 donors for the 15 serologies are shown in Figure S1B. Donors were classified as seropositive or seronegative using the thresholds recommended by the manufacturer (Table S2).
The vast majority of the 1,000 healthy donors were chronically infected with EBV (seropositivity rates of 96% for EBV VCA, 91% for EBV EBNA and 9% for EBV EA) and VZV (93%). Many also showed high-titer antibodies specific for IAV (77%), HSV-1 (65%), and T. gondii (56%). By contrast, fewer individuals were seropositive for CMV (35%), HSV-2 (21%), and H. pylori (18%) (Figure 1, Figure S2A and Table S2). The majority of healthy donors carried antibodies against 5 or more persistent/recurrent infections of the 8 infectious agents tested (Figure S2B). 51% of MI donors were positive for anti-HBs IgG, a large majority of them as a result of vaccination, as only 15 study participants (3% of the anti-HBs positive group) were positive for anti-HBc IgG, indicative of previous HBV infection (spontaneously cured, as all donors were negative for HBs antigen, a criterion for inclusion in the study). For rubella, measles, and mumps, seropositivity rates were 94%, 91%, and 89% respectively. For the majority of the donors, this likely reflects vaccination with a trivalent vaccine, which was integrated in 1984 as part of national recommendations in France, but for some, in particular the >40-year-old individuals of the cohort, it may reflect acquired immunity due to natural infection.
Associations of age, sex, and non-genetic variables with serostatus
Subjects included in the Milieu Interieur cohort were surveyed for a large number of variables related to infection and vaccination history, childhood diseases, health-related habits, and socio-demographical variables (http://www.milieuinterieur.fr/en/researchactivities/cohort/crf-data). Of these, 53 were chosen for subsequent analysis of their impact on serostatus. This selection is based on the one used in [START_REF] Patin | Natural variation in innate immune cell parameters is preferentially driven by genetic factors[END_REF], with a few variables added, such as measures of lipids and CRP. Applying a mixed model analysis that controls for potential confounders and batch effects, we found expected associations of HBs seropositivity with previous administration of HBV vaccine, as well as of Influenza seropositivity with previous administration of Flu vaccine (Figure S3A and Table S4). We also found associations of HBs seropositivity with previous administration of Typhoid and Hepatitis A vaccines, which likely reflects co-immunization, as well as with Income, Employment, and Owning a house, which likely reflects confounding epidemiological factors.
We observed a significant impact of age on the probability of being seropositive for antigens from persistent or recurrent infectious agents and/or vaccines. For 14 out of the 15 examined serologies, older people (> 45 years old) were more likely to have detectable specific IgG, with an odds ratio (OR; mean ± SD) of 5.4 ± 8.5 (Figure 2A, Figure S3B and Table S5). We identified four different profiles of age-dependent evolution of seropositivity rates (Figure 2B and Figure S4). Profile 1 is typical of childhood-acquired infection, i.e. microbes that most donors had encountered by age 20 (EBV, VZV, and influenza). We observed in this case either (i) a limited increase in seropositivity rate after age 20 for EBV; (ii) stability for VZV; or (iii) a small decrease in seropositivity rate with age for IAV (Figure S4A-E). Profile 2 concerns prevalent infectious agents that are acquired throughout life, with steadily increasing prevalence (observed for CMV, HSV-1, and T. gondii). We observed in this case either (i) a linear increase in seropositivity rates over the 5 decades of age for CMV (seropositivity rate: 24% in 20-29 years-old; 44% in 60-69 years-old; slope=0.02) and T. Gondii (seropositivity rate: 21% in 20-29 years-old; 88% in 60-69; slope=0.08); or (ii) a nonlinear increase in seropositivity rates for HSV-1, with a steeper slope before age 40 (seropositivity rate: 36% in 20-29 years-old; 85% in 60-69; slope=0.05) (Figure S4F-H). Profile 3 showed microbial agents with limited seroprevalence -in our cohort, HSV-2, HBV (anti-HBS and anti-HBc positive individuals, indicating prior infection rather than vaccination), and H. Pylori. We observed a modest increase of seropositivity rates throughout life, likely reflecting continuous low-grade exposure (Figure S4I-K). Profile 4 is negatively correlated with increasing age and is unique to HBV anti-HBs serology (Figure S4L). This reflects the introduction of the HBV vaccine in 1982 and the higher vaccination coverage of younger populations. Profiles for Measles, Mumps and Rubella are provided in Figure S4M-O.
We also observed a significant association between sex and serostatus for 7 of the 15 antigens, with a mean OR of 1.5 ± 0.5 (Figure 2C, Figure S3C and Table S5). For six serological phenotypes, women had a higher rate of positivity, IAV being the notable exception. These associations were confirmed when considering "Sharing house with partner", and "Sharing house with children" as covariates.
Impact of age and sex on total and antigen-specific antibody levels
We further examined the impact of age and sex on the levels of total IgG, IgM, IgA and IgE detected in the serum of the donors, as well as on the levels of antigen-specific IgGs in seropositive individuals. We observed a low impact of age and sex on total immunoglobulin levels (Figure 3A and Table S5), and of sex on specific IgG levels (Mumps and VZV; Figure S5A and C). In contrast, age had a strong impact on specific IgG levels in seropositive individuals, affecting 10 out of the 15 examined serologies (Figure 3B, Figure S5B and Table S5). Correlations between age and IgG were mostly positive, i.e. older donors had more specific IgG than younger donors, as for example in the case of Rubella (Figure 3C, left panel). The notable exception was T. gondii, where we observed lower amounts of specific IgG in older individuals (b=-0.013(-0.019, -0.007), P=3.7x10 -6 , Figure 3C, right panel).
Genome-wide association study of serostatus
To test if human genetic factors influence the rate of seroconversion upon exposure, we performed genome-wide association studies. Specifically, we searched for associations between 5.7 million common polymorphisms (MAF > 5%) and the 15 serostatus phenotypes in the 1,000 healthy donors. Based on our results regarding age and sex, we included both as covariates in all models. After correcting for the number of antigens tested, the threshold for genome-wide significance was Pthreshold = 3.3 x 10 -9 , for which we did not observe any significant association. In particular, we did not replicate the previously reported associations with H. pylori serostatus on chromosome 1 (rs368433, P = 0.67, OR = 0.93) and chromosome 4 (rs10004195, P = 0.83, OR = 0.97) [START_REF] Mayerle | Identification of genetic loci associated with Helicobacter pylori serologic status[END_REF].
We then focused on the HLA region and confirmed the previously published association of influenza A serostatus with specific amino acid variants of HLA class II molecules [START_REF] Hammer | Amino Acid Variation in HLA Class II Proteins Is a Major Determinant of Humoral Response to Common Viruses[END_REF]. The strongest association in the MI cohort was found with residues at position 31 of the HLA-DRβ1 subunit (omnibus P = 0.009, Table S6). The residues found at that position, isoleucine (P = 0.2, OR (95% CI) = 0.8 (0.56, 1.13)) and phenylalanine (P = 0.2, OR (95% CI) = 0.81 (0.56, 1.13)), are consistent in direction and in almost perfect linkage disequilibrium (LD) with the glutamic acid residue at position 96 in HLA-DRβ1 that was identified in the previous study (Table S7). As such, our result independently validates the previous observation.
Genome-wide association study of total and antigen-specific antibody levels
To test whether human genetic factors also influence the intensity of antigen-specific immune response, we performed genome-wide association studies of total IgG, IgM, IgA and IgE levels, as well as antigen-specific IgG levels.
Using a significance threshold of Pthreshold < 1.3 x 10 -8 , we found no SNPs associated with total IgG, IgM, IgE and IgA levels. However, we observed nominal significance and the same direction of the effect for 3 out of 11 loci previously published for total IgA [START_REF] Jonsson | Identification of sequence variants influencing immunoglobulin levels[END_REF][START_REF] Swaminathan | Variants in ELL2 influencing immunoglobulin levels associate with multiple myeloma[END_REF][START_REF] Viktorin | IgA measurements in over 12 000 Swedish twins reveal sex differential heritability and regulatory locus near CD30[END_REF][START_REF] Frankowiack | The higher frequency of IgA deficiency among Swedish twins is not explained by HLA haplotypes[END_REF][START_REF] Yang | Genome-wide association study identifies TNFSF13 as a susceptibility gene for IgA in a South Chinese population in smokers[END_REF], 1 out of 6 loci for total IgG [START_REF] Jonsson | Identification of sequence variants influencing immunoglobulin levels[END_REF][START_REF] Swaminathan | Variants in ELL2 influencing immunoglobulin levels associate with multiple myeloma[END_REF][START_REF] Liao | Genome-wide association study identifies common variants at TNFRSF13B associated with IgG level in a healthy Chinese male population[END_REF] and 4 out of 11 loci for total IgM [START_REF] Jonsson | Identification of sequence variants influencing immunoglobulin levels[END_REF][START_REF] Yang | Genome-wide scan identifies variant in TNFSF13 associated with serum IgM in a healthy Chinese male population[END_REF] (Table S8). Finally, we also report a suggestive association (genome-wide significant, P < 5.0 x 10 -8 , but not significant when correcting for the number of immunoglobulin classes tested in the study) of a SNP rs11186609 on chromosome 10 with total IgA levels (P = 2.0 x 10 -8 , beta = -0.07 for the C allele). The closest gene for this signal is SH2D4B.
We next explored associations between human genetic variants and antigen-specific IgG levels in seropositive donors (Pthreshold < 3.3 x 10 -9 ). We detected significant associations for anti-EBV (EBNA antigen) and anti-rubella IgGs. Associated variants were in both cases located in the HLA region on chromosome 6. For EBV, the top SNP was rs74951723 (P = 3 x 10 -14 , beta = 0.29 for the A allele) (Figure 4A). For rubella, the top SNP was rs115118356 (P = 7.7 x 10 -10 , beta = -0.11 for the G allele) (Figure 4B). rs115118356 is in LD with rs2064479, which has been previously reported as associated with titers of anti-rubella IgGs (r 2 = 0.53 and D' = 0.76) [START_REF] Lambert | Polymorphisms in HLA-DPB1 are associated with differences in rubella virusspecific humoral immunity after vaccination[END_REF].
To fine map the associations observed in the HLA region, we tested 4-digit HLA alleles and variable amino positions in HLA proteins. At the level of HLA alleles, HLA-DQB1*03:01 showed the lowest P-value for association with EBV EBNA (P = 1.3 x 10 -7 ), and HLA-DPB1*03:01 was the top signal for rubella (P = 3.8 x 10 -6 ). At the level of amino acid positions, position 58 of the HLA-DRb1 protein associated with anti-EBV (EBNA antigen) IgG levels (P = 2.5 x 10 -11 ). This is consistent with results of previous studies linking genetic variations in HLA-DRβ1 with levels of anti-EBV EBNA-specific IgGs [START_REF] Rubicz | Genetic factors influence serological measures of common infections[END_REF][START_REF] Hammer | Amino Acid Variation in HLA Class II Proteins Is a Major Determinant of Humoral Response to Common Viruses[END_REF][START_REF] Pedergnana | Combined linkage and association studies show that HLA class II variants control levels of antibodies against Epstein-Barr virus antigens[END_REF] (Table S9). In addition, position 8 of the HLA-DPb1 protein associated with anti-rubella IgG levels (P = 1.1 x 10 -9 , Table 1). Conditional analyses on these amino-acid positions did not reveal any additional independent signals.
KIR associations
To test whether specific KIR genotypes, and their interaction with HLA molecules, are associated with humoral immune responses, we imputed KIR alleles from SNP genotypes using KIR*IMP. First, we searched for potential associations with serostatus or IgG levels for 16 KIR alleles that had a MAF > 1%. After correction for multiple testing, we did not find any significant association (Pthreshold < 2.6 x 10 -4 ). Second, we tested specific KIR-HLA combinations. We filtered out rare combinations by removing pairs that were observed less than 4 times in the cohort. After correction for multiple testing (Pthreshold < 5.4 × 10 -7 ), we observed significant associations between total IgA levels and the two following HLA-KIR combinations: HLA-B*14:02/ KIR3DL1 and HLA-C*08:02/ KIR2DS4 (P = 3.9 x 10 -9 and P = 4.9 x 10 -9 respectively, Table 2).
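Building the KIR-HLA combination variables amounts to taking the product of two carrier indicators and discarding rare pairs. The data layout below (one 0/1 carrier vector per allele) is an assumption used only for illustration.

```python
import numpy as np

def kir_hla_combinations(hla_carriers, kir_carriers, min_count=4):
    """hla_carriers, kir_carriers: dicts mapping allele name -> 0/1 array over donors."""
    combos = {}
    for hla_name, hla in hla_carriers.items():
        for kir_name, kir in kir_carriers.items():
            joint = np.asarray(hla) * np.asarray(kir)   # donor carries both alleles
            if joint.sum() >= min_count:                # drop combinations seen < 4 times
                combos[f"{hla_name}/{kir_name}"] = joint
    return combos

# Each retained combination is then tested against serostatus or Ig levels, with
# Bonferroni correction over the number of retained pairs (P_threshold ~ 5.4e-7 here).
```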
Burden testing for rare variants
Finally, to search for potential associations between the burden of low frequency variants and the serological phenotypes, we conducted a rare variant association study. This analysis only included variants annotated as missense or putative loss-of-function (nonsense, essential splice-site and frame-shift, N=84,748), which we collapsed by gene and tested together using the kernel-regression-based association test SKAT [START_REF] Ionita-Laza | Sequence kernel association tests for the combined effect of rare and common variants[END_REF]. We restricted our analysis to genes that contained at least 5 variants. Two genes were identified as significantly associated with total IgA levels using this approach: ACADL (P = 3.4 x 10 -11 ) and TMEM131 (P=7.8 x 10 -11 ) (Table 3). By contrast, we did not observe any significant associations between rare variant burden and antigen-specific IgG levels or serostatus.
Discussion
We performed genome-wide association studies for a number of serological phenotypes in a well-characterized age-and sex-stratified cohort, and included a unique examination of genetic variation at HLA and KIR loci, as well as KIR-HLA associations. As such, our study provides a broad resource for exploring the variability in humoral immune responses across different isotypes and different antigens in humans.
Using a fine-mapping approach, we replicated the previously reported associations of variation in the HLA-DRβ1 protein with influenza A serostatus and anti-EBV IgG titers [START_REF] Rubicz | Genetic factors influence serological measures of common infections[END_REF][START_REF] Hammer | Amino Acid Variation in HLA Class II Proteins Is a Major Determinant of Humoral Response to Common Viruses[END_REF], implicating amino acid residues in strong LD with the ones previously reported (Hammer et al.). We also replicated an association between HLA class II variation and anti-Rubella IgG titers [START_REF] Lambert | Polymorphisms in HLA-DPB1 are associated with differences in rubella virusspecific humoral immunity after vaccination[END_REF], and further fine-mapped it to position 8 of the HLA-DPβ1 protein. Interestingly, position 8 of HLA-DPβ1, as well as positions 58 and 31 of HLA-DRβ1, are all part of the extracellular domain of the respective proteins. Our findings confirm these proteins as critical elements for the presentation of processed peptide to CD4 + T cells, and as such may reveal important clues in the fine regulation of class II antigen presentation. We also identified specific HLA/KIR combinations, namely HLA-B*14:02/KIR3DL1 and HLA-C*08:02/KIR2DS4, which associate with higher levels of circulating IgA. Given the novelty of the KIR imputation method and the lack of any possibility of benchmarking its reliability in the MI cohort, further replication of these results will be needed. Yet these findings support the concept that variations in the sequence of HLA class II molecules, or specific KIR/HLA class I interactions, play a critical role in shaping humoral immune responses in humans. In particular, our findings confirm that small differences in the capacity of HLA class II molecules to bind specific viral peptides can have a measurable impact on downstream antibody production. As such, our study emphasizes the importance of considering HLA diversity in disease association studies where associations between IgG levels and autoimmune diseases are being explored.
We identified nominal significance for some but not all of the previously reported associations with levels of total IgG, IgM and IgA, as well as a suggestive association of total IgA levels with an intergenic region on chromosome 10, the closest gene being SH2D4B. By collapsing the rare variants present in our dataset into gene sets and testing them for association with the immunoglobulin phenotypes, we identified two additional loci that contribute to natural variation in IgA levels. These associations mapped to the genes ACADL and TMEM131. ACADL encodes an enzyme with long-chain acyl-CoA dehydrogenase activity, and polymorphisms have been associated with pulmonary surfactant dysfunction [START_REF] Goetzman | Long-chain acyl-CoA[END_REF]. As the same gene is associated with levels of circulating IgA in our cohort, we speculate that ACADL could play a role in regulating the balance between mucosal and circulating IgA. Further studies will be needed to test this hypothesis, as well as the potential impact of our findings in other IgA-related diseases.
We were not able to replicate previous associations of TLR1 and FCGR2A locus with serostatus for H. Pylori [START_REF] Mayerle | Identification of genetic loci associated with Helicobacter pylori serologic status[END_REF]. We believe this may be a result of notable differences in previous exposure among the different cohorts as illustrated by the different levels of seropositivity; 17% in the Milieu Interieur cohort, versus 56% in the previous ones, reducing the likelihood of replication due to decreased statistical power.
In addition to the genetic findings, our study re-examined the impact of age and sex, as well as non-genetic variables, on humoral immune responses. Although this question has been previously addressed, our well-stratified cohort brings interesting additional insights. One interesting finding is the high rate of seroconversion for CMV, HSV-1, and T. gondii during adulthood. In our cohort, the likelihood of being seropositive for one of these infections is comparable at age 20 and 40. Given the high prevalence of these microbes in the environment, this raises questions about the factors that prevent some individuals from becoming seropositive upon late life exposure. Second, both age and sex have a strong correlation with serostatus, i.e. older and female donors were more likely to be seropositive. Although increased seropositivity with age probably reflects continuous exposure, the sex effect is intriguing. Indeed, our study considered humoral responses to microbial agents that differ significantly in terms of physiopathology and that do not necessarily have a childhood reservoir. Also, our analyses show that the associations persist after removal of potential confounding factors such as marital status and/or number of children. As such, we believe that our results may highlight a general impact of sex on humoral immune response variability, i.e. a tendency for women to be more likely to seroconvert after exposure, as compared to men of the same age. This result is in line with observations from vaccination studies, where women responded to lower vaccine doses [39]. Finally, we observed an age-related increase in antigen-specific IgG levels in seropositive individuals for most serologies, with the notable exception of toxoplasmosis. This may indicate that aging plays a general role in IgG production. An alternative explanation that requires further study is that this could be the consequence of reactivation or recurrent exposure.
In sum, our study provides evidence that age, sex and host genetics contribute to natural variation in humoral responses in humans. The identified associations have the potential to help improve vaccination strategies, and/or dissect pathogenic mechanisms implicated in human diseases related to immunoglobulin production such as autoimmunity.
Fig. 3 Age and sex impact on total and antigen-specific antibody levels. (A) Relationships between Log10-transformed IgG (upper left), IgA (upper right), IgM (bottom left) and IgE (bottom right) levels and age. Regression lines were fitted using linear regression, with Log10-transformed total antibody levels as response variable, and age and sex as treatment variables. Indicated adj. P were obtained using the mixed model, and corrected for multiple testing using the FDR method. (B) Effect sizes of significant associations (adj. P < 0.05) between age and Log10-transformed antigen-specific IgG levels in the 1,000 healthy individuals from the Milieu Intérieur cohort. Effect sizes were estimated in a linear mixed model, with Log10-transformed antigen-specific IgG levels as response variables, and age and sex as treatment variables. Dots represent the mean of the beta. Lines represent the 95% confidence intervals. (C) Relationships between Log10-transformed anti-rubella IgGs (left), and Log10-transformed anti-Toxoplasma gondii IgGs (right) and age. Regression lines were fitted using the linear regression described in (B). Indicated adj. P were obtained using the mixed model, and corrected for multiple testing using the FDR method.
Fig. 4 Association between host genetic variants and serological phenotypes. Manhattan plots of association results for (A) EBV anti-EBNA IgG, (B) Rubella IgG levels. The dashed horizontal line denotes genome-wide significance (P = 3.3 x 10^-9).
Fig. S1 Distribution of serological variables, and clinical threshold used for determination of serostatus. (A) Distribution and probability density curve of Log10-transformed IgG, IgM, IgA, IgE levels in the 1,000 study participants. (B) Distribution of Log10-transformed antigen-specific IgG levels. The vertical lines indicate the clinical threshold determined by the manufacturer, and used for determining the serostatus of the donors for each serology.
Fig. S2 Seroprevalence data in the 1,000 healthy donors. (A) Percentage of seropositive donors for each indicated serology in the MI study (for HBV serology, percentages of anti-HBs IgGs are indicated). (B) Distribution of the number of positive serologies in the 1,000 healthy donors regarding the 8 persistent or recurrent infections tested in our study (i.e. CMV, Influenza, HSV1, HSV2, TP, EBV_EBNA, VZV, HP).
Fig. S3 Impact of non-genetic factors, age, and sex on serostatus. (A) Adjusted P-values (FDR) of the large-sample chi-square likelihood ratio tests of the effect of non-genetic variables on serostatus, obtained from mixed models. (B-C) Adjusted P-values (adj. P) of the tests of the effect of age (<45 = reference, vs. >45 years old) (B) and sex (Men = reference, vs. Women) (C) on serostatus, obtained using a generalized linear mixed model, with serostatus as response variables, and age and sex as treatment variables. Odds ratios were color-coded. The vertical black line indicates the -log10 of the chosen threshold for statistical significance (-log10(0.05) = 1.30103).
Fig. S4 Evolution of serostatus with age and sex. (A-O) Odds of being seropositive for each of the 15 antigens considered in our study, as a function of age in men (blue) and women (red). Indicated P-values were obtained using a logistic regression with Wald test, with serostatus binary variables (seropositive, versus seronegative) as the response, and age and sex as covariates.
Fig. S5 Impact of age and sex on antigen-specific IgG levels. (A) Effect sizes of significant associations (adj. P < 0.05) between sex and Log10-transformed antigen-specific IgG levels in the 1,000 healthy individuals from the Milieu Intérieur cohort. Effect sizes were estimated in the linear mixed model described in (Figure 3B). Dots represent the mean of the beta. Lines represent the 95% confidence intervals. (B-C) Adjusted P-values (adj. P) of the tests of the effect of age (B) and sex (C) on Log10-transformed antigen-specific IgG levels, obtained using a linear mixed model, with Log10-transformed antigen-specific IgG levels as response variables, and age and sex as treatment variables. Normalized effect sizes were color-coded. The vertical black line indicates the -log10 of the threshold for statistical significance (-log10(0.05) = 1.30103).
The clinical study was approved by the Comité de Protection des Personnes -Ouest 6 on June 13th, 2012, and by the French Agence Nationale de Sécurité du Médicament (ANSM) on June 22nd, 2012. The study is sponsored by Institut Pasteur (Pasteur ID-RCB Number: 2012-A00238-35), and was conducted as a single center study without any investigational product. The protocol is registered under ClinicalTrials.gov (study# NCT01699893).
Table 1. Significant associations with EBV EBNA and Rubella antigens at the level of SNP, HLA allele and protein amino acid position. Phenotypes: EBV EBNA IgG levels; Rubella IgG levels.
Table 2. Association testing between KIR-HLA interactions and serology phenotypes.

Phenotype  | KIR     | HLA         | Estimate | Std. Error | P-value
IgA levels | KIR3DL1 | HLA-B*14:02 | 0.456    | 0.077      | 3.9x10^-09
IgA levels | KIR2DS4 | HLA-B*14:02 | 0.454    | 0.077      | 4.5x10^-09
IgA levels | KIR3DL1 | HLA-C*08:02 | 0.449    | 0.076      | 4.9x10^-09
IgA levels | KIR2DS4 | HLA-C*08:02 | 0.448    | 0.076      | 5.7x10^-09
Table 3. Significant associations of rare variants collapsed per gene set with IgA levels.

Phenotype  | Chromosome | Gene    | P-value     | Q     | No. of Rare Markers | No. of Common Markers
IgA levels | 2          | ACADL   | 7.83x10^-11 | 17.89 | 5                   | 2
IgA levels | 2          | TMEM131 | 3.42x10^-11 | 18.09 | 13                  | 2
J Biol Chem. 2014 Apr 11;289(15):10668-79.
39. Giefing-Kröll C, Berger P, Lepperdinger G, Grubeck-Loebenstein B. How sex and age affect immune responses, susceptibility to infections, and response to vaccination. Aging Cell. 2015 Jun;14(3):309-21.

Figure legends

Fig. 1 Overview of the study. Serum samples from the 1,000 age- and sex-stratified healthy individuals of the Milieu Intérieur cohort were used for measuring total antibody levels (IgA, IgM, IgG and IgE), as well as for qualitative (serostatus) and quantitative (IgG levels) assessment of IgG responses against cytomegalovirus, Epstein-Barr virus (anti-EBNA, anti-VCA, anti-EA), herpes simplex virus 1 & 2, varicella zoster virus, Helicobacter pylori, Toxoplasma gondii, influenza A virus, measles, mumps, rubella, and hepatitis B virus (anti-HBs and anti-HBc), using clinical-grade serological assays.

Fig. 2 Age and sex impact on serostatus. (A) Odds ratios of significant associations (adj. P < 0.05) between age (<45 = reference, vs. >45 years old) and serostatus as determined based on clinical-grade serologies in the 1,000 healthy individuals from the Milieu Intérieur cohort. Odds ratios were estimated in a generalized linear mixed model, with serostatus as response variable, and age and sex as treatment variables. Dots represent the mean of the odds ratios. Lines represent the 95% confidence intervals. (B) Odds of being seropositive towards EBV EBNA (Profile 1; upper left), Toxoplasma gondii (Profile 2; upper right), Helicobacter pylori (Profile 3; bottom left), and the HBs antigen of HBV (Profile 4; bottom right), as a function of age in men (blue) and women (red) in the 1,000 healthy donors. Indicated P-values were obtained using a logistic regression with Wald test, with serostatus binary variables (seropositive, versus seronegative) as the response, and age and sex as treatments. (C) Odds ratios of significant associations (adj. P < 0.05) between sex (Men = reference, vs. Women) and serostatus. Odds ratios were estimated in a generalized linear mixed model, with serostatus as response variable, and age and sex as treatment variables. Dots represent the mean of the odds ratios. Lines represent the 95% confidence intervals.
Acknowledgements
This work benefited from support of the French government's Invest in the Future Program, managed by the Agence Nationale de la Recherche (ANR, reference 10-LABX-69-01). It was also supported by a grant from the Swiss National Science Foundation (31003A_175603, to JF). C.A. received a PostDoctoral Fellowship from Institut National de la Recherche Médicale. |
01758764 | en | [
"info.info-hc"
] | 2024/03/05 22:32:10 | 2018 | https://hal.sorbonne-universite.fr/hal-01758764/file/posture-author-version.pdf | Yvonne Jansen
email: jansen@isir.upmc.fr
Kasper Hornbæk
email: kash@di.ku.dk
How Relevant are Incidental Power Poses for HCI?
Keywords: Incidental postures, power pose, Bayesian analysis. ACM Classification H.5.m. Information Interfaces and Presentation (e.g. HCI): Miscellaneous
Figure 1. User interfaces can take on a variety of forms affording different body postures. We studied two types of postures: constrictive (A, C) and expansive (B, D); in two settings: on a wall-sized display (A, B) and on a large touchscreen (C, D).
INTRODUCTION
In 2010 Carney et al. asserted that "a person can, by assuming two simple 1-min poses, embody power and instantly become more powerful [which] has real-world, actionable implications" [START_REF] Dana R Carney | Power posing: brief nonverbal displays affect neuroendocrine levels and risk tolerance[END_REF] thereby coining the concept of power poses.
However, incidental body postures may only be leveraged in HCI if they can be reliably elicited. In 2015, a large-scale replication project [START_REF]Estimating the reproducibility of psychological science[END_REF] re-opened the files on 100 published experiments and found that a considerable number of reported effects did not replicate, leading to the so-called "replication crisis" in Psychology. Neither the study by Carney et al. [START_REF] Dana R Carney | Power posing: brief nonverbal displays affect neuroendocrine levels and risk tolerance[END_REF] nor the one by Yap et al. [START_REF] Yap | The ergonomics of dishonesty: the effect of incidental posture on stealing, cheating, and traffic violations[END_REF] was among the replicated studies, but multiple high powered and pre-registered studies have since then failed to establish a link between power poses and various behavioral measures [START_REF] Ranehill | Assessing the robustness of power posing: no effect on hormones and risk tolerance in a large sample of men and women[END_REF][START_REF] Garrison | Embodying power: a preregistered replication and extension of the power pose effect[END_REF][START_REF] Victor N Keller | Meeting your inner super (wo) man: are power poses effective when taught?[END_REF][START_REF] Ronay | Embodied power, testosterone, and overconfidence as a causal pathway to risk-taking[END_REF][START_REF] Bailey | Could a woman be superman? Gender and the embodiment of power postures[END_REF][START_REF] Bombari | Real and imagined power poses: is the physical experience necessary after all?[END_REF][START_REF] Jackson | Does that pose become you? Testing the effect of body postures on self-concept[END_REF][START_REF] Latu | Power vs. persuasion: can open body postures embody openness to persuasion[END_REF][START_REF] Klaschinski | Benefits of power posing: effects on dominance and social sensitivity[END_REF]. While a Bayesian meta-analysis of six pre-registered studies [START_REF] Frederik Gronau | A Bayesian Model-Averaged Meta-Analysis of the Power Pose Effect with Informed and Default Priors: The Case of Felt Power[END_REF] provides credible evidence for a small effect of power poses on self-reported felt power (d ≈ 0.2), the practical relevance of this small effect remains unclear [START_REF] Jonas | Power poses-where do we stand?[END_REF].
It should be noted that all of the failed replications focused on explicitly elicited postures as studied by Carney et al. [START_REF] Dana R Carney | Power posing: brief nonverbal displays affect neuroendocrine levels and risk tolerance[END_REF], that is, participants were explicitly instructed to take on a certain posture and afterwards were tested on various measures. Most relevant to HCI are, however, the experiments by Yap et al. [START_REF] Yap | The ergonomics of dishonesty: the effect of incidental posture on stealing, cheating, and traffic violations[END_REF] on incidental power poses which so far appear to have not been replicated or refuted. Thus it remains unclear whether these effects replicate in an HCI context, and we offer the following contributions with this article:
• We operationalize power poses as incidental body postures which can be brought about by interface and interaction design.
• We measure in a first experiment effects on self-reported felt power. Our results on their own are inconclusive as our data are consistent with a wide range of possible effect sizes, including zero.
• In a second experiment we measure behavioral effects on risk-taking behavior while playing a computer game. Results indicate that the manipulation of incidental body posture does not predict willingness to take risks.
BACKGROUND
In this section we clarify the terminology we use in this article, motivate our work through two scenarios, summarize work on body posture in HCI and previous work in Psychology including the recent controversies around power poses.
Postures versus Gestures
Our use of the terms posture and gesture is consistent with the definitions of the American Heritage Dictionary:
posture: position of a person's body or body parts
gesture: a motion of the limbs or body made to express or help express thought or to emphasize speech.
Accordingly, a gesture could be described as a dynamic succession of different hand, arm, or body postures. This article is mainly concerned with body postures as we are interested in features of postures "averaged" over the course of interaction, for example, the overall expansiveness of someone's posture during the use of a system.
Motivation
Within a classic desktop environment, that is, a desktop computer equipped with an external display, keyboard, and mouse, a user interface designer has little influence on a user's posture besides requiring or avoiding frequent changes between keyboard and mouse, or manipulating the mouse transfer function. As device form factors diversified, people now find themselves using computers in different environments such as small mobile phones or tablets while sitting, standing, walking, or lying down, or large touch sensitive surfaces while sitting or standing. Device form factors combined with interface design can thus impose postures on the user during interaction. For example, an interface requiring two-handed interaction on a small-screen device (phone, tablet, or laptop) requires that users bring together both hands and their gaze thereby leading to a constrictive incidental posture. On a large touchscreen interface, a UI designer can spread out elements which would require more reaching and lead to more expansive incidental postures (see Fig. 1B andD) or use techniques to bring elements closer together (e.g., [START_REF] Bezerianos | The Vacuum: Facilitating the Manipulation of Distant Objects[END_REF]) which can make postures more constrictive (see Fig. 1A andC).
We now sketch two scenarios to illustrate how work on body posture from Psychology applies to HCI and why it is relevant for UI design guidelines to determine whether expansive and constrictive body postures during interface use can influence people's motivation, behavior, or emotions.
Education
Riskind and Gotay [START_REF] John | Physical posture: Could it have regulatory or feedback effects on motivation and emotion[END_REF] reported that expansive postures led to a higher persistence when solving problems while people in constrictive postures gave up more easily. Within the area of interfaces for education purposes, say, in schools, it would be important to know whether learning environments designed for small tablet devices incur detrimental effects due to incidental constrictive postures during their use. Should this be the case then design guidelines for such use cases would need to be established recommending larger form factors combined with interfaces leading to more expansive postures.
Risky decision making
Yap et al. reported that driving in expansive car seats leads to riskier driving in a driving simulation [START_REF] Yap | The ergonomics of dishonesty: the effect of incidental posture on stealing, cheating, and traffic violations[END_REF]. Some professions require decision making under risk on a regular basis, such as air traffic controllers, power plant operators, or financial brokers. Should the interface designs used in such professions (e.g., see Fig. 2) have an influence on people's decisions, then it would be important to minimize such effects accordingly. However, currently we know neither whether such effects exist in these contexts nor how they would need to be counteracted, should they exist.
Body Posture in HCI
The role of the body in HCI has been receiving increased attention. Dourish [START_REF] Dourish | Where the action is: the foundations of embodied interaction[END_REF] as well as Klemmer and colleagues [START_REF] Scott R Klemmer | How Bodies Matter: Five Themes for Interaction Design[END_REF] emphasize in a holistic manner the importance of the body for interaction design and also consider social factors. However, its role as a feedback channel to emotion and cognition has remained largely unstudied within HCI.
Body postures have received most attention in the context of affective game design. Bianchi-Berthouze and colleagues studied people's body movements during video-game play and how different gaming environments such as desktop or body-controlled games lead to different movement types [START_REF] Bianchi-Berthouze | On posture as a modality for expressing and recognizing emotions[END_REF][START_REF] Pasch | Immersion in Movement-Based Interaction[END_REF]. Savva and colleagues studied how players' emotions can be automatically recognized from their body movements and be used as indicators of aesthetic experience [START_REF] Savva | Continuous Recognition of Player's Affective Body Expression as Dynamic Quality of Aesthetic Experience[END_REF], and Bianchi-Berthouze proposed a taxonomy of types of body movements to facilitate the design of engaging game experiences [START_REF] Bianchi-Berthouze | Understanding the Role of Body Movement in Player Engagement[END_REF]. While this body of work also builds on posture work from Psychology, their interest is in understanding the link between players' body movement and affective experience, not on testing downstream effects of postures on behavior in an HCI context.
Isbister et al. [START_REF] Isbister | Scoop!: using movement to reduce math anxiety and affect confidence[END_REF] presented scoop! a game using expansive body postures with the intention to overcome math anxiety in students. The focus of this work is on the system's motivation and description and does not include empirical data. Snibbe and Raffle [START_REF] Scott | Social immersive media: pursuing best practices for multi-user interactive camera/projector exhibits[END_REF] report on their use of body posture and gestures to imbue visitors of science museums with intended emotions. Only little empirical work on concrete effects directly related to the work in Psychology has been published so far. De Rooij and Jones [START_REF] De | E)Motion and Creativity: Hacking the Function of Motor Expressions in Emotion Regulation to Augment Creativity[END_REF] studied gesture pairs based on these ideas. Their work builds on the hypothesis that movements are related to approach and avoidance behaviors, and therefore inherently linked to emotion. They test the hypothesis through an application for creative tasks such as idea generation. In one variant of their application, users extend their arm to record ideas (avoidance gesture); in another variant, they move their arm towards their body (approach gesture). Results show that avoidance gestures lead to lower creativity and more negative emotion than approach gesture.
Two studies within Psychology made use of interactive devices to manipulate incidental postures: (1) Hurtienne and colleagues report in an abstract that sitting hunched or standing upright during the use of a touchscreen leads to different behaviors in a dictator game [START_REF] Hurtienne | Zur Ergonomie prosozialen Verhaltens: Kontextabhängige Einflüsse von Körperhaltungen auf die Ergebnisse in einem Diktatorspiel [On the ergonomics of prosocial behaviour: context-dependent influences of body posture on the results in a dictator game[END_REF]. If participants were primed with "power concepts" they behaved more self-interested in an upright posture; if they were primed with "moral concepts" the effect was reversed. (2) Bos and Cuddy published a tech report [START_REF] Maarten | iPosture: The size of electronic consumer devices affects our behavior[END_REF] on a study linking display size to willingness to wait. They asked participants to complete a series of tasks, and then let them wait in a room with the device they were using for the tasks (iPod, iPad, MacBook Pro, or iMac). The smaller the device, the longer participants waited and the less likely they went to look for the experimenter. As no details are given about participants' actions during the waiting time (such as playing around with the device or not) nor about the postures participants took on while using the devices, it is unclear whether this correlation can indeed be linked solely to the different display sizes. Further studies are required to determine causal effects due to differences in postures.
Effects of Body Posture on Thought and Behavior
In Psychology, body posture has been linked to a wide range of behavioral and affective effects [START_REF] Cacioppo | Rudimentary determinants of attitudes. II: Arm flexion and extension have differential effects on attitudes[END_REF][START_REF] Yang | Embodied memory judgments: a case of motor fluency[END_REF][START_REF] Raedy M Ping | Reach For What You Like: The Body's Role in Shaping Preferences[END_REF][START_REF] Jasmin | The QWERTY effect: how typing shapes the meanings of words[END_REF][START_REF] Tom | The Role of Overt Head Movement in the Formation of Affect[END_REF]. We focus here only on those closely related to the expansive versus constrictive dyad of power poses.
In 1982, Riskind and Gotay [START_REF] John | Physical posture: Could it have regulatory or feedback effects on motivation and emotion[END_REF] presented four experiments on the relation between physical posture and motivation to persist on tasks. They asked participants to take on either slumped or expansive, upright postures. The former group gave up much faster on a standardized test for learned helplessness than the latter group whereas both groups gave similar self-reports.
More recently, the popular self-help advice to take on a "power pose" before delivering a speech has been linked by multiple studies to increases in confidence, risk tolerance, and even testosterone levels [START_REF] John | Physical posture: Could it have regulatory or feedback effects on motivation and emotion[END_REF][START_REF] Dana R Carney | Power posing: brief nonverbal displays affect neuroendocrine levels and risk tolerance[END_REF][START_REF] Amy | Preparatory power posing affects nonverbal presence and job interview performance[END_REF]. Further, Yap and colleagues reported that expansiveness of postures can also affect people's honesty [START_REF] Yap | The ergonomics of dishonesty: the effect of incidental posture on stealing, cheating, and traffic violations[END_REF]. In contrast to Carney et al., the latter explicitly studied incidental postures, that is, postures imposed by the environment such as a small versus a large workspace or a narrow versus a spacious car seat. Their research suggests that expansive or constrictive postures which are only incidentally imposed by environments (thus allowing more variation between people's postures), can affect people's honesty: people interacting in workspaces that impose expansive postures are supposedly "more likely to steal money, cheat on a test, and commit traffic violations in a driving simulation" [START_REF] Yap | The ergonomics of dishonesty: the effect of incidental posture on stealing, cheating, and traffic violations[END_REF].
Recent Controversies
In 2015, Ranehill et al. [START_REF] Ranehill | Assessing the robustness of power posing: no effect on hormones and risk tolerance in a large sample of men and women[END_REF] published a high-powered replication attempt of Carney et al. [START_REF] Dana R Carney | Power posing: brief nonverbal displays affect neuroendocrine levels and risk tolerance[END_REF] contradicting the original paper. Carney and colleagues responded with an analysis of the differences between their study and the failed replication [START_REF] Dana R Carney | Review and Summary of Research on the Embodied Effects of Expansive (vs. Contractive) Nonverbal Displays[END_REF]. They aimed to identify potential moderators by comparing results from 33 studies, to provide alternative explanations for the failed replication. They indicate three variables which they believe most likely determine whether an experiment will detect the predicted effect: (i) whether participants were told a cover story (Carney) or the true purpose of the study (Ranehill), (ii) how long participants had to hold the postures, i.e., comfort, (Carney 2 x 1 min, Ranehill 2 x 3 min) and (iii) whether the study was placed in a social context, i.e., "either a social interaction with another person [...] during the posture manipulation or participants were engaging in a real or imagined social task" [START_REF] Dana R Carney | Review and Summary of Research on the Embodied Effects of Expansive (vs. Contractive) Nonverbal Displays[END_REF] (Carney yes, Ranehill no).
In 2016, Carney published a statement on her website where she makes "a number of methodological comments" regarding her 2010 article and expresses her updated belief that these effects are not real [START_REF] Carney | My position on "Power Poses[END_REF]. She went on to co-edit a special issue of a psychology journal containing seven pre-registered replication attempts of power pose experiments testing the above discussed moderators to provide "a 'final word' on the topic" [START_REF] Cesario | CRSP special issue on power poses: what was the point and what did we learn[END_REF]. All studies included the self-reported sense of power and one of the following behavioral measures: risk-taking (gambling), performance in mock job interviews, openness to persuasive messages, or self-concept content and size (number and quality of self descriptors). While none of the studies included in the special issue found evidence for behavioral effects, a Bayesian meta-analysis combining the individual results on felt power found a reliable small effect (d ≈ 0.2) [START_REF] Frederik Gronau | A Bayesian Model-Averaged Meta-Analysis of the Power Pose Effect with Informed and Default Priors: The Case of Felt Power[END_REF].
Outside of Psychology the methodology of power pose studies was criticized by statisticians such as Andrew Gelman and Kaiser Fung who argued that most of the published findings on power poses stem from low-powered studies and were likely due to statistical noise [START_REF] Gelman | The Power of the "Power Pose[END_REF]. Other statisticians analyzed the evidence base collected by Carney et al. [START_REF] Dana R Carney | Review and Summary of Research on the Embodied Effects of Expansive (vs. Contractive) Nonverbal Displays[END_REF] using a method called p-curve analysis [START_REF] Simonsohn | p-Curve and Effect Size: Correcting for Publication Bias Using Only Significant Results[END_REF] whose purpose is to analyze the strength of evidence for an effect while correcting for publication bias. Their analyses "conclusively reject the null hypothesis that the sample of existing studies examines a detectable effect" [START_REF] Joseph | Power Posing: P-Curving the Evidence[END_REF][START_REF] Simmons | Power Posing: Reassessing The Evidence Behind The Most Popular TED Talk[END_REF].
OBJECTIVES OF THIS ARTICLE
At this point it seems credible that at least some of the initially reported effects of power poses are nonexistent.
Claims related to hormone changes have been definitively refuted [START_REF] Ranehill | Assessing the robustness of power posing: no effect on hormones and risk tolerance in a large sample of men and women[END_REF][START_REF] Ronay | Embodied power, testosterone, and overconfidence as a causal pathway to risk-taking[END_REF], and none of the recent replications was able to detect a reliable effect on the tested behavioral measures [START_REF] Jonas | Power poses-where do we stand?[END_REF].
Nonetheless, a small effect on felt power seems credible [START_REF] Frederik Gronau | A Bayesian Model-Averaged Meta-Analysis of the Power Pose Effect with Informed and Default Priors: The Case of Felt Power[END_REF].
It is however still unclear whether "this effect is a methodological artifact or meaningful" [START_REF] Cesario | CRSP special issue on power poses: what was the point and what did we learn[END_REF]: demand characteristics are an alternative explanation for the effect, that is, participants' responses could be due to the context of the experiment during which they are explicitly instructed to take on certain postures, which may suggest to participants that these postures must be a meaningful experimental manipulation. Such demand characteristics have previously been shown to be explanatory for an earlier finding claiming that people wearing a heavy backpack perceive hills as steeper (see Bhalla and Proffitt [START_REF] Bhalla | Visual-motor recalibration in geographical slant perception[END_REF] for the original study and Durgin et al. [START_REF] Frank H Durgin | The social psychology of perception experiments: hills, backpacks, glucose, and the problem of generalizability[END_REF] for an extended study showing that the effect can be attributed to demand characteristics).
As all recent replications focused on explicitly elicited postures, i.e., participants were explicitly instructed by experimenters to take on a certain posture, demand characteristics are indeed a plausible alternative explanation. This is, however, much less plausible for studies concerned with incidental postures. For the latter, participants are simply instructed to perform a task within an environment, as for a typical HCI experiment, without being aware that different types of environments are part of the experiment, thereby reducing demand characteristics.
Rationale
The experiments on incidental postures reported by Yap et al. [START_REF] Yap | The ergonomics of dishonesty: the effect of incidental posture on stealing, cheating, and traffic violations[END_REF] have to our knowledge so far not been replicated or refuted. Thus it is currently unclear whether the behavioral effects reported in these experiments can be reproduced and whether they are relevant for HCI.
We argue that the potential impact of such effects for HCI justifies more studies to determine whether the effect exists and, if so, under which conditions the effect can be reproduced. We investigate the potential effects of incidental power poses in two HCI scenarios: first when interacting with a touchoperated wall-sized display, then, when interacting with a large tabletop display. We consider both self-reported sense of power (experiment 1) and risk-taking behavior (experiment 2) as potential outcomes, similar to the studies reported by Yap et al. Again, we only consider incidental postures, that is, postures that are the result of a combination of device form factor and interface layout. As we are only interested to study whether these two factors alone can produce a reliable effect, we do not control for possible variations in posture which are beyond these two factors, such as whether people sit straight up or cross their legs, since controlling for these would make demand characteristics more likely. Instead, our experiment designs only manipulate factors which are in the control of a UI designer. In particular we identify the following differences to previous work in Psychology:
• The existing body of work on power poses comes from the Psychology community where postures were carefully controlled by experimenters. We only use device form factors and interface design to impose postures on participants.
• We do not separate a posture manipulation phase and a test phase (in experiment 2) but integrate the two, which is more relevant in an HCI context.
• Similar to the existing literature we measure felt power and risk-taking behavior. In contrast to previous studies which measured risk-taking behavior only through binary choices (one to three opportunities to take a gamble) we use a continuous measure of risk-taking.
• For exploratory analysis, we additionally collect a task-relevant potential covariate that has been ignored in previous work: people's baseline tendency to act impulsively (i.e., to take risks).
EXPERIMENT 1: WALL DISPLAY
In a first experiment, we tested for an effect of incidental posture while interacting with a touch-operated wall display. We asked 44 participants who had signed up for an unrelated pointing experiment whether they were interested to first participate in a short, unrelated "pilot study" which would only last about 3 min. All 44 participants agreed and gave informed consent. The experimenter, who was blind to the experimental hypothesis, instructed participants to stand in front of a 3 m x 1.2 m wall display, and started the experimental application. The experiment was between-subjects and participants were randomly assigned to receive either instructions for a constrictive interface or an expansive interface. Instructions were shown on the display, and participants were encouraged to confirm with the experimenter if something was unclear to them.
To make the interface independent of variances in height and arm span, participants were asked to adapt it to their reach. Participants in the expansive condition were instructed to "move these two circles such that they are located at the height of your head, then move them apart as far as possible so that you can still comfortably reach both of them" (as in Figure 1B). In the constrictive condition, participants were asked to "move these two circles such that you can comfortably tap them with your index fingers while keeping your elbows close to your body" (as in Figure 1A). Once participants had adjusted the position of the circles, they were instructed that the experiment would start once they tapped one of the targets, and that they should continue to alternately tap the two targets for 90 sec. In comparison, Carney et al. [START_REF] Dana R Carney | Power posing: brief nonverbal displays affect neuroendocrine levels and risk tolerance[END_REF] used two poses for one minute each, Yap et al. [START_REF] Yap | The ergonomics of dishonesty: the effect of incidental posture on stealing, cheating, and traffic violations[END_REF] (study 1) one pose for one minute, and Ranehill et al. [START_REF] Ranehill | Assessing the robustness of power posing: no effect on hormones and risk tolerance in a large sample of men and women[END_REF] two poses for three minutes each.
After participants finished the tapping task, the experimenter handed them a questionnaire inquiring about their level of physical discomfort (3 items), then, on a second page, participants were asked how powerful and in charge they felt at that moment (2 items) similar to Carney et al. [START_REF] Dana R Carney | Power posing: brief nonverbal displays affect neuroendocrine levels and risk tolerance[END_REF] but on a 7-point scale instead of their 4-point scale.
An a priori power analysis using the G*Power tool [START_REF] Faul | G*Power 3: a flexible statistical power analysis program for the social, behavioral, and biomedical sciences[END_REF] indicated that we would require 48 participants if we hypothesized a large effect size of d = 0.73 and aimed for statistical power of 0.8. Since the experiment only took 3 min to complete, and we relied on participants coming in for an unrelated experiment, we stopped after 44 participants, resulting in an a priori power of 0.76.
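The following R sketch (using the pwr package) illustrates how these figures can be reproduced; the choice of a one-sided two-sample t-test is our assumption, made because it matches the reported totals, and G*Power may round slightly differently.

# A priori sample size for d = 0.73, power = 0.80, alpha = .05,
# assuming a one-sided two-sample t-test (our assumption).
library(pwr)
a_priori <- pwr.t.test(d = 0.73, power = 0.80, sig.level = 0.05,
                       type = "two.sample", alternative = "greater")
2 * ceiling(a_priori$n)   # total participants needed (approx. 48)

# Power achieved when stopping at 44 participants (22 per group).
achieved <- pwr.t.test(n = 22, d = 0.73, sig.level = 0.05,
                       type = "two.sample", alternative = "greater")
achieved$power            # roughly 0.76-0.78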
Results
Figures 3-5 summarize the results of experiment 1. The respective left charts show histograms of the responses (7-point Likert scale), the charts in the middle show estimates of means with 95% bootstrap confidence intervals, and the right charts show the respective differences between the means, also with 95% bootstrap confidence intervals.
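As a rough illustration of how such interval estimates can be obtained, the following R sketch computes a percentile bootstrap confidence interval for the difference in mean ratings between the two posture groups; the variable names are illustrative and not taken from our analysis scripts.

# Percentile bootstrap CI for the difference in mean ratings
# (power_expansive / power_constrictive are illustrative vectors).
set.seed(1)
boot_diff <- replicate(10000,
  mean(sample(power_expansive, replace = TRUE)) -
  mean(sample(power_constrictive, replace = TRUE)))
quantile(boot_diff, c(0.025, 0.975))   # 95% bootstrap CI on the difference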
Felt power. As Figure 3 indicates, our data results in large confidence intervals and their difference clearly includes zero. The estimated effect size is d = 0.38 [-0.23, 1.05] 95% CI. There might be a small effect, but to confirm an effect with d = 0.38 and statistical power of 0.76, we would need to run 156 participants. A higher powered experiment aiming for 0.95 statistical power would even require 302 participants.

Sense of feeling in charge. For the feeling in charge item we find no overall difference between the two postures. We should note here that this item caused confusion among participants as many asked the experimenter to explain what the question meant. The experimenter was instructed to advise participants to simply answer what they intuitively felt, which might have led to random responses. We nonetheless report our results in Figure 4.

Discomfort. The discomfort measure is derived from three items, inquiring about participants' impressions of how difficult, fatiguing, and painful they found the task. Ratings across the three items were generally similar, thus we computed one derived measure discomfort from an equal-weighted linear combination of the three items. Here, we find a very large effect between the postures, with expansive being rated as leading to much higher levels of discomfort (Fig. 5) with d = 1.53 [0.84, 2.30] 95% CI.

Bayesian Meta-Analysis on the Power Item. The Bayesian meta-analysis from Gronau et al. [START_REF] Frederik Gronau | A Bayesian Model-Averaged Meta-Analysis of the Power Pose Effect with Informed and Default Priors: The Case of Felt Power[END_REF] made the data and R scripts for their analysis of six pre-registered studies measuring felt power available (see osf.io/fxg32). This allowed us to rerun their analysis including our data. Figure 6 shows the results of the analysis for the original meta-analysis and for the extension including our results. The range of plausible effect sizes given our data is wider than for the previous, higher powered studies using explicit power poses. Our results are consistent with a small, positive effect of expansive postures on felt power as identified by the meta-analysis (d ≈ 0.23 [0.10, 0.38] 95% highest density interval).
Discussion
While inconclusive on their own, our results on felt power are consistent with a small effect size d ≈ 0.2 for expansive versus constrictive postures when using a touch interaction on a wall-sized display. More importantly though, we observed a much larger effect, d ≈ 1.5 for discomfort as participants in the expansive condition were asked to hold their arms stretched out for 90 sec to complete the task. Given the small expected effect size, we find the large increase in discomfort more important and do not recommend to attempt to affect users' sense of power through the use of expansive postures on touch-operated wall-sized displays.
These considerations played into the design of a second experiment. We identified as most important factors to control: maintaining equal levels of comfort for both postures, and using an objectively quantifiable and continuous behavioral measure instead of self-evaluation.
EXPERIMENT 2: INCLINED TABLETOP
Our second experiment is inspired by experiment 2 from Yap et al. [START_REF] Yap | The ergonomics of dishonesty: the effect of incidental posture on stealing, cheating, and traffic violations[END_REF]. There, participants' incidental posture was imposed by either arranging their tools around a large (0.6 m²) or a small (0.15 m²) workspace. The study by Yap et al. investigated the effect of incidental postures imposed by the different workspaces on people's dishonesty, whereas we applied the paradigm to risk-taking behavior, which is a common behavioral measure in multiple studies on explicit power poses [START_REF] Dana R Carney | Power posing: brief nonverbal displays affect neuroendocrine levels and risk tolerance[END_REF][START_REF] Cesario | Bodies in Context: Power Poses As a Computation of Action Possibility[END_REF][START_REF] Ranehill | Assessing the robustness of power posing: no effect on hormones and risk tolerance in a large sample of men and women[END_REF]. These previous studies all gave binary choices to participants, asking them whether they were willing to take a single gamble [START_REF] Dana R Carney | Power posing: brief nonverbal displays affect neuroendocrine levels and risk tolerance[END_REF][START_REF] Cesario | Bodies in Context: Power Poses As a Computation of Action Possibility[END_REF] or to make several risky choices in both gain and loss domain [START_REF] Ranehill | Assessing the robustness of power posing: no effect on hormones and risk tolerance in a large sample of men and women[END_REF] using examples taken from Tversky and Kahneman [START_REF] Tversky | The framing of decisions and the psychology of choice[END_REF]. There, participants' binary response was the measure for risk-taking. We opted for a more continuous measure for risk-taking as it results in higher resolution for responses, and used the balloon analog risk task (BART), a behavioral measure for risk-taking propensity [START_REF] Lejuez | Evaluation of a behavioral measure of risk taking: the Balloon Analogue Risk Task (BART)[END_REF]. We again study one main factor: incidental posture with two levels, expansive and constrictive, implemented as two variations of the same graphical user interface (see Figure 7). To keep comfort constant across conditions, we used a slightly inclined 60" tabletop display instead of a wall display so that participants in both conditions could rest their arms while performing the task (see Figure 1C&D).
BART: The Balloon Analogue Risk Task
The BART is a standard test in Psychology to measure people's risk-taking behavior in the form of a game [START_REF] Lejuez | Evaluation of a behavioral measure of risk taking: the Balloon Analogue Risk Task (BART)[END_REF]. The basic task is to pump up 30 virtual balloons using on-screen buttons. In our implementation, two buttons were placed as indicated in Figure 7 and players were asked to place their hands near them. With each pump, the balloon grows a bit and the player gains a point. Points are commonly linked to monetary rewards and the more players pump up the balloons, the higher their payout. The maximum size of a balloon is reached after 128 pumps. The risk is introduced through a random point of explosion for each balloon with the average and median explosion point at 64 pumps. A balloon needs to be cashed in before it explodes to actually gain points for that balloon. Participants are only told that a balloon can explode (Fig. 7-D) at any point between the minimum size, i.e., after 1 pump, and the maximum, when it touches the line drawn underneath the pump (see Figure 7-B), and that they need to cash in a balloon before it explodes to gain points. The number of pumps required to achieve the maximum size and, most importantly, the number of pumps needed to optimize the payout are unknown to the participant.
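To make the game mechanics concrete, the following R sketch simulates a single balloon trial; it is a simplified reconstruction rather than our actual implementation, and the uniform draw of the explosion point over 1-128 is an assumption that is merely consistent with the stated maximum of 128 pumps and an average explosion point of about 64.

# Simplified sketch of one BART balloon trial (illustrative only).
play_balloon <- function(decide_to_pump) {
  explosion_point <- sample(1:128, 1)   # assumed uniform over 1..128
  pumps <- 0
  repeat {
    if (!decide_to_pump(pumps)) break               # player cashes in
    pumps <- pumps + 1
    if (pumps >= explosion_point)                   # balloon explodes, points lost
      return(list(pumps = pumps, exploded = TRUE, points = 0))
  }
  list(pumps = pumps, exploded = FALSE, points = pumps)
}

# Example: a player who always cashes in after 40 pumps, over 30 balloons.
trials <- replicate(30, play_balloon(function(p) p < 40), simplify = FALSE)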
Measures
The measure of the BART is the average number of pumps people make on balloons which did not explode, called adjusted number of pumps. It is used in Psychology as a measure of people's tendency to take risks: with each pump players have to weigh the risk of the balloon exploding against the possible gain of points [START_REF] Lejuez | Evaluation of a behavioral measure of risk taking: the Balloon Analogue Risk Task (BART)[END_REF][START_REF] Lauriola | Individual differences in risky decision making: A meta-analysis of sensation seeking and impulsivity with the balloon analogue risk task[END_REF][START_REF] Melissa | The assessment of risky decision making: A factor analysis of performance on the Iowa Gambling Task, Balloon Analogue Risk Task, and Columbia Card Task[END_REF][START_REF] Fecteau | Activation of prefrontal cortex by transcranial direct current stimulation reduces appetite for risk during ambiguous decision making[END_REF]. The theoretically optimal behavior would be to perform 64 pumps on all balloons. It would maximize payout and also lead to 50% exploding balloons. Yet, previous studies found that participants stop on average much earlier [START_REF] Lauriola | Individual differences in risky decision making: A meta-analysis of sensation seeking and impulsivity with the balloon analogue risk task[END_REF].
Adjusted number of pumps
According to a meta-analysis of 22 studies using this measure [START_REF] Lauriola | Individual differences in risky decision making: A meta-analysis of sensation seeking and impulsivity with the balloon analogue risk task[END_REF], the average adjusted number of pumps is 35.6 (SE 0.28). However, the meta-analysis showed that means varied considerably between studies from 24.60 to 44.10 (with a weighted SD = 5.93). Thus, only analyzing the BART's main measure would probably not be sensitive enough to identify a difference between the studied postures. We account for this by also computing a normalized measure (percent change) and by capturing people's general tendency to take risks as a covariate.
Percent change of pumps
The game can be conceptually divided into 3 phases: during the first 10 balloons, players have no prior knowledge of when balloons will explode. This phase has been associated with decision making under uncertainty [START_REF] Melissa | The assessment of risky decision making: A factor analysis of performance on the Iowa Gambling Task, Balloon Analogue Risk Task, and Columbia Card Task[END_REF]. In the second phase, players mostly consolidate the impressions gained in the first phase, whereas the last phase indicates decision making under risk: players developed intuitions and aim to maximize their payout. While the BART is widely used [START_REF] Lauriola | Individual differences in risky decision making: A meta-analysis of sensation seeking and impulsivity with the balloon analogue risk task[END_REF], little data is available for the individual phases. Most studies only report the main measure which is averaged over all phases. Still, we know from the original study that the average increase of pumps between the first and the last phase is about 33% [START_REF] Lejuez | Evaluation of a behavioral measure of risk taking: the Balloon Analogue Risk Task (BART)[END_REF].
Since we hypothesize that a possible effect of incidental posture should occur over the course of the experiment, we expect that it should not be present while pumping up the first balloons. By comparing data from this first phase with data from the last phase, we derive a normalized measure for how people's behavior changed over the course of the experiment (∼10 min). We define this measure, percent change, as follows:

% change = (mean adj. pumps in phase 3 - mean adj. pumps in phase 1) / (mean adj. pumps in phase 1)
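Both measures are straightforward to compute from the per-balloon logs; the following R sketch uses the trial list produced by the simulation above and assumes, purely as an illustration, that the first and last phases correspond to balloons 1-10 and 21-30.

# Adjusted number of pumps: mean pumps on balloons that did not explode.
adjusted_pumps <- function(trials) {
  kept <- Filter(function(t) !t$exploded, trials)
  mean(sapply(kept, function(t) t$pumps))
}

# Percent change between the first and the last phase (assumed split).
percent_change <- function(trials) {
  phase1 <- adjusted_pumps(trials[1:10])
  phase3 <- adjusted_pumps(trials[21:30])
  100 * (phase3 - phase1) / phase1
}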
Covariate: impulsiveness
We additionally tested participants on the BIS-11 Barratt Impulsiveness Scale [START_REF] J H Patton | Factor structure of the Barratt impulsiveness scale[END_REF][START_REF] Matthew S Stanford | Fifty years of the Barratt Impulsiveness Scale: An update and review[END_REF] to capture their general tendencies to react impulsively. The scale is a 30-item questionnaire inquiring about various behaviors such as planning tasks, making decisions quickly, or buying things on impulse. We included it as a covariate as Lejuez et al. reported a correlation with the BART measure (r = 0.28 [START_REF] Lejuez | Evaluation of a behavioral measure of risk taking: the Balloon Analogue Risk Task (BART)[END_REF]).
Covariate: comfort
In light of our findings from experiment 1, we also included an extended questionnaire relating to both physical and mental comfort as well as fatigue (items 1-12 from the ISO 9241-9 device assessment questionnaire [START_REF] Douglas | Testing Pointing Device Performance and User Assessment with the ISO 9241, Part 9 Standard[END_REF]).
Participants
We recruited a total of 80 participants (42 women, 38 men, mean age 26) in two batches. Similar to experiment 1, we initially recruited 40 participants. A Bayes factor analysis [START_REF] Dienes | Using Bayes to get the most out of non-significant results[END_REF] at that point indicated that our data was not sensitive enough to draw any conclusions, and we decided to increase the total number of participants to 80. As is common in this type of experiment and as suggested by Carney et al. [START_REF] Dana R Carney | Review and Summary of Research on the Embodied Effects of Expansive (vs. Contractive) Nonverbal Displays[END_REF], we used a cover story to keep our research question hidden from participants. The study was advertised as a usability study for a touchscreen game. Participants were unaware of the different interface layouts since posture was manipulated between subjects, making it more difficult for them to guess the real purpose of the study.
Procedure
Similar to experiment 1, participants were alternately assigned in order of arrival to either the constrictive or the expansive condition. After signing an informed consent form for the "usability study", participants were introduced to the tabletop setup and asked to go through the on-screen instructions of the game. They were informed that the amount of their compensation would depend on the number of points they achieved in the game. They then pumped up 30 balloons. Once finished, they filled in a questionnaire on their level of comfort during the game (12 items), and the BIS-11 impulsivity test [START_REF] J H Patton | Factor structure of the Barratt impulsiveness scale[END_REF] (30 items). Finally, participants filled in a separate form to receive a cinema voucher for their participation. The value of the voucher was between 13€ and 20€, depending on how many points they accumulated in the game following the original BART protocol [START_REF] Lejuez | Evaluation of a behavioral measure of risk taking: the Balloon Analogue Risk Task (BART)[END_REF]. The entire experiment lasted about 20 min.
BAYESIAN ANALYSIS
We analyze our data using Bayesian estimation following the analysis steps described by Kruschke [START_REF] Kruschke | Doing Bayesian Data Analysis: A Tutorial with R, JAGS, and Stan[END_REF] for the robust analysis of metric data in nominal groups with weakly informed skeptical priors which help to avoid inflated effect sizes [START_REF] Kay | Researcher-Centered Design of Statistics: Why Bayesian Statistics Better Fit the Culture and Incentives of HCI[END_REF].
We reuse R code supplied by Kruschke [START_REF] Kruschke | Doing Bayesian Data Analysis: A Tutorial with R, JAGS, and Stan[END_REF] combined with the tidybayes package for R (by Matthew Kay, github.com/mjskay/tidybayes) to plot posterior distributions.
Our analysis setup can be seen as a Bayesian analog to a standard ANOVA analysis, yet without the prerequisites of an ANOVA, normality and equal variances, and with the possibility of accepting the null hypothesis if the posterior credibility for parameter ranges falls into a pre-defined region of practical equivalence (ROPE) [START_REF] Kruschke | Doing Bayesian Data Analysis: A Tutorial with R, JAGS, and Stan[END_REF]. For example, we could decide that we consider any difference between groups of less than ±5% as too small a difference to be of practical relevance. As we did not decide on a ROPE before data collection, we refrain from using this tool. Most importantly, the outcome of the analysis is a set of distributions for credible ranges of parameter estimates, which is more informative than dichotomous hypothesis testing [START_REF] Kay | Researcher-Centered Design of Statistics: Why Bayesian Statistics Better Fit the Culture and Incentives of HCI[END_REF].
Model
Figure 8 shows that for adjusted number of pumps, the distributions from both groups are rather similar and mostly symmetric. For percent change, the data is positively skewed. We describe the data with a t-distribution whose location and scale depend on the posture group:

y[i] ∼ T(ν, a_0 + a[x[i]], σ_y[x[i]])
Priors
We choose weakly informed skeptical priors. Since previous work on the BART reports large variances between studies [START_REF] Lauriola | Individual differences in risky decision making: A meta-analysis of sensation seeking and impulsivity with the balloon analogue risk task[END_REF], we scale the prior for the intercept a_0 based on our data and not on estimates from previous work. For the deflection parameters a[x[i]], we choose a null hypothesis of no difference between groups expressed through a normally distributed prior centered at 0 with individual standard deviations per group. For the scale parameters σ_y and σ_a we assume a gamma distribution with shape and rate parameters chosen such that its mode is SD(y)/2 and its standard deviation is 2 * SD(y) [46, page 560f]. The regularizing prior for degrees of freedom ν is a heavy-tailed exponential.
a_0 ∼ N(mean(y), (5 * SD(y))^2)
a[x[i]] ∼ N(0, σ_a)
σ_y ∼ G(β, γ)
σ_a ∼ G(β, γ)
ν ∼ Exp(1/30)
Fitting the model
We fit the model using Markov chain Monte Carlo (MCMC) sampling in JAGS [START_REF] Kruschke | Doing Bayesian Data Analysis: A Tutorial with R, JAGS, and Stan[END_REF]. We ran three chains with 10,000 steps burnin, thinning of 10 for a final chain length of 50,000. Convergence of chains was assessed through visual inspection of diagnostic plots such as trace plots, density plots, and autocorrelation plots as well as by checking that all parameters passed the Gelman-Rubin diagnostic [START_REF] Gelman | Inference from Iterative Simulation Using Multiple Sequences[END_REF]. The results presented in the next section are computed from the respective first chains.
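The listing below is a minimal reconstruction of how such a model can be specified and fitted with rjags; it follows Kruschke's formulation but is our own sketch rather than the exact analysis script, and the variable names (y, posture) as well as the iteration count retained after thinning are illustrative assumptions.

library(rjags)

model_string <- "
model {
  for (i in 1:N) {
    y[i] ~ dt(a0 + a[x[i]], pow(sigma_y[x[i]], -2), nu)  # JAGS dt uses precision
  }
  for (j in 1:2) {
    a[j] ~ dnorm(0, pow(sigma_a, -2))
    sigma_y[j] ~ dgamma(sh, ra)
  }
  a0 ~ dnorm(yMean, pow(5 * ySD, -2))
  sigma_a ~ dgamma(sh, ra)
  nu ~ dexp(1/30)
}"

# Gamma shape/rate so that the prior mode is SD(y)/2 and its sd is 2*SD(y).
m <- sd(y) / 2; s <- 2 * sd(y)
ra <- (m + sqrt(m^2 + 4 * s^2)) / (2 * s^2)
sh <- 1 + m * ra

data_list <- list(y = y, x = as.integer(posture), N = length(y),
                  yMean = mean(y), ySD = sd(y), sh = sh, ra = ra)

jm <- jags.model(textConnection(model_string), data = data_list, n.chains = 3)
update(jm, 10000)                                   # burn-in
samples <- coda.samples(jm, c("a0", "a", "sigma_y", "nu"),
                        n.iter = 5e5, thin = 10)    # 50,000 kept per chain (assumed)
gelman.diag(samples)                                # Gelman-Rubin diagnostic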
RESULTS
The outcome of our analysis is posterior distributions for the parameters in our model. These distributions indicate credible values for the parameters. One way of representing these is to plot the density of these distributions together with a 95% highest density interval (HDI) as so-called eyeplots [START_REF] Kay | Researcher-Centered Design of Statistics: Why Bayesian Statistics Better Fit the Culture and Incentives of HCI[END_REF] (as done in Figures 9 & 11). Any value within an HDI is more credible than all values outside an HDI. The width of an HDI is an indicator for the certainty of our beliefs: narrow intervals indicate high certainty in estimates whereas wide ones indicate uncertainty. Finally, not all values within an HDI are equally credible which is indicated through the density plot around the HDI: values in areas with higher density have a higher credibility than values in less dense areas.
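For instance, the HDIs reported below can be read off the pooled MCMC samples; the coda function HPDinterval computes highest posterior density intervals, which coincide with the HDIs we report for these unimodal posteriors.

library(coda)
pooled <- as.mcmc(do.call(rbind, samples))   # pool the three chains
HPDinterval(pooled, prob = 0.95)             # 95% HDI per monitored parameter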
We now present our results by first analyzing the posterior parameter estimates for our Bayesian model for both the standard BART measure and our percent change measure (summarized in Figure 9) and analyze contrasts pertaining to our research question as to whether incidental posture had an influence on people's behavior.
Posterior Parameter Estimates
Posterior distributions for our parameter estimates are summarized in Figure 9. The intercept, a_0, indicates the estimate for the overall mean across both groups, whereas the group-wise estimates, a_0 + a[x_i], show distributions for estimates of the means split by expansive-constrictive posture. The difference plots in the middle indicate whether a group differs from the overall mean, and the third plot to the right indicates the difference between the two groups.
Standard BART Measure
The results for the standard BART measure are shown in Figure 9-left. For the adjusted number of pumps we find a shared intercept a_0 of 42.6 with a [39.1, 46.0] 95% highest density interval (HDI). This value is within the upper range of previous studies using the BART which varied between 24.60 and 44.1 [START_REF] Lauriola | Individual differences in risky decision making: A meta-analysis of sensation seeking and impulsivity with the balloon analogue risk task[END_REF]. The estimates for the group-wise means for the two body postures are both close to the overall mean which is confirmed by the HDIs for the credible differences to the intercept as well as the difference between postures: point estimates are all within the range of [-1,1] from the intercept with HDIs smaller than [-5,5].
Percent Change Measure
The results for the percent change measure are illustrated in Figure 9-right. For the percent change measure we find an overall intercept a 0 of 24.7% [15.0, 34.7] 95% HDI which is below the average increase of 33% found by Lejuez et al. [START_REF] Lejuez | Evaluation of a behavioral measure of risk taking: the Balloon Analogue Risk Task (BART)[END_REF]. Similar to the standard BART measure we find very small differences for the two posture groups which are within [-0.5, 0.5] for the point estimates with 95% HDIs smaller than [-9,9]. Not only is the credible range for the estimates considerably larger than for the BART measure, but also the posterior distribution for the difference between the two postures is rather uncertain with a wide HDI spanning [-17.3, 15.8].
Effects and Interactions with Covariates
We captured comfort, impulsiveness [START_REF] J H Patton | Factor structure of the Barratt impulsiveness scale[END_REF], and gender as covariates. Both comfort and gender showed only negligible variance both across postures and within groups. We therefore only report the analysis for impulsiveness in more detail.
Impulsiveness
To test for possible influence of the impulsiveness covariate, we split participants into either "high risk takers" (BIS11 index >= 64) or "low risk takers" (BIS11 index < 64, where 64 is the median value within our sample population). This split leads to different profiles between the resulting four groups, as Figure 10 indicates. Body posture accounts for some of the uncertainty, but similarly for both conditions.
For high impulsiveness indices positive values are slightly more credible than negative values and vice versa.
It seems most credible that the interaction parameters crossing body posture and impulsiveness account for most of the observed differences. To analyze the data for this measure taking the covariate into account, we extend our previous one-factor model with a second factor including an interaction term as follows:
y[i] ∼ T(µ[i], σ_y[x_1[i], x_2[i]], ν)   with   µ[i] = a_0 + a_1[x_1[i]] + a_2[x_2[i]] + a_1a_2[x_1[i], x_2[i]]
Priors were chosen skeptically as detailed before.
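The bookkeeping behind µ[i], an overall intercept plus per-level deflections plus a cell-specific interaction term, can be illustrated with index arrays. The fragment below only shows how the terms combine for hypothetical factor codes and made-up parameter values; it is not the fitted model.

```python
import numpy as np

# Factor codes per observation: x1 = posture (0 expansive, 1 constrictive),
# x2 = impulsiveness group (0 low, 1 high). All values here are invented.
x1 = np.array([0, 0, 1, 1, 0, 1])
x2 = np.array([0, 1, 0, 1, 1, 0])

a0 = 24.7                                   # overall intercept (percent change)
a1 = np.array([+0.3, -0.3])                 # posture deflections
a2 = np.array([-1.0, +1.0])                 # impulsiveness deflections
a1a2 = np.array([[-4.0, +4.0],              # interaction deflections per cell
                 [+4.0, -4.0]])

mu = a0 + a1[x1] + a2[x2] + a1a2[x1, x2]    # cell mean for every observation
print(mu)
```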
Results. The results are summarized visually in Figure 11. We find again almost completely overlapping credible intervals for the posture factor centered within [-0.5,0.5] with HDIs smaller than [-10,10]. The impulsiveness factor also played a rather negligible role. Surprisingly, we find an interaction between posture and impulsiveness: it appears that body posture affected low risk-takers as predicted by Yap et al. whereas it seems to have reversed the effect for high risktakers. However, this part of the analysis was exploratory and a confirmatory study would be needed to verify this finding. Additionally, the two experimental groups were slightly unbalanced, that is, the BIS scores in the expansive group had a slightly lower mean than in the constrictive group (µ exp = 63.2, µ cons = 66.0, [-7.1, 1.5] 95% CI on difference).
DISCUSSION
We first summarize our findings and then discuss them in light of our research question and approach.
Summary of our findings
We ran two experiments designed to identify possible effects of incidental power poses on the sense of power (experiment 1) and on risk-taking behavior (experiment 2). While multiple replication attempts on explicitly elicited power poses had failed to show reliable effects for behavioral effects and only a small effect on felt power, it remained unclear whether the effects for incidental power poses, reported by Yap et al. [START_REF] Yap | The ergonomics of dishonesty: the effect of incidental posture on stealing, cheating, and traffic violations[END_REF] would replicate and whether incidental power poses are important to consider when designing user interfaces.
Experiment 1
The first experiment found a considerably larger effect for discomfort (d ≈ 1.5 [0.8, 2.3]) than for felt power (d ≈ 0.4 [-0.2, 1.1]). On its own the first experiment thus failed to find the effect expected based on Yap et al. [START_REF] Yap | The ergonomics of dishonesty: the effect of incidental posture on stealing, cheating, and traffic violations[END_REF], and the optimism for incidental power poses generated from that study is not supported by our findings. Our results are however consistent with a much smaller effect of d ≈ 0.2 as was recently suggested by a meta-analysis [START_REF] Frederik Gronau | A Bayesian Model-Averaged Meta-Analysis of the Power Pose Effect with Informed and Default Priors: The Case of Felt Power[END_REF]. Thus, we can at best conclude that a small effect might exist. In practice, the effect remains difficult to study as the small effect size requires large participant pools to reliably detect the effect. Such large participant pools are rather uncommon in HCI [START_REF] Caine | Local Standards for Sample Size at CHI[END_REF] with the exception of crowdsourced online experiments where the reduced experimental control might negatively effect the signal to noise ratio of an already small effect. Besides such practical considerations, the very large effect on (dis)comfort severely limits the range of acceptable expansive interfaces.
Experiment 2
The second experiment found that incidental body posture did not predict participants' behavior. As with experiment 1, this is consistent with the findings of the recent replications which elicited postures explicitly; none of those were able to detect an effect on behavior either. Again, a large effect as reported by Yap et al. [START_REF] Yap | The ergonomics of dishonesty: the effect of incidental posture on stealing, cheating, and traffic violations[END_REF] is highly unlikely in light of our results. We thus conclude that incidental power poses are unlikely to produce measurable differences in risk-taking behavior when tested across a diverse population. An exploratory analysis of interaction effects on the normalized measure suggests that an effect of body posture as predicted by Yap et al. could be observed within the group of participants showing low BIS-11 scores, while the effect was reversed for participants with high BIS-11 scores. Should this interaction replicate, then it would explain why overall no effect for the expansiveness of postures can be found. However, a confirmatory study verifying such an interaction is needed before one can draw definitive conclusions and possibly amend design guidelines.
Relevance of Power Poses for HCI
Overall we found an apparent null or at best negligible effect of body postures on behavior. For a user interface targeted at diverse populations, it thus seems futile to attempt to influence people's behavior through incidental postures. As a general take-away, we recommend avoiding both overly expansive as well as constrictive postures and to rather focus on factors such as general comfort or efficiency as appropriate to the purpose of an intended user interface.
In some previous work it was argued that a social interaction would be necessary to observe a power pose effect [START_REF] Cesario | Bodies in Context: Power Poses As a Computation of Action Possibility[END_REF][START_REF] Dana R Carney | Review and Summary of Research on the Embodied Effects of Expansive (vs. Contractive) Nonverbal Displays[END_REF].
While our experiments did not investigate this claim, recent work by Cesario and Johnson [START_REF] Cesario | Power Poseur: Bodily Expansiveness Does Not Matter in Dyadic Interactions[END_REF] provides evidence against this claim. It thus seems equally unlikely that power poses would be of concern for social user interfaces. However, our research only concerned power poses and tested downstream effects, that is, whether posture manipulations led to changes in behavior. We cannot draw any conclusions about the other direction: for example, posture seems to be indicative of a user's engagement or affective state [START_REF] Savva | Continuous Recognition of Player's Affective Body Expression as Dynamic Quality of Aesthetic Experience[END_REF].
Need for Replication
Concerning the interaction observed in our second experiment, we want to again caution that this finding needs to be replicated to confirm such an interaction. The analysis that brought forward this finding was exploratory, and our experiment included only 80 participants -more than usual inperson experiments in HCI [START_REF] Caine | Local Standards for Sample Size at CHI[END_REF] but less than the failed replications of explicitly elicited power poses. We suggest that replications could focus on specific, promising or important application areas where effects in different directions might have an either desirable or detrimental impact on people's lives, and participants should be screened for relevant personality traits, such as impulsiveness or the "the big-five" [START_REF] Lewis R Goldberg | An alternative" description of personality": the big-five factor structure[END_REF], to examine interaction effects with these covariates.
Replication is still not very common within HCI [START_REF] Kasper Hornbaek | Is Once Enough?: On the Extent and Content of Replications in Human-Computer Interaction[END_REF] despite various efforts to encourage more replications such as the repliCHI panel and workshops between 2011 and 2014 (see www.replichi.com for details and reports) as well as the "repliCHI badge" given to some CHI articles published at CHI'13/14. Original results are generally higher valued than confirmations or refutations of existing knowledge. A possible approach to encourage more replications could be through special issues of HCI journals. For example, the (Psychology) journal that published the special issue on power poses took a progressive approach to encourage good research practices, such as preregistered studies [START_REF] Cockburn | HARK No More: On the Preregistration of CHI Experiments[END_REF] or replications, by moving the review process before the collection of data, thereby removing possible biases introduced by a study's outcomes [START_REF] Kai | How can preregistration contribute to research in our field?[END_REF]: only the introduction, background, study designs, and planned analyses are sent in for review, possibly revised upon reviewer feedback, and only once approved, the study is actually executed and already guaranteed to be published, irrespective of its findings. We believe such an approach could be equally applied in HCI to work towards a conclusive evidence base for research questions the community deems interesting and important.
Reflections on our Approach
Power poses are an example of a construct from Psychology that has received extensive scientific and public coverage; both soon after publication and once the results of the studies were challenged. Transferring this construct to HCI raised several challenges: (i) practical relevance: identifying which areas of HCI could be impacted by this construct, (ii) ecological validity: operationalizing the construct for HCI such that the resulting manipulations and tasks resemble "realistic" user interfaces which could be encountered outside the lab, and (iii) respecting the boundary conditions within which the construct can be evoked.
Concerning (i), the literature on incidental power poses provides a rich set of behaviors such as cheating and risk-taking.
We gave examples in the background section for areas relevant to HCI -education and risky decision-making -in which an effect of power poses would be pivotal to understand. Concerning (ii) and (iii), the challenges were less easy to address. Carney et al. argued in their summary of past research on explicitly elicited postures [START_REF] Dana R Carney | Review and Summary of Research on the Embodied Effects of Expansive (vs. Contractive) Nonverbal Displays[END_REF] that replications might fail if the postures are not replicated closely enough. The experiments by Yap et al. [START_REF] Yap | The ergonomics of dishonesty: the effect of incidental posture on stealing, cheating, and traffic violations[END_REF] did not carefully control the postures but only modified the environment. So it was unclear whether we would need to consider a wide set of gestures and poses and how to find out which of those instantiated the construct well. We addressed these challenges by considering the relevance for HCI as the most important experiment design criterion: since an interface designer has very little influence on users' posture beyond the positioning of interface elements, we decided to consider power poses as irrelevant for HCI if they require very specific positioning of users.
CONCLUSION
We investigated whether incidental postures, in particular constrictive and expansive postures, influence how users behave in human-computer interaction. The literature raised the expectation that such postures might set about cognitive and physiological reactions, most famously from findings by Carney et al. [START_REF] Dana R Carney | Power posing: brief nonverbal displays affect neuroendocrine levels and risk tolerance[END_REF] as well as Yap et al. [START_REF] Yap | The ergonomics of dishonesty: the effect of incidental posture on stealing, cheating, and traffic violations[END_REF]. While the findings from Carney et al. on explicitly elicited power poses did not hold up to replications, the experiments by Yap et al. had so far not been replicated. We reported findings from two experiments which conceptually replicated experiments on incidental power poses in an HCI context. We observed an at best small effect for felt power and an at best negligible effect for a behavioral measure for risk-taking. Most surprisingly, an exploratory analysis suggested that an interaction with a personality trait, impulsiveness, might reverse the hypothesized effect for posture manipulations. However, replications controlling for this interaction are needed to determine if this interaction reliably replicates and thus poses a relevant design consideration for HCI. Overall we conclude that incidental power poses are unlikely to be relevant for the design of human-computer interfaces and that factors such as comfort play a much more important role.
To support an open research culture and the possibility to replicate our work or to reanalyze our data, we share all experimental data and software as well as all analysis scripts at github.com/yvonne-jansen/posture.
Figure 2. Left: an air traffic controller workstation. Photo courtesy: US Navy 100714-N-5574R-003 CC-BY 2.0. Right: a Bloomberg terminal featuring a double screen controlled by keyboard and mouse. Photo courtesy: Flickr user Travis Wise CC-BY 2.0.
Figure 3. Self-reported sense of power. Error bars indicate 95% bootstrap confidence intervals.
Figure 4. Self-reported sense of "feeling in charge". Error bars indicate 95% bootstrap confidence intervals.
Figure 5. Level of discomfort while performing the task. Error bars indicate 95% bootstrapped confidence intervals.
Figure 6. Extended Bayesian meta-analysis from Gronau et al. estimating effect sizes of felt power. Individual studies show fixed effect estimates, meta analysis items indicate mixed model estimates. The two bottom items include our data. Error bars indicate 95% highest density intervals.
Figure 7. Screenshots of our implementation of the BART showing (A) the initial and (B) the maximum size of the balloon in the constrictive condition as well as (C) the initial size and (D) the explosion feedback in the expansive condition. The circles represent the buttons used to pump up the balloon.
Figure 8. Density plots of the raw data for both measures. We model our data through a robust linear model using as likelihood a heteroskedastic scaled and shifted t distribution with degrees of freedom ν [46, page 573ff]. We assume our data to have a common intercept a 0 from which groups may differ, captured by parameter a[x[i]] where x[i] indicates group membership. The model assumes independent scale parameters per group σ y [x[i]].
Figure 9. Eye plots of the posterior distributions of parameters with 95% HDI (highest density interval). Left: parameter estimates for the standard BART measure; right: parameter estimates for the percent change measure.
Figure 10. Density plots of the raw data for both measures with data split by condition and impulsiveness covariate.
the two factors combined is 25.2%[15.2, 35.7].
Figure 11. Summary of our two-factor analysis for percent change indicating the highest density intervals for the different components of the extended linear model.
As Figure 10 indicates, the split yields rather similar profiles across groups for the adjusted # of pumps measure. For the percent change measure, however, the split separates groups with seemingly different profiles.
This experiment ran in
before data from failed replications were available. We chose d = 0.73 based on Yap et al.[START_REF] Yap | The ergonomics of dishonesty: the effect of incidental posture on stealing, cheating, and traffic violations[END_REF].[START_REF] Bezerianos | The Vacuum: Facilitating the Manipulation of Distant Objects[END_REF] We used the BCa method which corrects the bootstrap distribution for bias (skew) and acceleration (nonconstant variance)[START_REF] Thomas | Bootstrap Confidence Intervals[END_REF].
ACKNOWLEDGMENTS
This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement 648785). We thank Dan Avram and Peter Meyer for their support in running the experiments, Sebastian Boring, Gilles Bailly, Emmanouil Giannisakis, Antti Oulasvirta, and our reviewers for feedback on various drafts, and Pierre Dragicevic for detailed comments. |
01758581 | en | [
"chim.mate"
] | 2024/03/05 22:32:10 | 2018 | https://hal.science/hal-01758581/file/2018-055.pdf | Glenna L Drisko
Christophe Gatel
Pier-Francesco Fazzini
Alfonso Ibarra
Stefanos Mourdikoudis
Vincent Bley
Katia Fajerwerg
Pierre Fau
Myrtil Kahn
Air-stable Anisotropic Monocrystalline Nickel Nanowires Characterized using Electron Holography
g Laboratoire plasma et conversion d'énergie, UMR 5213, Université de Toulouse, CNRS, Toulouse France.
Abstract: Nickel is capable of discharging electric and magnetic shocks in aerospace materials thanks to its conductivity and magnetism. Nickel nanowires are especially desirable for such an application as electronic percolation can be achieved without significantly increasing the weight of the composite material. In this work, single-crystal nickel nanowires possessing a homogeneous magnetic field are produced via a metal-organic precursor decomposition synthesis in solution.
The nickel wires are 20 nm in width and 1-2 μm in length. The high anisotropy is attained through a combination of preferential crystal growth in the <100> direction and surfactant templating using hexadecylamine and stearic acid. The organic template ligands protect the nickel from oxidation, even after months of exposure to ambient conditions. These materials were studied using electron holography to characterize their magnetic properties. These thin nanowires display homogeneous ferromagnetism with a magnetic saturation (517±80 emu cm -3 ), which is nearly equivalent to bulk nickel (557 emu cm -3 ). Nickel nanowires were incorporated into carbon composite test pieces and were shown to dramatically improve the electric discharge properties of the composite material.

KEYWORDS. Electron holography, Electric discharge, Ligand stabilization, Magnetism, Nanowires, Nickel

Lightning can and does strike the same place twice. In the case of airplanes, lightning hits each plane on average once per year and enters almost exclusively through the nose. Spacecraft are currently built of carbon fiber-reinforced composites, a material that is lightweight and has desirable mechanical properties, however which suffers from low electrical conductivity. Damage can be caused by low attenuation of electromagnetic radiation and electrostatic discharge (i.e. lightning strikes), 1 creating a security risk in the spacecraft and requiring expensive repairs.
Typically, aluminum or copper are incorporated into the carbon fiber-reinforced composites in order to quickly dissipate the charge. However, copper suffers from oxidation and aluminum from galvanic corrosion. Nickel can effectively dissipate concentrated magnetic and electrical fields, it is resistant to extensive oxidation thanks to the natural formation of a passivating oxide, it has a reasonably low density and is comparatively inexpensive. Conductive Composites sells nickel nanostrands TM for aeronautics applications, which have been proven to effectively shield composites from electromagnetic interference and electrostatic discharge-induced damage even after 2 million cycles of fatigue loading. 1 Nickel nanostructures have been synthesized in a variety of shapes and sizes by employing several chemical protocols, [2][3][4][5] yielding nanomaterials with various physical properties. However, this current report is the first solution based synthesis of individual monocrystalline nanowires.
Previously, monocrystalline nickel nanowires have been created via electrodeposition using porous templates, with the smallest nanowire diameter produced to date being 50 nm. 6 A similar technique has been used to produce Au/Ni composite wires using a porous template with a 40 nm diameter. 7 Solution chemistry protocols have produced isotropic nanoparticles, 8 short nanorods, 4 a variety of other structures 2 and polycrystalline nanowires. 9,10 Monocrystallinity is important because conductivity is related to the number of grain boundaries, as grain boundaries are a barrier to electrical transport. 11 Moreover, a protective layer of nickel oxide forms typically upon exposure to air. Oxidized nickel can be either non-magnetic or antiferromagnetic, radically decreasing the magnetization values compared to those of pure fccnickel. 12 Long monocrystalline wires of metallic nickel are ideal materials for applications that require high electrical conductivity and magnetization saturation.
We report the metal-organic synthesis of highly anisotropic nickel nanowires having no grain boundaries. The Ni nanowires are obtained through the reduction of a nickel stearate complex using hydrogen gas at 150 °C, in the presence of hexadecylamine and stearic acid (experimental details in SI). The nanowires grow along a particular crystallographic axis (i.e. c), forming a singlecrystalline nanowire for the first time using solution chemistry techniques. Using the appropriate relative concentrations of ligand and nickel precursor allowed us to increase the length of the nanowires and to transition away from nanorod-based sea urchin structures. We investigate the magnetic properties of these anisotropic structures using off-axis electron holography and discuss the correlation of such properties with the nanowire structure. The organic ligand layers capping the nickel nanowires protected them from oxidation.
The nickel nanostructures appear either as sea urchin-like structures or as highly anisotropic nanowires, depending on the synthesis conditions (Figure 1, a movie showing the tomography can be found as SI). Anisotropic structures can result from templating or from a difference in the rate of crystal growth along a certain axis. A difference in crystallographic growth rate can occur to minimize the surface energy 11 or from capping certain facets with surfactants, ions or solvent. [14][15][16] The sea urchin-like nanostructures are collections of individual nanowires growing from a single nucleus. 17 The predominance of a wire versus urchin morphology can be explained using nucleation and growth kinetics, as has been seen in CoNi nano-objects. 18 High nucleation and growth rates led to CoNi nanowires, where slow nucleation and fast growth led to a sea urchin morphology. The same likely applies to the nickel nanostructures presented here. When the nickel precursor and ligand were highly diluted, nucleation was favored over growth and spherical particles were produced (Figure 2). By decreasing the quantity of solvent, growth was favored over nucleation, producing a dense sea urchin nanostructure (Figure 2b). Upon further concentrating the solution, a less highly branched nanostructure was observed (Figure 2c), which cannot be explained with nucleation and growth kinetics, but rather to surfactant organization and templating effects. The stearic acid ligand played a major role in the formation of anisotropic nanowires. In the absence of stearic acid, spherical particles were produced (Figure 1b). By increasing the concentration of stearic acid, the anisotropy of the nanoparticles increased (Fig. 1cd). In this later case, branched nanowires were still present, but unbranched nanowires were commonly found. The nanowires were about 20 nm in width and up to 2 μm in length. Thus, both crystal growth kinetics and surfactant templating seem responsible for the nickel nanostructure morphology. The nickel nanowires terminate with a square pyramid tip, with faces of (111) crystallographic orientation, the natural extension of the <001> crystallographic lattice. The sea urchin shape with capped tips has been previously observed in CoNi nanostructures. 17,18 The capped tips can be explained using growth kinetics. 18 Towards the end of the synthesis, the nickel precursor is nearly consumed and the consequential drop in its concentration slows the particle growth rate significantly. Similarly for nickel nanoparticle growth, at the end of the reaction the extremely dilute conditions allow simultaneous growth along other axes, thus generating an arrowhead.
The nickel nanowires are monocrystalline, thus they grew continuously from a nucleus until the nickel precursor depleted (Figure 3a). By measuring the interplanar distances in the diffraction pattern, we determined that the nickel nanorods grew in the <001> direction, and thus the faces present (100) or (111) crystallographic planes. These planes minimize the surface energy. 17 No grain boundaries, crystallographic defects or nickel oxide are visible in the microscopy images (Figure 3a). Normally nickel forms a 2 nm passivating nickel oxide shell upon exposure to air. 20 If a 2 nm crystalline oxide shell were present, the diffraction pattern would show differences in the interplane distances between Ni and NiO (Figure 3b inset). The diffraction pattern, which is characteristic of a single crystal lattice, was obtained for the 1085 nm long segment of the nanowire shown in Figure 3b and is representative of the other nanowires analyzed. Thus, the representative TEM images and the associated diffraction pattern prove the nanowires are monocrystalline, and lack a NiO passivating layer. It seems that the organic ligands used are highly effective at protecting the surface from oxidation. Using TGA, it was found that the organic ligands composed 6.8 wt% of the sample. Taking the surface area of the Ni nanowires as 22.6 m 2 g -1 and assuming that the hexadecyl amine and the stearic acid form ion pairs, 22 the surface coverage is 6.9 ligand molecules/nm 2 . This is an extremely high charge, indicating that there is probably two or more layers of ligands protecting the nanowire. SQUID measurements show that the material presents ferromagnetic properties (Figure 3c). The saturation magnetization at room temperature is equal to 460 emu cm -3 , slightly lower than bulk nickel (557 emu cm -3 ). This lower value may be due to the coordination of the stearic acid to the nickel surface. 4 SQUID measurements of the magnetism at 2 K and ambient temperature also confirm the absence of oxidation. An oxide layer around nickel modifies its magnetic properties. 22 Nickel is ferromagnetic, where nickel oxide is antiferromagnetic. A thin shell of NiO around a Ni core generates a temperature dependent exchange bias, observed as a horizontal shift of the magnetic hysteresis loop. 20,23 As the hysteresis loop corresponding to the field-cooled measurement is not shifted along the H-axis, there is no detectable amount of NiO around the Ni nanowires. The nickel nanowires show no evidence of surface oxidation even months after preparation, stored under ambient conditions. Magnetic and electric maps can be obtained by electron holography, which measures the phase shift of the electron beam after interaction with the electromagnetic field of the sample. Electron holography thus provides the high spatial resolution known to electron microscopy and a quantitative analysis of the local magnetic configuration (Figure 4). The exact magnetic configuration can thus be correlated to the structural properties of a nanostructure, such as the crystal structure, grain boundaries, geometry, and defects. Electron holography measurements can be used to reconstruct the 3D geometry of the nano-object. In our case, these correspond to what was observed with electron tomography images (movie in supplementary information). Electron holography proved that the nickel nanowires are ferromagnetic with a magnetization laid along the nanowire axis due to shape anisotropy (Figure 4). 
24,25 An off-axis electron holography experiment in the Lorentz mode was performed using a Hitachi HF 3300C microscope operating at 300 kV and achieving a 0.5 nm spatial resolution in a field-free magnetic environment (less than 10 -3 T). All the holograms were recorded in a 2 biprism configuration and the fringe spacing was set to 1.1 nm in this study. Phase and amplitude images were extracted from the holograms using homemade software with a spatial resolution of 4 nm.
From the measured magnetic phase shift of 0.3 rad, we obtain a Ni magnetization of about 0.65±0.1 T, i.e. 517±80 emu cm -3 in agreement with values obtained from SQUID. The whole nanowire demonstrated a homogeneous magnetism, although some nanowires exhibited domain walls where the magnetism changed direction. The domain walls show 180° angular displacement and may have been nucleated by saturating the sample during observation. The domain walls were found to exist at the thinnest part of the nanowire, bearing in mind that the nanowire is monocrystalline, but slightly irregular in width. The domain walls were in the form of pure transverse walls, with no magnetization induction observed in the very center of the domain wall.
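Two of the numbers quoted above can be checked with a short back-of-the-envelope script: the ~6.9 ligand molecules/nm2 deduced from the 6.8 wt% TGA loss and the 22.6 m2 g-1 surface area, and the equivalence between 0.65 T and ~517 emu cm-3. In the sketch below, the ligand molar masses, the assumption that the surface area is expressed per gram of sample, and the 20 nm circular cross-section used to relate the 0.3 rad phase step to the induction are our own assumptions, not values stated in the text.

```python
import numpy as np

# --- Ligand surface coverage from TGA (assumed molar masses, ion pairs,
# surface area taken per gram of sample) ---
N_A = 6.022e23
w_organic = 0.068                      # TGA organic fraction per gram of sample
M_pair = 241.46 + 284.48               # g/mol, hexadecylamine + stearic acid ion pair
molecules = 2 * (w_organic / M_pair) * N_A      # two ligand molecules per pair
area_nm2 = 22.6 * 1e18                 # 22.6 m2/g expressed in nm2 per gram
print("coverage ~ %.1f molecules/nm2" % (molecules / area_nm2))     # ~6.9

# --- Magnetization: 0.65 T in emu/cm3 (M = B/mu0; 1 emu/cm3 = 1e3 A/m) ---
mu0 = 4e-7 * np.pi
B = 0.65                               # T
print("magnetization ~ %.0f emu/cm3" % (B / mu0 / 1e3))              # ~517

# --- Consistency of the 0.3 rad phase step with a ~20 nm wire, using the
# Aharonov-Bohm relation delta_phi = (e/hbar) * enclosed flux; the circular
# cross-section is an assumption for illustration only ---
e, hbar = 1.602e-19, 1.055e-34
area_m2 = np.pi * (10e-9) ** 2         # radius 10 nm
print("phase step ~ %.2f rad" % (e / hbar * B * area_m2))             # ~0.3
```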
At this center the magnetization is either parallel to the +Z or -Z direction, as the electron phase shift is only sensitive to the components perpendicular to the electron beam. Vortex states are absent, even in the nanowire arrowheads. The anisotropy of the nanowire is known to cause spin alignment in plane with the wire axis, creating a uniform magnetic state. 26 To study the electric dissipation of the nickel nanowires, they were dispersed in a polyamide epoxy resin at 0.5, 1 and 5 wt% relative to the quantity of resin, and then infiltrated into a carbon tissue (see supporting information for details). This composite was cured at 80 °C under vacuum, and then cut into test pieces using micromilling (Figure 5b, Figure S1). Potential decay measurements (Figure 5a, Figure S3) were performed on these test pieces to study how an applied surface charge is dissipated by the surface of the material. We can see in Figure 5a that the charge dissipation occurs much more quickly when nickel nanowires are incorporated into the resin relative to the non-doped carbon composite, which has a much higher concentration of electrical charge. The quantity of charge at the beginning of the measurement is already inferior for the nickel loaded samples, as the charge was largely dissipated during the charging phase. The infiltration of the nickel-charged resin into the tissue was not perfectly homogeneous, as can be seen in Figure 5c, which led to inhomogeneities in the dissipation measurements. However, the measured trend was constant: with 5 wt% nickel nanowire loading, the dissipation was much more efficient and complete within around 1 min. In conclusion, we report the first solution-based synthesis of monocrystalline nickel nanowires.
The nickel nanowires are 20 nm in diameter and up to 2 μm in length, and are synthesized via the decomposition of metal-organic compounds under air-free and water-free conditions. These nanostructures nucleated and then grew progressively in the <100> direction, where the anisotropy results from a combination of crystal growth kinetics and surfactant templating. There are no grain boundaries within the nanostructure. However, the nanowires are not perfectly homogeneous in width and the thinner portions are susceptible to the formation of magnetic domain walls. Further experiments will show whether the magnetic domain wall was nucleated during observation or whether it was naturally present within the wire. The intensity of the magnetic response is constant and does not show any vortexes. We are currently studying the aging properties of these nickel nanowires in the aerospatial carbon composite test pieces, to study the dissipation behavior upon electric and magnetic shocks with time and under temperature and humidity variations. Supporting Information. A description of the experimental methods used for nickel nanowire growth and characterization using microscopy, magnetic measurements and electron holography experiments, the fabrication of test pieces and measurement of their electric dissipation (PDF). A movie showing the 3D tomography of a nickel nanowire (movie clip). The following files are available free of charge.
Corresponding Author
* Glenna Drisko, ICMCB, glenna.drisko@icmcb.cnrs.fr; Myrtil Kahn, LCC, myrtil.kahn@lcctoulouse.fr
Figure 1. (a) TEM tomographic image of a nickel nanowire with shadows projected in the xy, xz
Figure 2. Nickel nanostructures prepared from nickel stearate dissolved in anisole at a
Figure 3. (a) High resolution transmission electron microscopy image showing the continuity of
Figure 4. Electron holography of a single, isolated nickel nanowire showing: (a) The mean inner
Figure 5. (a) The measured resistance of the carbon composite with variable mass loading of
Acknowledgment
Stéphanie Seyrac, Jean-François Meunier and Lionel Rechignat provided technical support. Didier Falandry from CRITT mécanique et composites Toulouse prepared composite carbon samples.
Funding Sources
Financial support was provided by the RTRA Sciences et Technologies pour l'Aéronautique et l'Espace. GLD was supported while writing this manuscript by the LabEx AMADEus (ANR-10-LABX-42) in the framework of IdEx Bordeaux (ANR-10-IDEX-03-02); the Investissements d'Avenir program is run by the French Agence Nationale de la Recherche. A.I. thanks the Gobierno de Aragón (Grant E81) and Fondo Social Europeo. |
00175884 | en | [
"phys.cond.cm-gen"
] | 2024/03/05 22:32:10 | 2007 | https://hal.science/hal-00175884/file/HIREL_2007.pdf | Pierre Hirel
Sandrine Brochard
Laurent Pizzagalli
Pierre Beauchamp
Effects of temperature and surface step on the incipient plasticity in strained aluminium studied by atomistic simulations
Keywords: computer simulation, aluminium, surfaces & interfaces, dislocations, nucleation
Effects of temperature and surface step on the incipient plasticity in strained aluminium studied by atomistic simulations
The study of mechanical properties takes a new and more critical aspect when applied to nanostructured materials. While plasticity in bulk systems is related to dislocations multiplying from pre-existing defects, such as Franck-Read sources [START_REF] Hirth | Theory of dislocations[END_REF], nanostructured materials are too small for such sources to operate, and their plasticity is more likely initiated by dislocations nucleation from surfaces and interfaces [START_REF] Albrecht | Surface ripples, crosshatch pattern, and dislocation formation : cooperating mechanisms in lattice mismatch relaxation[END_REF][START_REF] Xu | Homogeneous nucleation of dislocation loops under stress in perfect crystals[END_REF][START_REF] Brochard | Grilhé Dislocation nucleation from surface steps: atomistic simulation in aluminum[END_REF][START_REF] Godet | Theoretical study of dislocation nucleation from simple surface defects in semiconductors[END_REF]. In particular, nucleation from grain boundaries is of great interest for the understanding of elementary mechanisms occuring in work hardening of nano-grained materials [START_REF] Spearot | Nucleation of dislocations from [001] bicrystal interfaces in aluminum[END_REF][START_REF] Swygenhoven | Atomic mechanism for dislocation emission from nanosized grain boundaries[END_REF][START_REF] Yamakov | Length-scale effects in the nucleation of extended dislocations in nanocrystalline al by molecular dynamics simulation[END_REF][START_REF] Yamakov | Dislocation processes in the deformation of nanocrystalline aluminium by moleculardynamics simulation[END_REF]. The mechanisms involving the nucleation of dislocations from crack tips are also of great importance to account for brittle to ductile transition in semiconductors [START_REF] Cleri | Atomic-scale mechanism of cracktip plasticity : dislocation nucleation and crack-tip shielding[END_REF][START_REF] Zhou | Large-scale molecular dynamics simulations of three-dimensional ductile failure[END_REF][START_REF] Zhu | Atomistic study of dislocation loop emission from a crack tip[END_REF].
In epitaxially-grown thin films, misfit induces a strain and can lead to the formation of dislocations at interfaces [START_REF] Ernst | Interface dislocations forming during epitaxial growth of gesi on (111) si substrates at high temperatures[END_REF][START_REF] Wu | The first stage of stress relaxation in tensile strained in 1-x ga x as 1-y p y films[END_REF][START_REF] Trushin | Surface instability and dislocation nucleation in strained epitaxial layers[END_REF]. The presence of defects in a surface, such as steps, terraces or hillocks, can also initiate plasticity [START_REF] Xu | Analysis of dislocation nucleation from a crystal surface based on the peierls-nabarro dislocation model[END_REF]. In particular, experimental and theoretical investigations have established that stress concentration near surface steps facilitates the nucleation of dislocations from these sites [START_REF] Brochard | Grilhé Stress concentration near a surface step and shear localization[END_REF][START_REF] Zimmerman | Surface step effects on nanoindentation[END_REF]. Dislocations formation in such nanostructures changes their mechanical, electrical, and optical properties, and then may have a dramatic effect on the behaviour of electronic devices [START_REF] Carrasco | Characterizing and controlling surface defects[END_REF]. Hence, the understanding of the mechanisms initiating the formation of dislocations in these nanostructures is of high importance.
Since these mechanisms occur at small spatial and temporal scales, which are difficult to reach experimentally, atomistic simulations are well suited for their study. Face-centered cubic metals are first-choice model materials, because of their ductile behaviour at low temperatures, involving a low thermal activation energies. In addition, the development of semi-empirical potentials for metals has made possible the modelling of large systems, and the accurate reproduction of defects energies and dislocation cores structures. Aluminium is used here as a model material.
In this study we investigate the first stages of plasticity in aluminum f.c.c. slabs by molecular dynamics simulations. Evidence of the role of temperature in the elastic limit reduction and in the nucleation of dislocation half-loops from surface steps is obtained. Steps in real crystals are rarely straight, and it has been proposed that a notch or kinked-step would initiate the nucleation of a dislocation half-loop [START_REF] Pirouz | Partial dislocation formation in semiconductors: structure, properties and their role in strain relaxation[END_REF][START_REF] Edirisinghe | Relaxation mechanisms in single in x ga 1-x as epilayers grown on misoriented gaas( 1 11)b substrates[END_REF]. This is investigated here by comparing the plastic events obtained from either straight and non-straight steps.
Our model consists of a f.c.c. monocrystal, with two {100} free surfaces (Fig. 1). Periodic boundary conditions are applied along the two other directions, X= [0 11] and Z= [011]. On one {100} surface, two opposite, monoatomic steps are built by removing atoms. They lie along Z, which is the intersection between a {111} plane and the surface. Such a geometry is therefore well suited to study glide events occuring in {111} planes. We investigate tensile stress orthogonal to steps. In this case, Schmid analysis reveals that Shockley partials with a Burgers vector orthogonal to the surface step are predicted to be activated in {111} planes in which glide reduces the steps height [START_REF] Brochard | Grilhé Dislocation nucleation from surface steps: atomistic simulation in aluminum[END_REF]. In some calculations, consecutive atoms have been removed in the step edge, forming a notch (Fig. 1), for investigating the effect of step irregularities on the plasticity. Various crystal dimensions have been considered, from 24 × 16 × 10 (3680 atoms), up to 60 × 40 × 60 (142800 atoms). The latter crystal size was shown to be large enough to have no influence on the results.
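To fix ideas, the stepped slab can be mocked up by generating an fcc lattice, keeping a finite thickness along the surface normal, and deleting the topmost atomic layer over half the box. The NumPy sketch below is purely illustrative: the cell counts, lattice constant and step position are arbitrary, and the axes are plain cubic directions rather than the rotated [011]-type axes used in the actual simulations.

```python
import numpy as np

a0 = 4.05                                   # Al lattice constant (angstrom)
basis = np.array([[0.0, 0.0, 0.0], [0.5, 0.5, 0.0],
                  [0.5, 0.0, 0.5], [0.0, 0.5, 0.5]])
nx, ny, nz = 10, 8, 6                       # number of cubic cells (illustration only)

cells = np.array([[i, j, k] for i in range(nx)
                            for j in range(ny)
                            for k in range(nz)], dtype=float)
atoms = (cells[:, None, :] + basis[None, :, :]).reshape(-1, 3) * a0

# The slab is finite along y (two free surfaces, periodicity along x and z left
# to the MD code); a monoatomic step is carved by removing the topmost atomic
# layer over half the box along x.
top = atoms[:, 1] > (ny - 0.6) * a0          # atoms in the topmost layer
left_half = atoms[:, 0] < 0.5 * nx * a0
atoms = atoms[~(top & left_half)]

print(len(atoms), "atoms in the stepped slab")
```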
Interactions between aluminum atoms are described by an embedded atom method (EAM) potential, fitted on experimental values of cohesive energy, elastic moduli, vacancy formation and intrinsic stacking fault energies [START_REF] Aslanides | Atomistic study of dislocation cores in aluminum and copper[END_REF]. It is well suited for our investigations since it correctly reproduces the dislocations core structures. Fig. 1. System used in simulations, with periodic boundary conditions along X and Z, and free {100} surfaces. The {111} glide planes passing through the steps edges are drawn (dashed lines). Here, a notch is built on the right-side step.
Without temperature, the system energy is minimized using a conjugate-gradient algorithm. The relaxation is stopped when all forces fall below 6.24 × 10⁻⁶ eV Å⁻¹. Then the crystal is elongated by 1% of its original length along the X direction, i.e. perpendicular to the step. The corresponding strain is applied along the Z direction, according to the isotropic Poisson's ratio of aluminum (0.35). Use of isotropic elasticity theory is justified here by the very low anisotropy coefficient of this material: A = 2C44/(C11 - C12) = 1.07 (EAM potential used here); 1.22 (experiments [START_REF] Zener | Elasticity and Anelasticity of Metals[END_REF][START_REF] Thomas | Third-order elastic constants of aluminum[END_REF]). After deformation, a new energy minimization is performed, and this process is repeated until a plastic event, such as the nucleation of a dislocation, is observed. The occurrence of such an event defines the elastic limit of the material at 0K. At finite temperature, molecular dynamics simulations are performed with the xMD code [START_REF] Rifkin | Xmd molecular dynamics program[END_REF], using the same EAM potential. Temperature is introduced by initially assigning an appropriate Maxwell-Boltzmann distribution of atomic velocities, and maintained by smooth rescaling at each dynamics step. The time step is 4 × 10⁻¹⁵ s, small enough to produce no energy drift during a 300K run. After 5000 steps, i.e. 20 ps, the crystal is deformed by 1%, similarly to what is done at 0K, and then the simulation is continued. If a nucleation event occurs, the simulation is restarted from a previously saved state, using a lower 0.1% deformation increment.
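The temperature control described above, a Maxwell-Boltzmann draw of initial velocities followed by smooth rescaling of the kinetic temperature at every step, can be summarized in a few NumPy lines. This is only a schematic stand-in for what the MD code does internally; the atom count, mass conversion and rescaling strength are illustrative choices.

```python
import numpy as np

kB = 8.617e-5                    # Boltzmann constant, eV/K
m_al = 26.98 * 1.0364e-4         # Al mass in eV * ps^2 / A^2 (one common unit choice)
N, T_target = 3680, 300.0

rng = np.random.default_rng(0)
# Maxwell-Boltzmann: each velocity component is Gaussian with variance kB*T/m.
v = rng.normal(0.0, np.sqrt(kB * T_target / m_al), size=(N, 3))
v -= v.mean(axis=0)              # remove centre-of-mass drift

def instantaneous_T(v):
    ke = 0.5 * m_al * np.sum(v ** 2)
    return 2.0 * ke / (3.0 * N * kB)

def rescale(v, strength=0.1):
    """Smooth rescaling: pull the kinetic temperature only part of the way back
    to the target at every step instead of resetting it abruptly."""
    lam = np.sqrt(1.0 + strength * (T_target / instantaneous_T(v) - 1.0))
    return v * lam

v = rescale(v)
print("T after one rescaling step: %.1f K" % instantaneous_T(v))
```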
To visualize formed defects, atoms are colored as a function of a centrosymmetry criterion [START_REF] Li | Atomeye : An efficient atomistic configuration viewer[END_REF]: atoms not in a perfect f.c.c. environment, ie. atoms on surfaces, in dislocation cores and stacking faults, can then be easily distinguished. In case of dislocation formation, the core position and Burgers vector are determined by computing the relative displacements of atoms in the glide plane. These displacements are then normalized to the edge and screw components of a perfect dislocation. At 0K, the deformation is found to be purely elastic up to an elongation of 10%. Then a significant decrease of the total energy suggests an important atomic reorganisation. Crystal vizualisation reveals the presence of defects located in {111} planes passing through step edges, and a step height reduction by 2/3 (Fig. 2). Atomic displacements analysis in these planes shows that plasticity has occured by the nucleation of dislocations, with Burgers vectors orthogonal to the steps and with a magnitude corresponding to a 90 • partial. This is consistent with the 2/3 reduction of the steps height. The dislocations are straight, the strain being homogeneous all along the steps, and intrinsic stacking faults are left behind. The formation of dislocations from a surface step has already been investigated from 0K simulations, using quasi-bidimensional aluminum crystals [START_REF] Brochard | Grilhé Dislocation nucleation from surface steps: atomistic simulation in aluminum[END_REF]. It has been shown that straight 90 • Shockley partials nucleate from the steps. However, the small dimension of the step line did not allow the bending of dislocations. Here, although this restriction has been removed by considering larger crystals (up to 90 atomic planes along Z), only straight partial dislocations have been obtained.
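The centrosymmetry criterion can be stated compactly: pair each of an atom's twelve nearest-neighbour bond vectors with the one that best cancels it and sum the squared residuals, which vanishes for a perfect fcc site and grows near surfaces, stacking faults and dislocation cores. The function below is a simplified greedy-pairing sketch for a single atom; tools such as AtomEye implement the pairing more rigorously.

```python
import numpy as np

def centrosymmetry(neighbor_vectors):
    """Centrosymmetry parameter from the 12 nearest-neighbour bond vectors.

    Greedy pairing: repeatedly match the remaining vector whose partner best
    cancels it, and accumulate |r_i + r_j|^2. Zero for a perfect fcc site.
    """
    vecs = list(np.asarray(neighbor_vectors, dtype=float))
    csp = 0.0
    while vecs:
        r = vecs.pop(0)
        j = int(np.argmin([np.sum((r + v) ** 2) for v in vecs]))
        csp += np.sum((r + vecs.pop(j)) ** 2)
    return csp

# The 12 fcc nearest neighbours (in units of a0/2): permutations of (+-1, +-1, 0).
fcc_nn = np.array([[s1, s2, 0] for s1 in (1, -1) for s2 in (1, -1)] +
                  [[s1, 0, s2] for s1 in (1, -1) for s2 in (1, -1)] +
                  [[0, s1, s2] for s1 in (1, -1) for s2 in (1, -1)], dtype=float)

noise = 0.05 * np.random.default_rng(3).normal(size=(12, 3))
print("perfect fcc :", centrosymmetry(fcc_nn))           # 0.0
print("distorted   :", centrosymmetry(fcc_nn + noise))   # small positive value
```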
In order to bring the role of steps to light, calculations are performed with a 30 × 20 × 30 crystal with two free surfaces, but without step. In that case, plasticity occurs for a much larger elongation, 20%, and leads to a complex defects structure. It clearly shows the important role played by steps, by significantly reducing the energy barrier due to dislocation-surface interaction, and initiating the nucleation in specific glide planes.
The effect of temperature has first been investigated at 300K. Plasticity occurs for a 6.6% elongation, showing that thermal activation significantly reduces the elastic limit. Another important difference due to temperature is the geometry of the formed defect. Instead of a straight dislocation, a dislocation half-loop forms and propagates throughout the crystal (Fig. 3). As expected, the nucleation of a half-loop dislocation is thermally activated. Contrary to the 0K simulation, a dislocation has nucleated from only one step: no dislocation is emitted from the other surface step, which remains intact. Atomic displacements at different simulation times (Fig. 4) indicate that this half-loop dislocation has a Burgers vector orthogonal to the step, the screw component being almost zero. The formed dislocation is therefore a Shockley partial, leaving a stacking fault in its path. Atomic displacements have been fitted with an arctan function, according to elasticity theory. This allows us to monitor the position of the dislocation core, defined as the maximum of the derivative, during the simulation.
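The arctan fit used for Figure 4 reduces to a three-parameter least-squares problem whose centre parameter directly gives the dislocation core position (the maximum of the profile's derivative). The scipy sketch below illustrates the procedure on synthetic data; the amplitude, centre and core width used to generate the data are placeholders, not values extracted from the simulations.

```python
import numpy as np
from scipy.optimize import curve_fit

def edge_profile(depth, b, x0, w):
    """Edge displacement across a dislocation: arctan step of height b,
    centred at x0, with core width w (continuum elasticity form)."""
    return b / np.pi * (np.arctan((depth - x0) / w) + np.pi / 2.0)

# Synthetic "measured" displacements along the depth of the glide plane.
depth = np.arange(1, 41, dtype=float)                  # atomic layers 1..40
rng = np.random.default_rng(4)
data = edge_profile(depth, b=1.65, x0=14.0, w=2.5) + rng.normal(0, 0.03, depth.size)

popt, _ = curve_fit(edge_profile, depth, data, p0=[1.5, 20.0, 3.0])
b_fit, x0_fit, w_fit = popt
print("dislocation core at layer %.1f (step height %.2f)" % (x0_fit, b_fit))
```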
Before the complete propagation of a dislocation, several half-loop embryos starting from both steps have been observed, appearing and disappearing (Fig. 3). Only one of them will eventually become large enough and propagate into the crystal (Fig. 3). This is related to the existence of a critical size for the dislocation formation, due to attractive interaction with the free surface. As the dislocation moves through the crystal and reaches the opposite surface, a trailing partial does not nucleate. Though it would significantly reduce the total energy of the system, especially in aluminum which have a high stacking-fault energy, this would require the crossing of a high energy barrier. On the contrary, the successive nucleation of dislocations in adjacent {111} can be achieved with a much lower energy barrier. So, although it relaxes less energy than a trailing partial would, this mechanism is more likely to be activated. This is what we obtained in most simulations, similar to the twinning mechanism proposed by Pirouz [START_REF] Pirouz | Partial dislocation formation in semiconductors: structure, properties and their role in strain relaxation[END_REF]. The remaining smaller step on the top surface, as well as the step created by the emergent dislocation on the bottom surface, become privileged sites for the nucleation of other dislocations in adjacent {111} planes, leading to the formation of a twin. While sufficient stress remains, successive faulted half-loops will be formed in adjacent planes, increasing the thickness of the twin. After 76 ps, the crystal structure does not evolve anymore. The plastic deformation is then characterized by a micro-twin (Fig. 3), located around the previous position of the step, with an extension of eight atomic planes, and delimited by two twin boundaries whose total energy equals the energy of an intrinsic stacking fault.
We have also investigated how the dislocation formation process is modified in the case of irregular steps. We used a crystal with the same geometry, except that 10 consecutive atoms have been removed from one surface step edge (see Fig. 1), creating two step kinks between which lies a notch. The other step remains straight. First, at 0K, no defect is obtained up to 10% elongation, beyond which plasticity occurs. This elastic limit is similar to the one obtained for the system with perfect steps. Moreover, nucleated dislocations are also Shockley partials with a Burgers vector orthogonal to the step, and are emitted from both surface steps, despite the system asymmetry. However, two dislocations have been formed from the irregular step. In fact, a second partial nucleates and propagates in the {111} plane passing through the notch (Fig. 5). Both dislocations remain in their respective glide plane, leaving two stacking faults. This suggests that kinks are strong anchors for dislocations. Nevertheless, at 0K, it seems they have a negligible effect on the elastic limit, or regarding the nature of the nucleation event. At 300K and for the same geometry, the elastic limit is reached for a 6.6% elongation, i.e. similar to the crystal with straight steps. Again, it suggests that irregular steps have no effect on the elastic limit. The dislocation half-loop does not nucleate from a step kink, but about 15 atomic planes away from it (Fig. 6a). It propagates into the crystal, but stays anchored to the kink, which acts like an obstacle to the movement. Then, another dislocation nucleates in the adjacent {111} plane, within the notch (Fig. 6b). Another simulation on a similar system leads to a dislocation nucleation from the straight step, despite the presence of a kinked step. These results show that kinks are not preferential sites for nucleation. It can be explained because step kinks are 0-D defects, contrary to straight steps, what prevents to initiate 1-D defects such as dislocations. After the first nucleation, the twinning mechanism, already described above, is observed. At about 70 ps, the formed twin cannot be distinguished from the one obtained in the crystal with straight steps. Finally, there is no indication left whether the step was initially irregular or not.
Molecular dynamics simulations have been used to investigate the influence of temperature and of step geometry on the first stages of plasticity in f.c.c. aluminum slabs. Surface steps were shown to be privileged sites for the nucleation of dislocations, significantly reducing the elastic limit compared to a perfect surface. Simulations with straight surface steps have revealed that only straight 90 • dislocations could nucleate at 0K. Temperature reduces the elastic limit, and makes possible the nucleation of faulted dislocation half-loops. Due to the system geometry and the strain orientation, only Shockley partials were obtained. Successive nucleation of partials in adjacent {111} planes are observed, similar to the twinning mechanism described by Pirouz in semiconductors. Simulations with an irregular step have shown that a kink is not a systematic site for nucleation. Instead, half-loops have been obtained from a straight portion of the step. The kinks introduced along a step seem to be strong anchor points for dislocations, making their motion more difficult along the step.
During all simulations including temperature, several dislocation half-loop embryos were observed before one eventually becomes large enough and propagates into the crystal. Calculations are in progress to determine the critical size a half-loop must reach to fully propagate. To determine the activation energy of the nucleation from surface steps, two methods may be used. First, the nudged elastic band method [START_REF] Jonsson | Classical and Quantum Dynamics in Condensed Phase Simulations[END_REF][START_REF] Henkelman | Improved tangent estimate in the nudged elastic band method for finding minimum energy paths and saddle points[END_REF][START_REF] Henkelman | A climbing image nudged elastic band method for finding saddle points and minimum energy paths[END_REF], applied to the nucleation and propagation of a half-loop, would provide the minimum energy path for this event.
Second, by performing several simulations at a given strain, one would obtain the average nucleation time as a function of temperature, thus allowing determination of the activation energy from Arrhenius plots. The dislocation speeds, as well as the size and shape of the dislocation half-loops, can be expected to depend on temperature, which will also be investigated through simulations. As a sequel to the nucleation event, several scenarios were observed. The twinning mechanism is expected to be in competition with the nucleation of a trailing partial, which requires the crossing of a higher energy barrier. However, this last mechanism was obtained during a simulation, showing it is still possible. More investigations would allow us to determine the exact dependency on temperature, strain, or other parameters.
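The Arrhenius analysis mentioned here amounts to a linear fit of the logarithm of the nucleation rate (the inverse mean waiting time) against 1/(k_B T); the slope gives the activation energy at that strain. The few lines below show the bookkeeping with invented waiting times.

```python
import numpy as np

kB = 8.617e-5                                        # eV/K
T = np.array([200.0, 250.0, 300.0, 350.0])           # temperatures (K), illustration only
t_nucleation = np.array([310.0, 95.0, 38.0, 18.0])   # invented mean waiting times (ps)

# ln(rate) = ln(nu0) - Ea / (kB T)  ->  slope of ln(1/t) vs 1/(kB T) is -Ea
x = 1.0 / (kB * T)
y = np.log(1.0 / t_nucleation)
slope, intercept = np.polyfit(x, y, 1)
print("activation energy ~ %.2f eV, attempt frequency ~ %.2e / ps"
      % (-slope, np.exp(intercept)))
```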
P. Hirel's PhD work is supported by the Région Poitou-Charentes. We greatly aknowledge the Agence Nationale de la Recherche for financing the project (number ANR-06-blan-0250).
Fig. 2. Formation of two dislocations at 0K, after a 10% elongation of a 60 × 40 × 60 crystal. Initial positions of the surface steps are shown (arrows). Only atoms which are not in a perfect f.c.c. environment are drawn: surfaces (yellow-green), stacking fault (dark blue), dislocation cores (light blue) (color online).
Fig. 3. Evolution of the aluminum crystal after a 6.6% elongation at 300K. Same color convention as Fig. 2. The origin of time is when the applied strain is increased to 6.6%. (a) At 12 ps, several dislocation embryos appeared on both steps (arrows). (b) At 20 ps, a faulted half-loop dislocation has nucleated on one step. (c) After 76 ps, a stable twin was formed. The other step (on the right) remains intact.
Fig. 4. Calculated edge component of the relative displacements of atoms in the activated glide plane and in the Z-layer corresponding to the dislocation front line, at 300K and for different times (triangles). They are fitted with an arctan function (solid lines) according to elasticity theory, for monitoring the dislocation core position during the simulation. The abscissa labels the depth of the atoms from the top surface: 1 corresponds to atoms at the edge of the initial step, and 40 corresponds to the opposite surface, at the bottom of the system.
Fig. 5. Dislocations nucleated in a crystal with one straight step, and one irregular, elongated by 10% at 0K.
Fig. 6. Evolution of the aluminum crystal with an irregular step, under a 6.6% elongation at 300K. Same color and time conventions as in Fig. 3. The position of the notch is highlighted in red. (a) After 7.4 ps, a faulted half-loop dislocation nucleates in the original {111} plane. (b) At 10 ps, another dislocation is emitted in the adjacent {111} plane, passing through the notch.
01742595 | en | [
"phys.phys.phys-flu-dyn"
] | 2024/03/05 22:32:10 | 2018 | https://inria.hal.science/hal-01742595/file/paper.pdf | Francois Sanson
email: francois.sanson@inria.fr
Francesco Panerai
Thierry E Magin
Pietro M Congedo
Robust reconstruction of the catalytic properties of thermal protection materials from sparse high-enthalpy facility experimental data
Keywords: Uncertainty Quantification, Bayesian Inference, Catalysis, Thermal Protection Systems
Quantifying the catalytic properties of reusable thermal protection system materials is essential for the design of atmospheric entry vehicles. Their properties quantify the recombination of oxygen and nitrogen atoms into molecules, and allow for accurate computation of the heat flux to the spacecraft. Their
rebuilding from ground test data, however, is not straightforward and subject to uncertainties. We propose a fully Bayesian approach to reconstruct the catalytic properties of ceramic matrix composites from sparse high-enthalpy facility experimental data with uncertainty estimates. The results are compared to those obtained by means of an alternative reconstruction procedure, where the experimental measurements are also treated as random variables but propagated through a deterministic solver. For the testing conditions presented in this work, the contribution to the measured heat flux of the molecular recombination is negligible. Therefore, the material catalytic property cannot be estimated precisely.
Introduction
In the design of thermal protection systems for atmospheric entry vehicles, the catalytic properties of the heatshield material allow us to quantify the influence of the highly exothermic molecular recombinations occurring at the surface. In order to estimate these properties for a given material, groundbased high-enthalpy facilities are used to simulate flight conditions at the material surface and to provide relevant experimental data [START_REF] Chazot | Hypersonic Nonequilibrium Flows: Fundamentals and Recent Advances[END_REF]. The plasma flow can be achieved using different techniques. In inductively-coupled plasma (ICP) wind tunnels, often referred to as plasmatrons, the plasma is generated by electromagnetic induction. A strong electromagnetic field ionizes the flow confined into a cylindrical torch and the plasma jet exits at subsonic speed into a low pressure test chamber that hosts material probes. The stagnation point conditions corresponding to a given spacecraft entry are reproduced for several minutes and the plasma flow carries sufficient energy to reproduce actual aerothermal loads experienced by a thermal protection system (TPS) in flight. Thanks to a flow of high chemical purity, plasmatron facilities are particularly suited to study gas/surface interaction phenomena for reusable TPS materials [START_REF] Kolesnikov | RTO-EN-AVT-008 -Measurement Techniques for High Enthalpy and Plasma Flows[END_REF][START_REF] Marschall | [END_REF]4,5,6,7,8,9,10,11] or composite ablative material [12] . High-temperature experiments enable characterizing the catalytic properties of the tested TPS sample by combining direct measurements using various diagnostics and a numerical reconstruction based on computational fluid dynamics (CFD) simulations.
Even for well-characterized facilities, the determination of catalytic properties is affected by the noise present in the experimental data. The quantification of uncertainties in high-enthalpy experiments has previously been studied in the literature [13,14,15,16]. In particular, in our previous work [16], we evaluated the uncertainties on catalytic properties by coupling a deterministic catalytic property estimation with a Polynomial Chaos (PC) expansion method. The probabilistic treatment of the uncertainties helped mitigating over-conservative uncertainty estimates found in the literature by computing confidence intervals. The influence of the epistemic uncertainty on the catalytic property of a reference calorimeter used in the reconstruction was also investigated in [16]. However, the method developed has two shortcomings: the number of experiments is limited and statistics about the measurements distribution are not available, even though they are an essential input for the PC approach.
Two important aspects are explored in the present work. First, we develop a robust methodology for quantifying the uncertainties on the catalytic property following a Bayesian approach. The Bayesian framework has already been successfully applied to the study of graphite nitridation [14] and hightemperature kinetics [17], for model parameter estimation as well as for experimental design [18], but it is a novel approach for the case of reusable materials, bringing a new insight on the ceramic matrix composites on which this paper focuses. In a Bayesian approach, one computes the probability distribution of the possible values of a quantity of interest compatible with the experimental results and with prior information about the system. This is fundamentally different from the PC approach proposed in [16]. While both approaches aim at quantifying the uncertainty on the catalytic properties, the experimental data are direct inputs of the deterministic solver combined to the PC method, whereas they are observed outputs of a model for the Bayesian method.
Second, a thorough comparison between the two methods is developed in order to explain the results obtained in view of their conceptual differences. We investigate the case of two experiments necessary for the reconstruction of the flow enthalpy and material catalytic property. The PC approach sequentially considers the experiments, whereas the Bayesian approach merges them into a unique simultaneous reconstruction. Additionally, the Bayesian approach has a major advantage: it allows us to determine the catalytic property of a reference copper calorimeter used in the reconstruction methodology, along with the catalytic property of the sample material. The robustness of the method is also examined for cases where the problem is not well posed, for instance when there are too many parameters to rebuild, and no sufficient information from the experiments.
In this contribution, we propose to revisit measurements performed in a high-enthalpy ICP wind-tunnel (Plasmatron) at the von Karman Institute for Fluid Dynamics (VKI) to characterize the catalytic response of ceramic matrix composites. Based on the robust uncertainty quantification methodology developed, we will assess whether accurate information on the catalytic properties of these thermal protection materials can be extracted from the experimental data. The paper is structured as follows. In section 2, we recall the main features of the combined experimental/numerical methodology developed at VKI to analyze data obtained in the Plasmatron facility, and then, present the sources of experimental uncertainties involved in the process. In section 3, we reformulate the problem of determining the catalytic properties in a Bayesian framework. In Section 4, we apply this approach to experimental data presented in [4] and compare our results to the uncertainty estimate obtained in [16] by means of the PC approach.
Experimental/numerical methodology
The present study uses a set of data measured during an experimental campaign documented in [4]. The first section briefly recalls the quantities measured experimentally for each testing condition and their associated uncertainties, whereas the next section introduces the numerical simulations performed to rebuild quantities that cannot be directly measured. The last section introduces some uncertainty quantification terminology.
Experimental setup
In order to derive the catalytic property γ of a ceramic matrix composite sample, the reconstruction methodology used in [4] is based on two sequential experiments. The first step consists in rebuilding the free stream enthalpy h e of the plasma flow, using the cold wall heat flux measurement q cw from a copper calorimeter (see Fig. 1) of catalytic property γ ref . The uncertainties on the heat flux measurements were computed to be ±10%. Note that the quantity γ ref is a source of large uncertainties [16]. A commonly adopted assumption is to consider the surface as fully catalytic [START_REF] Kolesnikov | RTO-EN-AVT-008 -Measurement Techniques for High Enthalpy and Plasma Flows[END_REF]19]. While this is a conservative practice, there is compelling evidence that the actual surface of copper calorimeters is not fully catalytic, owing to the rapid oxidation of copper upon exposure to plasma. Numerous studies have been dedicated to characterize the catalytic properties of copper and its surface oxides (CuO and Cu 2 O) [10,13,20,21,[START_REF] Prok | Effect of surface preparation and gas flow on nitrogen atom surface recombination[END_REF][START_REF] Rosner | [END_REF][START_REF] Ammann | Heterogeneous recombination and heat transfer with dissociated nitrogen[END_REF][START_REF] Dickens | [END_REF]26,27,28,29,30,31,32,33,[START_REF] Nawaz | 44th AIAA Thermophysics Conference, AIAA[END_REF][START_REF] Cipullo | [END_REF][START_REF] Viladegut | 45th AIAA Thermophysics Conference[END_REF][START_REF] Driver | 45th AIAA Thermophysics Conference, AIAA 2015-2666[END_REF].
Together with the heat flux, the total pressure is measured during the first experiment. A water-cooled Pitot probe is introduced in the Plasmatron flow in order to measure the dynamic pressure P d (featuring an uncertainty of ±6%). The surface temperature of water-cooled probes T cw is known by measuring the differential of temperature between the inlet and outlet water lines. The static pressure P s of the test chamber is measured with a 2 Pa accuracy.
In a second step, hot wall measurements are performed on the TPS material sample in order to determine its catalytic property γ, for a known test condition determined through the rebuilding of cold-wall measurements.
The emissivity ε of the sample is measured with 10% accuracy. The front surface temperature T w of the sample is measured as well. It is further assumed that the free stream flow is identical during both experiments and that local thermodynamic equilibrium (LTE) holds at the edge of the boundary layer. At steady state, the surface radiated heat flux is assumed to be equal to the incoming heat flux from the plasma flow.
Numerical computations
The Plasmatron flow conditions in front of the TPS test sample are rebuilt using experimental data and a 1D non-equilibrium Boundary Layer (BL) solver [START_REF] Barbante | Accurate and Efficient Modelling of High Temperature Nonequilibrium Air Flows[END_REF][START_REF] Barbante | [END_REF] that propagates the flow field quantities from the outer edge of the BL to the stagnation point. The rebuilding methodology is sketched in Fig. 2. The BL solver computes the stagnation point heat flux q cw (or q w for the TPS sample) that, mathematically, is a function of the probe geometry, the surface temperature T cw (or T w for the TPS sample), and the wall catalytic property of the reference calorimeter γ ref (or γ for the TPS sample), given the following set of the plasma flow free stream parameters: enthalpy h e , pressure p e , and velocity v e . The PEGASE library [40], embedded with the boundary layer solver, provides the physico-chemical properties of the plasma flow.
The BL solver can be called by a rebuilding code using Newton's method to determine the quantities h e and γ in a two-step strategy involving one rebuilding per experiment. The static pressure p e is assumed to be equal to the static pressure P s measured in the chamber. The enthalpy rebuilding uses the measured dynamic pressure P d to compute the free stream velocity v e using a viscous correction, as well as the heat flux q cw measured at the surface of the reference calorimeter to reconstruct the free stream enthalpy h e . In a second step, the results from the second experiment and the flow field parameters computed during the first step are combined to determine the sample material catalytic property γ.
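A minimal sketch of this two-step strategy is given below (Python). The function bl_heat_flux is only a crude toy surrogate standing in for the real non-equilibrium BL solver; its name, signature and constants are assumptions made for illustration, and each rebuilding step is treated as a one-dimensional root-finding problem.

from scipy.optimize import newton

def bl_heat_flux(h_e, p_e, v_e, T_wall, gamma):
    # Toy surrogate (NOT the real physics): convective term scaled by a catalytic factor.
    # p_e and v_e are kept only to mirror the interface of the actual solver.
    h_wall = 1004.0 * T_wall
    k = 0.05 * (1.0 + 2.0 * gamma / (gamma + 0.01))
    return k * (h_e - h_wall)

def rebuild_enthalpy(q_cw_meas, p_e, v_e, T_cw, gamma_ref, h_guess=5e6):
    # step 1: flow enthalpy such that the computed cold-wall heat flux matches the measurement
    return newton(lambda h_e: bl_heat_flux(h_e, p_e, v_e, T_cw, gamma_ref) - q_cw_meas, h_guess)

def rebuild_gamma(q_w_meas, h_e, p_e, v_e, T_w, g_guess=0.01):
    # step 2: with the flow fixed, catalytic property matching the hot-wall heat flux
    return newton(lambda g: bl_heat_flux(h_e, p_e, v_e, T_w, g) - q_w_meas, g_guess)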
Despite the fact that a large number of inputs are measured or unknown, the method is fully deterministic and provides no indication about the uncertainty of the outputs. Our previous work [16] was based on the propagation of uncertainties using this inverse deterministic solver.
Uncertainty characterization in catalytic property reconstruction
The determination of the TPS catalytic property directly depends on experimental data, and intrinsically carries the uncertainty associated with actual measurements. Uncertainty Quantification (UQ) tools model and quantify the error associated to the variables computed using uncertain inputs. Table 1 reviews the measured quantities and their uncertainties. The uncertainties can be classified into three categories:
• The measured quantities (MQ) come from the two experimental steps described earlier. The following quantities are measured: T cw,meas , q cw,meas , T w,meas , ε meas , P d,meas , P s,meas , namely the calorimeter probe temperature, the calorimeter probe heat flux, the sample temperature, the sample emissivity, the plasma jet dynamic pressure and static pressure. Note that the heat flux from the second experiment (q w,meas )
is not directly measured but derived from quantities T w,meas and ε meas using Stefan-Boltzmann's law: q w,meas = σε meas T 4 w,meas . The MQ are aleatory quantities that are assumed to be noisy versions of their true values denoted T cw , q cw , T w , ε, P d , P s . In this study, they are modeled as realization of a Gaussian distribution. The quantity T cw,meas denotes the measurement of the probe temperature, so we have:
T_cw,meas = T_cw + ζ,    (1)
where ζ is the realization of a zero mean Gaussian random variable.
• The quantities of interest (QoI) are the unknown quantities crucial to engineering applications. In this study, the sample and the probe catalytic properties denoted γ and γ ref , along with the flow enthalpy h e , are the QoIs. The objective is not only to compute the most likely value of the catalytic property or the one that minimizes the square error but to compute the full probability distribution of all admissible values of the QoI given the measurements for a thorough quantification of uncertainties.
• The Nuisance Parameters (NP) are unknown quantities that must be estimated along with the QoI in order to estimate the sample catalytic property. Quantities T cw , T w , P d , P s , ε are NPs as they have to be estimated in order to run the BL solver used to derive the sample catalytic property.
Bayesian-based approach
One objective of this work is to make a joint estimation of the catalytic properties γ ref and γ of the reference calorimeter and sample material, respectively, along with the flow enthalpy h e , for a given set of experiments. In [16], a polynomial chaos expansion was built on top of the inverse deterministic solver described earlier. In this section, we detail the derivation of the probability distribution of these quantities given the experimental results using a Bayesian approach. This probability distribution is referred to as the posterior distribution. This distribution carries all the necessary information for the uncertainty quantification analysis. It provides a robust estimate of the uncertainty through confidence intervals and the variance.
In section 3.1, the posterior distribution is decomposed into a ratio of probabilities using Bayes' rule (Eq. 4) that can be numerically evaluated.
Detailed calculations of each term of the decomposition are then presented in section 5. Finally, the posterior distribution is numerically evaluated using a Markov Chain Monte Carlo (MCMC) algorithm described in appendix.
Figure 4 summarizes the rebuilding methodology from a Bayesian perspective.
Note that, contrary to the deterministic strategy illustrated in Figure 2, the QoI are rebuilt using both experiments simultaneously. The differences in the two approaches are further discussed in section 4.
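Once posterior samples are available (for instance from the MCMC algorithm described in the appendix), the quantities reported later in Tables 3 to 6 can be extracted as sketched below (Python; the histogram-based MAP estimate and the variable names are illustrative choices):

import numpy as np

def summarize_posterior(samples):
    # mean, standard deviation, 95% credible interval and histogram-based MAP of a 1-D sample
    s = np.asarray(samples)
    lo, hi = np.percentile(s, [2.5, 97.5])
    counts, edges = np.histogram(s, bins=100)
    i = np.argmax(counts)
    map_est = 0.5 * (edges[i] + edges[i + 1])
    return {"mean": s.mean(), "sd": s.std(ddof=1), "ci95": (lo, hi), "map": map_est}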
Bayesian framework
We recall that the heat flux to the sample material wall q w,meas is completely defined by the material emissivity ε meas and temperature T w,meas through Stefan-Boltzmann's law. Introducing the vector of measured quantities m = (T cw,meas , q cw,meas , T w,meas , ε meas , P d,meas , P s,meas ), the posterior probability is then indicated as follows:

P(γ_ref, h_e, γ | m).    (2)

Furthermore, the vector of NP is introduced as ω nuis = (T cw , T w , ε, P d , P s ).
The posterior distribution (Eq. 2) is obtained by marginalizing over the nuisance parameters the joint posterior

P(γ_ref, h_e, γ, ω_nuis | m).    (3)
Let us now focus on the non-marginalized posterior from Eq. 3. The flowchart in Fig. 4 shows the relationships between the unknowns γ ref , h e , γ, ω nuis and the MQ (i.e., the vector m) and how they interact with each other.
In order to evaluate Eq. 3, Bayes' rule is applied as follows:
P(γ_ref, h_e, γ, ω_nuis | m) = P(m | γ_ref, h_e, γ, ω_nuis) P(γ_ref, h_e, γ, ω_nuis) / P(m),    (4)
Results
This section illustrates the results derived from the application of the Bayesian framework to the problem of interest. The objective is twofold: i)
to compute an estimate of the QoI (flow enthalpy h e and catalytic properties γ ref and γ of the reference calorimeter and sample material) and ii) to compare the results with the uncertainty estimates obtained in [16] from a more standard PCE approach. In order to demonstrate the potential of the Bayesian approach, two sets of experimental conditions are selected among the experiments presented in [4]. They are denoted as S1 and S8, as detailed in Table 2. For both experiments, we study the following two cases:
a. The calorimeter reference probe catalytic property γ ref is assumed to be constant and equal to 0.1 (Section 4.1). The results for the posterior distribution are presented. Uncertainty estimates are compared with the ones obtained in [16]. Qualitative and quantitative explanations of the differences between the results obtained by the two approaches are given.
b. Secondly, the probe catalytic property is treated as an unknown quantity determined along with the other NPs and QoIs (section 4.3). Again the results are compared against the method developed in [16].
Constant calorimeter reference probe catalytic property
Quantity γ ref is assumed to be constant and equal to 0.1, focusing on the computation of the posterior distribution of the flow enthalpy h e and material catalytic property γ. The statistical moments and the 95% confidence intervals are given in Tables 3 and 4 for quantities h e and γ, respectively.
Their mean values are in good agreement with the nominal results obtained in [4]. Figure 5 shows their distributions for sample S1. It is observed that the reconstructed quantities h e and γ both have symmetrical distributions.
These results can be related to the typical S-shape enthalpy versus catalytic property curve reported in the literature [16,[START_REF] Viladegut | 45th AIAA Thermophysics Conference[END_REF]. In this case, most of the posterior lies within the high gradient zone of the S-shape, meaning that small changes in catalytic property induce large variations in the computed heat flux at the wall, as they are related through a one-to-one mapping in that region. In other words, if the measured heat flux takes values in that region, it is expected that the catalytic property posterior will have limited variance. The Maximum A Posteriori (MAP) is defined as the maximum of the posterior probability density. It is an alternative point estimator to the mean of a QoI. In the special case of a Gaussian posterior, the MAP and the mean are equal. The analysis of sample S8 yields similar results and conclusions. The relative error, computed as the ratio of the 95% confidence interval (CI) width to the mean, is one order of magnitude larger for the catalytic property than for the rebuilt enthalpy.
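As a small numerical check of this definition, the last column of Tables 3 and 4 can be recovered from the other columns (Python):

mean, ci = 6.0, (5.06, 6.76)          # flow enthalpy for sample S1, Table 3
uq = 100.0 * (ci[1] - ci[0]) / mean   # width of the 95% CI relative to the mean [%]
print(round(uq, 1))                   # 28.3, as reported in Table 3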
Comparison with Polynomial-Chaos approach
This section compares the proposed Bayesian approach to the PC approach presented in [16]. In the deterministic solver, the two steps of the experiments are taken sequentially (cf. Figure 2 in [16]): first the flow field is computed using the results from the first experiment, namely measurements of cold-wall heat flux, as well as the static and dynamic pressures. Then, the sample catalytic property is determined using the quantities rebuilt from the first experiment. In order to propagate uncertainties, a polynomial approximation of the solver was derived and used to generate the statistical moments of the sample catalytic property. More precisely, the MQ are the only inputs to the polynomial approximation in [16], whereas the probe catalytic property γ ref is kept constant. In order to include the uncertainty of the probe catalytic property, several polynomial approximations of the solver are computed in [16] for different values of γ ref .
Table 2 (flattened header): Sample, q cw,meas [kW.m -2 ], P s [Pa], P d [Pa], T cw,meas [K], h e [MJ.kg -1 ], T w,meas [K], ε meas [-], γ [-]; S1: q cw,meas = 195.
In the following section, we highlight the differences between the PC and Bayesian methods. Both qualitative (section 4.2.1) and quantitative (section 4.2.2) illustrations are provided. Sample S1 conditions are chosen for this exercise.
Qualitative differences between the PC and Bayesian methods
The PC and Bayesian approaches tackle the problem from different angles leading to different results. The main differences between the two methods can be summarized as follows.
• The experimental data accumulated during the two reconstructions are not exploited in the same way. In the Bayesian formulation, the measurements are treated simultaneously in order to reconstruct the catalytic property distribution at once (cf. Fig. 4), whereas the PC approach coupled with the deterministic inverse problem uses sequential reconstructions of each quantity (see Fig. 2). In particular, the flow enthalpy is estimated in the PC approach only using the first experiment, whereas the Bayesian approach uses information from both experiments to rebuild the flow enthalpy. As mentioned in [41], in [4,16] the link between the two experiments acts like a valve: the information (or uncertainty) only goes one way. The information from the second experiment does not flow back to the determination of the flow enthalpy. Only information from the first experiment goes to the second reconstruction via the boundary layer edge enthalpy h e . This method presents some similarities with the Cut-model used in Bayesian networks [41], but it generally leads to a wrong posterior distribution.
• Input uncertainties are modeled differently. The PC approach makes a stronger hypothesis about the input distribution by assuming that its mean is the experimental value. In the Bayesian framework, it is only assumed that the experimental value obtained is sampled from a Gaussian distribution whose mean is a function of the NP and QoI. The former is a strong assumption since a single experimental result can be significantly different from the mean value.
• Not only are the input measurements modeled differently, but the way they are propagated is also different. The PC approach, and the results presented in [16], depend on the deterministic method used to solve the inverse problem. In fact, the PC approach only provides the variance of the outputs and higher statistical moments. On the other hand, the Bayesian method leads to an unbiased, asymptotically efficient estimation of the sample catalytic property [START_REF] Kaipio | Statistical and Computational Inverse Problems[END_REF][START_REF] Gelman | Bayesian Data Analysis, Third Edition, Chapman & Hall/CRC Texts in Statistical Science[END_REF].
• Finally, the Bayesian approach offers more flexibility in order to add uncertainties without major issues in the computational time, whereas the PC approach is limited by the problem of the curse of dimensionality [START_REF] Bellman | Applied dynamic programming[END_REF][START_REF] Foo | [END_REF], i.e., the lack of convergence speed of the numerical method when an increasing number of uncertainties is considered. Moreover, the Bayesian framework is well-suited for modeling epistemic uncertainty, such as the reference probe catalytic property. In the method developed in [16], this property is not modeled as a distribution, since no information is available to characterize it. Therefore, a limited set of values of γ ref on an arbitrary interval are tested to provide an envelope of the uncertainty on the QoI. On the other hand, the Bayesian implementation can use the information collected during the experiments to compute a posterior distribution of the reference probe catalytic property. Using that posterior distribution, the method yields a much more precise estimation of the uncertainty in the QoI along with an estimation of γ ref .
Quantitative differences between the PC and Bayesian methods
In this section, numerical tests are performed with sample S1 (see Table 2). The comparison focuses on the distributions of the material catalytic properties, as well as on the modeling uncertainties coming from the unknown catalytic property γ ref of the reference calorimeter.
The reconstructions of the material catalytic property γ are first compared using a constant value of γ ref equal to 0.1. Although this case may be unrealistic, since the probe catalytic property is rarely well known, it illustrates the differences between the two methods in a basic setting. Figure 6 shows differences in the sample catalytic property distribution obtained with the PC [16] and Bayesian methods. Note that, in Table 5, the first moments of the two distributions are very close; however, the standard deviation and the confidence interval are significantly larger for the distribution obtained with the Bayesian approach. This explains the much larger magnitude of the relative error. Moreover, the MAP estimates are substantially different: for the Bayesian case, the distribution is skewed and the most probable value and the mean value of the sample catalytic property are different. This is not observed when using the Polynomial Chaos, since the catalytic property distribution is Gaussian.
Since γ ref is rarely known, its variability and influence on the QoI are also investigated here. In particular, the approach used in [16] for including the epistemic uncertainty due to γ ref is compared to the Bayesian implementation.
For the PC method, the uncertainties on the QoI due to the MQ are computed for discrete values of γ ref , whereas for the Bayesian method, γ ref is a priori unknown. In Figure 7, the cumulative density function (CDF) of the flow enthalpy derived from the PC approach is plotted for the extreme values of γ ref , i.e. 1 and 0.01, as well as derived from the Bayesian approach with an a priori unknown value of γ ref .
For both values of γ ref , the CDF obtained by means of the PC approach exhibits a much steeper increase compared to the state-of-the-art Bayesian approach, leading to a much more precise estimate of the uncertainty on the enthalpy. This is due to the different degree of knowledge of the probe catalytic property for the two methods. Since the Bayesian implementation uses the measurements to estimate the probe catalytic property, the uncertainty due to the epistemic quantity decreases.
Conversely, for the PC implementation, no information about the probe catalytic property is available, leading to an overestimation of the uncertainty in the enthalpy. In summary, the Bayesian method makes a better use of the information available from the experiments and provides an optimal, reliable estimate of the uncertainty. The distributions of the material catalytic property obtained by means of the Bayesian approach with γ ref a priori unknown will be studied in the following section.
Case where the reference probe catalytic property is unknown
In contrast to an approach commonly followed in the literature, we consider here the value of the probe catalytic property to be unknown, instead of arbitrarily set to a constant value. Therefore, γ ref is determined along with the other unknown quantities and the target distribution is the new posterior:
P(γ_ref, h_e, γ, ω_nuis | T_cw,meas, q_cw,meas, T_w,meas, ε_meas, P_d,meas, P_s,meas). Hence, the influence of the probe catalytic property on the sample catalytic property uncertainty can be rigorously quantified. Due to the increase in the number of unknowns and in order to increase the speed of convergence of the MCMC algorithm, the Markov Chain is adapted using the Adapted Metropolis (AM) algorithm presented in [46] with a modification from [47] (see algorithms 4 and 5). This approach is more precise and more flexible than the approach used in [16] where a robust brute force method is presented to explore the influence of the probe catalytic property. In this work, the Bayesian approach gives finer results thanks to a better knowledge of the probe catalytic property.
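The covariance adaptation at the core of such an AM scheme can be sketched as follows (Python); the scaling rule shown is the usual textbook choice and is an assumption here, not necessarily the exact variant of [46,47].

import numpy as np

def adapt_proposal_cov(chain_history, eps=1e-6):
    # Haario-type adaptation: proposal covariance built from the chain history,
    # C_n = s_d * (cov(X_0..X_n) + eps * I), with s_d = 2.38^2 / d
    X = np.asarray(chain_history)   # shape (n_samples, d)
    d = X.shape[1]
    s_d = 2.38 ** 2 / d
    return s_d * (np.cov(X, rowvar=False) + eps * np.eye(d))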
Results obtained for the estimation of the material catalytic property γ, flow enthalpy h e , and reference calorimeter catalytic property γ ref for the two samples S1 and S8 are presented. Figure 8 shows the distribution of γ ref and Table 6 summarizes the corresponding statistics. Mean and variance results should be used with care as the computed distributions are extremely far from Gaussian. Based on the experimental data of sample S1, the computed value for the reference probe catalytic property is 0.018, as shown in Table 6. This result indicates that the assumption of γ ref = 0.1 utilized in [4] is over-conservative. The results obtained for γ ref for the two conditions (S1 and S8) are rather different but not contradictory. The relative error is extremely large. Note that with sample S1, γ ref can be estimated with slightly more accuracy than with sample S8. This observation shows that the precision of the estimation of γ ref depends on the experimental conditions and not only on the accuracy of the measurements.
The addition of an extra NP increases the uncertainty on the QoI and other NP. Figure 9 shows the distribution of h e for sample S1 that can be compared to earlier results presented in Figure 5 for the case with a constant γ ref .
The distribution support is significantly increased and shifted toward higher values.
This change can be explained by a simple physical reasoning: for the same value of the experimental heat flux measurement, the reference probe catalytic property has been estimated by means of the Bayesian approach to a value of 0.018 much lower than 0.1. Consequently, the contribution to the heat flux due to catalytic recombination is lower than in the γ ref = 0.1 case and the contribution from the convective heat flux therefore becomes larger and the flow enthalpy is estimated as well to a higher value than in the γ ref = 0.1 case.
Figure 10 shows the distribution of the material catalytic property for samples S1 and S8. For both samples, the material catalytic property uncertainty is much larger than in the previous case where quantity γ ref was assumed to be constant. In particular, the support of the distribution covers eight orders of magnitude and does not present a clear maximum for a precise a posteriori estimation. In the case of an unknown quantity γ ref , the experiments do not contain sufficient information. Indeed, one can notice that the posterior distribution is similar to the beta prior distribution, meaning that the likelihood is not informative in this case. Even though the support of the distribution is extremely large and seems uninformative, some remarks can be made about the CDF. In particular, even the slight uncertainty on the determination of the flow enthalpy is associated with a large uncertainty on the catalytic property of the material. This means that for the range of enthalpy between 4 MJ/kg and 8 MJ/kg (see Fig. 9) it is challenging to precisely estimate the sample catalytic property for those testing conditions. To illustrate the problem, Figure 12 shows the Bayesian reconstruction of the sample catalytic property for a case where the probe catalytic property γ ref is set to a constant value of 0.02. The sample material experiment considered here is S1. The bivariate distribution of the flow enthalpy h e and material catalytic property γ shows that, for a given flow enthalpy, the curve of enthalpy versus catalytic property has a very low gradient. Even though the probe catalytic property is known and constant, the uncertainty is comparable to the case where the probe catalytic property has to be computed. Therefore the increase of uncertainty in the sample catalytic property is due to the experimental conditions rather than to the precision of the measurements. This remark shows that while the specific experimental condition had been selected based on a relevant flight environment, it is not optimal for accurately estimating the TPS material catalytic property. A similar conclusion can be made for sample S8.
Conclusion
In this study, a rigorous method for estimating the catalytic property of a material and the associated uncertainties is presented. By comparing a Bayesian approach with an alternative uncertainty quantification method presented in [16], we showed that the two methods do not yield the same results.
By construction, the Bayesian approach is more adapted to cases where a limited number of experiments are available while the approach presented in [16] makes stronger assumptions on the measurement distribution that are only valid when a large number of experiments is available. Moreover, we found that the Bayesian approach is also more flexible as it can naturally include epistemic variables such as the unknown reference calorimeter catalytic property uncertainty.
The uncertainty analysis carried out in the case of the unknown reference calorimeter catalytic property showed that the experimental set up is not adequate to precisely estimate the catalytic property of a given material.
For the testing conditions presented in this work, the contribution to the measured heat flux of the molecular recombination is negligible. Therefore, the material catalytic property cannot be estimated precisely. Conversely, in this study, we were able to have some estimation of the reference calorimeter catalytic property. We have found that the assumption of constant value γ ref = 0.1 is wrong and introduces a bias in the estimation of the material catalytic property.
As future work, we propose to identify experimental conditions that are optimal for accurately estimating the TPS material catalytic properties.
where P(T_w,meas, ε_meas | γ, h_e, ω_nuis) is the likelihood of the measurements obtained during the catalytic property reconstruction. Note that quantity γ ref is solely involved in the first experiment, whereas quantity γ in the second one. However, both experiments are still connected through the free stream conditions (such as the enthalpy h e ) that are assumed to be constant for both probes that are injected sequentially in the plasma jet. The two likelihoods can still be computed in two different steps, as shown in the following sections.
Derivation of the first experiment likelihood
The enthalpy rebuilding step does not involve ε meas and T w,meas . The expression becomes:
P(T_cw,meas, q_cw,meas, P_d,meas, P_s,meas | γ_ref, h_e, ω_nuis) = P(T_cw,meas | γ_ref, h_e, ω_nuis) P(q_cw,meas | γ_ref, h_e, ω_nuis) P(P_d,meas | γ_ref, h_e, ω_nuis) P(P_s,meas | γ_ref, h_e, ω_nuis),    (6)
since the measurements are considered independent.
Each term from the right hand side of Eq. 6 has to be evaluated individually.
For instance, for the cold wall surface temperature, one has that:
P(T_cw,meas | γ_ref, h_e, ω_nuis) = P(T_cw,meas = T_cw + ζ | γ_ref, h_e, ω_nuis) = (1/sqrt(2π σ_Tcw,meas^2)) exp(-(T_cw,meas - T_cw)^2 / (2 σ_Tcw,meas^2)).    (7)
The last equality comes from the fact that ζ is a zero mean Gaussian random variable. Very similarly, one has

P(q_cw,meas | γ_ref, h_e, ω_nuis) = P(q_cw,meas = q_cw + ζ | γ_ref, h_e, ω_nuis) = (1/sqrt(2π σ_qcw,meas^2)) exp(-(q_cw,meas - q_cw)^2 / (2 σ_qcw,meas^2)),    (8)
Derivation of the second experiment likelihood
For the second set of experiments, the material sample is tested in order to measure its catalytic property γ. The catalytic property rebuilding step consists in computing P (T w,meas , ε meas |γ, h e , ω nuis ). In the rebuilding procedure, the heat flux radiated by the TPS is assumed to be equal to the heat flux q w from the flow to the TPS, which is computed by means of the BL solver.
Mathematically we have:
q_w(γ, h_e, ω_nuis) = σ ε T_w^4.    (13)
Following the same procedure as for the enthalpy rebuilding, the likelihood for the catalytic property rebuilding has the following form:
P(T_w,meas, ε_meas | γ, h_e, ω_nuis) = P(T_w,meas | γ, h_e, ω_nuis) P(ε_meas | γ, h_e, ω_nuis).    (14)
Similarly, it follows that

P(ε_meas | γ, h_e, ω_nuis) = (1/sqrt(2π σ_ε,meas^2)) exp(-(ε_meas - ε)^2 / (2 σ_ε,meas^2)).    (16)

Injecting Eqs. 12 and 17 in Eq. 5 provides an explicit way to numerically evaluate the likelihood. Unfortunately, even though there are analytical solutions for the likelihood and the prior distribution, in order to compute the posterior, it is necessary to compute the normalization factor in Eq. 4. In this study, this is computationally intractable. To bypass that issue, a classical Markov Chain Monte Carlo method is used to directly compute the posterior without having to evaluate the normalization factor. In fact, the Metropolis algorithm enables sampling from the posterior distribution [START_REF] Kaipio | Statistical and Computational Inverse Problems[END_REF][START_REF] Bremaud | Markov Chains: Gibbs Fields, Monte Carlo Simulation, and Queues[END_REF]49,50].
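A possible sketch of this evaluation is given below (Python). The boundary-layer solver is a black-box placeholder, the dictionary keys and standard deviations are illustrative, and the closure used for the emissivity of the second experiment (through Eq. 13) is an assumption of this sketch, not necessarily the exact implementation used in the study.

import numpy as np

SB = 5.670374419e-8  # Stefan-Boltzmann constant [W m^-2 K^-4]

def log_gauss(x_meas, x_model, sigma):
    # log of a Gaussian density centred on the model value (Eqs. 7-10, 15-16)
    return -0.5 * ((x_meas - x_model) / sigma) ** 2 - np.log(np.sqrt(2.0 * np.pi) * sigma)

def log_likelihood(state, meas, sig, bl_flux):
    # state: candidate values of gamma_ref, gamma, h_e and the nuisance parameters
    # meas / sig: measured values and their standard deviations
    # bl_flux(h_e, P_s, P_d, T_wall, gamma): black-box BL solver (placeholder name)
    # --- first experiment, Eq. 12 ---
    q_cw = bl_flux(state["h_e"], state["P_s"], state["P_d"], state["T_cw"], state["gamma_ref"])
    logL = (log_gauss(meas["T_cw"], state["T_cw"], sig["T_cw"])
            + log_gauss(meas["q_cw"], q_cw, sig["q_cw"])
            + log_gauss(meas["P_d"], state["P_d"], sig["P_d"])
            + log_gauss(meas["P_s"], state["P_s"], sig["P_s"]))
    # --- second experiment, Eq. 17; assumed closure: emissivity from the balance of Eq. 13 ---
    q_w = bl_flux(state["h_e"], state["P_s"], state["P_d"], state["T_w"], state["gamma"])
    eps_model = q_w / (SB * state["T_w"] ** 4)
    logL += (log_gauss(meas["T_w"], state["T_w"], sig["T_w"])
             + log_gauss(meas["eps"], eps_model, sig["eps"]))
    return logL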
Details of the implementation are given in appendix B.
In our case, for an efficient exploration of the distribution of γ, it is natural to choose the random walk as

h_e,n = h_e,n-1 + ξ_1,   ω_n = ω_n-1 + ξ_2,   log(γ_n) = log(γ_n-1) + ξ_3.    (22)

Therefore the random walk is not symmetrical for γ and the ratio becomes:

R = [P(T_cw,meas, q_cw,meas, T_w,meas, ε_meas, P_d,meas, P_s,meas | γ_n, h_e,n, ω_n) / P(T_cw,meas, q_cw,meas, T_w,meas, ε_meas, P_d,meas, P_s,meas | γ_n-1, h_e,n-1, ω_n-1)] × [(γ_n - γ_max)^2 (γ_n - γ_min)^2 γ_n-1] / [(γ_n-1 - γ_max)^2 (γ_n-1 - γ_min)^2 γ_n].    (23)
The rest of the algorithm of the implementation follows the MH algorithm described in [START_REF] Kaipio | Statistical and Computational Inverse Problems[END_REF].
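A compact sketch of this random walk is given below (Python). The log-likelihood function is assumed to be available (for instance by fixing the measurement arguments of the sketch accompanying Appendix A), the step sizes are illustrative, and the acceptance test reproduces the non-symmetric correction of Eq. 23.

import numpy as np

def metropolis_hastings(log_like, x0, n_steps, steps, g_min=1e-8, g_max=1.0, seed=0):
    # x0: dict with keys "h_e", "gamma" and the nuisance parameters; steps: proposal std-devs
    rng = np.random.default_rng(seed)
    x, ll = dict(x0), log_like(x0)
    chain = [dict(x0)]
    for _ in range(n_steps):
        prop = dict(x)
        prop["h_e"] = x["h_e"] + rng.normal(0.0, steps["h_e"])
        for k, s in steps["nuis"].items():
            prop[k] = x[k] + rng.normal(0.0, s)
        prop["gamma"] = np.exp(np.log(x["gamma"]) + rng.normal(0.0, steps["log_gamma"]))  # Eq. 22
        if not (g_min < prop["gamma"] < g_max):
            chain.append(dict(x))
            continue
        ll_prop = log_like(prop)
        # log of the correction factor of Eq. 23 for the non-symmetric walk in log(gamma)
        log_corr = (2 * np.log(abs(prop["gamma"] - g_max)) + 2 * np.log(abs(prop["gamma"] - g_min))
                    + np.log(x["gamma"])
                    - 2 * np.log(abs(x["gamma"] - g_max)) - 2 * np.log(abs(x["gamma"] - g_min))
                    - np.log(prop["gamma"]))
        if np.log(rng.uniform()) < ll_prop - ll + log_corr:
            x, ll = prop, ll_prop
        chain.append(dict(x))
    return chain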
Figure 1: ESA standard probes (5 cm diameter) used for the measurements performed in the Plasmatron facility: (left to right) stagnation-point probe holding a material sample, copper calorimeter, and water-cooled Pitot probe.
Figure 2: Flow chart of the deterministic estimation of the material catalytic property.
Figure 3: Flow chart of the BL solver with its main inputs.
Figure 4: Flowchart of the Bayesian-based estimation of material catalytic properties.
Figure 5: Bayesian reconstruction (γ ref = 0.1) of the material catalytic property on a semi-log scale (top) and flow enthalpy on a linear scale (bottom) for material sample S1.
Figure 11 compares the CDF of γ based on the sample S1 in the two cases where γ ref is either constant (equal to 0.1) or unknown. The constant γ ref case is actually a worst case scenario that overestimates the molecular recombination rate at the surface of the sample. The unknown γ ref case shows that the actual material sample catalytic property is certainly lower. Its distribution is hardly usable as it is, especially for the low values of γ since for those the posterior is very similar to the arbitrary prior chosen for this study. However, the CDF remains useful to estimate probabilities and confidence intervals. Now, we investigate reasons for the large increase in the γ uncertainty for an unknown γ ref quantity compared to the constant case. It is partially due to the addition of γ ref as NP but also to the lower estimation of γ ref =0.018 leading to an increase in the estimated flow enthalpy. The dependence of material catalytic property versus the flow enthalpy is weak. By inspecting the distributions of γ in Figure 10, one notices that these are flat in particular for sample S8. In other words, the sample catalytic property does not influence the measured heat flux for the tested conditions. It follows that scarce information from the measured heat flux can be used to estimate γ.
Figure 8: Distribution of the reference probe catalytic property γ ref for samples S1 and S8 on a semi-log scale, obtained by means of the Bayesian approach.
Figure 9: Distribution of the flow enthalpy h e for sample S1 obtained by means of the Bayesian approach (γ ref a priori unknown).
Figure 10: Distribution of the material catalytic property γ for samples S1 and S8 on a semi-log scale, obtained by means of the Bayesian approach (γ ref a priori unknown).
Figure 11: CDF of the material catalytic property γ for sample S1 on a semi-log scale, obtained by means of the Bayesian approach (γ ref = 0.1 and γ ref a priori unknown).
Figure 12: Bivariate distribution of the flow enthalpy h e and material catalytic property γ for sample S1 on a semi-log scale, obtained by means of the Bayesian approach (γ ref = 0.02).
Combining Eqs. 15 and 16, the catalytic property likelihood becomes:

P(T_w,meas, ε_meas | γ, h_e, ω_nuis) = (1/(2π σ_Tw,meas σ_ε,meas)) exp(-(T_w,meas - T_w)^2 / (2 σ_Tw,meas^2) - (ε_meas - ε)^2 / (2 σ_ε,meas^2)).    (17)
Table 1: Measured quantities used for the flow and sample material characterization
Symbol Variable Uncertainty
P d,meas Dynamic pressure 6%
P s,meas Static pressure 0.3%
q cw,meas Heat flux 10%
T cw,meas Probe temperature 10%
T w,meas TPS temperature 1%
ε Emissivity 5%
where P (m|γ ref , h e , γ, ω nuis ) is the likelihood, P (γ ref , h e , γ, ω nuis ) the prior, and P (m) a normalization factor such that the probabilities add up to one. The likelihood quantifies the amount of information carried by the measurements to the QoI and the NP. It is the probability of observing the measured quantities knowing the QoI and the NP. It measures the compatibility between the measurements and the value of unknown parameters, such as the catalytic property of the material sample. When the value of the catalytic property is compatible with the experimental results, the likelihood increases. The amount of this increase is directly related to the amount of information brought by the measurements. If the measurements are very informative, the increase (or decrease, if the catalytic property gets less and less compatible with the experiments) is very steep. The prior accounts for the knowledge of the unknown parameters before any experiment. In our case, as scarce prior information is available for ω nuis and h e , uniform priors are considered. As γ and γ ref are defined on the interval [0;1], a beta distribution with parameters α = 1 and β = 1 is chosen with a support of [10 -8 ; 1]. The next section is devoted to the determination of the likelihood.
Table 2: Deterministic conditions for material samples S1 and S8. Here, reported values of h e and γ are determined using the standard rebuilding procedure detailed in [4].
Table 3: Flow enthalpy h e [MJ kg -1 ] statistics obtained by means of the Bayesian approach (γ ref = 0.1)
Sample Mean SD MAP 95% CI UQ(95% CI) [%]
S1 6.0 0.43 6.06 [5.06;6.76] 28.3
S8 9.7 0.43 6.66 [8.80;10.51] 17.6
Table 4: Material catalytic property γ statistics obtained by means of the Bayesian approach (γ ref = 0.1)
Sample Mean SD MAP 95% CI UQ(95% CI) [%]
S1 7.4e-3 4.1e-03 6.2e-3 [2.4e-3;1.7e-2] 197.2
S8 3.7e-3 1.8e-03 2.7e-3 [1.4e-3;8.38e-3] 188.6
Table 5: Comparison between the statistics of catalytic property γ for material sample S1 obtained by means of the PC and Bayesian approaches (γ ref = 0.1).
Method Mean SD MAP 95% CI UQ(95% CI) [%]
Polynomial Chaos 0.00747 1.6e-03 0.007 [0.0045 ; 0.0094] 65.6
Bayesian 0.00747 4.1e-03 0.0059 [0.0024 ; 0.017] 195.4
Table 6: Reference probe catalytic property γ ref statistics obtained by means of the Bayesian approach.
P(P_d,meas | γ_ref, h_e, ω_nuis) = P(P_d,meas = P_d + ζ | γ_ref, h_e, ω_nuis) = (1/sqrt(2π σ_Pd,meas^2)) exp(-(P_d,meas - P_d)^2 / (2 σ_Pd,meas^2)),    (9)

P(P_s,meas | γ_ref, h_e, ω_nuis) = P(P_s,meas = P_s + ζ | γ_ref, h_e, ω_nuis) = (1/sqrt(2π σ_Ps,meas^2)) exp(-(P_s,meas - P_s)^2 / (2 σ_Ps,meas^2)).    (10)

Note that q cw can be computed using the BL solver as it is a function of γ ref , h e , and ω nuis . Finally Eq. 6 becomes:

P(T_cw,meas, q_cw,meas, P_d,meas, P_s,meas | γ_ref, h_e, ω_nuis) = (1/((2π)^2 σ_Ps,meas σ_Pd,meas σ_Tcw,meas σ_qcw,meas)) × exp(-(P_s,meas - P_s)^2/(2 σ_Ps,meas^2) - (P_d,meas - P_d)^2/(2 σ_Pd,meas^2) - (T_cw,meas - T_cw)^2/(2 σ_Tcw,meas^2) - (q_cw,meas - q_cw)^2/(2 σ_qcw,meas^2)).    (12)
and the following expression can be computed:

P(T_w,meas | γ, h_e, ω_nuis) = (1/sqrt(2π σ_Tw,meas^2)) exp(-(T_w,meas - T_w)^2 / (2 σ_Tw,meas^2)).    (15)
Acknowledgment
The authors would like to thank Andrien Todeschini for the fruitful discussions and remarks on the Bayesian approach.
Appendix A: Determination of the likelihood
The likelihood represents the link between the MQ, the QoI and the NP and is directly related to the experiments. The two experiments from the first and second steps are independent, so that the likelihood can be rewritten as:

P(m | γ_ref, h_e, γ, ω_nuis) = P(T_cw,meas, q_cw,meas, P_d,meas, P_s,meas | γ_ref, h_e, ω_nuis) × P(T_w,meas, ε_meas | γ, h_e, ω_nuis),    (5)

where the first factor is the likelihood of the measurements obtained during the enthalpy reconstruction. The Metropolis algorithm builds a random walk whose transition rule drives the Markov Chain toward the desired distribution [50]. Complete proof of the convergence of the algorithm can be found in [START_REF] Kaipio | Statistical and Computational Inverse Problems[END_REF][START_REF] Bremaud | Markov Chains: Gibbs Fields, Monte Carlo Simulation, and Queues[END_REF]. In this section, the basics of the algorithm and the specificity of the implementation are presented.
Consider a random walk Markov Chain X n on state space S. In the case studied, S contains all the admissible values of the NPs and QoIs.
Consider two states x and y ∈ S; the probability to go from x to y is P(x,y), referred to as the transition probability. Let π(x) be the distribution of X n . If Σ_{x∈S} π(x)P(x,y) = π(y), then the distribution π is said to be invariant or stationary. In the special case of random walks, the invariant distribution is unique and the random walk converges to π asymptotically (see [49] or [START_REF] Bremaud | Markov Chains: Gibbs Fields, Monte Carlo Simulation, and Queues[END_REF]).
In other words, no matter where the Markov Chain started, we have P(X_n = x) → π(x) as n → ∞.
The Metropolis algorithm uses the right transition probability P (x, y) such that π is the distribution of interest (the QoI distribution). It uses the following result
from Markov Chain theory cf. [START_REF] Kaipio | Statistical and Computational Inverse Problems[END_REF] or [49] for more further details) :
If π(x)P(x,y) = π(y)P(y,x), then π is the limiting distribution for X n . The relation π(x)P(x,y) = π(y)P(y,x) is called the detailed balance equation. In short, the algorithm models a random walk, but between each step it adapts the next random step so that the detailed balance equation is verified. Asymptotically, the MC behaves like the stationary distribution, and using a Monte Carlo method one can compute the distribution after convergence of the Markov Chain. In our case the state space has 6 or 7 dimensions and the Markov Chain we aim to build is X n = (γ smpl,n , h e,n , ω n ); since we are interested in the posterior distribution, we choose:
π(γ, h_e, ω) = P(T_cw,meas, q_cw,meas, T_w,meas, ε_meas, P_d,meas, P_s,meas | γ, ω, h_e) P(γ, ω, h_e) / P(T_cw,meas, q_cw,meas, T_w,meas, ε_meas, P_d,meas, P_s,meas),    (19)

which can be computed up to a normalization factor. The advantage of the Metropolis-Hastings (MH) algorithm is that it only uses the ratio of target densities weighted by the transition probabilities, where P(n-1 → n) is the probability to go from state n-1 to state n. If the random walk is symmetrical, P(n-1 → n) = P(n → n-1) and the ratio reduces to the ratio of target densities. Since the priors for h_e,n and ω_n are uniform and γ follows a beta distribution, the ratio simplifies into:

R = P(T_cw,meas, q_cw,meas, T_w,meas, ε_meas, P_d,meas, P_s,meas | γ_n, h_e,n, ω_n) / P(T_cw,meas, q_cw,meas, T_w,meas, ε_meas, P_d,meas, P_s,meas | γ_n-1, h_e,n-1, ω_n-1). |
01480523 | en | [
"phys.meca.mema"
] | 2024/03/05 22:32:10 | 2015 | https://hal.science/hal-01480523/file/delobelle2016.pdf | V Delobelle
email: vincent.delobelle@imag.fr
G Chagnon
D Favier
T Alonso
Study of electropulse heat treatment of cold worked NiTi wire: From uniform to localised tensile behaviour
Electropulse heat treatment is a technique developed to realise fast heat treatment of NiTi shape memory alloys. This study investigates the mechanical behaviour of cold worked NiTi wires heat treated with such a technique. It is demonstrated that millisecond electropulses make it possible to realise homogeneous heat treatments and to adapt the mechanical behaviour of NiTi wires by controlling the electric energy. The material can be made elastic with different elastic moduli, perfectly superelastic with different stress plateau levels, and superelastic with important local residual strain. Due to the short duration and high temperature of the heat treatment, this technique makes it possible to obtain mechanical properties that cannot be obtained with classical heat treatments of several minutes in conventional furnaces, such as a linear evolution of the final loading and a high tensile strength up to 1500 MPa for superelastic material, or an increase of the stress plateau level with cycling for superelastic material.
Introduction
For several years, NiTi shape memory alloys (SMA) have been the most widely used SMA in engineering fields as reported by [START_REF] Van Humbeeck | Non-medical applications of shape memory alloys[END_REF], and more especially in biomedical applications as reviewed by [START_REF] Duerig | An overview of nitinol medical applications[END_REF], due to their excellent mechanical properties, corrosion resistance and biocompatibility. To design their applications, engineers use basic industrial components such as NiTi wires, tubes or plates. These components are generally shaped with several successive hot and cold rolling operations. During these operations, the material is severely deformed, causing important grain size reduction and amorphisation of the material, and finally leading to a suppression of the phase transformation which confers the unique superelastic or ferroelastic properties to SMA, as shown in [START_REF] Jiang | Nanocrystallization and amorphization of NiTi shape memory alloy under severe plastic deformation based on local canning compression[END_REF]. To restore these properties, annealing and ageing treatments are classically used, as shown in [START_REF] Jiang | Crystallization of amorphous NiTi shape memory alloy fabricated by severe plastic deformation[END_REF] for example. These heat treatments are generally long, classically 60 min for annealing and 10-120 min for the ageing step, as proposed in [START_REF] Jiang | Effect of ageing treatment on the deformation behaviour of Ti-50.9 at.%[END_REF] for example. Such heat treatments are performed in a conventional furnace, so the entire sample is heat treated homogeneously.
Recent studies investigated heat treatments of NiTi wires based on Joule heating. The duration of such heat treatments varies widely. [START_REF] Zhu | The improved superelasticity of NiTi alloy via electropulsing treatment for minutes[END_REF] proposed heat treatments of several minutes duration, [START_REF] Wang | Effect of short time direct current heating on phase transformation and superelasticity of Ti-50.8 at.% Ni alloy[END_REF] and [START_REF] Malard | In situ investigation of the fast microstructure evolution during electropulse treatment of cold drawn NiTi wires[END_REF] of seconds duration, and [START_REF] Delville | Microstructure changes during non-conventional heat treatment of thin Ni-Ti wires by pulsed electric current studied by transmission electron microscopy[END_REF] studied millisecond heat treatments. The interest of such heat treatments is twofold: (i) reducing the time of heat treatment and (ii) performing local heat treatment. This last point is of key interest to realise architectured materials with multiple or graded mechanical properties, as obtained in [START_REF] Meng | Ti-50.8 at.% Ni wire with variable mechanical properties created by spatial electrical resistance over-ageing[END_REF] with a 300 s heat treatment. For millisecond heat treatments, [START_REF] Delville | Microstructure changes during non-conventional heat treatment of thin Ni-Ti wires by pulsed electric current studied by transmission electron microscopy[END_REF][START_REF] Delville | Transmission electron microscopy investigation of dislocation slip during superelastic cycling of Ni-Ti wires[END_REF] focused on the observation of the microstructures of the material, and little information is available about the transformation and mechanical properties of the created materials. Moreover, [START_REF] Delville | Microstructure changes during non-conventional heat treatment of thin Ni-Ti wires by pulsed electric current studied by transmission electron microscopy[END_REF] mentioned the presence of an important gradient during the cooling phase of their heat treatments, but the structure is supposed homogeneous. For longer heat treatments proposed in [START_REF] Meng | Ti-50.8 at.% Ni wire with variable mechanical properties created by spatial electrical resistance over-ageing[END_REF], when a thermal gradient is applied to the sample, the material has graded mechanical properties. Thus, for millisecond heat treatments, it is important to analyse the homogeneity of the created material from a mechanical point of view.
In this study, heat treatments are realised with millisecond electropulses. The transformation and mechanical behaviours of the heat treated part of the NiTi SMA wires are investigated. Transformation behaviour is studied by means of the differential scanning calorimetry technique. Local mechanical behaviour is studied by means of the digital image correlation (DIC) technique. Investigation of strain fields allows the impact of the heat treatment on the uniformity of the mechanical behaviour to be studied. In Section 2, experiments and methods are presented. In Section 3, results are described, and they are discussed in Section 4.
Experiments and methods
Electropulse heat treatment
The experiments were performed on cold worked Ti-50.8 at.% Ni SMA wire of diameter 0.5 mm, from the commercial provider Fort Wayne Metals (NiTi # 1). The as-received material was in 45% cold worked condition. Short time electrical pulses were generated with a direct current welder (commercial ref.: DC160, Sunstone Engineering) in wire of length L HT = 20 mm, as shown in Fig. 1a. The wire was held by two massive brass grips. The brass grips conduct electricity to the sample and act as a thermal mass. Thus, as shown in Fig. 1b, after heat treatment, the tips of the wire are unchanged and the centre is heat treated. In this study, six heat treatments, called A, B, C, D, E and F, were carried out.
During the heat treatment, the voltage U W at the sample terminals and the voltage U R = RI at the resistor terminals are measured, with R the resistor value and I the electrical current in the electrical loop, as shown in Fig. 1a. The power P = U W I dissipated in the wire is estimated. Evolutions of U W , I and P are presented in Fig. 2a,b,c, respectively. The dissipated power is almost constant during the heat treatment and equal to P = 3000 W. From these measurements, the heat treatment duration t and the final dissipated energy in the wire E = tP are estimated and summarised in Table 1 for all treatments. Note that the decrease of U W = R wire I (Fig. 2a), where R wire is the wire electrical resistance, is in good agreement with [START_REF] Delville | Microstructure changes during non-conventional heat treatment of thin Ni-Ti wires by pulsed electric current studied by transmission electron microscopy[END_REF][START_REF] Delville | Transmission electron microscopy investigation of dislocation slip during superelastic cycling of Ni-Ti wires[END_REF], which observed a significant decrease of the electrical resistance of the wire during pulse annealing. An infrared camera associated with a high magnification lens (commercial reference: Camera SC 7600, Flir) was used to record images of the wire during tests. Due to the symmetry of the experimental setup, measurements are presented only on one half of the wire. Then, due to the unknown variation of the wire emissivity during tests, only radiation values are presented. Fig. 3a shows the maximal wire radiation for tests A to E, obtained at the end of heating and measured along the main axis of the sample y. Fig. 3b shows the wire radiation measured along the main axis of the sample y during cooling of test D, from the maximal radiation obtained to room temperature. For all tests at the end of heating (Fig. 3a), close to the clamps, a strong thermal gradient is observed over 1 mm, due to the heat loss into the clamps. It is assumed that the bump observed between 1 and 5 mm is due to a reflection of the wire itself via the clamp to the camera, which increases the radiation. Then, between 1 and 10 mm, the radiation and thus the temperature are uniform for all tests. Note that for test E presented in Fig. 3a, the plateau observed between 2 and 7 mm is due to a saturation of the infra-red sensors.
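Returning to the electrical measurements, the dissipated power and pulse energy can be obtained from the two recorded voltages as sketched below (Python); the shunt resistance and sampling period are placeholder values.

import numpy as np

def pulse_power_energy(U_W, U_R, R=0.01, dt=1e-5):
    # U_W: voltage at the wire terminals [V]; U_R: voltage across the shunt resistor [V]
    # R: shunt resistance [ohm] (placeholder); dt: sampling period [s]
    I = np.asarray(U_R) / R            # current in the loop [A]
    P = np.asarray(U_W) * I            # instantaneous dissipated power [W]
    E = np.trapz(P, dx=dt)             # dissipated energy, E = integral of P dt [J]
    return I, P, E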
During cooling (Fig. 3b), the observed gradient is due to the presence of the clamps, which act as a thermal mass and increase the cooling rate close to them. From this observation, the heat treatment can be assumed to be heterogeneous during the cooling phase.
To estimate the sample temperature during the experiment, the wire is considered to be submitted to: (i) a step electrical power pulse P = 3000 W, of duration equal to the times indicated in Table 1 and shown as full lines in Fig. 4, and (ii) radiation and convection heat losses. During heating, the sample temperature is assumed uniform, as observed in Fig. 3a. The sample temperature was therefore estimated by solving the following heat diffusion equation:
$$m\,C(T)\,\frac{dT(t)}{dt} = P(t) - A\,h\,\bigl(T(t) - T_0\bigr) - A\,\varepsilon\sigma\,T^4(t) \qquad (1)$$
where m is the sample mass, C the heat capacity taken from [START_REF] Smith | The heat capacity of NiTi Alloys in the temperature range 120 to 800[END_REF], which depends on the sample temperature T at instant t, A the exchange surface of the wire, h the convection coefficient, εσ the emissivity and Stefan-Boltzmann constant, and T 0 the room temperature. From these estimations, the maximum temperatures T max obtained at the end of the electropulse are summarised in Table 1. The maximum temperature estimated for heat treatment F is T F = 1440 °C, which is higher than the melting temperature of almost equiatomic NiTi SMA, T NiTi melting = 1310 °C. Experimentally, the sample was indeed observed to melt for such a pulse, so the experimental observation and the theoretical estimation of the temperature are in good agreement. Due to the melting of the sample, experimental results cannot be presented for heat treatment F. The cooling time of the wire to room temperature is between 40 and 50 s, as shown in Fig. 4b. It remains important to keep in mind that these values are only estimations.
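A minimal numerical sketch of this estimation is given below: Eq. (1) is integrated with an explicit Euler scheme for a lumped wire temperature. The heat capacity, convection coefficient and emissivity values are illustrative assumptions (the heat capacity is taken constant here, whereas it is temperature dependent in the study).

```python
import numpy as np

SIGMA = 5.67e-8  # Stefan-Boltzmann constant (W m^-2 K^-4)

def wire_temperature(t_end, dt, P, t_pulse, d=0.5e-3, L=20e-3,
                     rho=6450.0, c=500.0, h=50.0, eps=0.3, T0=298.0):
    """Explicit Euler integration of Eq. (1) for the lumped wire temperature.

    P       : electrical power of the pulse (W)
    t_pulse : pulse duration (s)
    The wire is a cylinder of diameter d and length L; rho, c, h and eps
    (density, heat capacity, convection coefficient, emissivity) are
    illustrative assumptions.
    """
    m = rho * np.pi * d**2 / 4.0 * L   # wire mass
    A = np.pi * d * L                  # lateral exchange surface
    n = int(t_end / dt)
    T = np.empty(n + 1)
    T[0] = T0
    for k in range(n):
        Pk = P if k * dt < t_pulse else 0.0
        dT = (Pk - A * h * (T[k] - T0) - A * eps * SIGMA * T[k]**4) / (m * c)
        T[k + 1] = T[k] + dt * dT
    return T

# Heating phase of treatment D (3.99 ms pulse)
T = wire_temperature(t_end=0.01, dt=1e-6, P=3000.0, t_pulse=3.99e-3)
print(f"estimated peak temperature ~ {T.max() - 273.15:.0f} deg C")
```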
As shown in Fig. 1b, the resulting wire combines two materials with different properties. However, from Fig. 3, due to the important temperature gradient at the junction between the two materials, a gradient of properties can also be assumed to exist between them. Nevertheless, this study only focuses on the transformation and mechanical behaviours of the part of the wire that is heat treated homogeneously.
Transformation and mechanical behaviours study
The transformation behaviour of the materials was studied by means of DSC. The DSC experiments were performed with a TA Q200 DSC between 90 and -90 °C with a heating/cooling rate of 10 °C min⁻¹. The transformation behaviour of the wire was then analysed between 80 and -70 °C, where the cooling ramp is stabilised. DSC measurements were carried out for all the specimens.
All the tensile tests were performed using a Gabo Eplexor tensile machine. The tests were carried out at room temperature T 0 ≈ 25 °C, at a constant crosshead velocity U̇ = 0.1 mm min⁻¹, where U is the crosshead displacement. The initial gauge length of the wire was L 0 = 18 mm, so the applied global strain rate was U̇/L 0 = 9.3 × 10⁻⁵ s⁻¹. In this study the transition zone between the heat treated material and the as-received material is not studied (Fig. 1b). During the tensile test, the axial force F was recorded and the nominal stress σ = F/S 0 was calculated, where S 0 is the initial cross-section of the wire. The local strain field ε yy was estimated by means of the DIC method. The strain field is averaged along the main axis of the sample in order to obtain the global strain of the material, noted ε in the following.
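The quantities used below (nominal stress and DIC-averaged global strain) can be computed as in this short sketch; the wire diameter is the one of this study, the rest is illustrative.

```python
import numpy as np

def nominal_stress(force_N, d0=0.5e-3):
    """Nominal stress sigma = F / S0 (Pa) for a wire of initial diameter d0 (m)."""
    s0 = np.pi * d0**2 / 4.0
    return force_N / s0

def global_strain(eps_yy_field):
    """Average a DIC axial strain map (rows along the wire axis y) into one scalar."""
    return float(np.mean(eps_yy_field))

# Illustrative use with synthetic data
print(nominal_stress(150.0) / 1e6, "MPa")          # ~764 MPa
print(global_strain(np.full((100, 10), 0.04)))     # uniform 4 % axial strain
```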
Results
Transformation behaviour
In the following, the cold worked material is noted CW, and the CW material heat treated with pulses A, B, C, D and E is called material A, B, C, D and E, respectively. Fig. 5 shows the transformation behaviour of the CW, A, B, C, D and E materials. The transformation behaviour of CW is flat and does not exhibit any peak (Fig. 5a). This result is in good agreement with [START_REF] Kurita | Transformation behavior in rolled NiTi[END_REF]. The transformation behaviour of material A also remains flat (Fig. 5b). A small peak is observed for material B, as sketched in the close-up of Fig. 5c. A difference of 10 °C between the heating and cooling peak temperatures and a small heat of transformation of 2.8 J g⁻¹ are the signature of the austenite-R phase transformation (noted A-R). The transformation peak temperatures at cooling and heating are about T A-R = 10 °C and T R-A = 20 °C, respectively. For material C (Fig. 5d), an A-R transformation is observed with a heat of transformation estimated to approximately 2.5 J g⁻¹ at cooling and heating. The transformation peak temperatures at cooling and heating are lower than for treatment B, with T A-R = -18 °C and T R-A = -7 °C, respectively.
For material D (Fig. 5e), the austenite-martensite transformation (noted A-M) is observed, with peak transformation temperatures equal to T A-M = -44 °C and T M-A = -14 °C. The direct and reverse heats of transformation are estimated to be 11.0 J g⁻¹ and 13.5 J g⁻¹, respectively. An almost identical transformation behaviour is observed for material E (Fig. 5f), with an A-M transformation whose heats of transformation are equal to the ones found for material D. However, the peak transformation temperatures are higher than those of material D and are equal to T A-M = -38 °C and T M-A = -12 °C.
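The peak temperatures and heats of transformation quoted above follow from a standard analysis of the DSC traces; a generic sketch is given below (it assumes the heat flow is sampled against temperature and uses a straight baseline between the chosen integration limits).

```python
import numpy as np

def dsc_peak(temperature, heat_flow, t_lo, t_hi, rate_K_per_s):
    """Peak temperature (deg C) and heat of transformation (J/g) from a DSC trace.

    temperature  : sampled temperature (deg C)
    heat_flow    : heat flow (W/g)
    t_lo, t_hi   : integration limits bracketing the peak
    rate_K_per_s : heating/cooling rate, converts W/g into J/g
    """
    mask = (temperature >= t_lo) & (temperature <= t_hi)
    T, q = temperature[mask], heat_flow[mask]
    baseline = np.linspace(q[0], q[-1], q.size)   # straight baseline
    signal = q - baseline
    t_peak = T[np.argmax(np.abs(signal))]
    dH = abs(np.trapz(signal, T)) / rate_K_per_s  # peak area divided by the rate
    return t_peak, dH
```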
Mechanical behaviour
The mechanical tests presented in Fig. 6 are composed of a loading-unloading cycle followed by a final loading to failure. Fig. 6a shows the global mechanical behaviour of the CW material and material A. For material A, the strain profiles estimated along the main axis of the wire are plotted in Fig. 6b for the instants defined in Fig. 6a; they are similar to the ones obtained for the CW material. Fig. 6c shows the global mechanical behaviour of materials B and C. For material C, the strain profiles are plotted in Fig. 6d for the instants defined in Fig. 6c; they are similar to the ones obtained for material B. Finally, Fig. 6e shows the global mechanical behaviour of materials D and E. For material D, the strain profiles are plotted in Fig. 6f for the instants defined in Fig. 6e; they are similar to the ones obtained for material E. Fig. 7a shows (i) the initial elastic modulus during the first loading, noted E ini , and (ii) the elastic modulus after the localisation plateau, noted E end , observed on the stress-strain curves. The slopes used to estimate the elastic moduli are sketched in dashed lines in Fig. 6. Fig. 7b shows the plateau stresses at loading and unloading and the hysteresis height, noted σ high , σ low and Δσ, respectively. The CW material exhibits a purely elastic, brittle behaviour (Fig. 6a): the stress-strain curve is linear and no plasticity occurs before failure. Its elastic modulus is estimated to 53 GPa and its ultimate tensile strength is about 1500 MPa. A short duration heat treatment softens the material: material A remains purely elastic and brittle with a lower elastic modulus estimated to 43 GPa, while the ultimate tensile strength remains high for a metallic material, about 1500 MPa. For these two materials, the strain field is uniform along the wire axis (Fig. 6b).
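The moduli E ini and E end can be obtained by a least-squares fit of the stress-strain curve over a small strain window, as in the sketch below (the window bounds are illustrative, not the ones used in the study).

```python
import numpy as np

def fitted_modulus(strain, stress_MPa, eps_min, eps_max):
    """Least-squares slope of the stress-strain curve on [eps_min, eps_max], in GPa."""
    mask = (strain >= eps_min) & (strain <= eps_max)
    slope, _ = np.polyfit(strain[mask], stress_MPa[mask], 1)
    return slope / 1000.0

# E_ini from the first loading slope, E_end after the plateau (illustrative windows)
# E_ini = fitted_modulus(eps, sig, 0.001, 0.005)
# E_end = fitted_modulus(eps, sig, 0.09, 0.11)
```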
When the heat treatment energy is increased, materials B and C exhibit the classical behaviour of superelastic NiTi SMA without residual strain (Fig. 6c and d). Stress plateaus due to the direct and reverse phase transformations are observed at loading and unloading, with an important hysteresis of Δσ = 200 MPa for the two heat treatments. The stress plateaus are lower for material C because the maximum temperature reached by material C is higher than that of material B. The plateau stresses decrease between the first and second cycles, and the difference is estimated to -30 MPa and -10 MPa for materials B and C, respectively. For these two materials, classical localisation phenomena are observed during the stress plateau. During the ultimate loading, when the material is stretched to the maximum stress reached during the first cycle, the stress drops down to the value of the first cycle plateau. The elastic moduli of the first slopes increase with the heat treatment duration and are estimated to 50 GPa and 68 GPa for materials B and C, respectively. The elastic moduli after the stress plateau are estimated to be about 21 GPa and 25 GPa for materials B and C, respectively. The ultimate tensile strength is about 1300 MPa for both materials. The stress-strain evolution after the plateau is linear and the material exhibits a brittle behaviour.
For materials D and E, a superelastic behaviour with a residual strain is observed during the first loading-unloading cycle (Fig. 6e and f). Their loading plateau stresses are equal to 430 MPa and 450 MPa, respectively. The plateau stress of material D is lower than that of material C; it is also lower than that of material E, although the maximum temperature reached by material E is higher than that of material D. The elastic moduli of the first slopes are estimated to 63 GPa for the two materials. During the ultimate loading, a non-classical behaviour is observed. When the plateau stress of the first loading is reached, the stress keeps increasing with strain and reaches a plateau at a higher value. Then, when the material is stretched to the maximum strain reached during the first cycle, the stress drops down to the stress value of the first cycle plateau and the global behaviour becomes identical to the one observed during the first loading. Finally, after the plateau, the elastic moduli are estimated to 20 GPa and 17 GPa for materials D and E (Fig. 7a), respectively, and strain hardening is observed. A maximal strain of 40% was reached for both materials without failure. During the first loading, the local strain field exhibits a localisation phenomenon, as for materials B and C. However, during the second loading, the local strain field is non-uniform with two localisation fronts. This specific local behaviour is not presented here but will be analysed in a forthcoming study. During the strain hardening phase, localisation is observed.
Finally, from Fig. 6c and e, the transformation strains of materials B, C, D and E are estimated to be 5%, 7%, 9% and 10%, respectively. The transformation strain therefore increases with the pulse duration.
Discussion
On the homogeneity of the heat treatment
In [START_REF] Meng | Ti-50.8 at.% Ni wire with variable mechanical properties created by spatial electrical resistance over-ageing[END_REF], a thermal gradient was observed during a 300 s heat treatment, leading to a mechanically graded material. Considering the thermal gradient observed during the cooling phase of the present heat treatment (see Fig. 3), a similar graded behaviour could be expected. However, although the local strain behaviours are very different from one material to another, the heat treatment is uniform along the main axis of each sample. Indeed, material A deforms homogeneously, as clearly shown in Fig. 6b. For materials B and C, the strain fields are uniform during the elastic phases (Fig. 6d, instants a, b, e, f). For all superelastic materials, during the plateau, i.e. when localisation is observed, the strain outside the localisation front is uniform at a high or low strain value (Fig. 6d and f). The specific local behaviour of materials D and E during the localisation zone and the strain hardening, presented in the previous section, is due to the partial pre-straining of the sample during the first loading.
Thus, even if the material temperature during cooling is heterogeneous, the materials are heat treated homogeneously. For such heat treatments, the governing parameter is the maximal temperature reached during heating. Such a conclusion is no longer valid when the duration of the heat treatment is increased, as observed in [START_REF] Meng | Ti-50.8 at.% Ni wire with variable mechanical properties created by spatial electrical resistance over-ageing[END_REF].
Mechanical properties and microstructure
Delville et al. (2010, 2011) studied the microstructure evolution of identical CW wires during millisecond electropulse heat treatment. From a comparison of Fig. 6 of this study with Fig. 3 of [START_REF] Delville | Microstructure changes during non-conventional heat treatment of thin Ni-Ti wires by pulsed electric current studied by transmission electron microscopy[END_REF], it is considered that materials A, B, C, D and E can be compared with the materials called 6 ms, 10 ms, 12 ms, 16 ms and 18 ms in their paper, respectively. In the following discussion, the microstructure is considered to be the one summarised in Table 2, taken from [START_REF] Delville | Microstructure changes during non-conventional heat treatment of thin Ni-Ti wires by pulsed electric current studied by transmission electron microscopy[END_REF][START_REF] Delville | Transmission electron microscopy investigation of dislocation slip during superelastic cycling of Ni-Ti wires[END_REF].
About materials elasticity
To begin, the observations that the CW and A materials, i.e. the amorphous and polygonised materials, have a high elastic potential and exhibit a brittle behaviour (Fig. 6a and b) are in good agreement with [START_REF] Sergueeva | Structure and properties of amorphous and nanocrystalline NiTi prepared by severe plastic deformation and annealing[END_REF] and [START_REF] Delville | Microstructure changes during non-conventional heat treatment of thin Ni-Ti wires by pulsed electric current studied by transmission electron microscopy[END_REF]. [START_REF] Sergueeva | Structure and properties of amorphous and nanocrystalline NiTi prepared by severe plastic deformation and annealing[END_REF] showed that amorphous NiTi exhibits the classical properties of metallic glasses. In the review of [START_REF] Schuh | Mechanical behavior of amorphous alloys[END_REF] on the mechanical properties of metallic glasses, it is mentioned that the elastic modulus of an amorphous material is about 30% lower than that of the crystallised material. From Fig. 7a, the difference of elastic modulus between the CW and D materials, i.e. the partly amorphous and the crystallised material, is about 20%. The order of magnitude is in good agreement with [START_REF] Schuh | Mechanical behavior of amorphous alloys[END_REF], and the difference is assumed to be due to the presence of an important amount of austenite phase in the CW material. [START_REF] Schuh | Mechanical behavior of amorphous alloys[END_REF] also indicated that the room temperature elastic modulus decreases with increasing annealing temperature, which is in good agreement with the elastic moduli measured for the CW and A materials (Fig. 7a).
The values of the elastic modulus E ini given in Fig. 7a for materials B, C, D and E are generally associated with the austenite elastic modulus. The results are in good agreement with the dispersed values found in the literature, between 40 and 90 GPa, as mentioned in [START_REF] Liu | Apparent modulus of elasticity of near-equiatomic NiTi[END_REF]. However, for materials B and C, considering that the material is composed of polygonised material and nanocrystals, these values cannot be associated with the Young modulus of crystallised austenite, because the nanograins and the polygonised part can be assumed to deform in different manners. The values of the elastic modulus E end are generally associated with the martensite elastic modulus. The results are in good agreement with the dispersed values found in the literature, between 20 and 50 GPa, as mentioned in [START_REF] Liu | Apparent modulus of elasticity of near-equiatomic NiTi[END_REF]. The specific mechanical behaviour obtained after the superelastic plateau is discussed in Section 4.2.3.
About superelasticity of materials
It has been known for several decades that the superelasticity of NiTi SMA is due to the phase transformation from austenite to martensite in the material grains. In the present case, it is assumed that identical deformation mechanisms occur in the nanograins of materials B and C and in the micrograins of materials D and E. Other deformation mechanisms, such as nanograin rotation and boundary sliding, can occur in materials A, B and C, as mentioned in [START_REF] Sergueeva | Structure and properties of amorphous and nanocrystalline NiTi prepared by severe plastic deformation and annealing[END_REF], but cannot be proven or discussed from the present results. Materials D and E are recrystallised and composed of micrograins, and it is assumed that the classical deformation mechanisms operate. The irreversible strain observed in the stress-strain curve is due to the sliding of dislocations observed in [START_REF] Delville | Transmission electron microscopy investigation of dislocation slip during superelastic cycling of Ni-Ti wires[END_REF]. The localisation phenomenon (Fig. 6d and f) is also classically associated with the superelastic behaviour obtained in tension.
The results on the superelasticity obtained via electropulse heat treatment are in good agreement with the literature on superelasticity obtained via classical heat treatments, as in [START_REF] Jiang | Effect of ageing treatment on the deformation behaviour of Ti-50.9 at.%[END_REF]: increasing the heat treatment temperature decreases the plateau stress and increases the stress hysteresis (Fig. 7b). For materials B and C, the plateau stress decreases with cycling, as observed in [START_REF] Liu | Effect of pseudoelastic cycling on the Clausius-Clapeyron relation for stress induced martensitic transformation in NiTi[END_REF], and the yield drop phenomenon is observed, as in [START_REF] Eucken | The effects of pseudoelastic prestraining on the tensile behaviour and two-way shape memory effect in aged NiTi[END_REF], for example. However, longer electropulse heat treatments, as for materials D and E, create a unique mechanical behaviour: the plateau stress increases with cycling. This phenomenon is characteristic of electropulse heat treatments and was also observed in [START_REF] Delville | Microstructure changes during non-conventional heat treatment of thin Ni-Ti wires by pulsed electric current studied by transmission electron microscopy[END_REF]. The yield drop phenomenon is also observed, but from a higher stress value. For these materials, a very specific localisation phenomenon can be observed, which will be developed in another study.
Brittle behaviour vs. strain hardening
In Fig. 6c, for materials B and C, the ultimate loading is linear and the material is brittle with an important tensile strength. This behaviour is similar to the one observed for the CW and A materials (Fig. 6a). With classical heat treatments, strain hardening is classically observed in cold worked material directly aged, as in [START_REF] Saikrishna | On stability of NiTi wire during thermo-mechanical cycling[END_REF], or annealed and aged, as in [START_REF] Jiang | Effect of ageing treatment on the deformation behaviour of Ti-50.9 at.%[END_REF]. Materials having a linear and potentially elastic behaviour after the transformation plateau, with an important tensile strength, have already been observed in [START_REF] Pilch | Final thermomechanical treatment of thin NiTi filaments for textile applications by electric current[END_REF] with Joule heating, but have never been obtained with conventional heat treatments to the authors' knowledge. This property is of great interest for SMA engineering.
From [START_REF] Delville | Microstructure changes during non-conventional heat treatment of thin Ni-Ti wires by pulsed electric current studied by transmission electron microscopy[END_REF][START_REF] Delville | Transmission electron microscopy investigation of dislocation slip during superelastic cycling of Ni-Ti wires[END_REF], it is known that materials D and E are recrystallised into micrograins and that important dislocation activity occurs. This observation is in good agreement with the strain hardening observed in Fig. 6e, because dislocations create large irreversible deformations together with strain hardening of the material. In profile h of Fig. 6f, localisation is observed; it is due to the pre-straining of the localised area during the first loading.
Conclusion
This study investigated the mechanical behaviour of cold worked NiTi wires heat treated with electropulses. It can be concluded that millisecond electropulses are an efficient method to realise homogeneous heat treatments and to adapt the functional properties of cold worked NiTi SMAs. For a wire of diameter 0.5 mm and length 20 mm, and a heat pulse of power P = 3000 W:
• A short heat pulse (1.5 ms) softens the initial cold worked material but preserves its elastic and brittle behaviour with a high ultimate tensile strength.
• A medium heat pulse (2-3 ms) restores the classical superelastic behaviour observed in NiTi SMAs. After the plateau, the material deforms linearly with a brittle behaviour and an ultimate tensile strength as high as that of the initial material: such a property cannot be obtained with classical heat treatments.
• A long heat pulse (4-5 ms) leads to a superelastic behaviour with an important residual deformation. With cycling, the stress of the successive plateaus increases with loading: such a property has never been observed with classical heat treatments. The residual deformation is due to the sliding of dislocation defects in the material. After the plateau, strain hardening is observed.
This study brings important information about the mechanical behaviour of cold worked NiTi SMA heat treated with millisecond electropulses.
Fig. 1. (a) Experimental set-up presentation. (b) Resulting material with local heat treatment.
Fig. 2. Evolution of (a) voltage U W , (b) electrical current I and (c) power P during the electropulse heat treatment.
Fig. 3. Radiation measured with an infrared camera. (a) Radiation at the end of heating for tests A to E. (b) Radiation during the cooling phase of test D.
Fig. 4. (a) Power step P (full lines) used to estimate the temperature T (dotted lines) for heat treatments A to F during the heating phase. (b) Temperature T evolution during the cooling phase.
Fig. 5. DSC of (a) material CW, (b) material A, (c) material B, (d) material C, (e) material D, and (f) material E.
Fig. 6. Stress-strain curves of (a) the initial material and material A, (c) materials B and C and (e) materials D and E. Local strain fields for (b) material A, (d) material C and (f) material D.
Fig. 7. (a) Elastic moduli E ini and E end as a function of pulse duration. (b) Plateau stresses at loading (σ high ) and unloading (σ low ) and hysteresis height (Δσ) as a function of pulse duration.
Table 1. Heat treatment parameters: duration, energy and estimated maximum temperature.
Material t (ms) E (J) Tmax (°C)
CW 0 0 -
A 1.44 4.3 380
B 2.27 6.8 570
C 3.08 11.2 760
D 3.99 12.1 970
E 5.04 14.1 1200
F 6.10 18.3 1440
Table 2. Comparison with the literature data from [START_REF] Delville | Microstructure changes during non-conventional heat treatment of thin Ni-Ti wires by pulsed electric current studied by transmission electron microscopy[END_REF][START_REF] Delville | Transmission electron microscopy investigation of dislocation slip during superelastic cycling of Ni-Ti wires[END_REF].
Material | Sample name in Delville et al. (2010) | Microstructure | Grain size (nm)
CW | 0 ms | Mainly austenite and amorphous | -
A | 6 ms | Polygonised and amorphous | 5-10
B | 10 ms | Polygonised nanocrystalline | 20-40
C | 12 ms | Polygonised nanocrystalline | 25-50
D | 16 ms | Recrystallised | 200-700
E | 18 ms | Recrystallised | 800-1200
Acknowledgement
The authors wish to acknowledge the financial support of the ANR research programme "Guidage d'une Aiguille Médicale Instrumentée -Déformable" (ANR-12-TECS-0019). |
01480542 | en | [
"phys.meca.mema"
] | 2024/03/05 22:32:10 | 2017 | https://hal.science/hal-01480542/file/rebouah2016.pdf | Marie Rebouah
Grégory Chagnon
Gregory Chagnon
email: gregory.chagnon@imag.fr
Patrick Heuillet
G Chagnon
Anisotropic viscoelastic models in large deformation for architectured membranes
Keywords: Viscoelasticity, Sphere unit model, Anisotropy, Stress-softening
Anisotropic viscoelastic models in large deformation for architectured membranes
Introduction
Depending on the process, rubber-like materials can be considered as initially isotropic or anisotropic. Even if isotropic behaviour is the most widespread, calender processing [START_REF] Itskov | A class of orthotropic and transversely isotropic hyperelastic constitutive models based on a polyconvex strain energy function[END_REF][START_REF] Diani | Directional model isotropic and anisotropic hyperelastic rubber-like materials[END_REF][START_REF] Caro-Bretelle | Constitutive modeling of a SEBS cast-calender: large strain, compressibility and anisotropic damage induced by the process[END_REF] generates anisotropy by creating a privileged direction in the material. The differences of mechanical properties affect the stiffness, stress softening or viscoelastic properties. Even if rubber-like materials are isotropic, some induced anisotropy can be generated by the Mullins effect [START_REF] Diani | A review on the Mullins effect[END_REF]Rebouah and Chagnon 2014b) for most materials.
Numerous studies have dealt with the modelling of rubber-like materials. In large deformations, the viscoelasticity is tackled either by the Boltzmann superposition principle [START_REF] Green | The mechanics of non-linear materials with memory: part I[END_REF]Coleman and Noll 1963) which leads to the K-BKZ models or by internal variable models [START_REF] Green | A new approach for the theory of relaxing polymeric media[END_REF]. Many constitutive equations were proposed to describe different viscoelastic behaviours.
The modelling of calendered rubber sheet necessitates taking into account the initial anisotropy. Many constitutive equations are developed to describe anisotropic hyperelastic behaviour [START_REF] Chagnon | Hyperelastic energy densities for soft biological tissues: a review[END_REF], these equations were often initially developed to describe soft biological tissues. Different equations were also developed to describe viscoelasticity for orthotropic materials or materials having one of two reinforced directions [START_REF] Holzapfel | A structural model for the viscoelastic behavior of arterial walls: continuum formulation and finite element analysis[END_REF][START_REF] Bischoff | A rheological network model for the continuum anisotropic and viscoelastic behavior of soft tissue[END_REF][START_REF] Haslach | Nonlinear viscoelastic, thermodynamically consistent, models for biological soft tissue[END_REF][START_REF] Quaglini | A discrete-time approach to the formulation of constitutive models for viscoelastic soft tissues[END_REF][START_REF] Vassoler | A variational framework for fiber-reinforced viscoelastic soft tissues[END_REF]. These constitutive equations rely on the representation of the material by a matrix with different reinforced directions, inducing the anisotropy in the material. The viscoelastic constitutive equations are introduced in the fibre modelling and can also be introduced in the matrix modelling. In a different way, [START_REF] Flynn | An anisotropic discrete fibre model based on a generalised strain invariant with application to soft biological tissues[END_REF] developed a discrete fibre model with dissipation for biological tissues. The model relies on a structural icosahedral model with six discrete fibres.
These models do not correspond to calendered materials. A calendered material can be represented as a macromolecular network in which the repartition of macromolecules was not equiprobable in space. A way to treat this problem is to describe the material by a uniaxial constitutive equation integrated in space considering different orientations. Different formalisms were proposed [START_REF] Verron | Questioning numerical integration methods for microsphere (and microplane) constitutive equations[END_REF]. For soft tissues and rubber-like materials, the formalism proposed by [START_REF] Bažant | Efficient numerical integration on the surface of a sphere[END_REF] is the most widespread. It was used to describe the stress softening or the viscoelasticity. [START_REF] Miehe | A micro-macro approach to rubber-like materials. Part ii: the micro-sphere model of finite rubber viscoelasticity[END_REF] were the first to use this formalism to describe the viscoelasticity but in an isotropic framework. The same formalism was later used by [START_REF] Diani | Observation and modeling of the anisotropic visco-hyperelastic behavior of a rubberlike material[END_REF], introducing induced anisotropy by the stress softening, but the viscoelasticity remained isotropic. Moreover, [START_REF] Rey | Hyperelasticity with rate-independent microsphere hysteresis model for rubberlike materials[END_REF] used this formalism to describe hysteresis loops but also in an isotropic approach. In fact, the discretisation in privileged directions is often used to induce anisotropy for hyperelasticity and stress softening, but not for viscoelasticity.
In this paper, we propose to characterise the mechanical behaviour of initially anisotropic rubber-like materials. The experimental data will be used to adapt the formalism proposed by Rebouah and Chagnon (2014a) to describe the anisotropic viscoelasticity and stress softening of the material.
Two rubber-like materials that possess an initial anisotropic behaviour are studied: first, a room temperature vulcanized silicone rubber which was made anisotropic by a stretching during reticulation, and second, a thermoplastic elastomer made anisotropic by the industrial process. In Part 2, the cyclic mechanical behaviour of the materials is described by means of a tensile test performed on specimens oriented in different directions in the plane of the membrane. In Part 3, the mono-dimensional constitutive equation is first described, and next the three-dimensional formulation is proposed. In Part 4, a discussion about the abilities of the constitutive equations to describe the two materials is proposed. Finally, a conclusion closes the paper.
Experiments
Materials and specimen geometry
In this paper, two materials that possess an anisotropic mechanical behaviour are used, a silicone rubber (RTV3428) and a thermoplastic elastomer (TPE). They are detailed in the next paragraphs.
RTV3428a
An RTV3428 silicone rubber, previously studied in other works [START_REF] Rey | Influence of the temperature on the mechanical behavior of two silicone rubbers above crystallization temperature[END_REF], is used here. This material is initially isotropic and only exhibits anisotropy induced by the Mullins effect [START_REF] Machado | Induced anisotropy by the Mullins effect in filled silicone rubber[END_REF]. It is proposed to modify its microstructure by changing the elaboration process in order to generate an initially anisotropic behaviour. The process is illustrated in Fig. 1. To obtain this anisotropic plate, the two components are first mixed. The mixture is then put into a vacuum pump and finally injected into a mould. The mould is put into the oven at 70 °C for 22 minutes. The crosslinking of the obtained membrane is not fully completed when it is removed from the mould. The membrane is next installed in a clamping system made of two jaws that applies a constant displacement between the two extremities of the membrane (as represented in the fifth step of Fig. 1). The global deformation of the membrane in the system is about 60%. The system is put into the oven at a temperature of 150 °C for two hours. The material obtained in this way is named RTV3428a. This process generates a preferential orientation of the macromolecular chains in the material.
TPE
Different processes can be used to manufacture TPE [START_REF] Caro-Bretelle | Constitutive modeling of a SEBS cast-calender: large strain, compressibility and anisotropic damage induced by the process[END_REF]. In this study, an industrial material provided by Laboratoire de recherches et de contrôle du caoutchouc et des plastiques (LRCCP) is used. This material is obtained by means of an injection process which gives it a predominant direction and makes it initially anisotropic.
Specimen geometry
Each material is initially elaborated as a membrane, as illustrated in Fig. 2. The RTV3428a membrane dimensions after the second vulcanization are 150 mm in length, 70 mm in width and 1.6 mm in thickness; for the TPE, the membrane dimensions are 150 mm in length, 100 mm in width and 2 mm in thickness. For each material, tensile test samples (20 mm long and 2 mm wide) are cut in the middle of the membrane to avoid edge effects. These specimens are cut at different angles, 0°, 45° and 90°, with respect to the preferential direction of the material, 0° matching the preferential direction imposed to the macromolecular chains by both processes, as illustrated in Fig. 2.
Loading conditions
Mechanical tests were realised with a Gabo Eplexor 1500 N mechanical test machine with a load cell of 50 N. Samples were submitted to a cyclic loading, two cycles up to a stretch λ = 1.5, two cycles up to λ = 2, and finally two cycles up to λ = 2.5. The tests were carried out at a strain rate of 0.016 s -1 . The loading history is detailed at the top of the Fig. 3 and Fig. 4.
Results
Figure 3 presents the results of the test for the three samples cut from the RTV3428a. The three samples do not have the same mechanical behaviour, and several phenomena are observed. First, to evaluate the amount of anisotropy, an anisotropic factor ξ is defined as the ratio of the stresses at a stretch λ = 2.5 for different orientations, ξ = σ 0° (λ = 2.5)/σ 90° (λ = 2.5). It permits a qualitative comparison of the anisotropy of the two materials. For the RTV3428a, an anisotropic factor of approximately 1.3 is obtained between the sample cut at 0° and the one cut at 90°. This emphasises that the second vulcanization undergone by the membrane modifies the microstructure of this filled silicone and is efficient to generate anisotropy in the silicone rubber. The sample cut at 0° (which is the same direction as the loading direction imposed during the second vulcanization) exhibits the most important stress hardening compared to 45° and 90°, the latter being the softest specimen. This test also highlights that the material has few viscous effects and little permanent set, even at a slow strain rate. Stress softening remains the major non-linear effect associated with the mechanical behaviour.
Figure 4 presents the results of the same test for the three samples of the TPE material. As before, the anisotropic factor ξ can be evaluated and is approximately equal to 1.5. As for the RTV3428a, stress softening, hysteretic behaviour and permanent set are also observed, and the stress softening and permanent set are very large. The observed phenomena are the same for the two materials. They both have an anisotropic mechanical behaviour with viscoelasticity, stress softening and permanent set for any loading direction. All the phenomena are even more pronounced for the TPE than for the RTV3428a. The stress softening of the material between the first and the second cycles seems to be the same for any direction, which suggests that the stress softening is not affected by the initial anisotropy. On the contrary, the stiffness and the viscoelastic properties of the two materials depend on the direction.
Constitutive equations
This section details the constitutive equation developed to describe the mechanical behaviour observed experimentally for both materials. This constitutive equation must take into account the anisotropy, the stress softening and the viscoelastic effects (including the permanent set) undergone by the materials. Note that the material is considered as a homogeneous structure and not as a matrix with reinforced fibres, as is classically done to represent anisotropic materials (see, for example, [START_REF] Peña | Mechanical characterization of the softening behavior of human vaginal tissue[END_REF][START_REF] Natali | Biomechanical behaviour of oesophageal tissues: material and structural configuration, experimental data and constitutive analysis[END_REF]).
The constitutive equation relies on the representation of space by the integration of a uniaxial formulation over the [START_REF] Bažant | Efficient numerical integration on the surface of a sphere[END_REF] directions:
$$\boldsymbol{\sigma} = \sum_{i=1}^{42} \omega^{(i)}\, \sigma^{(i)}\, \mathbf{a}_n^{(i)} \otimes \mathbf{a}_n^{(i)} \qquad (1)$$
where a n (i) are the normalized deformed directions, ω (i) the weight of each direction and σ (i) the stress in the considered direction. The directions are represented in Fig. 6 (representation of a microsphere with a spatial repartition of 2 × 21 directions proposed by [START_REF] Bažant | Efficient numerical integration on the surface of a sphere[END_REF]). The idea of the modelling is to write the constitutive equation in tension-compression for each direction, i.e. σ (i) .
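The directional averaging of Eq. (1) can be implemented as in the sketch below. The 2 × 21 directions and weights of the Bažant and Oh scheme are assumed to be supplied as arrays (they are tabulated in the original reference and not reproduced here), and sigma_dir stands for any scalar uniaxial law evaluated per direction.

```python
import numpy as np

def microsphere_stress(F, directions, weights, sigma_dir):
    """Assemble the Cauchy stress of Eq. (1) from per-direction scalar stresses.

    F          : (3, 3) deformation gradient
    directions : (n, 3) unit vectors N^(i) of the integration scheme (assumed given)
    weights    : (n,) integration weights w^(i)
    sigma_dir  : callable, stretch in a direction -> scalar stress in that direction
    """
    sigma = np.zeros((3, 3))
    for N, w in zip(directions, weights):
        n_vec = F @ N                      # deformed (non-unit) direction n^(i)
        lam = np.linalg.norm(n_vec)        # stretch in that direction
        a_n = n_vec / lam                  # normalised deformed direction a_n^(i)
        sigma += w * sigma_dir(lam) * np.outer(a_n, a_n)
    return sigma
```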
A classical rheological scheme is used to model the viscoelasticity; it is illustrated in Fig. 5. The deformation gradient F is decomposed into an elastic part F e and an inelastic part F i between the initial configuration (C 0 ) and the instantaneous configuration (C t ). The formalism proposed by [START_REF] Huber | Finite deformation viscoelasticity laws[END_REF] is used. The application of the second principle of thermodynamics leads to the expression of the dissipation D int , namely
$$D_{int} = \boldsymbol{\sigma} : \mathbf{D} - \dot{W} \geq 0 \qquad (2)$$
where W is the strain energy (it can be decomposed into two parts, W 1 for the elastic branch of the model and W 2 for the inelastic branch), σ is the Cauchy stress tensor and D is the rate of deformation tensor. [START_REF] Huber | Finite deformation viscoelasticity laws[END_REF] proved that a sufficient condition to verify is
$$2\,\mathbf{F}_e \frac{\partial W}{\partial \mathbf{B}_e} \mathbf{F}_e : \mathbf{D}_i \geq 0. \qquad (3)$$
The indices e and i refer to the elastic and the inelastic parts of the model. Following [START_REF] Huber | Finite deformation viscoelasticity laws[END_REF], the simplest sufficient condition satisfying this inequality is chosen as the evolution equation:
$$\dot{\mathbf{B}}_e = \mathbf{L}\mathbf{B}_e + \mathbf{B}_e\mathbf{L}^T - \frac{2}{\eta_0}\left[\mathbf{B}_e \frac{\partial W_2}{\partial \mathbf{B}_e} \mathbf{B}_e\right]^D \qquad (4)$$
where B is the left Cauchy-Green tensor, L is the velocity gradient equal to ḞF⁻¹ and the superscript D stands for the deviatoric part of the tensor. Any constitutive equation could be used; here, each spring of the model is described by a neo-Hookean model [START_REF] Treloar | The elasticity of a network of long chain molecules (I and II)[END_REF], with C 1 (i) and C 2 (i) the material parameters of each branch. The index i indicates that this model is used in every direction of the microsphere decomposition. To be used, the governing equation must be written in uniaxial extension as
$$\dot{\lambda}_e^{(i)} = \lambda_e^{(i)}\,\frac{\dot{\lambda}^{(i)}}{\lambda^{(i)}} - \frac{4\,C_2^{(i)}}{3\,\eta_0^{(i)}}\left(\lambda_e^{(i)\,3} - 1\right) \qquad (5)$$
where λ (i) and λ e (i) are the stretch and the elastic part of the stretch in the considered direction, and η 0 (i) is a material parameter associated with each direction. To take into account the stress softening phenomenon, a non-linear spring is added to the previous rheological scheme, as illustrated in Fig. 6. [START_REF] Rebouah | Anisotropic Mullins softening of a deformed silicone holey plate[END_REF] proposed using an evolution function F (i) that records the loading history of each direction; this function alters the stiffness of the non-linear spring:
$$F^{(i)} = 1 - \eta_m^{(i)}\,\frac{I_{1\,max} - I_1}{I_{1\,max} - 3}\;\frac{I^{(i)}_{4\,max} - I^{(i)}_4}{I^{(i)}_{4\,max} - 1}\left(\frac{I^{(i)}_{4\,max}}{I_{4\,max}}\right)^4 \qquad (6)$$
where η m (i) is a material parameter, I 1 is the first invariant of the Cauchy-Green tensor and I 4 (i) is the fourth invariant associated with direction i. The term I 4 max (i) is the maximum value of I 4 (i) reached up to the current time in each direction, and I 4 max is the maximum value of I 4 over all directions.
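A direct transcription of Eq. (6) is sketched below; the running maxima are assumed to be updated outside this function as the loading history proceeds, and small offsets are added only to avoid divisions by zero in the undeformed state.

```python
def evolution_function(eta_m, I1, I1_max, I4, I4_max, I4_max_global, tiny=1e-12):
    """Stress-softening evolution function F^(i) of Eq. (6) for one direction.

    I1_max        : maximum of I1 reached so far
    I4_max        : maximum of I4^(i) reached so far in this direction
    I4_max_global : maximum of I4 over all directions
    """
    term1 = (I1_max - I1) / max(I1_max - 3.0, tiny)
    term2 = (I4_max - I4) / max(I4_max - 1.0, tiny)
    term3 = (I4_max / max(I4_max_global, tiny)) ** 4
    return 1.0 - eta_m * term1 * term2 * term3
```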
Summing the viscoelastic part and the stress softening part defines the stress in direction i. Considering that each direction endures only tension-compression, an incompressibility hypothesis is used to write the stress in each direction as
$$\sigma^{(i)} = 2C_1^{(i)}\left(\lambda^{(i)\,2} - \frac{1}{\lambda^{(i)}}\right) + 2C_2^{(i)}\left(\lambda_e^{(i)\,2} - \frac{1}{\lambda_e^{(i)}}\right) + 2\lambda^{(i)\,2}\, F^{(i)}\,\frac{\partial W^{(i)}_{cf}}{\partial I^{(i)}_4} \qquad (7)$$
where W (i) cf is the strain energy of the material oriented in direction i for the stress softening part. Due to the differences observed experimentally between both materials, the strain energy W (i) cf used to describe the RTV3428a and the TPE is different:
• The isotropic RTV3428 silicone rubber was already studied by [START_REF] Rebouah | Anisotropic Mullins softening of a deformed silicone holey plate[END_REF], and the same strain energy function was used in that study, namely
$$W^{(i)}_{cf} = K^{(i)}\left(I^{(i)}_4 - 1\right)^2 \qquad (8)$$
where K (i) is a material parameter.
• The TPE has a very smooth behaviour with no strain hardening, as a consequence a square root function is chosen (Rebouah and Chagnon 2014b):
$$W^{(i)}_{cf} = \frac{K^{(i)}}{2}\int \frac{\sqrt{I^{(i)}_4} - 1}{\sqrt{I^{(i)}_4}}\, dI^{(i)}_4 \qquad (9)$$
where K (i) is a material parameter.
The global dissipation of the model is obtained by summing the dissipation of each direction. As each dissipation is positive by construction of the evolution equation, the global dissipation is also positive.
To conclude, five parameters are used in each direction to handle the anisotropy of the material: C 1 (i) for the hyperelastic part, C 2 (i) and η 0 (i) for the viscoelastic part, and K (i) and η m (i) for the stress softening part. The model requires the integration of a differential equation (Eq. (5)); an implicit algorithm is used to determine the solution in each direction.
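A minimal sketch of such an update is given below: a backward-Euler step of Eq. (5) is solved for the elastic stretch with Newton iterations, and the result feeds the viscoelastic part of Eq. (7). The names are illustrative and the softening term F^(i) ∂W_cf/∂I_4 is omitted for brevity.

```python
def update_lambda_e(lam_e_old, lam_old, lam_new, dt, C2, eta0,
                    tol=1e-10, max_iter=50):
    """Backward-Euler step of Eq. (5) solved with Newton iterations."""
    lam_dot = (lam_new - lam_old) / dt
    x = lam_e_old  # initial guess: previous elastic stretch
    for _ in range(max_iter):
        res = x - lam_e_old - dt * (x * lam_dot / lam_new
                                    - 4.0 * C2 / (3.0 * eta0) * (x**3 - 1.0))
        dres = 1.0 - dt * (lam_dot / lam_new - 4.0 * C2 / eta0 * x**2)
        x_new = x - res / dres
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

def direction_stress(lam, lam_e, C1, C2):
    """Viscoelastic part of Eq. (7) in one direction (softening term omitted)."""
    return 2.0 * C1 * (lam**2 - 1.0 / lam) + 2.0 * C2 * (lam_e**2 - 1.0 / lam_e)
```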
Comparison with experimental data
Parameter identification strategy
The model has five material parameters in each direction. It is important not to fit globally all the material parameters, but to impose some restrictions due to experimental observations:
• First, the stiffness of the material is different depending on the direction (0°, 45° and 90°). This stiffness is principally controlled by the parameters governing the hyperelastic part of the constitutive equation, C 1 (i) and K (i) ; these parameters must therefore differ between directions.
• Second, no significant difference was observed in the stress softening for the three orientations, i.e. in the difference between the first and second loadings. Thus, the material parameter describing the stress softening, η m (i) , is chosen to be independent of the direction.
• Third, as exposed by [START_REF] Petiteau | Large strain rate-dependent response of elastomers at different strain rates: convolution integral vs. internal variable formulations[END_REF] and Rebouah and Chagnon (2014b), the hysteresis loop size depends both on the elastic parameter C 2 and on the time parameter η 0 . As the hysteresis loops are very similar, but at different stress levels, the governing parameter is C 2 (i) . As a consequence, it is chosen to impose the same η 0 in all the directions.
It was experimentally observed that the stress is maximal in the privileged direction of the fabrication process. Varying the mechanical parameters over the spatial repartition makes it possible to increase or decrease the initial anisotropy of the material. According to the spatial repartition of [START_REF] Bažant | Efficient numerical integration on the surface of a sphere[END_REF] illustrated in Fig. 6, the directions of the microsphere closest to the preferential direction induced by the process (i.e., direction 1 in Fig. 6) receive the largest material parameter values, and the orthogonal directions 2 and 3 of the microsphere the smallest. The parameter values are the same for directions that are symmetric with respect to the privileged direction of the sphere unit (direction 1). The values for the intermediate directions of the microsphere are obtained according to their relative position with respect to direction 1, the material parameters being assumed to vary linearly between the two extrema. All these choices avoid non-physical responses of the model for other loading conditions.
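The linear variation of a material parameter between the privileged direction and the orthogonal directions can be assigned as sketched below (the preferred direction and the parameter bounds are illustrative).

```python
import numpy as np

def direction_parameters(directions, p_max, p_min,
                         preferred=np.array([1.0, 0.0, 0.0])):
    """Assign a parameter value to each microsphere direction.

    The value varies linearly with the angle to the privileged direction:
    p_max along it, p_min perpendicular to it.
    """
    values = []
    for N in directions:
        c = abs(np.dot(N, preferred)) / (np.linalg.norm(N) * np.linalg.norm(preferred))
        angle = np.arccos(np.clip(c, 0.0, 1.0))        # in [0, pi/2]
        frac = 1.0 - angle / (np.pi / 2.0)             # 1 along, 0 perpendicular
        values.append(p_min + frac * (p_max - p_min))
    return np.array(values)
```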
RTV3428a
According to the restrictions detailed in the previous paragraph, the material parameters are fitted for the three samples with different orientations with respect to the principal direction. The material parameters whose values are independent of the direction are η m = 4 and η 0 = 200 MPa s. The values of the other parameters are listed in Table 1.
Figure 7 presents a comparison between the experimental and predicted responses for the three RTV3428a samples with different orientations. The stiffness of the material is well described for every direction, and so are the viscoelastic effects. A difference can be observed on the second loading curves at the maximum stretch λ = 2.5; this error corresponds to the stress softening part of the model and could be reduced by modifying the form of Eq. (6) so as to impose a larger loss of stiffness. Nevertheless, the model is able to globally describe the anisotropic mechanical behaviour of the RTV3428a material.
TPE
As before, the material parameters of the TPE are fitted on the tensile tests of the three specimens with different orientations. The material parameters whose values are independent of the direction are η 0 = 500 MPa s and η m = 8. The values of the other parameters are listed in Table 1 and are obtained with the same strategy as the one described for the RTV3428a. Figure 8 presents a comparison of the model with the experimental data. The variations of the stiffness of the material with the direction are well described. Nevertheless, the model is not able to describe very large hysteresis loops and very important stress softening: clear differences are obtained for the model according to the direction, but the size of the hysteresis is underestimated. This is due to the form of the constitutive equations that were chosen. Only neo-Hookean constitutive equations were used in the viscoelastic part, and it is well known that this model cannot describe large variations. Moreover, the governing equation of the viscoelasticity (i.e., Eq. (5)) is a very simple equation that cannot capture large non-linearities of the mechanical behaviour.
Nevertheless, even if the proposed model is a first approach written with simple constitutive elements, all the phenomena are qualitatively described. The limits correspond to the limits of each part of the constitutive equation.
Discussion
The model succeeds in depicting the anisotropic viscoelastic behaviour with stress softening of architectured membranes. The use of different material parameters in the different directions leads to an important number of parameters, and a global fit could lead to parameter values with no physical meaning. By analysing the experimental data, a strategy was therefore proposed to fit the material parameters. As exposed in the Introduction, the constitutive equations developed in the literature to model anisotropic viscoelasticity often rely on an isotropic matrix reinforced by viscoelastic fibres. These models were principally elaborated for soft biological tissues and could be applied, in a phenomenological approach, to the two materials tested in this paper: it would consist in considering the rubber as a soft matrix having the mechanical properties of the softest direction, reinforced by fibres in the predominant direction of the material. Even if this approach were to succeed in describing the material, it would not represent the actual macromolecular network of the material.
Any viscoelastic constitutive equation from the literature can be written in tension-compression and introduced into the present model by replacing Eq. (5). This would permit the representation of non-linear viscoelasticity.
Conclusion
This paper developed a study of anisotropic materials. Two micro-mechanical architectured materials were obtained in two different ways. A silicone rubber was turned anisotropic by applying a deformation state during the second reticulation, and an initially anisotropic injected TPE was used. In both cases an orientation of the macromolecular chains was imposed to the material to create a microstructural architecture. The anisotropic membranes were tested in their plane, highlighting their anisotropic mechanical behaviour.
A three-dimensional equation was obtained by considering the integration in space of a uniaxial equation with the 42 directions of [START_REF] Bažant | Efficient numerical integration on the surface of a sphere[END_REF]. This equation describes hyperelasticity, viscoelasticity and stress softening. In this first approach, we chose to use simple constitutive equations to prove the feasibility of the method. As a consequence, a neo-Hookean model was chosen to describe the elasticity and a simple linear equation for viscoelasticity. The anisotropy was obtained by considering that the material parameters could be different in all directions. Nevertheless, we chose to limit the variations of parameter values depending on the directions. This permitted us to limit the number of independent mechanical parameters. It was even possible to use different parameter values in every direction.
It appears that the model succeeded in qualitatively describing all the phenomena. When the phenomena (stress softening, hysteresis) were not too large, the model succeeded also in quantitatively describing the tests. Some errors between experimental data and the model appeared when the phenomena became too large. This was due to the use of simple elements for hyperelasticity and viscoelasticity. Indeed, the most robust constitutive equation for hyperelasticity, stress softening and viscoelasticity should be used when the phenomena are very large. For instance, the neo-Hookean hyperelastic equation should be replaced by a model accounting for stress hardening, or the viscoelastic equation should be replaced by a non-linear one as in, e.g. [START_REF] Bergstrom | Constitutive modeling of the large strain time dependant behavior of elastomers[END_REF].
Fig. 1 Elaboration of the microstructural architectured membrane of filled silicone
Fig. 2 Representation of the membrane and the samples oriented in different directions for the RTV3428a and the TPE materials
Fig. 3 Cyclic tensile test performed on the RTV3428a architectured membrane
Fig. 4 Cyclic tensile test performed on the TPE architectured membrane
Fig. 5 Definition of the configurations and of the rheological modelling
Fig. 7 Comparison with the experimental data of the cyclic tensile test on RTV3428a for the samples with different orientations
Fig. 8 Comparison with the experimental data of the cyclic tensile test on TPE for the three samples with different orientations
Table 1 Material parameters for the RTV3428a and the TPE
Acknowledgements This work is supported by the French National Research Agency Program ANR-12-BS09-0008-01 SAMBA (Silicone Architectured Membranes for Biomedical Applications). |
01480510 | en | [
"phys.meca.mema"
] | 2024/03/05 22:32:10 | 2015 | https://hal.science/hal-01480510/file/chagnon2014.pdf | G Chagnon
M Rebouah
D Favier
Hyperelastic Energy Densities for Soft Biological Tissues: A Review
Many soft tissues are naturally made of a matrix and fibres that present privileged directions. They are known to support large reversible deformations. The mechanical behaviour of these tissues exhibits different phenomena such as hysteresis, stress softening or relaxation. A hyperelastic constitutive equation is typically the basis of the model that describes the behaviour of the material. The hyperelastic constitutive equation can be isotropic or anisotropic, and it is generally expressed by means of strain components or strain invariants. This paper proposes a review of these constitutive equations.
Introduction
Soft tissues are composed of several layers, each with a different composition. Four typical tissues are usually considered: epithelial tissue, connective tissue, muscular tissue and neuronal tissue [156]. For mechanical studies, the connective tissues are often considered as the most important [START_REF] Epstein | Isolation and characterisation of cnbr peptides of human [α 1 (iii)] 3 collagen and tissue distribution [α 1 (i)] 2 α 2 and [α 1 (iii)[END_REF]156,177]. They are composed of cells and of extracellular matrix. The extracellular matrix is composed of ground substance and of three types of fibres: collagen, reticular and elastic fibres. Collagen fibres are often considered as more important than the others, particularly because of their large size, and they account for most of the mechanical behaviour. The reticular fibres, which are thin collagen fibres with different chemical properties, create ramifications with the collagen fibres. Finally, the elastic fibres, mainly composed of elastin, present a purely elastic behaviour and are also linked to the collagen fibres. The elastic properties of soft tissues are mainly due to these fibres. Soft tissues are often able to support large deformations.
The first mechanical study of soft tissues started in 1687 with Bernoulli experiments on gut. The first constitutive equation was proposed in 1690 by Leibniz, before Bernoulli and Riccati proposed other equations [START_REF] Bell | The Experimental Foundations of Solid Mechanics, Mechanics of Solids[END_REF]. Since these works, many experimental studies have been performed. As an illustration, some experimental data can be found, not exhaustively, in the literature for arteries [209,262], aortic valve tissues [162], veins [START_REF] Alastrué | Experimental study and constitutive modelling of the passive mechanical properties of the ovine infrarenal vena cava tissue[END_REF], vaginal tissues [196], anterior malleolar ligament [START_REF] Cheng | Mechanical properties of anterior malleolar ligament from experimental measurement and material modeling analysis[END_REF], muscles [START_REF] Gras | Hyper-elastic properties of the human sternocleidomastoideus muscle in tension[END_REF], human trachea [254], cornea [235], skin [START_REF] Groves | An anisotropic, hyperelastic model for skin: experimental measurements, finite element modelling and identification of parameters for human and murine skin[END_REF] or gallbladder walls [143]... Even if many soft tissues are studied, the largest database in the literature concerns arteries.
Soft tissues present a complex behaviour with many non-linear phenomena, as explained by different authors [118,124], such as time dependency [START_REF] Bischoff | A rheological network model for the continuum anisotropic and viscoelastic behavior of soft tissue[END_REF]202] or the stress softening phenomenon [154,198], i.e., their mechanical behaviour mainly depends on time and on the maximum deformation previously endured. Most soft tissues dissipate energy during loading; nevertheless, the elastic behaviour generally dominates their response and represents the asymptotic behaviour when the dissipation tends to zero. In this way, as a first approach, most soft tissues are described in the context of hyperelasticity [START_REF] Harb | A new parameter identification method of soft biological tissue combining genetic algorithm with analytical optimization[END_REF]149,246]. To take into account the fibrous structure of the soft tissues, an anisotropic formalism is introduced. The diversity of the mechanical characteristics of soft tissues has motivated a great number of constitutive formulations for the different tissue types. For example, the reader is referred to [224], wherein the author treats the history of biaxial techniques for soft planar tissues and the associated constitutive equations.
Anisotropic hyperelasticity can be modelled by using the components of the strain tensor or by using strain invariants. The two formulations lead to different families of anisotropic strain energy densities. Soft tissues are numerous and present different tissue architectures, which lead to various degrees of anisotropy, i.e., differences of mechanical behaviour between directions, and to different maximum admissible deformations. In this way, many constitutive equations have been proposed to describe the tissues.
The aim of this paper is to propose a review of most of the hyperelastic strain energy densities commonly used to describe soft tissues. In a first part, the different formalisms that can be used are recalled. In a second part, the isotropic modelling is described. In a third part, the anisotropic modelling is presented. The deformation tensor component approach based on Fung's formulation is briefly presented, and invariant approaches are detailed. In a fourth part, the statistical approaches, considering the evolution of the collagen network, are described. Last, a discussion about the models closes the paper.
Mechanical Formulation
Description of the Deformation
Deformations of a material are classically characterised by the right and left Cauchy-Green tensors, defined as $\mathbf{C} = \mathbf{F}^T\mathbf{F}$ and $\mathbf{B} = \mathbf{F}\mathbf{F}^T$, where $\mathbf{F}$ is the deformation gradient. In the polar decomposition of $\mathbf{F}$, the principal components of the right or left stretch tensors are called the stretches and are denoted $\lambda_i$ with $i = 1..3$. The Green-Lagrange tensor is defined as $\mathbf{E} = (\mathbf{C} - \mathbf{I})/2$, where $\mathbf{I}$ is the identity tensor, and its components are denoted $E_{ij}$ with $i, j = 1...3$. Nevertheless, some prefer to use the logarithmic strains $e_i = \ln(\lambda_i)$ instead of a strain tensor, generalised strains $e_i = \frac{1}{n}(\lambda_i^n - 1)$ [185], or other measures such as, for example,
$$e_i = \lambda_i \lambda_j^2 \lambda_k^2$$
with $j \neq i$ and $k \neq i$ [START_REF] Gilchrist | Generalisations of the strain-energy function of linear elasticity to model biological soft tissue[END_REF]; all these measures are written in their principal basis.
Instead of using directly the strain tensors, strain invariants are often preferred as they have the same values whatever the basis is. From an isotropic point of view, three principal strain invariants I 1 , I 2 and I 3 are defined by
$$I_1 = \operatorname{tr}(\mathbf{C}), \qquad (1)$$
$$I_2 = \tfrac{1}{2}\big[\operatorname{tr}(\mathbf{C})^2 - \operatorname{tr}\big(\mathbf{C}^2\big)\big], \qquad (2)$$
$$I_3 = \det(\mathbf{C}), \qquad (3)$$
where "tr" is the trace operator, and "det" the determinant operator. Characteristic directions corresponding to the fibre orientations must be defined. For one material, one or many material directions (the number of directions is noted q) can be defined according to the architecture of the considered tissue. In the undeformed state, the ith direction is noted N i in the initial configuration. The norm of the vector N i is unit. Due to material deformation, the fibre orientations are evolving in the deformed state. The current orientation is defined by n (i) = FN (i) .
(4)
Note that $\mathbf{n}^{(i)}$ is not a unit vector. Two orientation tensors can be defined, one in the undeformed and one in the deformed state:
$$\mathbf{A}^{(i)} = \mathbf{N}^{(i)} \otimes \mathbf{N}^{(i)}, \qquad \mathbf{a}^{(i)} = \mathbf{n}^{(i)} \otimes \mathbf{n}^{(i)}. \qquad (5)$$
The introduction of such directions leads to the definition of new invariants related to each direction. The invariant formulation of anisotropic constitutive equations is based on the concept of structural tensors [START_REF] Boehler | Applications of Tensor Functions in Solid Mechanics[END_REF][START_REF] Boehler | A simple derivation of representations for non-polynomial constitutive equations in some cases of anisotropy[END_REF]238,239,241,243]. The invariants $I_4$ and $I_5$ are defined for one direction $i$ as
$$I_4^{(i)} = \operatorname{tr}\big(\mathbf{C}\mathbf{A}^{(i)}\big) = \mathbf{N}^{(i)} \cdot \mathbf{C}\mathbf{N}^{(i)}, \qquad I_5^{(i)} = \operatorname{tr}\big(\mathbf{C}^2\mathbf{A}^{(i)}\big) = \mathbf{N}^{(i)} \cdot \mathbf{C}^2\mathbf{N}^{(i)}. \qquad (6)$$
In practice, some prefer to use the cofactor tensor of $\mathbf{F}$, i.e., $\operatorname{Cof}(\mathbf{F})$ [120], and to define $J_5^{(i)} = \operatorname{tr}\big(\operatorname{Cof}(\mathbf{C})\mathbf{A}^{(i)}\big)$, in order to easily ensure the polyconvexity of the strain energy (see Sect. 2.3). In the literature, in the case of two fibre directions (1) and (2), the notation $I_4$ and $I_6$ is often used for soft tissues [108] instead of $I_4^{(1)}$ and $I_4^{(2)}$ (or $I_5$ and $I_7$ instead of $I_5^{(1)}$ and $I_5^{(2)}$). In this paper, it is preferred to keep only the $I_4^{(i)}$ notation and to generalise it to $q$ directions. These invariants depend on one direction only, but the interaction between directions $i$ and $j$ can be taken into account by introducing two coupling invariants:
$$I_8^{(i,j)} = \big(\mathbf{N}^{(i)} \cdot \mathbf{N}^{(j)}\big)\, \mathbf{N}^{(i)} \cdot \mathbf{C}\mathbf{N}^{(j)}, \qquad I_9^{(i,j)} = \big(\mathbf{N}^{(i)} \cdot \mathbf{N}^{(j)}\big)^2. \qquad (7)$$
$I_9^{(i,j)}$ is constant during deformation; it is therefore not suited to describing the deformation of the material, but it represents the value of $I_8^{(i,j)}$ at zero deformation. Let us denote by $I_k$ the invariant family $(I_1, I_2, I_3, I_4^{(i)}, I_5^{(i)}, I_8^{(i,j)}, I_9^{(i,j)})$ and by $J_k$ the family $(I_1, I_2, I_3, I_4^{(i)}, J_5^{(i)})$. (Details about structural tensors, and about a method that links a fictitious isotropic configuration to an anisotropic, undeformed reference configuration via an appropriate linear tangent map, are given in [163].) When only one direction is considered, the superscript $(i)$ is omitted in the remainder of this paper.
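As a practical illustration of these definitions, the short Python sketch below (using NumPy; the function name, parameter values and fibre angles are ours and purely illustrative) computes $\mathbf{C}$, the isotropic invariants $I_1$-$I_3$ and the anisotropic invariants $I_4^{(i)}$, $I_5^{(i)}$, $I_8^{(1,2)}$ and $I_9^{(1,2)}$ for a given deformation gradient and two unit fibre directions.

```python
import numpy as np

def invariants(F, N1, N2):
    """Isotropic and direction-dependent strain invariants (Eqs. (1)-(7))."""
    C = F.T @ F                                   # right Cauchy-Green tensor
    I1 = np.trace(C)
    I2 = 0.5 * (np.trace(C) ** 2 - np.trace(C @ C))
    I3 = np.linalg.det(C)
    I4 = (N1 @ C @ N1, N2 @ C @ N2)               # squared stretch along each fibre
    I5 = (N1 @ C @ C @ N1, N2 @ C @ C @ N2)
    I8 = (N1 @ N2) * (N1 @ C @ N2)                # coupling between the two directions
    I9 = (N1 @ N2) ** 2                           # constant during deformation
    return I1, I2, I3, I4, I5, I8, I9

# incompressible uniaxial stretch of 1.2, fibres at +/- 30 degrees in the (e1, e2) plane
lam = 1.2
F = np.diag([lam, lam ** -0.5, lam ** -0.5])
a = np.deg2rad(30.0)
N1 = np.array([np.cos(a),  np.sin(a), 0.0])
N2 = np.array([np.cos(a), -np.sin(a), 0.0])
print(invariants(F, N1, N2))
```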
The $I_k$ invariants are the most widely used in the literature, although other invariants have been proposed. Some authors [START_REF] Ciarletta | Stiffening by fiber reinforcement in soft materials: a hyperelastic theory at large strains and its application[END_REF] propose to use invariants that are zero at zero deformation. In this way, they introduce the tensor $\mathbf{G} = \mathbf{H}^T\mathbf{H}$, with $\mathbf{H} = \frac{1}{2}(\mathbf{F} - \mathbf{F}^T)$. This motivates the definition of a new class of invariants $\tilde{I}_k$:
$$\tilde{I}_1 = \operatorname{tr}(\mathbf{G}), \quad \tilde{I}_2 = \operatorname{tr}\big(\mathbf{G}^2\big), \quad \tilde{I}_4 = \operatorname{tr}\big(\mathbf{G}\mathbf{A}^{(i)}\big), \quad \tilde{I}_5 = \operatorname{tr}\big(\mathbf{G}^2\mathbf{A}^{(i)}\big). \qquad (8)$$
Ericksen and Rivlin [START_REF] Ericksen | Large elastic deformations of homogeneous anisotropic materials[END_REF] proposed another formulation, adapted to transversely isotropic materials only, characterised by a vector N (i.e., only one direction i). This direction often corresponds to a fibre reinforced direction. Their work was further used by different authors [START_REF] Agoras | A general hyperelastic model for incompressible fiber-reinforced elastomers[END_REF][START_REF] Criscione | Physically based strain invariants set for materials exhibiting transversely isotropic behavior[END_REF][START_REF] Debotton | Neo-Hookean fiber-reinforced composites in finite elasticity[END_REF][START_REF] Debotton | Mechanics of composites with two families of finitely extensible fibers undergoing large deformations[END_REF] who proposed to define other invariants (λ p , λ n , γ n , γ p , ψ γ ), denoted as Cr k . They can be expressed as a function of the I k invariants:
$$\lambda_p^2 = \sqrt{\frac{I_3}{I_4}}, \quad \lambda_n^2 = I_4, \quad \gamma_n^2 = \frac{I_5}{I_4} - I_4, \quad \gamma_p^2 = I_1 - \frac{I_5}{I_4} - 2\sqrt{\frac{I_3}{I_4}}, \quad \tan 2\psi_\gamma = \frac{2\lambda_p H \pm \gamma_p\sqrt{\gamma_n^4\gamma_p^2\big(4\lambda_p^2 + \gamma_p^2\big) - H^2}}{\lambda_p H \mp 2\lambda_p\sqrt{\gamma_n^4\gamma_p^2\big(4\lambda_p^2 + \gamma_p^2\big) - H^2}}, \qquad (9)$$
with $H = \big(2\lambda_n^2 + \gamma_n^2\big)\big(2\lambda_p^2 + \gamma_p^2\big) + 2\lambda_p^4 - 2I_2$.
The advantage is that these invariants have a physical meaning: $\lambda_n$ is the measure of stretch along $\mathbf{N}$, $\lambda_p$ is a measure of the in-plane transverse dilatation, $\gamma_n$ is a measure of the amount of out-of-plane shear, $\gamma_p$ is the amount of shear in the transverse plane, and $\psi_\gamma$ is a measure of the coupling among the other invariants. Criscione et al. [START_REF] Criscione | Physically based strain invariants set for materials exhibiting transversely isotropic behavior[END_REF] criticised these invariants for not being zero at zero deformation, as the corresponding strain tensors are. They proposed to use the $\beta_k$ invariants instead:
$$\beta_1 = \frac{\ln I_3}{2}, \quad \beta_2 = \frac{3\ln I_4 - \ln I_3}{4}, \quad \beta_3 = \ln\left[\frac{I_1 I_4 - I_5}{2\sqrt{I_3 I_4}} + \sqrt{\left(\frac{I_1 I_4 - I_5}{2\sqrt{I_3 I_4}}\right)^2 - 1}\,\right], \quad \beta_4 = \sqrt{\frac{I_5}{I_4^2} - 1},$$
$$\beta_5 = \frac{I_1 I_4 I_5 + I_1 I_4^3 + 2I_3 I_4 - I_5^2 - 2I_2 I_4^2 - I_5 I_4^2}{\sqrt{\big(I_5 - I_4^2\big)\big(I_1^2 I_4^2 + I_5^2 - 2I_1 I_4 I_5 - 4I_3 I_4\big)}}. \qquad (10)$$
These invariants also have a physical meaning: $\beta_1$ is the logarithmic volume strain, $\beta_2$ specifies a fibre strain of distortion, $\beta_3$ specifies the magnitude of cross-fibre, i.e., pure shear strain, $\beta_4$ specifies the magnitude of along-fibre, i.e., simple shear strain, and $\beta_5$ specifies the orientation of the along-fibre shear strain relative to the cross-fibre shear strain. These last two families of invariants were developed for materials with a single fibre direction; they can easily be generalised to $q$ directions, but this has not yet been done in the literature. All these invariants are useful. In practice, the $I_k$ are the most used, and the other invariants are rarely employed for calculations in finite element software. However, as they can be written by means of the $I_k$ invariants, all the expressions can be deduced from these invariants. As a consequence, in this work the theoretical development is only presented for the $I_k$ formulation.
Strain-Stress Relationships
Living tissues are often considered as incompressible. To use constitutive equations in finite element codes, a volumetric/isochoric decomposition is used. All the equations are written here under the pure incompressibility hypothesis in order to avoid any non-physical response of these equations [100]; details about the consequences of the volumetric-isochoric split can be found in [227]. Nevertheless, they can be written in a quasi-incompressible framework by means of the isochoric invariants $\bar{I}_k$:
$$\bar{I}_1 = I_1 I_3^{-1/3}, \quad \bar{I}_2 = I_2 I_3^{-2/3}, \quad \bar{I}_4^{(i)} = I_4^{(i)} I_3^{-1/3}, \quad \bar{I}_5^{(i)} = I_5^{(i)} I_3^{-2/3}, \quad \bar{I}_8^{(i,j)} = I_8^{(i,j)} I_3^{-1/3}. \qquad (11)$$
This formulation is particularly useful for finite element implementation. All the equations for the elasticity tensor can be seen in different papers [START_REF] Bose | Computational aspects of a pseudo-elastic constitutive model for muscle properties in a soft-bodied arthropod[END_REF]133,145,152,195,271]. In this case, a penalty function depending on I 3 is used to ensure incompressibility. One can refer to [START_REF] Doll | On the development of volumetric strain energy functions[END_REF] for a comparison of the different functions classically used. The choice of the penalty parameter to ensure incompressibility [253] is a critical issue. In this paper, all the constitutive equations are written in the purely incompressible framework, but all the models can be established in the quasi-incompressible framework as well.
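A minimal sketch of this quasi-incompressible treatment is given below: the isochoric invariants of Eq. (11) are computed from the full invariants, and a simple quadratic volumetric penalty is used as an illustrative choice (the function names and the penalty form are ours, not a prescription of any particular finite element code).

```python
def isochoric_invariants(I1, I2, I3, I4, I5, I8):
    """Isochoric (incompressibility-consistent) invariants of Eq. (11)."""
    return (I1 * I3 ** (-1.0 / 3.0), I2 * I3 ** (-2.0 / 3.0),
            I4 * I3 ** (-1.0 / 3.0), I5 * I3 ** (-2.0 / 3.0),
            I8 * I3 ** (-1.0 / 3.0))

def volumetric_penalty(I3, K):
    """Illustrative quadratic penalty on the volume change J = sqrt(I3)."""
    J = I3 ** 0.5
    return 0.5 * K * (J - 1.0) ** 2   # K plays the role of a penalty (bulk-like) parameter
```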
The second Piola-Kirchhoff stress tensor can be directly calculated by differentiation of the strain energy function $W(I_1, I_2, I_4^{(i)}, I_5^{(i)}, I_8^{(i,j)}, I_9^{(i,j)})$, with $i, j = 1..q$:
$$\mathbf{S} = 2\Big[(W_{,1} + I_1 W_{,2})\mathbf{I} - W_{,2}\mathbf{C} + \sum_{i=1}^{q} W_{,4}^{(i)}\, \mathbf{N}^{(i)} \otimes \mathbf{N}^{(i)} + \sum_{i=1}^{q} W_{,5}^{(i)}\big(\mathbf{N}^{(i)} \otimes \mathbf{C}\mathbf{N}^{(i)} + \mathbf{C}\mathbf{N}^{(i)} \otimes \mathbf{N}^{(i)}\big) + \sum_{i \neq j} W_{,8}^{(i,j)}\big(\mathbf{N}^{(i)} \cdot \mathbf{N}^{(j)}\big)\big(\mathbf{N}^{(i)} \otimes \mathbf{N}^{(j)} + \mathbf{N}^{(j)} \otimes \mathbf{N}^{(i)}\big)\Big] + p\,\mathbf{C}^{-1}, \qquad (12)$$
where $W_{,k} = \partial W/\partial I_k$ and $p$ is the hydrostatic pressure. The Eulerian stress tensor, i.e., the Cauchy stress tensor, is directly obtained by the push-forward operation. To ensure that the stress is identically zero in the undeformed configuration, it is required that
$$W_{,4}^{(i)} + 2W_{,5}^{(i)} = 0 \quad \forall i \qquad (13)$$
at zero deformation [174]. The direct expressions that permit calculation of the stress with the other invariant bases can be found in [START_REF] Criscione | Physically based strain invariants set for materials exhibiting transversely isotropic behavior[END_REF][START_REF] Criscione | Constitutive framework optimized for myocardium and other high-strain, laminar materials with one fiber family[END_REF].
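To make Eq. (12) concrete, the following sketch evaluates the second Piola-Kirchhoff and Cauchy stresses for an incompressible material in uniaxial extension along a single fibre direction. The particular energy, $W = \frac{c_1}{2}(I_1-3) + \frac{k_1}{4}(I_4-1)^2$, is only an illustrative choice (it satisfies Eq. (13) since $W_{,4}=0$ at $I_4=1$ and $W_{,5}=0$), and all names and parameter values are ours.

```python
import numpy as np

def second_pk_stress(C, N, c1, k1, p):
    """Eq. (12) specialised to W = c1/2 (I1-3) + k1/4 (I4-1)^2, so W,2 = W,5 = W,8 = 0."""
    I4 = N @ C @ N
    W1 = 0.5 * c1                      # dW/dI1
    W4 = 0.5 * k1 * (I4 - 1.0)         # dW/dI4
    return 2.0 * (W1 * np.eye(3) + W4 * np.outer(N, N)) + p * np.linalg.inv(C)

lam, c1, k1 = 1.15, 1.0, 5.0           # illustrative stretch and parameters
F = np.diag([lam, lam ** -0.5, lam ** -0.5])   # incompressible uniaxial kinematics
C = F.T @ F
N = np.array([1.0, 0.0, 0.0])          # fibre along the loading direction
p = -c1 / lam                          # hydrostatic pressure enforcing zero lateral stress
S = second_pk_stress(C, N, c1, k1, p)
sigma = F @ S @ F.T                    # push-forward to the Cauchy stress (det F = 1)
print(np.round(sigma, 4))
```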
Stability
The strong ellipticity condition is a mathematical restriction on the constitutive functions.
For three-dimensional problems [267], the strong ellipticity was characterised for compressible isotropic materials in [236], and for incompressible ones in [273]. In this context, the strong ellipticity was largely studied in the case of transverse isotropy for in plane strains in [165-168, 170, 240]. The generic condition to verify for the strain energy in the absence of body forces [150,167,168] can be written as:
$$\frac{1}{J}\, F_{pr} F_{qs}\, \frac{\partial^2 W}{\partial F_{ir}\,\partial F_{js}}\, n_p n_q m_i m_j > 0 \quad \text{with } \mathbf{m} \neq \mathbf{0} \text{ and } \mathbf{n} \neq \mathbf{0}, \qquad (14)$$
where m and n are two non-zero vectors. Nevertheless, this condition is always difficult to verify. Thus, some have proposed another way to tackle the strong ellipticity condition.
It is known that polyconvexity implies ellipticity [173,228,232]. As a consequence, polyconvexity in the sense of Ball [START_REF] Ball | Convexity conditions and existence theorems in non-linear elasticity[END_REF][START_REF] Ball | Constitutive equalities and existence theorems in elasticity[END_REF] is used, even if it is more restrictive than strong ellipticity. Of course, some strain energies can be elliptic but not polyconvex. It is important to note that polyconvexity does not conflict with the possible non-uniqueness of equilibrium solutions, as it guarantees only the existence of at least one minimizing deformation. Hence, polyconvexity provides an excellent starting point to formulate strain energy functions that guarantee both ellipticity and the existence of a global minimizer.
Polyconvexity has been studied within the framework of isotropy [START_REF] Bilgili | Restricting the hyperelastic models for elastomers based on some thermodynamical, mechanical and empirical criteria[END_REF]244], and the conditions to verify it are well known for every classical isotropic model from the literature (see for example [START_REF] Hartmann | Parameter estimation of hyperelasticity relations of generalized polynomial-type with constraint conditions[END_REF][START_REF] Hartmann | Polyconvexity of generalized polynomial-type hyperelastic strain energy functions for near-incompressibility[END_REF]180,204]). Many authors have extended their study to anisotropic materials [START_REF] Ehret | A polyconvex hyperelastic model for fiber-reinforced materials in application to soft tissues[END_REF]121,171,206,245,257]. Some have studied the polyconvexity of existing constitutive equations [START_REF] Doyle | Adaptation of a rabbit myocardium material model for use in a canine left ventricle simulation study[END_REF]104,106,186,267], whereas others have attempted to develop polyconvex constitutive equations directly.

Some Conditions. In the case of existing constitutive equations, Walton and Wilber [267] summarised conditions that ensure polyconvexity. For a strain energy $W(I_1, I_2, I_4^{(i)})$ depending on $I_1$, $I_2$ and $I_4$, the conditions are:
$$W_{,k} > 0 \quad \text{for } k = 1, 2, 4, \quad \text{and} \qquad (15)$$
$$[W_{,kl}] \text{ is positive definite.} \qquad (16)$$
If the strain energy also depends on $I_8^{(i,j)}$, the following condition should be added:
$$\frac{\partial W}{\partial I_8^{(i,j)}} \leq \frac{\partial W}{\partial I_1}. \qquad (17)$$
The use of the fifth invariant $I_5^{(i)}$ introduces the need to change the other invariants, as $I_5$ is not a polyconvex function (when used alone). Walton and Wilber [267] used $I_k^*$:
$$I_1^* = \tfrac{1}{2} I_1, \quad I_2^* = \tfrac{1}{2} I_1^2 - I_2, \quad I_4^{(i)*} = I_4^{(i)}, \quad I_5^{(i)*} = I_5^{(i)}. \qquad (18)$$
Here, the conditions to verify for $I_k^*$ are:
$$W_{,k} > 0 \ \text{ for } k = 1, 2, 4, 5, \qquad W_{,1} + \kappa W_{,4} \geq 0 \ \text{ for some } \kappa > 4, \qquad [W_{,kl}] \text{ is positive definite.} \qquad (19)$$
As will be described in the next paragraph, many strain energies can be decomposed as $W = W_{iso}(I_1) + W_{aniso}(I_4)$. In this case, some sufficient (but not necessary) conditions for polyconvexity of the anisotropic part have been given in [106]:
$$\frac{\partial W_{aniso}}{\partial I_4} \geq 0 \quad \text{and} \qquad (20)$$
$$\frac{\partial W_{aniso}}{\partial I_4} + 2 I_4 \frac{\partial^2 W_{aniso}}{\partial I_4^2} \geq 0. \qquad (21)$$
These two restrictive conditions imply that the considered directions cannot generate negative forces when subjected to compression, whereas strong ellipticity can still be satisfied in compression. This illustrates the additional constraints generated by polyconvexity compared with strong ellipticity.
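Such conditions can be screened numerically before a candidate energy is adopted. The sketch below samples a range of $(I_1, I_4)$ states and checks the sign conditions and the positive definiteness of the Hessian $[W_{,kl}]$ for an illustrative energy of our own choosing (a Yeoh-type isotropic part plus an exponential fibre term); this is only a numerical screening on a sampled domain, not a proof of polyconvexity.

```python
import numpy as np

def conditions_ok(W1, W4, W11, W14, W44):
    """Check W,k > 0 and positive definiteness of [W,kl] (Eqs. (15)-(16))."""
    eigvals = np.linalg.eigvalsh(np.array([[W11, W14], [W14, W44]]))
    return bool(W1 > 0 and W4 > 0 and np.all(eigvals > 0))

# illustrative energy: W = c1/2 (I1-3) + c2 (I1-3)^2 + k1/(2 k2) [exp(k2 (I4-1)^2) - 1]
c1, c2, k1, k2 = 1.0, 0.2, 0.5, 1.2
ok = True
for I1 in np.linspace(3.0, 6.0, 25):
    for I4 in np.linspace(1.001, 2.0, 25):            # fibres in extension only
        e = np.exp(k2 * (I4 - 1.0) ** 2)
        W1 = 0.5 * c1 + 2.0 * c2 * (I1 - 3.0)
        W4 = k1 * (I4 - 1.0) * e
        W11, W14 = 2.0 * c2, 0.0
        W44 = k1 * e * (1.0 + 2.0 * k2 * (I4 - 1.0) ** 2)
        ok = ok and conditions_ok(W1, W4, W11, W14, W44)
print("conditions satisfied on the sampled domain:", ok)
```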
Development of Specific Constitutive Equations. Some authors have created elementary strain energies that satisfy polyconvexity. First, Schroder and Neff [228] worked on equations depending on I 1 and I 4 , and they proved that some functions are polyconvex:
$$W_1 = \beta_1 I_4, \quad W_2 = \beta_2 I_4^2, \quad W_3 = \beta_3 \frac{I_4}{I_3^{1/3}}, \quad \text{and} \quad W_4 = \beta_4 \frac{I_4^2}{I_3^{1/3}}, \qquad (22)$$
where β i are material parameters. Nevertheless, as I 5 is not a polyconvex function, some have proposed [228,229] the construction of new combinations of invariants in the case of one reinforced direction that are polyconvex; these invariants are denoted as
$$K_1 = I_5 - I_1 I_4 + I_2 \operatorname{tr}(\mathbf{A})^{1/2}, \quad K_2 = I_1 - I_4, \quad K_3 = I_1 I_4 - I_5. \qquad (23)$$
These invariants permitted the development of a list of elementary polyconvex energies [START_REF] Ebbing | Approximation of anisotropic elasticity tensors at the reference state with polyconvex energies[END_REF]231]. The different strain energies are listed in Table 1. Since a combination of polyconvex energy densities is also polyconvex, it is possible to develop many constitutive equations that can be adapted to different soft tissues.
Table 1 Elementary polyconvex functions [START_REF] Ebbing | Approximation of anisotropic elasticity tensors at the reference state with polyconvex energies[END_REF]231], where $\beta_i$ with $i = 5...23$ are material parameters:
$W_5 = \beta_5 K_1$, $W_6 = \beta_6 K_1^2$, $W_7 = \beta_7 K_1^3$, $W_8 = \beta_8 \frac{K_1}{I_3^{1/3}}$, $W_9 = \beta_9 \frac{K_1^2}{I_3^{2/3}}$,
$W_{10} = \beta_{10} K_2$, $W_{11} = \beta_{11} K_2^2$, $W_{12} = \beta_{12} \frac{K_2}{I_3^{1/3}}$, $W_{13} = \beta_{13} \frac{K_2^2}{I_3^{2/3}}$,
$W_{14} = \beta_{14} K_3$, $W_{15} = \beta_{15} K_3^2$, $W_{16} = \beta_{16} \frac{K_3}{I_3^{1/3}}$, $W_{17} = \beta_{17} \frac{K_3^2}{I_3^{2/3}}$,
$W_{18} = \beta_{18}\big(I_1^2 + I_4 I_1\big)$, $W_{19} = \beta_{19}\big(2I_2^2 + I_2 I_5 - I_1 I_2 I_4\big)$, $W_{20} = \beta_{20}\big(3I_1^2 - I_4 I_1\big)$,
$W_{21} = \beta_{21}\big(2I_2^2 + I_1 I_2 I_4 - I_2 I_5\big)$, $W_{22} = \beta_{22}(3I_1 - 2I_4)$, $W_{23} = \beta_{23}(I_2 - 2I_5 + 2I_1 I_4)$
Isotropic Hyperelastic Constitutive Equations
From a macroscopic point of view, soft tissues are an assembly of cells and fibres. According to the quantity and the orientation of the fibres, the behaviour of soft tissues may or may not be assumed isotropic. Depending on the application, anisotropy can be neglected and isotropic modelling can be efficient. In this way, many authors use an isotropic approach to model soft tissues, for example the liver [149], kidney [113], bladder and rectum [START_REF] Boubaker | Finite element simulation of interactions between pelvic organs: predictive model of the prostate motion in the context of radiotherapy[END_REF], pelvic floor [193], breast [START_REF] Azar | A deformable finite element model of the breast for predicting mechanical deformations under external perturbations[END_REF]226], cartilage [144], meniscus [START_REF] Abraham | Hyperelastic properties of human meniscal attachments[END_REF], ligaments [START_REF] Garcia | A nonlinear biphasic viscohyperelastic model for articular cartilage[END_REF], eardrum [START_REF] Cheng | Viscoelastic properties of human tympanic membrane[END_REF], arteries [192], brain [127], lungs [234], uterus [START_REF] Harrison | Towards a novel tensile elastometer for soft tissue[END_REF] or skin [142]. Many of the models used in this isotropic approach come from studies on rubber-like materials, for which literature reviews exist [START_REF] Boyce | Constitutive models of rubber elasticity: a review[END_REF]258]. Constitutive equations for rubber-like materials were designed to represent strain hardening at deformations of some hundreds of percent, whereas soft tissues often strain harden after some tens of percent; the functions developed for rubber-like materials may therefore not apply directly. Other, more suitable constitutive equations have been developed especially for soft tissues. The main models are listed in Table 2. Their main feature is the presence of an important change of slope in the stress-strain curve at moderate deformations, which explains why most of the equations include an exponential form able to describe strong slope changes. Nevertheless, all these constitutive equations reduce to the neo-Hookean model [255,256] for small strains. Moreover, most of them are very similar in their $I_1$ part, as the exponential form dominates the equations. While most of the constitutive equations are expressed with the first invariant only, the second invariant can be employed to capture different loading states [112]. There exist some limitations to using only the first invariant [110,270]. Nevertheless, the choice of using $I_1$, or $(I_1, I_2)$, mainly depends on the available experimental data: when experiments are limited to one loading case, it can be difficult to correctly fit a constitutive equation expressed by means of the two invariants.
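As an example of how such an isotropic description is used in practice, the sketch below evaluates the Cauchy stress predicted by the Demiray-type exponential energy of Table 2, $W = \frac{c_1}{c_2}[\exp(\frac{c_2}{2}(I_1-3)) - 1]$, in incompressible uniaxial tension; the parameter values are arbitrary illustrative numbers.

```python
import numpy as np

def demiray_uniaxial_stress(lam, c1, c2):
    """Cauchy stress in incompressible uniaxial tension for the Demiray model
       (only dW/dI1 contributes): sigma = 2 W1 (lam^2 - 1/lam)."""
    I1 = lam ** 2 + 2.0 / lam
    W1 = 0.5 * c1 * np.exp(0.5 * c2 * (I1 - 3.0))
    return 2.0 * W1 * (lam ** 2 - 1.0 / lam)

stretches = np.linspace(1.0, 1.4, 9)
print(demiray_uniaxial_stress(stretches, c1=10.0, c2=8.0))   # arbitrary parameter values
```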
Anisotropic Hyperelastic Constitutive Equations
Different approaches have been used to describe the anisotropy of soft tissues. The first one is based on Green-Lagrange components and the second one is based on strain invariants.
Table 2 Principal isotropic hyperelastic constitutive equations developed for soft tissues, where the $c_i$ are material parameters. (*) This model is known as the generalised neo-Hookean model. (**) As pointed out by [109], this model is frequently mistakenly attributed to Delfino et al. [56].
Polynomial models
Raghavan and Vorp [210]: $W = c_1(I_1-3) + c_2(I_1-3)^2$
Knowles [131, 274] (*): $W = \frac{c_1}{2c_2}\Big[\Big(1 + \frac{c_2}{c_3}(I_1-3)\Big)^{c_3} - 1\Big]$
Exponential models
Demiray [57] (**): $W = \frac{c_1}{c_2}\Big[\exp\Big(\frac{c_2}{2}(I_1-3)\Big) - 1\Big]$
Demiray et al. [58]: $W = \frac{c_1}{c_2}\Big[\exp\Big(\frac{c_2}{2}(I_1-3)^2\Big) - 1\Big]$
Holmes and Mow [103]: $W = c_0\exp\big(c_1(I_1-3) + c_2(I_2-3)\big) - c_0$
Arnoux et al. [7, 8]: $W = c_1\exp\big(c_2(I_1-3)\big) - \frac{c_1 c_2}{2}(I_2-3)$
Singh et al. [237]: $W = \frac{c_1}{2c_2}\big[\exp\big(c_2(I_1-3)\big) - 1\big] + \frac{c_3}{2}(I_2-3)^2$
Volokh and Vorp [266]: $W = c_1 - c_1\exp\Big(-\frac{c_2}{c_1}(I_1-3) - \frac{c_3}{c_1}(I_1-3)^2\Big)$
Tang et al. [251]: $W = c_1(I_1-3) + c_2(I_2-3) + c_3\big[\exp\big(c_4(I_1-3)\big) - 1\big]$
Van Dam et al. [261]: $W = c_1\Big\{\frac{1-c_2}{c_3^2}\big[1 - (c_3 x + 1)\exp(-c_3 x)\big] + \frac{1}{2}c_2 x^2\Big\}$ with $x = \sqrt{c_4 I_1 + (1-c_4)I_2 - 3}$
Use of Green-Lagrange Tensor Components
The first models using the components of the Green-Lagrange strain tensor were developed in [118]. They consist in proposing strain energy densities decomposed into contributions of each strain component with different weights; a review of these models is proposed in [116]. The first generic form was proposed by Tong and Fung [252]:
$$W = \frac{c}{2}\Big[\exp\Big(b_1 E_{11}^2 + b_2 E_{22}^2 + b_3\big(E_{12}^2 + E_{21}^2\big) + 2b_4 E_{12}E_{21} + b_5 E_{11}^3 + b_6 E_{22}^3 + b_7 E_{11}^2 E_{22} + b_8 E_{11} E_{22}^2\Big) - 1\Big], \qquad (24)$$
where $c$ and $b_i$, $i = 1...8$, are material parameters. Three years later, Fung [START_REF] Fung | Pseudoelasticity of arteries and the choice of its mathematical expression[END_REF] developed a generic form in two dimensions, and the model was next generalised to three dimensions [START_REF] Chuong | Three-dimensional stress distribution in arteries[END_REF].
Later, shear strains were introduced [128], and finally a global formulation was proposed [116]:
$$W = c\big[\exp(A_{ijkl} E_{ij} E_{kl}) - 1\big], \qquad (25)$$
where c and A ij kl are material parameters. Different constitutive equations were then developed and written in cylindrical coordinates (r, θ , z) often used for arteries [138]. Moreover, the strain energy function can be naturally uncoupled into a dilatational and a distortional part [START_REF] Ateshian | A frame-invariant formulation of fung elasticity[END_REF], to facilitate the computational implementation of incompressibility. In the same way, as in non-Gaussian theory [137], it is possible to take into account the limiting extensibility of the fibres [175]. This exposes the possibility of a constitutive equation that presents an asymptote even if constitutive equations that include an exponential or an asymptotic form can be very close [START_REF] Chagnon | A comparison of the physical model of Arruda-Boyce with the empirical Hart-Smith model and the Gent model[END_REF]. The proposed models are listed in Table 3. The main difficulty of these constitutive equations is that they have a large number of material parameters.
Table 3 Anisotropic constitutive equations written with strain tensor components, where $A_{ijkl}$ with $i,j,k,l = 1...3$, $b_i$ with $i = 1...12$, $a_{ij}$, $b_{ij}$, $c_{ij}$ with $i,j = 1...3$ and $c$ are material parameters
Generic exponential form: $Q = A_{ijkl} E_{ij} E_{kl}$
Fung et al. [77]: $Q = b_1 E_{\theta\theta}^2 + b_2 E_{zz}^2 + 2b_4 E_{\theta\theta}E_{zz}$
Chuong and Fung [49]: $Q = b_1 E_{\theta\theta}^2 + b_2 E_{zz}^2 + b_3 E_{rr}^2 + 2b_4 E_{\theta\theta}E_{zz} + 2b_5 E_{rr}E_{zz} + 2b_6 E_{\theta\theta}E_{rr}$
Humphrey [116]: $Q = b_1 E_{\theta\theta}^2 + b_2 E_{zz}^2 + b_3 E_{rr}^2 + 2b_4 E_{\theta\theta}E_{zz} + 2b_5 E_{rr}E_{zz} + 2b_6 E_{\theta\theta}E_{rr} + b_7 E_{\theta z}^2 + b_8 E_{rz}^2 + b_9 E_{r\theta}^2$
Costa et al. [51]: $Q = b_1 E_{ff}^2 + b_2 E_{ss}^2 + b_3 E_{nn}^2 + 2b_4\big[\tfrac{1}{2}(E_{fn}+E_{nf})\big]^2 + 2b_5\big[\tfrac{1}{2}(E_{sn}+E_{ns})\big]^2 + 2b_6\big[\tfrac{1}{2}(E_{fs}+E_{sf})\big]^2$
Rajagopal et al. [213]: $Q = b_1 E_{\theta\theta}^2 + b_2 E_{zz}^2 + b_3 E_{rr}^2 + 2b_4 E_{\theta\theta}E_{zz} + 2b_5 E_{rr}E_{zz} + 2b_6 E_{\theta\theta}E_{rr} + b_7\big(E_{rr}^2 + E_{\theta\theta}^2\big) + b_8\big(E_{\theta\theta}^2 + E_{zz}^2\big) + b_9\big(E_{rr}^2 + E_{zz}^2\big)$
Other exponential functions
Choi and Vito [START_REF] Choi | Two-dimensional stress-strain relationship for canine pericardium[END_REF]: $W = b_0\big[\exp\big(b_1 E_{11}^2\big) + \exp\big(b_2 E_{22}^2\big) + \exp(2b_3 E_{11}E_{22}) - 3\big]$
Kasyanov and Rachev [128]: $W = b_1\big[\exp\big(b_2 E_{zz}^2 + b_3 E_{zz}E_{\theta\theta} + b_4 E_{\theta\theta}^2 + b_5 E_{zz}^2 E_{\theta\theta} + b_6 E_{zz}E_{\theta\theta}^2\big) - 1\big] + b_7 E_{\theta\theta}\exp(b_8 E_{\theta\theta}) + b_9 E_{zz} + b_{10} E_{\theta z}^2$
Other models
Vaishnav et al. [259]: $W = b_1 E_{\theta\theta}^2 + b_2 E_{\theta\theta}E_{zz} + b_3 E_{zz}^2 + b_4 E_{\theta\theta}^3 + b_5 E_{\theta\theta}^2 E_{zz} + b_6 E_{\theta\theta}E_{zz}^2 + b_7 E_{zz}^3$
Humphrey [117]: $W = b_1 E_{rr}^2 + b_2 E_{\theta\theta}^2 + b_3 E_{zz}^2 + 2b_4 E_{rr}E_{\theta\theta} + 2b_5 E_{\theta\theta}E_{zz} + 2b_6 E_{rr}E_{zz} + b_7\big(E_{r\theta}^2 + E_{\theta r}^2\big) + b_8\big(E_{z\theta}^2 + E_{\theta z}^2\big) + b_9\big(E_{zr}^2 + E_{rz}^2\big)$
Takamizawa and Hayashi [249]: $W = -c\ln(1 - Q)$ with $Q = \tfrac{1}{2}b_1 E_{\theta\theta}^2 + \tfrac{1}{2}b_2 E_{zz}^2 + \tfrac{1}{2}b_3 E_{rr}^2 + b_4 E_{\theta\theta}E_{zz} + b_5 E_{\theta\theta}E_{rr} + b_6 E_{rr}E_{zz}$
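To illustrate how such component-based energies are evaluated, the sketch below implements the two-dimensional Fung-type form attributed above to Fung et al. [77], $W = c[\exp(Q)-1]$ with $Q = b_1 E_{\theta\theta}^2 + b_2 E_{zz}^2 + 2b_4 E_{\theta\theta}E_{zz}$, together with the membrane stress components obtained by differentiating $W$ with respect to the strains; the parameter values are arbitrary.

```python
import numpy as np

def fung_2d(E_tt, E_zz, c, b1, b2, b4):
    """Fung-type energy W = c [exp(Q) - 1] and its stress components S = dW/dE."""
    Q = b1 * E_tt ** 2 + b2 * E_zz ** 2 + 2.0 * b4 * E_tt * E_zz
    W = c * (np.exp(Q) - 1.0)
    S_tt = c * np.exp(Q) * (2.0 * b1 * E_tt + 2.0 * b4 * E_zz)   # circumferential component
    S_zz = c * np.exp(Q) * (2.0 * b2 * E_zz + 2.0 * b4 * E_tt)   # axial component
    return W, S_tt, S_zz

# equibiaxial stretch of 1.1: Green-Lagrange strain E = (lam^2 - 1)/2 in both directions
E = 0.5 * (1.1 ** 2 - 1.0)
print(fung_2d(E, E, c=20.0, b1=1.0, b2=0.8, b4=0.3))             # arbitrary parameters
```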
Use of Strain Invariants
Strain energy densities depend on isotropic and anisotropic strain invariants. The use of $I_4$ and $I_5$ is necessary to recover the linear theory [174]. Different cases exist. In the first case, the strain energy is split additively into an isotropic and an anisotropic contribution:
$$W = W_{iso}(I_1, I_2) + \sum_i W_{aniso}\big(I_4^{(i)}, I_5^{(i)}\big), \qquad (26)$$
or some coupling can be introduced between the isotropic and anisotropic parts, as in $W_{aniso}(I_1, I_2, I_4^{(i)}, I_5^{(i)})$. Very few models, however, present a non-additive decomposition between two directions $i$ and $j$, i.e., between $I_4^{(i)}$, $I_5^{(i)}$, $I_4^{(j)}$ and $I_5^{(j)}$. When $W_{iso}$ is used, it is often represented by a classical isotropic energy function; $W_{aniso}$ is discussed in the next paragraphs. The use of only $I_4$ or only $I_5$, instead of both invariants, is questionable, as it leads to the same shear modulus along the reinforced direction and orthogonal to it [174].
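The additive structure of Eq. (26) translates directly into code, each contribution being implemented (and, if needed, fitted) separately. Below is a minimal sketch with a neo-Hookean isotropic part and, for each fibre family, a quadratic term in $(\sqrt{I_4}-1)$ that behaves like a linear spring along the fibre; the choice of both terms is purely illustrative.

```python
import numpy as np

def additive_energy(C, fibre_dirs, c1, k):
    """Additive split of Eq. (26): W = W_iso(I1) + sum_i W_aniso(I4^(i))."""
    W = 0.5 * c1 * (np.trace(C) - 3.0)            # neo-Hookean isotropic part
    for N in fibre_dirs:
        I4 = N @ C @ N
        W += 0.5 * k * (np.sqrt(I4) - 1.0) ** 2   # spring-like term along each fibre
    return W

lam = 1.2
F = np.diag([lam, lam ** -0.5, lam ** -0.5])
a = np.deg2rad(30.0)
dirs = [np.array([np.cos(a), np.sin(a), 0.0]), np.array([np.cos(a), -np.sin(a), 0.0])]
print(additive_energy(F.T @ F, dirs, c1=1.0, k=4.0))
```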
Different model forms can be distinguished such as the polynomial, the power, the exponential and other constitutive equations not of these types.
Polynomial Development
The best-known model for isotropic hyperelasticity is Rivlin's series [217], which describes a general form of constitutive equations depending on the first and second invariants. The generalisation of this model to an anisotropic formulation has been proposed in different ways. One consists in introducing the anisotropic invariants in the series. First, a simple $I_4$ series [123] was proposed:
$$W_{aniso} = \sum_{k=2}^{n} c_k (I_4 - 1)^k, \qquad (27)$$
where the $c_k$ are material parameters. A linear term cannot be used, i.e., $k = 1$ in the previous equation, as it does not ensure zero stress at zero deformation. The term $k = 2$ corresponds to the standard reinforcing model [START_REF] Destrade | Surface instability of sheared soft tissues[END_REF]182,220,257], which was not initially proposed for soft tissues.
The complete generalisation of the Rivlin series was proposed in [222]:
$$W = \sum_{klmn} c_{klmn} (I_1 - 3)^k (I_2 - 3)^l (I_4 - 1)^m (I_5 - 1)^n, \qquad (28)$$
where c klmn are material parameters. A modified formulation was proposed in [111] to be more convenient for numerical use:
$$W = \sum_{klmn} c_{klmn} (I_1 - 3)^k \big[(I_2 - 3) - 3(I_1 - 3)\big]^l (I_4 - 1)^m (I_5 - 2I_4 + 1)^n. \qquad (29)$$
Instead of using $I_4$, one may use $\sqrt{I_4}$, which represents the elongation in the considered direction. This leads to a new series development [115]:
$$W = \sum_{kl} c_{kl} (I_1 - 3)^k \big(\sqrt{I_4} - 1\big)^l, \qquad (30)$$
where the $c_{kl}$ are material parameters. It is worth noting that the use of $\sqrt{I_4}$ includes, in the quadratic formulation [START_REF] Alastrué | On the use of the bingham statistical distribution in microsphere-based constitutive models for arterial tissue[END_REF][START_REF] Brown | A simple transversely isotropic hyperelastic constitutive model suitable for finite element analysis of fiber reinforced elastomers[END_REF], a model that represents the behaviour of a linear spring.

Table 4 Polynomial anisotropic constitutive equations, where the $c_i$ are material parameters
$I_4$ forms
$W_{aniso} = c_2(I_4-1)^2 + c_4(I_4-1)^4$
Basciano and Kleinstreuer [START_REF] Basciano | Invariant-based anisotropic constitutive models of the healthy and aneurysmal abdominal aortic wall[END_REF]: $W_{aniso} = c_2(I_4-1)^2 + c_3(I_4-1)^3 + c_4(I_4-1)^4 + c_5(I_4-1)^5 + c_6(I_4-1)^6$
Basciano and Kleinstreuer [START_REF] Basciano | Invariant-based anisotropic constitutive models of the healthy and aneurysmal abdominal aortic wall[END_REF]: $W_{aniso} = c_6(I_4-1)^6$
Lin and Yin [148]: $W = c_1(I_1-3)(I_4-1) + c_2(I_1-3)^2 + c_3(I_4-1)^2 + c_4(I_1-3) + c_5(I_4-1)$
$\sqrt{I_4}$ forms
Alastrue et al. [4, 6]: $W_{aniso} = c_2\big(\sqrt{I_4}-1\big)^2$
Humphrey [115]: $W = c_1\big(\sqrt{I_4}-1\big)^2 + c_2\big(\sqrt{I_4}-1\big)^3 + c_3(I_1-3) + c_4(I_1-3)\big(\sqrt{I_4}-1\big) + c_5(I_1-3)^2$
$I_4$, $I_5$ forms
Park and Youn [194]: $W_{aniso} = c_3(I_4-1) + c_5(I_5-1)$
Bonet and Burton [START_REF] Bonet | A simple orthotropic, transversely isotropic hyperelastic constitutive equation for large strain computations[END_REF]: $W = \big[c_1 + c_2(I_1-3) + c_3(I_4-1)\big](I_4-1) - \frac{c_1}{2}(I_5-1)$
Bonet and Burton [31]: $W_{aniso} = \big[c_1 + c_3(I_4-1)\big](I_4-1) - \frac{c_1}{2}(I_5-1)$
Merodio and Ogden [111,169,170]: $W_{aniso} = c_2(I_5-1)^2$
Hollingsworth and Wagner [102]: $W_{aniso} = c_2\big(I_5 - I_4^2\big)$
Murphy [174]: $W = c_1(I_1-3) + c_2(2I_4 - I_5 - 1) + c_3(I_5-1)^2$
Murphy [174]: $W = c_1(I_1-3) + c_2(2I_4 - I_5 - 1) + c_3(I_4-1)(I_5-1)$
Murphy [174]: $W = c_1(I_1-3) + c_2(2I_4 - I_5 - 1) + c_3(I_4-1)^2$
As other invariants were proposed, a series development based on β k invariants also has been considered [START_REF] Criscione | Physically based strain invariants set for materials exhibiting transversely isotropic behavior[END_REF]:
$$W = \sum_{klm} G_{klm}\, \beta_3^k \beta_4^l \beta_5^m, \qquad (31)$$
where G klm are material parameters.
As for rubber-like materials with Rivlin's series, the whole series is not used, and a good truncation of the strain energy is essential. According to the considered material and to the loading states, different developments have been proposed in the literature; a list of the resulting equations is included in Table 4. It is also important to note that the $I_4$ invariant is often used, whereas the $I_5$ invariant is most often disregarded.
Power Development
Table 5 Power-law anisotropic constitutive equations, where the $k_i$ are material parameters
Schroder et al. [START_REF] Ciarletta | Stiffening by fiber reinforcement in soft materials: a hyperelastic theory at large strains and its application[END_REF]228,230]: $W_{aniso} = k_1 I_4^{k_2}$
Balzani et al. [START_REF] Balzani | A polyconvex framework for soft biological tissues. Adjustement to experimental data[END_REF][START_REF] Balzani | Simulation of discontinuous damage incorporating residual stresses in circumferentially overstretched atherosclerotic arteries[END_REF]: $W = k_1(I_1 I_4 - I_5 - 2)^{k_2}$
Schroder et al. [230]: $W = k_1(I_5 - I_1 I_4 + I_2) + k_2 I_4^{k_3} + k_4(I_1 I_4 - I_5) + k_5 I_4^{k_6}$
O'Connell et al. [183]: $W = k_6\big[I_4(I_5 - I_1 I_4 + I_2) - 1\big]^2$

Ogden's [184] isotropic constitutive equation has proved its efficiency in describing complex behaviour. It is based on the elongations and a power-law development. The main power-law anisotropic forms are gathered in Table 5. For a material with a
single fibre direction, there is the following generic form [186]:
$$W_{aniso} = \frac{2\mu_1}{\beta^2}\Big(I_4^{\beta/2} + 2 I_4^{-\beta/4} - 3\Big), \qquad (32)$$
where $\mu_1$ and $\beta$ are material parameters. A generalised form was proposed by not imposing the same parameters for the two terms [264]:
$$W_{aniso} = \sum_r \Big[\alpha_r\big(I_4^{\beta_r} - 1\big) + \gamma_r\big(I_4^{-\delta_r} - 1\big)\Big], \qquad (33)$$
where α r , β r , γ r , δ r are material parameters. The same type of formulation is also proposed using the other invariants. Two general equations are of the form [122]:
$$W = \sum_{klmn} c_{klmn} (I_1 - 3)^{a_k} (I_2 - 3)^{b_l} (I_4 - 1)^{c_m} (I_5 - 1)^{d_n} \quad \text{and} \qquad (34)$$
$$W = \sum_{klmn} c_{klmn} \big(I_1^{a_k} - 3^{a_k}\big)\big(I_2^{b_l} - 3^{b_l}\big)\big(I_4^{c_m} - 1\big)\big(I_5^{d_n} - 1\big), \qquad (35)$$
where c klmn , a k , b l , c m and d n are material parameters. In the same way, other power law constitutive equations were proposed and are listed in Table 5. Additional forms can be found in the polyconvex strain energies listed in Table 1. These models represent different forms that link different invariants.
Exponential Development
A key property of constitutive equations for soft tissues is the inclusion of an important strain hardening. This is easily obtained by means of an exponential function of the $I_4$ invariant. This approach is largely used in the literature; the first models were proposed in the 1990s. In the beginning, two fibre directions were introduced to represent the mechanical behaviour of arteries [104]. This was extended to four directions [START_REF] Baek | Theory of small on large: potential utility in computations of fluid-solid interactions in arteries[END_REF]159] and to $n$ directions [START_REF] Gasser | A rate-independent elastoplastic constitutive model for biological fiberreinforcedcomposites at finite strains: continuum basis, algorithmic formulationand finite element implementation[END_REF], and used for example with 8 directions for cerebral aneurysms [276]. These models may be used to model the behaviour of a complex tissue, such as the different areas of a soft tissue (for example the different layers of an artery) [START_REF] Balzani | On the mechanical modeling of anisotropic biological soft tissue and iterative parallel solution strategies[END_REF]. Various formulations are listed in Table 6.
In order to take into account the ratio of isotropic to anisotropic parts of a heterogeneous material, a weighting factor based on the contributions of $I_1$ and $I_4$ has been introduced [107]; it represents a measure of the dispersion in the fibre orientation. This leads to the creation of different constitutive equations, which are also listed in Table 6.

Table 6 Exponential anisotropic constitutive equations, where the $c_i$, $C_i$ and $\kappa$ are material parameters
$I_4$ forms
Holzapfel et al. [104]: $W_{aniso} = \frac{c_1}{2c_2}\big[\exp\big(c_2(I_4-1)^2\big) - 1\big]$
Weiss et al. [268]: $W_{aniso} = c_1\big[\exp\big((I_4-1)^2\big) - (I_4-1)^2 - 1\big]$
Peña et al. [196]: $W_{aniso} = \frac{c_1}{c_2}\big[\exp\big(c_2(I_4-1)\big) - c_2(I_4-1) - 1\big]$
$I_1$, $I_4$ forms
Holzapfel et al. [105]: $W_{aniso} = c_1(I_4-1)\exp\big(c_2(I_4-1)^2\big)$
Gasser et al. [83]: $W = \frac{c_1}{2c_2}\big[\exp\big(c_2\big(\kappa I_1 + (1-3\kappa)I_4 - 1\big)^2\big) - 1\big]$
Holzapfel et al. [107]: $W = \frac{c_1}{2c_2}\big[\exp\big(c_2\big[(1-\kappa)(I_1-3)^2 + \kappa(I_4-1)^2\big]\big) - 1\big]$
May-Newman and Yin [161, 162]: $W = c_0\big[\exp\big(c_1(I_1-3)^2 + c_2(\sqrt{I_4}-1)^4\big) - 1\big]$
Rubin and Bodner [221]: $W = \frac{c_1}{2c_2}\big[\exp\big(c_2\big[c_5(I_1-3) + \frac{c_3}{c_4}(\sqrt{I_4}-1)^{2c_4}\big]\big) - 1\big]$
Lin and Yin [148]: $W = c_1\big[\exp\big(c_2(I_1-3)^2 + c_3(I_1-3)(I_4-1) + c_4(I_4-1)^2\big) - 1\big]$
Doyle et al. [64]: $W = c_1\big[\exp\big(c_2(I_1-3)^2 + c_3(I_1-3)(I_4-1) + c_4(I_4-1)\big) - 1\big] + c_5(I_1-3) + c_6(I_4-1)$
Fung et al. [78]: $W = c_1\big[\exp\big(c_2(I_1-3)^2 + c_3(I_1-3)(I_4-1) + c_4(I_4-1)^2\big) - 1\big] + c_5(I_1-3)^2 + c_6(I_4-1)^2 + c_7(I_1-3)(I_4-1)$
$I_4$, $I_5$ forms
Masson et al. [160]: $W_{aniso} = \frac{C_1}{2C_2}\big[\exp\big(C_2(I_4-1)^2\big) - 1\big] + \frac{C_3}{2C_4}\big[\exp\big(C_4(I_5-1)^2\big) - 1\big]$

Recently, a general form of energy function was devised [197] in order to summarise a large number of constitutive equations:
$$W = \frac{\gamma}{a\eta}\Big[\exp\big(\eta(I_1-3)^a\big) - f_1(I_1, a)\Big] + \sum_i \frac{c_i}{b\, d_i}\Big[\exp\Big(d_i\big(I_4^{(i)} - I_4^0\big)^b\Big) - g\big(I_4^{(i)}, I_4^0, b\big)\Big]. \qquad (36)$$
The choice of the functions $f_1$ and $g$ allows for the wide generalisation of many different models. Here, $\gamma$, $\eta$, $a$, $b$, $c_i$, $d_i$ and $I_4^0$ are material parameters, and $I_4^0$ represents the threshold to be reached for the fibres to become active. Some authors [START_REF] Einstein | Inverse parameter fitting of biological tissues: a response surface approach[END_REF][START_REF] Freed | Invariant formulation for dispersed transverse isotropy in aortic heart valves[END_REF] have proposed, in a way similar to what is done in the case of isotropy [START_REF] Hart-Smith | Elasticity parameters for finite deformations of rubber-like materials[END_REF], a constitutive equation for the stress, the energy being obtained by integration:
$$\sigma(\lambda) = A\Big[\exp\Big(B\big(\lambda^2 - 1\big)^2\Big) - 1\Big], \qquad (37)$$
where A and B are material parameters. Even if this approach was initially developed and used for arteries [START_REF] Chagnon | An osmotically inflatable seal to treat endoleaks of type 1[END_REF]205,260], it is also often used for different living tissues, as for example human cornea [190], erythrocytes [130], the mitral valve [203], trachea [155,254], cornea [139,179], collagen [125], abdominal muscle [101].
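Among the exponential forms of Table 6, the dispersion-weighted model of Gasser et al. [83] is one of the most widely implemented. The sketch below evaluates its anisotropic contribution for two fibre families; $\kappa$ is the dispersion parameter, and the tension-only switch applied to the fibre term is a common optional choice rather than part of the table entry. Names and values are ours.

```python
import numpy as np

def goh_anisotropic_energy(C, fibre_dirs, k1, k2, kappa):
    """Anisotropic term of Gasser et al. [83]:
       W = k1/(2 k2) * sum_i { exp[k2 * E_i^2] - 1 },  E_i = kappa*I1 + (1 - 3*kappa)*I4_i - 1."""
    I1 = np.trace(C)
    W = 0.0
    for N in fibre_dirs:
        I4 = N @ C @ N
        E = kappa * I1 + (1.0 - 3.0 * kappa) * I4 - 1.0
        E = max(E, 0.0)                    # tension-only switch (optional, common in practice)
        W += k1 / (2.0 * k2) * (np.exp(k2 * E ** 2) - 1.0)
    return W

lam = 1.1
F = np.diag([lam, lam ** -0.5, lam ** -0.5])
a = np.deg2rad(40.0)
dirs = [np.array([np.cos(a), np.sin(a), 0.0]), np.array([np.cos(a), -np.sin(a), 0.0])]
print(goh_anisotropic_energy(F.T @ F, dirs, k1=2.0, k2=10.0, kappa=0.1))   # arbitrary values
```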
Some Other Models
Other ideas have been developed for rubber-like materials, for example the Gent [START_REF] Gent | A new constitutive relation for rubber[END_REF] model, which represents a large strain hardening with only two parameters. Its specific form gives it a particular interest for some tissues, and it was generalised to anisotropy in two ways [111]. Other forms can be proposed with a logarithmic or a tangent function. A list of constitutive equations is given in Table 7. There are two ideas in these models. The first is to describe the behaviour at moderate deformation: functions that provide a weak slope are used, and these models are principally employed before the activation of muscles, i.e., when the material is very soft. When the material becomes stiffer, a function that models a large strain hardening is necessary; different functions were therefore introduced to capture very important changes of slope.

Table 7 Other forms of anisotropic constitutive equations, where the $c_i$ are material parameters
Ruter and Stein [222]: $W = c_2\big[\cosh(I_4 - 1) - 1\big]$
Horgan and Saccomandi [111]: $W = -c_1 c_2 \log\Big(1 - \frac{(I_5-1)^2}{c_2}\Big)$
Lurding et al. [153]: $W = c_1(I_1-3) + c_2(I_1-3)(I_4-1) + c_3\big(I_4^2 - I_5\big) + c_4\big(\sqrt{I_4}-1\big)^2 + c_5\ln(I_4)$
Calvo et al. [38]: $W = 2c_5\sqrt{I_4} + c_6\ln(I_4) + c_7$
Chui et al. [153]: $W = c_1\ln(1 - T) + c_5(I_1-3)^2 + c_6(I_4-1)^2 + c_7(I_1-3)(I_4-1)$ with $T = c_2(I_1-3)^2 + c_3(I_4-1)^2 + c_4(I_1-3)(I_4-1)$
Other invariants
Lu and Zhang [151]: $W = c_2\exp\big(c_1(\sqrt{I_4}-1)^2\big) + \frac{1}{2}c_3(\beta_1 - 1) + \frac{1}{2}c_4(\beta_2 - 2)$
Coupling Influence
Different coupling can be taken into account in the constitutive equation, for example, the shear between the fibres and the matrix, and the interaction between the fibres.
Fibre Shear. In this case, the soft tissue is considered as a composite material, and the strain energy is decomposed into three additive parts, $W = W_m + W_f + W_{fm}$ [199], where the three terms are the strain energies of the matrix, of the fibres and of the fibre-matrix interaction, respectively. Moreover, the deformation gradient of the fibres, $\mathbf{F}$, can be decomposed into a uniaxial extension tensor $\mathbf{F}_f$ and a shear deformation $\mathbf{F}_s$, as $\mathbf{F} = \mathbf{F}_s\mathbf{F}_f$ [START_REF] Criscione | Physically based strain invariants set for materials exhibiting transversely isotropic behavior[END_REF]. The decomposition of the strain energy function into different parts allows, for different loading states, the consideration of constitutive equations which are specific to the strain endured by the fibres, the matrix and the interface. This leads to the construction of different function forms [START_REF] Guo | Large deformation response of a hyperelastic fibre reinforced composite: theoretical model and numerical validation[END_REF]:
$$W_m = \frac{1}{2}c_1 f(I_4)(I_1 - 3), \qquad W_f = c_1 g_1(I_4)\frac{I_5 - I_4^2}{I_4}, \qquad W_{fm} = c_1 g_2(I_4)\Big(I_1 - \frac{I_5 + 2\sqrt{I_4}}{I_4}\Big). \qquad (38)$$
Another basic form also has been proposed [199]:
$$W_{fm} = g_2(I_4)\Big[\frac{I_4}{I_3}\big(I_5 - I_1 I_4 + I_2\big) - 1\Big]^2, \qquad (39)$$
where $f$, $g_1$ and $g_2$ are functions to be defined and $c_1$ is a material parameter. The first function corresponds to a generalisation of the neo-Hookean model [START_REF] Guo | Mechanical response of neo-Hookean fiber reinforced incompressible nonlinearly elastic solids[END_REF]. Few functions for $f$, $g_1$ and $g_2$ have so far been proposed, the first being based on exponential functions [START_REF] Caner | Hyperelastic anisotropic microplane constitutive model for annulus fibrosus[END_REF][START_REF] Guo | A composites-based hyperelastic constitutive model for soft tissue with application to the human annulus fibrosus[END_REF][START_REF] Guo | Large deformation response of a hyperelastic fibre reinforced composite: theoretical model and numerical validation[END_REF].
Interaction Between Fibres. Few models are proposed to take into account the influence of the coupling between different fibre directions. Different techniques can be used.
In order to take different directions into account and not to limit the problem to a single fibre direction, it is also possible to couple invariants from different directions [228]; the following invariant expression has been proposed:
$$\alpha^2 \big(I_4^{(1)}\big)^2 + 2\alpha(1-\alpha)\, I_4^{(1)} I_4^{(2)} + (1-\alpha)^2 \big(I_4^{(2)}\big)^2 \quad \text{with } \alpha \in [0, 1]. \qquad (40)$$
α represents a material parameter. This expression takes into account the deformation in two directions with only one invariant. Nevertheless, this has not yet been used in constitutive equations.
Instead of employing an additive decomposition of the strain energy to account for the different directions, a function that represents a coupling between the invariants of different directions [102] can be used [242]:
$$W = \frac{c_1}{c_2}\Big[\exp\Big(c_2\big(I_4^{(1)} + I_4^{(2)} - 2\big)\Big) - c_2\big(I_4^{(1)} + I_4^{(2)} - 2\big) - 1\Big]. \qquad (41)$$
A generalised weighted expression of the constitutive equation has also been developed [START_REF] Ehret | A polyconvex hyperelastic model for fiber-reinforced materials in application to soft tissues[END_REF]121]:
$$W = \frac{1}{4}\sum_r \mu_r \left\{\frac{1}{\alpha_r}\Big[\exp\Big(\alpha_r\Big(\sum_i \gamma_i I_4^{(i)} - 1\Big)\Big) - 1\Big] + \frac{1}{\beta_r}\Big[\exp\Big(\beta_r\Big(\sum_i \gamma_i J_5^{(i)} - 1\Big)\Big) - 1\Big]\right\}. \qquad (42)$$
Even if the model was developed for pneumatic membranes, such representations, which propose multiplicative terms between the $I_4$ invariants of each direction instead of an additive decomposition, can be used for soft tissues [215]:
$$W = c_1^{(1)}\big(I_4^{(1)} - 1\big)^{\beta_1} + c_2^{(1)}\big(I_5^{(1)} - 1\big)^{\beta_2} + c_1^{(2)}\big(I_4^{(2)} - 1\big)^{\gamma_1} + c_2^{(2)}\big(I_5^{(2)} - 1\big)^{\gamma_2} + c_c^{(1)}(I_1 - 3)^{\delta_1}\big(I_4^{(1)} - 1\big)^{\delta_1} + c_c^{(2)}(I_1 - 3)^{\delta_2}\big(I_4^{(2)} - 1\big)^{\delta_2} + c_c^{(1,2)}\big(I_4^{(1)} - 1\big)^{\eta}\big(I_4^{(2)} - 1\big)^{\eta}. \qquad (43)$$
This strain energy introduces coupling between the different directions, but the additive decomposition of the constitutive equation allows one to fit separately the different parameters c (j ) i , δ i and β i .
Use of $I_8$ and $I_9$. As proposed in the first part of this paper, coupling terms including $I_8^{(i,j)}$ and $I_9^{(i,j)}$ can be used. Thus, such terms have been added to the strain energy in order to model esophageal tissues [177]:
$$W = \frac{c_1}{c_3}\exp\big(c_3(I_1-3)\big) + \frac{c_2}{c_5}\exp\big(c_5(I_2-3)\big) + \frac{c_4}{c_7^2}\Big[\exp\big(c_7\big(I_4^{(1)}-1\big)\big) - c_7\big(I_4^{(1)}-1\big) - 1\Big] + \frac{c_6}{c_8^2}\Big[\exp\big(c_8\big(I_4^{(2)}-1\big)\big) - c_8\big(I_4^{(2)}-1\big) - 1\Big] + c_9\big(I_8^{(1,2)} - I_9^{(1,2)}\big)^2, \qquad (44)$$
where $c_i$ with $i = 1...9$ are material parameters. For annulus fibrosus tissues, the influence of the interaction between the layers has been modelled [178] with an energy term taking into account $I_4^{(1)}$, $I_4^{(2)}$ and $I_8^{(1,2)}$:
$$W = \frac{c_1}{2c_2}\left\{\exp\left[c_2\left(\frac{I_8^{(1,2)}}{\big(I_4^{(1)} I_4^{(2)} I_9^{(1,2)}\big)^{1/2}} - I_9^{(1,2)}\right)^2\right] - 1\right\}. \qquad (45)$$
A similar form of exponential model (cf. Table 6) has been proposed to include the effect of I 8 [START_REF] Göktepe | Computational modeling of passive myocardium[END_REF]:
$$W = \frac{c_1}{c_2}\left[\exp\left(c_2\frac{\big(I_8^{(1,2)}\big)^2}{I_9^{(1,2)}}\right) - 1\right]. \qquad (46)$$
These models are not often employed, but there exist some for composite materials that can be used [200,214]. In comparison with other models, these approaches take into account the shear strain in the material whereas the first models couple the deformations of the different fibres.
Statistical Approaches
In this part, some statistical approaches that tend to encompass the physics of soft tissues are detailed. They come from the study of the collagen network and use a change-of-scale method [START_REF] Chen | Nonlinear micromechanics of soft tissues[END_REF]181]. A collagen molecule is defined by its length, its stiffness and its helical structure. Some studies are motivated by approaches developed for rubber-like materials [START_REF] Beatty | An average-stretch full-network model for rubber elasticity[END_REF]73,129]. Unlike polymer chains in rubber, which are uncorrelated in nature, collagen chains in biological tissues are classified as correlated chains from a statistical point of view: rubber chains resemble a random walk, whereas biological chains often present privileged orientations. In this way, different theories are considered to represent the chains, for example worm-like chains with a slightly varying curvature [132], or sinusoidal, zig-zag or circular helix representations [START_REF] Freed | Invariant formulation for dispersed transverse isotropy in aortic heart valves[END_REF]126,140]. Nevertheless, to develop models resting on statistical approaches, some hypotheses are needed. A distribution function $f$ of the orientation of the fibres is used to represent the material. The unit vector $\mathbf{a}_0$, oriented in the direction of a certain amount of fibres having a spatial orientation distribution $f$, is defined in terms of two spherical angles, denoted $\phi$ and $\psi$:
$$\mathbf{a}_0 = \sin\phi\cos\psi\,\mathbf{e}_1 + \sin\phi\sin\psi\,\mathbf{e}_2 + \cos\phi\,\mathbf{e}_3, \qquad (47)$$
with φ ∈ [0, π] and ψ ∈ [0, 2π] and e i is the usual rectangular Cartesian basis. The distribution function is required to satisfy some elementary properties [191]. By symmetry requirements f (a 0 ) = f (-a 0 ). The quantity f (a 0 ) sin φdφdψ represents the number of fibres with an orientation in the range [(φ, φ + dφ), (ψ, ψ + dψ)]. By considering the unit sphere S around a material point, the following property is deduced:
$$\frac{1}{4\pi}\int_S f(\mathbf{a}_0)\,dS = \frac{1}{4\pi}\int_0^{\pi}\!\int_0^{2\pi} f(\mathbf{a}_0)\sin\phi\, d\psi\, d\phi = 1. \qquad (48)$$
A constant distribution leads to isotropy [START_REF] Ateshian | Anisotropy of fibrous tissues in relation to the distribution of tensed and buckled fibers[END_REF].
The strain energy of the soft tissue can then be deduced by integration of the elementary fibre energy in each direction w(I 4 (a 0 )) by:
$$W = \frac{1}{4\pi}\int_S f(\mathbf{a}_0)\, w\big(I_4(\mathbf{a}_0)\big)\, dS. \qquad (49)$$
Finally, the stress is determined by derivation:
$$\mathbf{S} = \frac{1}{2\pi}\int_S f(\mathbf{a}_0)\,\frac{\partial w(I_4(\mathbf{a}_0))}{\partial \mathbf{C}}\, dS. \qquad (50)$$
The evaluation of the stress depends on different parameters: the distribution function and the energy of a single fibre. Different considerations have been proposed in the literature. For the distribution function, the principal propositions are: beta distribution [START_REF] Abramowitz | Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables[END_REF][START_REF] Cacho | A constitutive model for fibrous tissue sconsidering collagen fiber crimp[END_REF]218,225], log-logistic distribution [277], Gaussian distribution [START_REF] Billar | Biaxial mechanical properties of the native and glutaraldehyde-treated aortic valve cusp: Part II-a structural constitutive model[END_REF][START_REF] Chen | The structure and mechanical properties of the mitral valve leaflet-strut chordae transition zone[END_REF][START_REF] Driessen | A structural constitutive model for collagenous cardiovascular tissues incorporating the angular fiber distribution[END_REF]141,223,272], von Mises distribution [START_REF] Alastrué | Anisotropic microsphere-based finite elasticity applied to blood vessel modelling[END_REF][START_REF] Gasser | Hyperelastic modelling of arterial layers with distributed collagen fibre orientations[END_REF][START_REF] Girard | Peripapillary and posterior scleral mechanics-Part I: development of an anisotropic hyperelastic constitutive model[END_REF]191,211,263] or the Bingham distribution [START_REF] Alastrué | On the use of the bingham statistical distribution in microsphere-based constitutive models for arterial tissue[END_REF]. The forms of the distribution are listed in Table 8. The choice of the functions is also a key point. Different functions can be chosen to describe the mechanical behaviour of a collagen fibre; the simple linear behaviour [START_REF] Ateshian | Anisotropy of fibrous tissues in relation to the distribution of tensed and buckled fibers[END_REF], or the phenomenological laws of the exponential Fung type [START_REF] Alastrué | Anisotropic microsphere-based finite elasticity applied to blood vessel modelling[END_REF][START_REF] Billar | Biaxial mechanical properties of the native and glutaraldehyde-treated aortic valve cusp: Part II-a structural constitutive model[END_REF]125,211,225,263] or a logarithmic function [277] or a polynomial function [START_REF] Flynn | An anisotropic discrete fibre model based on a generalised strain invariant with application to soft biological tissues[END_REF]248], other functions [119,207] or worm-like chain forms [START_REF] Alastrué | Anisotropic microsphere-based finite elasticity applied to blood vessel modelling[END_REF][START_REF] Alastrué | On the use of the bingham statistical distribution in microsphere-based constitutive models for arterial tissue[END_REF][START_REF] Bischoff | A microstructurally based orthotropic hyperelastic constitutive law[END_REF][START_REF] Bischoff | Orthotropic hyperelasticity in terms of an arbitrary molecular chain model[END_REF][START_REF] Bustamante | Ten years of tension: single-molecule DNA mechanics[END_REF][START_REF] Garikipati | A continuum treatment of growth in biological tissue: the coupling of mass transport and mechanics[END_REF]135,136, 216,218] which are a particularisation of the eight-chain model [START_REF] Arruda | A three dimensional constitutive model for the large stretch behavior of rubber elastic materials[END_REF] to the transversely isotropic case. 
For some models, a parameter should be introduced in the fibre concentration factor to control collagen fibre alignment along a preferred orientation [START_REF] Girard | Peripapillary and posterior scleral mechanics-Part I: development of an anisotropic hyperelastic constitutive model[END_REF]. The different constitutive equations are listed in Table 9. The reader can refer to [START_REF] Bischoff | Continuous versus discrete (invariant) representations of fibruous structure for modeling non-linear anisotropic soft tissue behavior[END_REF] to determine which strain energy is used for each tissue.
Table 8 Statistical orientation distribution functions used in the literature
Beta distribution: $\beta(\eta, \gamma) = \frac{\Gamma(\eta)\Gamma(\gamma)}{\Gamma(\eta + \gamma)}$ with $\Gamma(x) = \int_0^{\infty} t^{x-1}\exp(-t)\,dt$
Log-logistic distribution: $f(\varepsilon) = \frac{k}{b}\,\frac{\big((\varepsilon - \varepsilon_0)/b\big)^{k-1}}{\big[1 + \big((\varepsilon - \varepsilon_0)/b\big)^{k}\big]^2}$ with $\varepsilon = \sqrt{I_4} - 1$
Gaussian distribution: $f(\phi) = \frac{1}{\sigma\sqrt{2\pi}}\exp\Big(-\frac{(\phi - M)^2}{2\sigma^2}\Big)$
Normalized von Mises distribution: $f(\phi) = \frac{4}{I}\sqrt{\frac{b}{2\pi}}\exp\big(b(\cos 2\phi + 1)\big)$ with $I = 2\sqrt{\pi}\int_0^{\sqrt{2b}}\exp\big(-t^2\big)\,dt$
Bingham distribution: $f(\mathbf{r}, \mathbf{A}) = K(\mathbf{A})^{-1}\exp\big(\mathbf{r}^T\!\mathbf{A}\,\mathbf{r}\big)$, where $\mathbf{A}$ is a symmetric matrix, $\mathbf{r}$ a vector and $K(\mathbf{A})$ a normalisation constant
Table 9 Single-fibre strain energy functions used in statistical approaches
Exponential function: $w = \frac{k_1}{k_2}\big[\exp\big(k_2(I_4 - 1)^2\big) - 1\big]$
Logarithmic function: $w = c\big[\varepsilon - \log(\varepsilon + 1)\big]$ for $\varepsilon > 0$, with $\varepsilon = \sqrt{I_4} - 1$
Polynomial function: $w = \frac{1}{2}K\Big(\gamma + \sum_{m=2}^{M}\frac{\gamma_m}{m}\gamma^m\Big)$
Worm-like chain: $w = \frac{nk\theta L}{4A}\Big[\frac{2r_i^2}{L^2} + \frac{1}{1 - r_i/L} - \frac{r_i}{L} - \frac{\ln(I_4^2 r_0^2)}{4 r_0 L}\Big(\frac{4r_0}{L} + \frac{1}{(1 - r_0/L)^2} - 1\Big) - W_r\Big]$ with $r_i = \sqrt{I_4}\, r_0$ and $W_r = \frac{2r_0^2}{L^2} + \frac{1}{1 - r_0/L} - \frac{r_0}{L}$
The main difficulty of these constitutive equations is that they require a numerical integration that is always time consuming [START_REF] Bischoff | Continuous versus discrete (invariant) representations of fibruous structure for modeling non-linear anisotropic soft tissue behavior[END_REF]211]. The integration of the fibre contribution is mainly realised over a referential unit sphere [134,172]. Some prefer to use a finite number of directions; the constitutive equation is then modified as follows:
$$\frac{1}{4\pi}\int_S (\bullet)\, dS = \sum_{i=1}^{m} w_i\,(\bullet)_i. \qquad (51)$$
Different choices exist, as the 42 directions of Bazant and Oh [START_REF] Bažant | Efficient numerical integration on the surface of a sphere[END_REF] and by Menzel [164], or the 184 directions of Alastrue et al. [START_REF] Alastrué | Anisotropic microsphere-based finite elasticity applied to blood vessel modelling[END_REF], for example. The only different approach to those mentioned above is that proposed by [START_REF] Flynn | An anisotropic discrete fibre model based on a generalised strain invariant with application to soft biological tissues[END_REF] who used only six initial directions without employing an integration. Even if statistical approaches have more complex equations than phenomenological ones, some of these models have been implemented in finite element codes [START_REF] Alastrué | Anisotropic microsphere-based finite elasticity applied to blood vessel modelling[END_REF][START_REF] Garikipati | A continuum treatment of growth in biological tissue: the coupling of mass transport and mechanics[END_REF]135,136,164,275].
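A minimal numerical illustration of Eqs. (49)-(51) is given below: the fibre energy is averaged over a discretised unit sphere using a simple midpoint product rule (not one of the optimised direction sets cited above), with a von Mises-type orientation density about $\mathbf{e}_1$ and an exponential single-fibre energy, fibres being assumed to contribute in extension only. All choices and values are illustrative.

```python
import numpy as np

def microsphere_energy(C, b, k1, k2, n_phi=60, n_psi=120):
    """Orientation-averaged fibre energy, Eq. (49), normalised as in Eq. (48)."""
    phi = (np.arange(n_phi) + 0.5) * np.pi / n_phi          # polar angle from e1
    psi = (np.arange(n_psi) + 0.5) * 2.0 * np.pi / n_psi    # azimuth
    dA0 = (np.pi / n_phi) * (2.0 * np.pi / n_psi)
    W, norm = 0.0, 0.0
    for p in phi:
        for q in psi:
            a0 = np.array([np.cos(p), np.sin(p) * np.cos(q), np.sin(p) * np.sin(q)])
            rho = np.exp(b * (np.cos(2.0 * p) + 1.0))        # unnormalised von Mises density
            I4 = a0 @ C @ a0
            w = k1 / (2.0 * k2) * (np.exp(k2 * (I4 - 1.0) ** 2) - 1.0) if I4 > 1.0 else 0.0
            W += rho * w * np.sin(p) * dA0
            norm += rho * np.sin(p) * dA0
    return W / norm                                          # numerical normalisation of rho

lam = 1.1
F = np.diag([lam, lam ** -0.5, lam ** -0.5])
print(microsphere_energy(F.T @ F, b=2.0, k1=1.0, k2=5.0))
```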
Discussion
The main difficulty is not to find the best constitutive equation but to have suitable experimental data. In fact, experimental data often show a large dispersion due to the variability between specimens, and it is often difficult to impose different loading conditions on similar specimens. The errors are therefore often large and the number of loading conditions limited. As a consequence, one can wonder whether the key point is to obtain the best fit for a very specific experimental database, or whether it is more important to represent the mechanical behaviour globally, keeping in mind the physics of soft tissues.
As it was shown in the previous paragraphs, the number of constitutive equations that can be used to describe soft tissues non-linear elasticity is very impressive. Moreover, there exist other approaches, not presented in this paper, which involve a new class of elastic solids with implicit elasticity [212] that can also describe the strain limiting characteristics of soft tissues [START_REF] Freed | An implicit elastic theory for lung parenchyma[END_REF]. These theories are also elastic as they do not dissipate energies even though they are written in terms of strain rate and stress rate. But in this paper, we only focus on hyperelastic energy functions. These functions are expressed in terms of strain tensor components or strain invariants. The main difference between the two approaches discussed here is that the invariants formulation permits one to split the energy function into additive isotropic and anisotropic parts, even if, some constitutive equations written in invariants also link these two parts. The first constitutive equations introduced for soft tissues were isotropic. Although, for some applications, an isotropic constitutive equation is used to describe the mechanical behaviour for different soft tissues, the use of such simplified models is, in many cases, misleading and inappropriate as most soft tissues have a fibre structure that must be taken into account. To represent this structure, many constitutive equations are based on privileged directions that correspond to physical fibre orientations. In the modelling, characteristic directions are defined and they are represented by an angle that defines the orientation of the fibre compared to a specific direction. This angle can be considered as a parameter that is used to fit as well as possible the experimental data. Thus, the model is not used to mimic the physical soft tissue but it is used as a phenomenological equation to describe properly experimental data. This is not, in our opinion, a good choice, and it may mean that the energy function is not well chosen. The angle between the fibres should not be an adjustable parameter but must be imposed by the soft tissue structure.
An important issue in modelling concerns the stretching resistance of the fibres. Many authors consider that the fibres must reach a threshold before opposing a stress. In this way, a threshold parameter can be introduced in all the suitable constitutive equations presented in this review. For the phenomenological models, it consists in replacing $(I_4 - 1)$ by $(I_4 - I_4^0)$, or $(\sqrt{I_4} - 1)$ by $(\sqrt{I_4} - I_4^0)$, in the constitutive equations, where $I_4^0$ corresponds to the deformation needed to generate stress; see for example [START_REF] Calvo | On modelling damage process in vaginal tissue[END_REF]107,197,219]. The advantage of such approaches is that a material parameter controls the onset of material stiffening. Nevertheless, a main difficulty is that this parameter strongly depends on the zero state of the experimental data. Moreover, this zero state often differs between post-mortem and in-vivo specimens, and can depend on the experimenter.
Anisotropic strain energy functions are difficult to fit, as it is difficult to separate the contributions of the matrix and the fibres and to distinguish the different parts of the strain energy. Nevertheless, some strategies based on dissociating the isotropic and anisotropic parts can be used [START_REF] Harb | A new parameter identification method of soft biological tissue combining genetic algorithm with analytical optimization[END_REF]. To avoid such representations, physical approaches attempt to represent the distribution of the fibres in space, but two difficulties must then be considered: the knowledge of the distribution function of the fibres in space, and the mechanical properties of a single fibre. The choice of the best strain energy function is always a difficult point in the modelling process. A summary of the constitutive equations is presented in Fig. 1. In practice, the invariants $I_2$ and $I_5$ are often neglected. Their contribution is always difficult to determine [115] but it can be useful [START_REF] Feng | Measurements of mechanical anisotropy in brain tissue and implications for transversely isotropic material models of white matter[END_REF]. Moreover, these invariants are not independent from $I_1$ and $I_4$ in uniaxial loading tests; in this case, it is important to also have biaxial loadings to fit constitutive equations [224]. Moreover, in vivo experimental data [233] would be a benefit to obtain a good experimental fit, but there is little such data in the literature compared to post-mortem experimental data. The choice of constitutive equation will depend on the particular soft tissue under study, and the conclusions will strongly depend on the experimental data that are chosen. Nevertheless, some comparisons between anisotropic strain energies have been realised in particular cases, see, for example, [START_REF] Carboni | Passive mechanical properties of porcine left circumflex artery and its mathematical description[END_REF][START_REF] Galle | A transversely isotropic constitutive model of excised guinea pig spinal cord white matter[END_REF]104,116,117,265].
In practice, a strategic point is the choice of a constitutive equation to be implemented in a finite element code to describe loading conditions that are very far from uniaxial or biaxial loadings. In this case, it is important to choose a constitutive equation that can be fitted with few experimental data and that does not produce non-physical responses for any loading. Generally, it is better to limit the number of invariants and material parameters. Moreover, the simplest functions are often the best, as they have the lowest probability of creating non-physical responses even if their fit is not the best.
Conclusion
This paper has listed many different constitutive equations that have been developed for soft tissues. The number of constitutive equations proposed to represent the hyperelastic contribution is extensive, owing to the number of soft tissues and to the dispersion of experimental data. The paper listed first isotropic constitutive equations and then anisotropic ones, and these were classed into different categories: those written with strain tensor components, those written in terms of the invariants, and those based on statistical modelling. Despite all the difficulties encountered in the modelling of the isotropic or anisotropic hyperelastic behaviour of soft tissue, these constitutive equations must be considered as only the basis of a more complex constitutive equation. Generalized equations should take into account other phenomena such as the activation of muscle [START_REF] Calvo | Passive non linear elastic behaviour of skeletal muscle: experimental results and model formulation[END_REF], [158, 188, 189, 250], the viscoelasticity of the tissues [START_REF] Bischoff | A rheological network model for the continuum anisotropic and viscoelastic behavior of soft tissue[END_REF], [START_REF] Haslach | Nonlinear viscoelastic, thermodynamically consistent, models for biological soft tissue[END_REF], [105, 147, 208] or stress softening [154, 195], for example. Nevertheless, the hyperelastic representation should remain the starting point in a modelling program and should be described as well as possible before introducing other effects.
Fig. 1 Organisation of the constitutive equations in the paper

Table 2 Principal isotropic hyperelastic constitutive equations developed for soft tissues, where c_1, c_2, c_3 and c_4 are material parameters; the entries include polynomial models such as Raghavan and Vorp [210]. (*) The model is known as the generalised neo-Hookean model. (**) As pointed out by [109], this model is frequently mistakenly attributed to Delfino et al. [56].

Table 3 Anisotropic constitutive equations written with strain tensor components, where A_{ijkl} with i, j, k, l = 1...3, b_i with i = 1...12, a_{ij}, b_{ij}, c_{ij} with i, j = 1...3 and c are material parameters; the entries include the generic Fung-type exponential functions W = (C/2)(exp Q - 1) of Tong and Fung [252], as well as the functions of Rajagopal et al. [213] and of Nash and Hunter [175].
For example, the function defined in [175] requires a limit for each component. As a consequence, the domain limits of the function are well established. The question is different for [249], as the function is written in terms of the sum of the components in a logarithmic form, and the function can be undefined [104].
Nash and Hunter [175]: W = c_{11} E_{11}^2 |a_{11} - E_{11}|^{b_{11}} + c_{22} E_{22}^2 |a_{22} - E_{22}|^{b_{22}} + c_{33} E_{33}^2 |a_{33} - E_{33}|^{b_{33}} + c_{12} E_{12}^2 |a_{12} - E_{12}|^{b_{12}} + c_{13} E_{13}^2 |a_{13} - E_{13}|^{b_{13}} + c_{23} E_{23}^2 |a_{23} - E_{23}|^{b_{23}}

Table 4 Some constitutive equations based on truncations of the series developments, where c_i with i = 1...6 are material parameters; the I_4 forms include Triantafyllidis and Abeyaratne [257], W_aniso = c_2 (I_4 - 1)^2, and Peng et al. [199].

Table 5 Model based on a power development, where k_i, i = 1...6 are material parameters (Ghaemi et al. [85]).

Table 6 List of exponential constitutive equations, where c_1, c_2, c_3, c_4, c_5 and κ are material parameters; the entries include the models of Holzapfel et al. [104], Doyle et al. [64] and Fung et al. [78], and, among the I_4, I_5 forms, Masson et al. [160].

Table 7 Other models written in invariants, where c_i with i = 1...7 are material parameters; the general forms include Horgan and Saccomandi [111], Ruter and Stein [222], Calvo et al. [38] and Demirkoparan et al. [START_REF] Demirkoparan | Swelling of an internally pressurized nonlinearly elastic tube with fiber reinforcing[END_REF][START_REF] Demirkoparan | On dissolution and reassembly of filamentary reinforcing networks in hyperelastic materials[END_REF].

Table 8 Some distribution functions used in statistical approaches (e.g. the Beta distribution), where ε_0, b, σ, M and I are statistical parameters.

Table 9 Some fibre energy functions used in statistical approaches (e.g. the Holzapfel et al. function), where k_1, k_2, r_0, L, K, W_r, γ_m and m are material parameters.
Moreover, the parameters of these models are often difficult to fit as they have no physical meaning. For example, the strain energy of [259] is discussed in [104] and is not convex; this can also be the case for Fung functions if the parameters are not well chosen [104]. The limitations on material parameters are discussed in [START_REF] Federico | An energetic approach to the analysis of anisotropic hyperelastic materials[END_REF] and [269] with respect to polyconvexity. In this way, developments have been made to ensure polyconvexity together with a physical meaning of the material response [247]. Other conditions must also be respected for viable functions.
Acknowledgements
The authors thank Prof. Roger Fosdick for his valuable comments. This work is supported by the French National Research Agency Program ANR-12-BS09-0008-01 SAMBA (Silicone Architectured Membranes for Biomedical Applications). |
01758982 | en | [
"shs.hisphilso"
] | 2024/03/05 22:32:10 | 2018 | https://shs.hal.science/halshs-01758982/file/asymptoticFinalHAL.pdf | Mirna Džamonja
Marco Panza
Asymptotic quasi-completeness and ZFC
The axioms ZFC of first order set theory are one of the best and most widely accepted, if not perfect, foundations used in mathematics. Just as the axioms of first order Peano Arithmetic, ZFC axioms form a recursively enumerable list of axioms, and are, then, subject to Gödel's Incompleteness Theorems. Hence, if they are assumed to be consistent, they are necessarily incomplete. This can be witnessed by various concrete statements, including the celebrated Continuum Hypothesis CH. The independence results about the infinite cardinals are so abundant that it often appears that ZFC can basically prove very little about such cardinals.
However, we put forward a thesis that ZFC is actually very powerful at some infinite cardinals, but not at all of them. We have to move away from the first few and to look at limits of uncountable cardinals, such as ℵω. Specifically, we work with singular cardinals (which are necessarily limits) and we illustrate that at such cardinals there is a very serious limit to independence and that many statements which are known to be independent on regular cardinals become provable or refutable by ZFC at singulars. In a certain sense, which we explain, the behavior of the set-theoretic universe is asymptotically determined at singular cardinals by the behavior that the universe assumes at the smaller regular cardinals. Foundationally, ZFC provides an asymptotically univocal image of the universe of sets around the singular cardinals. We also give a philosophical view accounting for the relevance of these claims in a platonistic perspective which is different from traditional mathematical platonism.
Introduction
Singular cardinals have a fascinating history related to an infamous event in which one mathematician tried to discredit another and ended up being himself proved wrong. As Menachem Kojman states in his historical article on singular cardinals [START_REF] Kojman | Singular Cardinals: from Hausdorff's gaps to Shelah's pcf theory[END_REF], 'Singular cardinals appeared on the mathematical world stage two years before they were defined'. In a public lecture at the Third International Congress of Mathematics in 1904, Julius König claimed to have proved that the continuum could not be well-ordered, therefore showing that Cantor's Continuum Hypothesis does not make sense, since this would entail that 2^{ℵ_0}, the (putative) cardinal of the continuum, is not well defined. This was not very pleasant for Cantor, who was not alerted in advance and who was in the audience. However, shortly after, Felix Hausdorff found a mistake in König's proof.
The paper is organized as follows: Mathematical results that illustrate the mentioned facts are expounded in sections §2 and §3. The former contains results that by now are classic in set theory and it is written in a self-contained style. The latter contains results of contemporary research and is meant to reinforce the illustration offered by the former. This section is not written in a self-contained style, and it would be out of the scope of this paper to write it in this way. Section §2 also contains a historical perspective. Finally, some philosophical remarks are made in §4.
Modern history of the singular cardinals
One of the most famous (or infamous, depending on the point of view) problems in set theory is that of proving or refuting the Continuum Hypothesis (CH) and its generalisation to all infinite cardinals (GCH).
Cantor recursively defined two hierarchies of infinite cardinals, the ℵs and the s, the first based on the successor operation and the second on the power set operation: ℵ 0 = 0 = ω, ℵ α+1 = ℵ + α , α+1 = 2 α , and for δ a non-zero limit ordinal ℵ δ = sup β<δ ℵ β , δ = sup β<δ β (here we are using the notation 'sup(A)' for a set A of cardinals to denote the first cardinal greater or equal to all cardinals in A). A simple way to state GCH is to claim that these two hierarchies are the same: ℵ α = α , for any α. Another way, merely involving the first hierarchy, is to claim that for every α we have 2 ℵα = ℵ + α . CH is the specific instance ℵ 1 = 1 or 2 ℵ0 = ℵ 1 . Insofar as 1 = |R|, CH can be reformulated as the claim that any infinite subset of the set of the real numbers admits a bijection either with the set of natural numbers or with the set of real numbers.
It is well known that, frustratingly, Cantor spent at least thirty years trying to prove CH. Hilbert choose the problem of proving or disproving GCH as the first item on his list of problems presented to the International Congress of Mathematics in 1900. In 1963 ( [START_REF] Cohen | The independence of the continuum hypothesis[END_REF]), Paul Cohen proved that the negation of CH is relatively consistent with ZFC. This result, jointly with that proved by Kurt Gödel in 1940 ([20])-that GCH is also relatively consistent with ZFC-entails that neither CH nor GCH are provable or refutable from the axioms of ZFC.
Cohen's result came many years after Gödel's incompleteness theorems ( [START_REF] Gödel | Über formal unentscheidbare Säztze der Principia Mathematica und verwandter Systeme, I[END_REF]), which imply that there is a sentence in the language of set theory whose truth is not decidable by ZFC. But the enormous surprise was that there are undecidable sentences which are not specifically constructed as a Gödel's sentence; in particular, there is one as simply stated and well known as CH.
There are many mathematical and philosophical issues connected to this outcome. The one which interests us here concerns the consequences it has for ZFC's models: it entails that if ZFC is consistent at all, then it admits a huge variety of different models, where CH and CGH are either true or false and, more generally, the power set class-function (namely F : Reg → Reg; F (κ) = 2 κ , where Reg is the class of regular cardinals) behaves in almost arbitrary ways (see below on the results of William Easton). This means that ZFC's axioms leave the von Neumann universe of sets V -which is recursively defined by appealing to the power set operation (V = α V α , with α an ordinal and V α = β<α P (V β ))-hugely indeterminate: they are compatible, for example, both with the identification of V with Gödel's constructible universe L (which is what the axiom of constructibility 'V = L' asserts, by, then, deciding GCH in the positive), and with the admission that in V the values of 2 κ are as large as desired, which makes V hugely greater than L. The question is whether this indetermination of the size of V α versus the size of L α can be somehow limited for some sort of cardinals, i.e. for some values of α. The results we mention below show that this is so for singular cardinals, and even, as we said above, that V is asymptotically determined at singular cardinals by its features at the smaller regular cardinals.
To explain this better, we begin with a result by Easton ([16]), who, shortly after Cohen's result and building on earlier results of Robert Solovay ([45]), proved that for regular cardinals the indetermination of the values of the power set function is even stronger than the Cohen's result suggests: for any non-decreasing class-function F : Reg → Reg defined in an arbitrary model of ZFC so that cf(F (κ)) > κ for all κ, there is an extension to another model that preserves both cardinals and cofinalities and in which 2 κ = F (κ), for any regular cardinal κ. This implies that in ZFC no statement about the power set (class)-function1 on the regular cardinals other than 'κ ≤ λ =⇒ 2 κ ≤ 2 λ ' and 'cf (κ) < cf (2 κ )' can be proved.
It is important to notice that singular cardinals are excluded from Easton's result. Just after the result was obtained, it was felt that this restriction was due to a technical problem which could be overcome in the future. But what became clear later is that this restriction is due to deep differences between regular and singular cardinals. Indeed, many results attesting to this soon followed. In particular, what these results eventually showed is that the power set classfunction behaves much better at singular cardinals than it does at regular ones. While the above quoted results by Gödel, Cohen and Easton imply that the value of the power set function can be decided in ZFC for neither regular nor singular cardinals, as not even 2 ℵ0 has an upper bound there, it turns out that one can do the next-best thing and show in ZFC that the value of 2 κ for any singular κ is conditioned on the values of 2 λ for the regular λ less than κ. This entails that the size of V κ+1 is, in turn, conditioned by that of that of V λ for λ ≤ κ.
Already by 1965 and 1973 respectively, Lev Bukovský ( [START_REF] Bukovský | The continuum problem and powers of alephs[END_REF]) and Stephen H. Hechler ( [START_REF] Stephen | Powers of singular cardinals and a strong form of the negation of the generalized continuum hypothesis[END_REF]) had proved, for example, that in ZFC if κ is singular and 2 λ is eventually constant for λ < κ, then 2 κ is equal to this constant. Therefore the value of 2 κ is entirely determined by the values of the power set function below κ. An infinite cardinal λ is said to be strong limit if for any θ < λ we have 2 θ < λ (in particular, it follows that such a cardinal is limit). Note that strong limit cardinals, and in particular, strong limit singular cardinals, exist in any universe of set theory: an example is given by ω . Solovay ( [START_REF] Solovay | Strongly compact cardinals and the GCH[END_REF]) proved that for any κ which is larger or equal to a strongly compact cardinal (a large cardinal λ characterised by having a certain algebraic property that is not essential to explain here, namely that any λ-complete filter can be extended to a λ-complete ultrafilter), we have 2 κ = κ + . In other words, GCH holds above a strongly compact cardinal. This result, of course, is only interesting if there exists a strongly compact cardinal. In fact this result was obtained as part of an investigation started earlier by Dana Scott [START_REF] Scott | Measurable cardinals and constructible sets[END_REF], who investigated the question of what kind of cardinal can be the first cardinal failing GCH, that is, what properties must have a cardinal κ such that 2 κ > κ + , but such that 2 θ = θ + , for all infinite cardinals θ < κ. What Solovay's result shows is that such a cardinal cannot be strongly compact.
This result led Solovay to advance a new hypothesis, according to which, for singular cardinals, his own result does not depend on the existence of a strongly compact cardinal. In other words, the hypothesis is that in ZFC, every singular strong limit cardinal κ satisfies 2 κ = κ + . The heart of it is the following implication called the 'Singular Cardinal Hypothesis':
2 cf(κ) < κ =⇒ κ cf(κ) = κ + , (SCH)
for any cardinal κ. Indeed, for definition, the antecedent implies that κ is a singular cardinal, so that SCH states that κ cf(κ) = κ + , for any singular cardinal κ for which this is not already ruled out by 2 cf(κ) being too big. On the other hand, if κ is a strong limit cardinal, then it follows from the elementary results mentioned in the previous section that κ cf(κ) = 2 κ (see [START_REF] Jech | Set Theory[END_REF], pg. 55), so that the consequent reduces to '2 κ = κ + '. Hence, SCH implies that the power set operation is entirely determined on the singular strong limit cardinals, since GCH holds for any such cardinal.
In a famous paper appearing in 1975 ( [START_REF] Silver | On the singular cardinals problem[END_REF]), Jack Silver proved that if κ is a singular cardinal of uncountable cofinality, then κ cannot be the first cardinal to fail GCH. A celebrated and unexpected counterpart of this result was proved by Menachem Magidor shortly afterwards ( [START_REF] Magidor | On the singular cardinals problem[END_REF]). It asserts that in the presence of some rather large cardinals, it is consistent with ZFC to assume that ℵ ω is the first cardinal that fails GCH. This, of course, implies that the condition that κ has uncountable cofinality is a necessary condition for Silver's result to hold. But it also implies that SCH fails and that the power set function at the strong limit singular cardinals does not always behave in the easiest possible way.
Another celebrated theorem proved shortly after the work of Silver is Jensen's Covering Lemma ( [START_REF] Devlin | Marginalia to a theorem of Silver[END_REF]), from which it follows that if there are no sufficiently large cardinals in the universe, then SCH holds. To be precise, this lemma implies that SCH holds if 0 does not exist. (It is probably not necessary here to define 0 , but let us say that it is a large cardinal whose existence would make V be larger than L, whereas its nonexistence would make V be closely approximated by L.)
Further history of the problem up to the late 1980s is quite complex and involves notions that are out of the scope of ZFC and, a fortiori out of the scope of our paper. Details can be found, for example, in the historical introduction to [START_REF] Shelah | Cardinal Arithmetic, volume 29 of Oxford Logic Guides[END_REF]. Insofar as our interest here is to focus on the results that can be proved in ZFC, we confine ourselves to mention a surprising result proved by Fred Galvin and András Hajnal in 1975 ([17]). By moving the emphasis from GCH to the power set function as such, they were the first to identify a bound in ZFC for a value of this function, namely for the value it would take on a strong limit singular cardinal with uncountable cofinality. Let κ be such a cardinal, then what Galvin and Hajnal proved is that 2 κ < ℵ γ , where γ = (2 |α| ) + for that α for which κ = ℵ α . As the comparison with the two results of Silver and Magidor mentioned above makes clear, singular cardinals with countable and uncountable cofinality behave quite differently. There were no reasons in principle, then, to think, that Galvin and Hajnal's result would extend to singular cardinals with countable cofinality and the state of the matters stood still for many years.
Fast forward, and we arrive at a crowning moment in our story, namely to the proof, by Saharon Shelah in the late 1980s, of the following unexpected theorem, put forward in [START_REF] Shelah | Cardinal Arithmetic, volume 29 of Oxford Logic Guides[END_REF]:
[∀n n < ω =⇒ 2 ℵn < ℵ ω ] =⇒ 2 ℵω < ℵ ω4 . (1)
Shelah's theorem is, in fact, more general than the instance we quoted, which nevertheless perfectly illustrates the point. If ℵ ω is a strong limit, then the value of the power set function on it is bounded. In every model of ZFC, Shelah's theorem extends to the countable cofinality the result of Galvin and Hajnal, obtains a bound in terms of just the ℵ-function (unlike the Galvin-Hajnal theorem which uses the power set function), and shows that in spite of Magidor's result (which shows that SCH can fails at singular strong limits cardinals of countable cardinality), even at such cardinals a weak form of SCH holds, namely the value of the power set function is bounded. Shelah's theorem is proved by discovering totally new operations on cardinals, called 'pcf' and 'pp', which are meaningful for singular cardinals and whose values are very difficult to change by forcing. In many instances it is not even known if they are changeable to any significant extent. It would be much too complex for us to describe these operations here but the point made is that even though ZFC axioms are quite indecisive about the power set operation in general, they are quite decisive about it at the singular cardinals and this is because they prove deep combinatorial facts about the operations pcf and pp. The field of research concerned with the operations pcf and pp is called the 'pcf theory'.
Some contemporary results
The foregoing results have been known to mathematicians for a while but they do not seem to have influenced the literature in philosophy very much. The purpose of this article is to suggest that they have some interest for our philosophical views about ZFC and, more generally, set theory. Before coming to it, however, let us make a short detour in the realm of some more recent results which further illustrate the point. These results, to which this section is devoted, deal with mathematical concepts which are rather advanced; it would distract from the point to present them in a self-contained manner. Those readers who are not at ease with these concepts can safely skip the present section, taking it on trust that contemporary research continues to prove that singular cardinals have quite peculiar features, and that the mathematical universe at such cardinals exhibits much less indetermination than at the regular cardinals. This is the view that we shall discuss in §4.
Let us begin by observing that the emphasis of the recent research on singular cardinals has moved from cardinal arithmetic to more combinatorial questions. We could say that what recent research on singular cardinals is concerned with is combinatorial SCH: rather than just looking at the value of 2 κ for a certain cardinal κ, one considers the "combinatorics" of κ, namely the interplay of various appropriate properties ϕ(κ) of it. An example of such a property might be the existence of a certain object of size κ, such as a graph (see below on graphs) on κ with certain properties, or the existence of a topological or a measure-theoretic object of size κ, in the more complex cases. One may think of κ as a parameter here. Then the relevant instance of combinatorial SCH would say that the property ϕ(κ) depends only on the fact that ϕ(θ) holds at all θ < κ. The question can be asked more generally, what about the relevant property of κ can be proved in ZFC, knowing that the property holds all θ < κ.
Concerning the former aspect of such a question, that concerned with what can be proved in ZFC, a celebrated singular compactness theorem has been proved by Shelah in [START_REF] Shelah | A compactness theorem for singular cardinals, free algebras, Whitehead problem and transversals[END_REF]. Shelah's book [START_REF] Shelah | Cardinal Arithmetic, volume 29 of Oxford Logic Guides[END_REF] presents, moreover, many applications of pcf theory to deal with this aspect of the question. The latter aspect of the question-namely the forcing counterparts of the former-appeared only later, due to the enormous difficulty of doing even the simplest forcing at a singular cardinal and the necessity (by the Covering Lemma) of using large cardinals, for performing this task. One of the early examples is [START_REF] Džamonja | Universal graphs at the successor of a singular cardinal[END_REF].
To illustrate this sort of research, let us concentrate on one sample combinatorial problem, which has to do with one of the simplest but most useful notions in mathematics, that of a graph.
A graph is a convenient way to represent a binary relation. Namely, a graph (V, E) consists of a set V of vertices and a set E ⊆ V ×V of edges. Both finite and infinite graphs are frequently studied in mathematics and they are also used in everyday life, for example to represent communication networks. Of particular interest in the theory of graphs is the situation when one graph G is subsumed by another one H, in the sense that one can find a copy of G inside of H. This is expressed by saying that there is an embedding from G to H. Mathematically speaking, this is defined as follows.
Definition 1 Suppose that G = (V G , E G ) and H = (V H , E H ) are graphs and f : G → H is a function. We say that f is a graph homomorphism, or a homomorphic embedding if f preserves the edge relation (so a E G b implies f (a) E H f (b) for all a, b ∈ V G ) but it is not necessarily 1-1. If f is furthermore 1-1, we say that f is a weak embedding. If, in addition, f preserves the non-edge relation (so a E G b holds iff f (a) E H f (b) holds), we say that f is a strong embedding.
Graph homomorphisms are of large interest in the theory of graphs and theoretical computer science (see for example [START_REF] Hell | Graphs and homomorphisms[END_REF] for a recent state-of-the-art book on graph homomorphisms). The decision problem associated to the graph homomorphism, that is, deciding if there is a graph homomorphism from one finite graph into another, is NP-complete (see Chapter 5 of [START_REF] Hell | Graphs and homomorphisms[END_REF], which makes the notion also interesting in computer sciences).
Of particular interest in applications is the existence of a universal graph. If we are given a class G of graphs, we say that a certain graph G * is universal for G if every graph from G admits a homomorphic embedding into G * . Of course, variants of this relation can be obtained by replacing homomorphic embedding with weak or strong embedding, as defined in Definition 1. The combinatorial question that we shall survey is that of the existence of universal graphs of a fixed size κ in various contexts.
To begin with, ZFC proves that there is a unique up to isomorphism graph G * of size ℵ 0 . This is known as a Rado graph (or, also, random or Erdös-Rényi graph), and it satisfies that for every finite graph G and every vertex v of G, every strong embedding of G\{v} into G * can be extended to a strong embedding of G into G * . As a consequence, G * strongly embeds all countable graphs. This graph was discovered independently in several contexts, starting from the work of Ackermann in [START_REF] Ackermann | Die Widerspruchsfreiheit der allgemeinen Mengenlehre[END_REF], but its universality properties were proved by Rado in [START_REF] Rado | Universal graphs and universal functions[END_REF].
Under the assumption of GCH, from the existence of saturated and special models in firstorder model theory (see [START_REF] Chung | Model Theory[END_REF]), it follows that a universal graph exists at every infinite cardinal κ. In particular, the assumption that λ < κ =⇒ κ λ = κ entails that there is a saturated, and consequently universal, graph of size κ.
When we move away from GCH, the existence of universal graphs becomes a rather difficult problem. Shelah mentioned in [START_REF] Shelah | On universal graphs without instances of CH[END_REF] a result of his (for the proof see [START_REF] Kojman | Nonexistence of universal orders in many cardinals[END_REF] or [START_REF] Džamonja | The singular world of singular cardinals[END_REF]), namely that adding ℵ 2 Cohen reals to a model of CH destroys any hope of having a universal graph of size ℵ 1 . This does not only mean that there is no universal graph in this model, but also that, by defining the universality number of a family G of graphs as the smallest size of a subfamily F of G such that every element of G embeds into a member of F, we have that in the above model the universality number of the family of graphs of size ℵ 1 is the largest possible, namely 2 ℵ1 . More generally, one can state the following theorem:
Theorem 2 [Shelah, see [START_REF] Kojman | Nonexistence of universal orders in many cardinals[END_REF] or [START_REF] Džamonja | The singular world of singular cardinals[END_REF]] Suppose that λ < κ =⇒ κ λ = κ and let P be the forcing to add λ many Cohen subsets to κ (with cf(λ) ≥ κ ++ and λ ≥ 2 κ + ). Then the universality number for graphs on κ + in the extension by P is λ.
Using a standard argument about Easton forcing, we can see that it is equally easy to get negative universality results for graphs at a class of regular cardinals: Theorem 3 Suppose that the ground model V satisfies GCH and C is a class of regular cardinals in V , while F is a non-decreasing function on C satisfying that for each κ ∈ C we have cf(F (κ)) ≥ κ ++ . Let P be Easton's forcing to add F (κ) Cohen subsets to κ for each κ ∈ C. Then for each κ ∈ C the universality number for graphs on κ + in the extension by P is F (κ).
The proofs of these results are quite easy. In [START_REF] Shelah | On universal graphs without instances of CH[END_REF], Shelah emphasizes this by claiming that "The consistency of the non-existence of a universal graph of power ℵ 1 is trivial, since, it is enough to add ℵ 2 generic Cohen reals". He focuses, indeed, on a much more complex proof, that of the consistency of the existence of a universal graph at ℵ 1 with the negation of CH. He obtained such a proof in [START_REF] Shelah | Universal graphs without instances of CH: revisited[END_REF], while Mekler obtained a different proof of the same fact in [START_REF] Mekler | Universal structures in power ℵ 1[END_REF]. Insofar as ℵ 0 is regular, ℵ 1 is the successor of a regular cardinal. Other successors of regular cardinals behave in a similar way, although neither Mekler's nor Shelah's proof seems to carry over from ℵ 1 to larger successors of regulars. A quite different proof, applicable to larger successors of regulars but proving a somewhat weaker statement, was obtained by Džamonja and Shelah in [START_REF] Džamonja | On the existence of universal models[END_REF]: they proved that assuming that it is relatively consistent with ZFC that the universality number of graphs on κ + for an arbitrary regular κ is equal to κ ++ but 2 κ is as large as desired.
All these results only concern regular cardinals and their successors, and leave open the question for singular cardinals and their successors. Positive results analogous to the one just mentioned by Džamonja and Shelah were obtained by Džamonja and Shelah, again, in [START_REF] Džamonja | Universal graphs at the successor of a singular cardinal[END_REF], for the case where κ is a singular cardinal of countable cofinality, and by Cummings, Džamonja, Magidor, Morgan and Shelah in [START_REF] Cummings | A framework for forcing constructions at successors of singular cardinals[END_REF], for the case where κ is a singular cardinal of arbitrary cofinality. The most general of their results can be stated as follows:
Theorem 4 [Cummings et al. [START_REF] Cummings | A framework for forcing constructions at successors of singular cardinals[END_REF]] If κ is a supercompact cardinal, λ < κ is a regular cardinal and Θ is a cardinal with cf(Θ) ≥ κ ++ and κ +3 ≤ Θ, then there is a cardinal preserving forcing extension in which cf(κ) = λ, 2 κ = 2 κ + = Θ and in which there is a universal family of graphs on κ + of size κ ++ .
Further recent results of Shelah (private communication) indicate that the universality number in the above model should be exactly κ ++ . These results concern successors of singular cardinals, which themselves are, of course, regular. The situation for singular cardinals themselves is different; in particular, no forcing notion can operate on them. We do not have any general results about graphs on such cardinals, but here is a result showing that in specific classes of graphs, the existence of a universal element at singulars is simply ruled out by the axioms of ZF (not even the full ZFC is needed):
Theorem 5 [Džamonja [START_REF] Džamonja | ZFC combinatorics at singular cardinals[END_REF]] (ZF) Suppose that κ is a cardinal of cofinality ω. Then, for any λ ≥ κ in ZF, there is no universal element in the class of graphs of size λ that omit a clique of size κ, under graph homomorphisms, or the weak or the strong embeddings.
This survey of the graph universality problem shows in a specific example the phenomenon of the change in the combinatorial behaviour between the three kinds of cardinals: successors of regulars, successors of singulars and, finally, singular cardinals. At successors of regulars combinatorics is very independent of ZFC, so that simple forcing, without use of large cardinals, allows us to move into universes of set theory which have very distinct behaviours. At the successor of a singular cardinal, we can move from L-like universes only if we use large cardinals (as we know by Jensen's Covering, mentioned above), and this shows up in combinatorics in the necessity to use both large cardinals and forcing to obtain independence results. This independence is in fact limited (as in the example of Shelah's pcf theorem quoted above). Finally, at singular cardinals, combinatorics tends to be completely determined by ZFC, or even by ZF, as in the example of Theorem 5.
In connection with this theorem, it is interesting to note that in the absence of the Axiom of Choice, it is possible that every uncountable cardinal is singular of countable cofinality. To be exact, Gitik proved in [START_REF] Gitik | All uncountable cardinals can be singular[END_REF] that from the consistency of ZFC and arbitrarily large strongly compact cardinals, it is possible to construct a model of ZF in which all cardinals have countable cofinality. Therefore, if one is happy to work with ZF only, then one has the choice to move to a model in which only singular cardinals exist and they only have countable cofinality. In such a model, combinatorics becomes easy and determined by the axioms, at least in the context of the questions that have been studied, such as the graph universality problem.
Philosophical Remaks
Mathematical platonism is often presented as the thesis that mathematical objects exist independently of any sort of human (cognitive, and/or epistemic) activity, and it is taken to work harmoniously with a realistic semantic view, according to which all we can say in mathematics (i.e. by using a mathematical language) is either true or false, to the effect that all that has been (unquestionably) proved is true, but not all that is true has been (unquestionably) proved or can be proved (because of various forms of incompleteness of most mathematical theories).
Both claims are, however, quite difficult to support and are, in fact, very often supported only by the convenience of their consequences, or, better, by the convenient simplicity of the account of mathematics they suggest, and because they provide a simple explanation of the feeling most mathematicians (possibly all) have that something external to them resists their intuitions, ideas, programs, and conjectures, to the effect that all that they can frame by their thoughts or their imagination must have, as it were, an external, independent approval, before having its place among mathematical achievements. Hence, an interesting philosophical question is whether there can be weaker claims that have similarly convenient consequences and that can be more easily positively supported, either by evidence coming from mathematical practice, or by more satisfactory metaphysical assumptions, or, better, by both.
It is our opinion that such claims can reasonably be formulated. In short, they are the following: i ) there are ways for us to have epistemic de re access to mathematical objects; ii ) we are able to prove truths about them, though others are still not proved or are unprovable within our most convenient theories (which are supposed to deal with these objects). Claim (i ) means that there are ways for us to fix intellectual contents which are suitably conceived as individuals that mathematics is dealing with, in such a way that we can afterwards (that is, after having fixed them) ascribe properties and relations to these individuals. Claim (ii ) means that some of our ascriptions of property and relations to these individuals result in truths, in the sense that they somehow comply with the content we have afterwards fixed, and, among them, some can be, and in many cases have been, provably established, though others are still not so or cannot be so within the relevant theories.
The phrase 'de re' in claim (i ) belongs to the philosophical lexicon. It is currently used in opposition to 'de dicto' to point out a distinction concerning propositional attitudes, typically belief (or knowledge). Believing that something is P can mean believing either that there is at least one thing that is P or that some specific thing is P . In the former case the belief is de dicto; in the latter de re. If the relevant thing is t, a suitable way to unambiguously describe the second belief is saying that of t, it is believed that it is P . This makes clear that the subject of a de re propositional attitude is to be identified independently from ascribing to it what the relevant proposition ascribes to it. Hence, its being P cannot be part of what makes it what it is. This is not enough, however, since for the attitude to be de re, the identification has to be stable under its possible variations. If Mirna believes that t is the only thing that is Q, her believing of t that it is P is the same as her believing it of the Q. But Marco can believe that the only thing that is Q is s (distinct from t). So his believing of the Q that it is P is quite distinct from Mirna's belief that it is so. Hence neither beliefs are de re. This makes clear that the identification of the subject of a de re attitude is to be independent of the attitude itself or, even, of any sort of attitude (since different attitudes can compose each other's). This is why the most straightforward examples of de re attitudes concern empirical objects ostensively, or pre-conceptually identified in one way or another.
This has not prevented philosophers from appealing to the de re vs. de dicto distinction in relation to mathematics. In particular, a rich discussion has concerned the possibility of using appropriate sorts of numerals for directly referring to natural numbers while having a de re attitude towards them. Diana Ackerman has considered that "the existence of [natural] numbers is a necessary condition for anyone's having de re propositional attitudes toward them" ([1], p. 145). Granted their existence, Tyler Burge has wondered whether we can have "a striking relation to [. . . ][a natural number] that goes beyond merely conceiving of it or forming a concept that represents it", and answered that this is so for small such numbers, since "the capacity to represent [. . . ][them] is associated with a perceptual capacity for immediate perceptual application in counting" ( [START_REF] Burge | Belief De Re[END_REF], pp. 70-71). Saul Kripke has gone far beyond this, by suggesting a way to conceive natural numbers that makes decimal numerals apt to "reveal[. . . ] their structure" ([39], p. 164; [START_REF] Kripke | Umpupished transcription of Kripke's lectures[END_REF]). For him, natural numbers smaller than 10 are the classes of all n-uples (n = 0, 1, . . . , 9), while those greater than 9 are nothing but finite sequences of those smaller than 10. This makes decimal numerals, or, at least, short enough ones, work as "buckstoppers" (i.e. they are such that it would be nonsensical asking which number is that denoted by one of them, in opposition to terms like 'the smallest perfect number', denoting the natural number whose buckstopper is 'six'), and so allow direct reference to them. By dismissing such a compositional conception of natural numbers, Jan Heylen ( [START_REF] Heylen | The epistemic significance of numerals Synthese[END_REF] and Stewart Shapiro ( [START_REF] Shapiro | Computing with numbers and other non-syntactic things: De re knowledge of abstract[END_REF]) have respectively submitted that Peano numerals (the numerals of the form '0 ... ', written using only the primitive symbols for zero and the successor relation in the language of Peano Arithmetic) and unary numerals (mere sequence of n strokes used to denote the positive natural number n) provide canonical notations allowing de re knowledge of natural numbers. Finally, Jody Azzouni ( [START_REF] Azzouni | Empty de re attitudes about numbers[END_REF]) has argued that the existence of natural numbers is not required for having "de re thought" about them, since such a thought can be "empty".
Our use of 'de re' in claim (i ) differs from all these uses in that the de re vs. de dicto distinction has a much more fundamental application in our account of mathematics. Far from merely concerning our way of denoting natural numbers so to identify them in such a way to make de re propositional attitudes towards them possible, granted their existence, or our de re thought about them empty, granted their nonexistence, it concerns our way of fixing mathematical objects so as to confer existence to them. In our view these objects are, indeed, nothing but contents of (intentional) though, whose existence just depends on the way they are fixed. Here is how we see the matter.
There are many ways of fixing intellectual contents, which, in appropriate contexts, are (or can be) suitably conceived as individuals. A liberal jargon can refer to these contents as abstract objects. If this jargon is adopted, the claim that mathematics deals with abstract objects becomes quite trivial, and can neither be taken as distinctive of a platonist attitude, nor can provide any characterisation of mathematics among other intellectual enterprises. In a much more restrictive jargon, for something (i.e. the putative reference of a term or description) to count as an object, it has to exist. Under this jargon, the claim that mathematics deals with abstract objects becomes much more demanding, overall if it is either required that these objects are self-standing or mind-independent, or if it is supposed that nothing can acquire existence because of any sort of intellectual (intentional) act. The problem, then, with this claim is that it becomes quite difficult to understand what 'to exist' can mean if referred to abstract contents. What we suggest is reserving the term 'abstract object' to intellectual contents suitably conceived as individuals and so fixed, in an appropriate context, so as to admit de re epistemic access, this being conceived, in turn, as the apprehension of them making de re attitudes towards them possible. We submit that, once this is granted, the claim that mathematics deals with abstract objects becomes both strong enough and distinctive, so as to provide the ground for an appropriate account of mathematics.
Mathematics traditionally admits different modalities for fixing intellectual contents. The French philosopher Jean-Michel Salanskis ( [START_REF] Salanskis | L'heméneutique formelle[END_REF], [START_REF] Salanskis | Philosphie des mathématiques[END_REF]) suggested to distinguish two basic ways of doing it: constructively and correlatively.
The former way has a more limited application, but can be taken, in a sense, as more fundamental. Peano's numerals can, for instance, be quite simply fixed constructively by stating that: i ) the sign '0' is a Peano's numeral; ii ) if the sign 'σ' is such a numeral, then the sign 'σ ' is such a numeral, too; iii ) nothing else is such a numeral. Similarly, unary numerals can be constructively fixed by stating that: i ) the sign '|' is such a unary numeral; ii ) if the sign 'σ' is such a numeral, then the sign 'σ |' is such a numeral, too; iii ) nothing else is such a numeral. These are numerals, not numbers, however. And is clearly unsuitable to use the same pattern to define natural numbers. Suppose it were stated that: i ) 0 is a natural number; ii ) if σ is such a number, then σ is such a number; iii ) nothing else is such a number. It would have not been established yet that there is no natural number n such that 0 = n , or n = n . To warrant that this is so, it would still be necessary to impose appropriate conditions to the successor function -, which cannot be done constructively. To overcome the difficulty, one could have recourse to a trick: stating the natural numbers are the items that Peano numerals denote, or positive such numbers the items that unary numerals denote, in such a way that distinct such numerals denote distinct such numbers. This would make Peano's numerals directly display the structure of natural numbers, and unary ones that of positive natural numbers, so providing a canonical notation for these numbers allowing direct reference to them, in agreement to Heylen's and Shapiro's proposals. But this would be dependent on the informal notion of denotation. Supposing that we have the necessary resources for handling this notion without ambiguity, this would allow us to fix natural numbers almost constructively. Once this is done, one could look at these numbers as such, and try to disclose properties they have and relations they bear to each other's. Making it in agreement with mathematical requirements of rigor asks both for further definitions and the fixation of inferential constraints or rules, typically of an appropriate codified, if not formal, language. What is relevant for illustrating our point, is, however, not this, but rather that that we can do both things in such a way to keep the reference steady to the contents previously fixed as just said: it is on them that we define the relevant properties and relations; and it is to speak of them that we establish the appropriate inferential constraints, and fashion (or adopt) the appropriate language, which allows us to say of them, or some of them, that they are so and so. This should give a rough idea of the intellectual phenomenon we want to focus on by speaking of de re epistemic access.
More importantly, we could observe that once appropriate intellectual contents are fixed constructively, one can also try to capture them correlatively, that is, through an axiomatic implicit definition. This can be done somehow informally, or by immersing the definition within a formal system affording both the appropriate language and the appropriate inference rules (or, possibly, allowing to state these rules). In the case of natural numbers, we can, for instance, define them, through Peano axioms, within an appropriate system of predicate logic, and we could conceive of doing that with the purpose of characterizing correctively the same contents previously fixed constructively, so as that each of them provide the reference for a singular term appropriately introduced within the adopted language, and that they provide, when taken all together, the domain of variation and quantification of the individual variables involved in the definition.
The predicate system adopted can be both first-or higher-, typically second-, order. There is, however, a well-known difference among the two cases: while Peano second-order arithmetic (or PA2, for short) is categoric (with respect to the subjacent set theory), by a modern reformulation of Dedekind's argument ( [START_REF] Dedekind | Was sind und was sollen die Zahlen? Braunschweig[END_REF]), Peano first-order arithmetic (or PA1, for short) is not, by an immediate consequence of the Löwenheim-Skolem's theorem ( [START_REF] Chung | Model Theory[END_REF]). This suggests that the verb 'to capture' is not to be understood in the same way in both cases. In the second-order case, it means that the relevant axioms determine a single structure (up to isomorphism), whose elements are intended to be the natural numbers, identified with the same objects previously fixed constructively. In the first-order case, it means that these axioms describe a class of non-isomorphic structures, all of which include individuals that behave, with respect to each other's, in the same way as the elements of this structure do, and that we can then intend, again, as the same objects previously fixed constructively.
Both in the usual platonist tongue, and in our amended one, we could say that the limited expressive power of a first-order language makes it impossible to univocally describe the natural numbers by means of such a language: to do it, a second-order language is needed (and it suffices). Still, the verb 'to describe' should be understood differently in the two cases: while in the former case it implies that that these numbers are self-standing objects that are there as such, independently of any intellectual achievement, in the latter case, it merely implies that these objects have been previously fixed. Hence, if no previous definition were admitted or considered, the verb 'to fix' should be used instead. What should, then, be said is that the limited expressive power of a first-order language makes it impossible to univocally fix the natural numbers by means of such a language. (Of course, the relativisation of the categoricity of PA2 to a given model of set-theory makes the usual platonist tongue appropriate only insofar as it is admitted that this model reflects the reality of the world of mathematical objects, which, in presence of the strong non-categoricity of ZFC requires a further act of faith. But on this, later.)
The difference between and first-and the second-order case is not limited to this, however. Another relevant fact is that the language of PA1 is forced to include, together with the primitive constants used to designate the number zero and the successor relation, also two other primitive constants used to designate addition and multiplication. (Though versions of PA1 often adopt a language including a further primitive constant used to designate the order relation, this can be easily defined in terms of addition by, then, reducing the number of axioms, albeit increasing the syntactical complexity of some proofs.) The only primitive constants which are required to be included in the language of PA2 are, instead, those used to designate the number zero and the successor relation: addition and multiplication (as well as order), can be recursively defined in terms of zero and successor. Hence, whereas Peano second-order axioms (implicitly) define a structure N, Peano first-order axioms define uncountably many distinct structures N, , +, × . It remains the fact, nevertheless, that the former structure is reflected within any one of the latter ones. Hence, if we admit that the axioms of PA2 capture or fix a domain of objects in an appropriate way, there is room to say that PA1 is studying these same objects by weaker logical means, by identifying them as the common elements of uncountably many possible structures N, , +, × , though being unable to provide an univocal characterisation of them.
This should clarify a little better what having epistemic de re access to mathematical objects could mean: one could argue that, once natural numbers are captured or fixed by the axioms of PA2 as the elements of N, , one can, again, look at them as such and try to disclose their properties and relations, so as to recover the same property or relation already ascribed to them, and possibly more. This can be done in different ways. By staying within PA2, one can, for example, besides proving the relevant theorems statable in its primitive language, also enrich this language by means of appropriate explicit definitions, so as to introduce appropriate constants-as those designating addition multiplication and order-to be used in the relevant proofs. By leaving this theory, one can also try to describe them by using a weaker language, such as a first-order one, and be, then, forced to implicitly define addition and multiplication in them by appropriate axioms, though being unable to reach an univocal description. Other ways for studying these numbers are, of course, at hand. But, for our present purpose, we can confine ourselves to observe that in this latter case (as in many other ones), what we are doing may be appropriately accounted for by saying that, of these very numbers, we claim (by using the relevant first-order language) that they are so and so, or, better, that they form a structure N, , +, × .
There is a quite natural objection one could address to these views. One could remember that, as any other second-order theory, PA2 is syntactically incomplete, to the effect that some statements that are either true or false in its unique model are neither provable nor disprovable in it, and there is, then, no way (or at least no mathematically appropriate way) for us to know whether they are true or false. Hence, one could argue, whatever a de re access to natural numbers, as defined by PA2, might be, it cannot be, properly speaking, an epistemic access, since there are not only things about these numbers that we do not know, but also things that we cannot know. We think this objection misplaced, since something analogous also occurs for genuine empirical objects. Take the chair you sit on (if any): there are many properties that we suppose (at least from a realist perspective) that it does or does not have, about which even our best theories and the information we are in place to obtain are insufficient to make a decision. This should not imply, it seems to us, that you have no knowledge of that chair. Of course, we could always change our theories or improve them if we considered that deciding some questions that we know to be undecidable within them is relevant. In the same way, if we were considering (or discovering) that there are some relevant statements about natural numbers which are provably undecidable in PA2, we could try to add axioms to the effect of provably deciding these statements. But allowing this possibility does not imply that we do not have de re epistemic access to these numbers as fixed by PA2, while working on them either within or outside it. All that is required for it is that there is a suitable sense in which we can say that on these numbers (as independently fixed) we can define some properties or relations within this theory, or of these numbers we can claim this or that outside the theory.
Something similar to what happens with PA2 also happens with Frege arithmetic (or FA, for short), namely full (dyadic) second-order logic plus Hume's Principle (see Wright [START_REF] Wright | Frege's conception of numbers as objects[END_REF] or [START_REF] Boolos | Logic, logic and logic[END_REF], especially section II). The role played by natural numbers in the former case is played by the cardinal ones (understood as numbers of concepts) in the latter case. Once a particular cardinal number, typically the number of an (or the) empty concept, is identified with 0, and an appropriate functional and injective relation is defined on these numbers so as to play the role of the successor relation, one can select the natural numbers among the cardinal ones, as being 0 together with all its successors. One can then capture or fix the natural numbers without appealing to addition and multiplication on them (nor to order, at least explicitly). But now there is even more: these numbers can be captured or fixed by selecting them among items which are fixed, in turn, by appealing neither to a designated item like 0, nor to a certain dyadic relation, like the successor relation. Of the cardinal numbers, one could then say that some of them are the natural ones and can be studied as such with other appropriate means.
It is easy to see that, as opposed to PA2, FA is not categoric (with respect to the subjacent set theory). This merely depends on the presence in some of its models of objects other than cardinal numbers, which can be absent from others. Still, FA interprets PA2 (this is generally known as Frege's theorem: see [START_REF] Richard | Frege's Theorem[END_REF], for example), and a result of relative categoricity can also be proved for FA ([47], prop. 14; [START_REF] Walsh | Relative categoricity and abstraction principles[END_REF], pp. 573-574): any two models of it restricted to the range of the number-of operator are isomorphic (with respect to the subjacent set theory). This might make one think that a form of categoricity (with respect to the subjacent set theory) is essential for allowing de re epistemic access to mathematical objects, i.e. that the only intellectual contents suitably conceived as mathematical objects that we can take to have de re epistemic access to are those fixed within a theory endowed with an appropriate form of categoricity (with respect to the subjacent set theory).
This is not what we want to argue for, however. The previous example of the constructive definition of positive natural numbers should already make it clear. Another, quite simple example is the following: when we define the property of being a prime number within PA1, we do it on the natural numbers in such a way that we can say that on these numbers we define this property (the usual definition is written out below); if the definition is omitted, many usual theorems of PA1 can no longer be proved, of course, but this does not change anything for many other theorems still concerned with natural numbers as defined within this theory. These two examples are different from each other, and both different from that given by the access to natural numbers as defined within PA2. That provided by the definition of prime numbers within PA1 is only an example of de re epistemic access internal to a given theory, which reduces, in fact, to nothing more than the possibility of performing an explicit definition within this very theory. Claiming that we have de re epistemic access to natural numbers as defined constructively, or to these very numbers as defined correlatively within PA2, when we try to study them in a different context, is quite a different story. Still, there is something similar in the three cases, and this is just what we are interested in underlining here: it is a sort of (relative) stability of intellectual contents counting as mathematical objects, a stability that is made possible by the way these contents are fixed. We do not want to venture here into the (possibly hopeless) attempt at a classification of forms of de re epistemic access. Still, it seems clear to us that the phenomenon admits differences: both the stability depending on a constructive, or, more generally, informal definition, and that depending on a categorical implicit formal definition are extra-theoretic; the former is strictly intentional, as it were, the latter semantic; that depending on explicit definitions within non-categoric theories is merely syntactic (and, then, intra-theoretic) or restricted, at least, to an informally identified intended model. But the notion of independent existence of mathematical objects, which usual platonism is concerned with, is imprecise enough to make it possible to hope that all these different sorts of stability can provide an appropriate (metaphysically weak) replacement of it in many cases in which platonists use it in their accounts of mathematics.
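For concreteness, the explicit definition of primality within PA1 mentioned above can be displayed as follows (again a standard formulation of ours, not a quotation from the theory):

```latex
\mathrm{Prime}(x) \;\leftrightarrow\; s(0) < x \,\wedge\, \forall y\,\forall z\,\bigl(y \times z = x \rightarrow y = s(0) \,\vee\, z = s(0)\bigr).
```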
# # #
But, be it as it may, the question here is different: what does all this have to do with ZFC, and with the results mentioned in §§ 2 and 3 above?
On the one side, it is clear not only that the categoricity of PA2 and FA is relative to the (inevitably arbitrary) choice of a model of set-theory, and, then, typically, of ZFC, but also that what has been said about PA1, PA2 and FA has a chance to be clear only if set-theory provides us with a clarification of the relevant crucial notions. This is, however, not enough for concluding that whatever philosophical position we could take on natural numbers, and other mathematical objects along the lines suggested above, is necessarily dependent on a prior account of ZFC. For one thing, we do not need all the expressive and deductive power of ZFC, and a fortiori of whatsoever acceptable extension of it, to make the relevant notions clear. For another, it is exactly the high un-categoricity of ZFC that invites us to reason with respect to finite numbers under the supposition that a model of the subjacent set-theory has been chosen, or, even, independently of the prior assumption that these numbers are sets.
This suggests taking ZFC as an independent mathematical theory-one, by the way, powerful enough to be used (among other things) for studying from the outside the structures formed by the natural numbers, as well as by other mathematical objects, as objects we have a de re epistemic access to independently of (the whole of) it. One could then ask whether some sort of de re epistemic access to pure sets (conceived as sui generis objects implicitly defined by ZFC) is possible or conceivable. The high un-categoricity of ZFC seems to suggest a negative answer, because it looks like neither this theory as such, nor any suitable extension of it (with the only exception, possibly, of ZFC + 'V = L', if this might be taken to be a suitable theory at all), can provide a way to fix pure sets appropriately enough to allow de re (semantic) epistemic access to them. Upon further reflection, however, the case appears not to be as desperate as it seems at first glance, and the results mentioned above help us to see why this is so.
To begin with, one might wonder whether, in analogy to what we have said concerning PA1 and PA2, ZFC could not be taken as studying pure sets as the objects previously fixed in a quasi-categorical way by ZF2, just like PA1 might be taken to do with the natural numbers as (captured or) fixed by PA2.
The problem with this suggestion is that the relations between ZFC and ZF2 are not as illuminating as those between PA1 and PA2. For example, if we fix a level of the cumulative hierarchy of sets, say V_α, then the second-order theory of V_α is simply the first-order theory of P(V_α) = V_α+1, hence passing to the second order does not seem to achieve much. However, it is true that, by formulating ZF in full second-order logic so as to get ZF2, one achieves what is known as quasi-categoricity. The proof is basically contained in Zermelo [START_REF] Zermelo | Über Grenzzahlen und Mengenbereiche[END_REF]. We can describe the situation in more detail, although informally, as follows.
What Zermelo proved for ZF2 is that for any strongly inaccessible cardinal υ which is supposed to exist, there is a single model (up to isomorphism) of ZF2, provided by the structure ⟨V_υ, ∈⟩. It follows that all theories ZF2 + 'there are exactly n strongly inaccessible cardinals' (n = 0, 1, 2, . . .), or ZF2_n for short, are fully categorical, which gives that ZF2 has, modulo isomorphism, as many (distinct) models as there are strongly inaccessible cardinals (recall that V_υ can only include strongly inaccessible cardinals smaller than υ). Of course, in any of these models any statement of the language of ZF2 is either true or false (according to Tarskian semantics). But, because of the proof-theoretical incompleteness of second-order logic, and, then, of any second-order theory, it is not necessarily decidable. As noted above, this is so also for PA2. The difference is that in these extensions of ZF2, the undecidable statements include some with a clear and unanimously recognized mathematical significance, namely CH and GCH. Now, while the problem of deciding GCH (for cardinals greater than 2^ℵ0) can be seen as intrinsically internal to set theory (both to ZFC and ZF2), this is not so for CH. For, if we admit that there are (necessarily non-constructive) ways to fix real numbers, so as to allow us to have de re epistemic access to them (for example within PA2, as originally suggested by Hilbert and Bernays ( [START_REF] Hilbert | Grundlagen der Mathematik[END_REF], supplement IV)), the problem of deciding CH can be seen as that of answering the very natural question of how many such numbers there are, a question which should, then, be seen as having a definite answer outside set theory (both ZFC and ZF2). The difference is, then, relevant, also from the point of view we are delineating.
Usually, a model V M of ZFC is diagrammatically represented this way:
[Diagram omitted: two nested triangles, the inner one labelled V_L, with the cardinal κ marked on the vertical axis.] Here V_L is the model of ZFC + 'V = L', and the external triangle can coincide with the internal one (which happens if 'V = L' is true in the model), but cannot become internal to it. However, insofar as nothing requires that a model of ZFC have a uniform hierarchic shape, and no significant feature of it is represented by the symmetry of the diagram, we submit that a better representation is the following
[Diagram omitted: the same picture, with the outer boundary drawn as a curve rather than a straight-sided triangle, the inner region again labelled V_L, and κ marked on the axis of cardinals.] Here all that is required of the external curve, call it 'C' for short, is that it is everywhere increasing (with respect to the line of cardinals, taken as axis) and external or coincident to the internal half straight-line. If this picture is adopted, a model of ZF2 could be depicted in the same way, with the specification that the external curve is univocally determined by the choice of a strongly inaccessible cardinal υ, or by the supposition that there are exactly n such cardinals, which leads to our calling it 'C_υ' or 'C_n'.
One could, then, advance that (the axioms of) ZF2 plus the choice of a strongly inaccessible cardinal, or (those of) ZF2_n, allow one to univocally fix a domain of sui generis objects-call it 'the υ-sets' or 'the n-sets'-and that ZFC is studying these very objects with weaker logical means as elements of uncountably many possible structures, being unable to provide a univocal characterisation of them.
This suggests that ZF2, plus the choice of a strongly inaccessible cardinal, or ZF2_n, provide domains of objects we can have a de re access to, in the same way as this happens for PA2, that is, not only internally, and so providing a sort of syntactic stability, but also externally, so as to provide a sort of semantic stability: one could argue that, once pure sets are fixed by the relevant (second-order) axioms, one can look at them as such and try to tell (using both a first- or a second-order language) the properties they have or the relations they bear to each other. Of them, we claim that they form a structure that ZF(C) and all its usual (first-order) extensions try to describe, though being unable to identify it univocally.
Still, the relativisation to the choice of a strongly inaccessible cardinal, or the admission of the supplementary axiom 'there are exactly n strongly inaccessible cardinals', makes the situation much less satisfactory than the one concerned with Peano (first- and second-order) arithmetic: taken as such, ZF2 is not only proof-theoretically incomplete; it is also unable to univocally fix the relevant objects.
This relativisation or admission does not prevent us, however, from ascribing to ZF2 a form of categoricity, since from Zermelo's result "it also follows that every set-theoretical question involving only sets of accessible rank is answerable in ZF2", and, then, in particular, that "all propositions of set theory about sets of reals which are independent of ZFC", among which there is CH, are either true or false in any of its models, though no proof could allow us to establish whether the former or the latter obtains ( [START_REF] Jané | Higher-order logic reconsidered[END_REF], p. 790). This might be taken as very good news. But a strong objection is possible: it is possible to argue that the truth or falsity of CH in any model of ZF2 does not depend on the very axioms of this theory, but on the consequence relation which is determined by the use of second-order logic and the standard (or full) interpretation of it, or, in other terms, that what makes CH true or false there is not what the axioms of ZF2 genuinely say about sets, but their using second-order variables, semantically interpreted as sets of n-tuples on the first-order domain. Clearly, this would make second-order logic so interpreted "inadequate for axiomatizing set theory" (see [START_REF] Jané | Higher-order logic reconsidered[END_REF], pp. 782 and 790-793, for details).
We do not want to enter such a delicate question here. We merely observe that the mathematical results we have expounded above show that there is no need to go second-order to get a limited form of quasi-categoricity, since these results suggest that ZFC already has (alone, that is, without any need to appeal to any supplementary axiom) the resources for fixing some of its objects in a better way than is usually thought. Namely, if we are happy to work at a singular cardinal, then much of the combinatorics is determined by what happens at the regular cardinals below, even to the point of fixing the cardinal arithmetic (see Shelah's theorem 1 quoted above). In some cases, we do not even need to know what happens at the regular cardinals below (see theorem 5). And if we are happy to be in a world with no Axiom of Choice, we can even imagine that all cardinals are singular, as in Gitik's model, and hence much of the cardinal combinatorics is completely determined by ZF.
Let us look back to the second of the previous figures and suppose that κ is a singular cardinal. What these results suggest is this: if the values of the ordinates of C are fixed for all regular cardinals λ smaller than κ, i.e. if a single model of ZFC is chosen relative to all these regular cardinals, then the value of the ordinate of C for κ is strongly constrained, in the sense that this value can only belong to a determined set (a set, not a class) of values. In other terms, things seem to happen as if the shape of a model of ZFC for the regular cardinals smaller than κ strongly conditions the shape of the possible models at κ.
These results could be understood as saying that the non-categoricity of ZFC is, in fact, not as strong as it appears. Even within first-order, the behavior of the universe of sets is fixed enough at singular cardinals to give us some sort of external and semantic de re epistemic access to them and their power sets. In particular, once we are given all sets of size < κ and all their power sets, our choices for κ are quite limited. This offers an image of the universe of sets in which a strong lack of univocality only concerns successor cardinals or uncountable regular limit cardinals, if any (remember that the existence of uncountable regular limit cardinals is unprovable in ZFC). One could say that, at singular limits, ZFC already exhibits a form of categoricity, or, better, that it does so asymptotically, since the ratio of singular cardinals over all cardinals tends to 1 as the cardinals grow. And at the price of working only in ZF we can even imagine being in Gitik's model, in which every uncountable cardinal is a singular limit.
Under a realist semantic perspective, according to which all we could say about the universe of sets is either true or false, one could say that this shows that, though ZFC is unable to prove the full truth about this universe, it provably provides an asymptotic description where the singular cardinals are the limits of the asymptotes. This also suggests, however, an alternative and more sober picture, which is what we submit: though there is no sensible way to say what is true or false about the universe of sets, unless truth and falsity are merely conceived as provable truth and falsity, ZFC provides an asymptotically univocal image of the universe of sets around the singular cardinals: the image of a universe to which we can have an external semantic de re epistemic access.
According to the common abuse of notation, we call F the 'power set function', even though it is in fact a class function.
Acknowledgements. The first author gratefully acknowledges the help of the EPSRC through grant EP/I00498, of the Leverhulme Trust through a Research Fellowship (2014-2015), and of l'Institut d'Histoire et de Philosophie des Sciences et des Techniques, Université Paris 1, where she is an Associate Member. The second author acknowledges the support of the ANR through the project ObMathRe. The authors are grateful to Walter Carnielli for his instructive comments on a preliminary version of the manuscript, and to Marianna Antonutti-Marfori, Drew Moshier and Rachael Schiel for valuable suggestions.
01759108 | en | ["info.info-ni"] | 2024/03/05 22:32:10 | 2015 | https://hal.science/hal-01759108/file/wowmom2015.pdf | Nicolas Montavont
Alberto Blanc
Renzo Navas
Tanguy Kerdoncuff
Handover Triggering in IEEE 802.11 Networks
Current and future IEEE 802.11 deployments could potentially offer wireless Internet connectivity to mobile users. The limited AP radio coverage forces mobile devices to perform frequent handovers, while current operating systems lack efficient mechanisms to manage AP transition. Thus we propose an anticipation-based handover solution that uses a Kalman filter to predict the short-term evolution of the received power. This mechanism allows a mobile device to proactively start scanning and executing a handover as soon as better APs are available. We implement our mechanism in Android and we show that our solution provides a better wireless connection.
I. INTRODUCTION
Due to the proliferation of Wifi hot-spots and community networks, we have recently observed a great evolution of IEEE 802.11 networks, especially in urban scenarios. These 802.11-based networks allow mobile users to get connected to the Internet, providing a high throughput but a limited mobility due to the short coverage area of access points (APs). In our previous work [START_REF] Castignani | Wi2Me: A Mobile Sensing Platform for Wireless Heterogeneous Networks[END_REF] we have shown that community networks appear to be highly dense in urban areas, generally providing several APs (a median of 15) per scanning spot. Under this condition, a mobile user may be able to connect to community networks and compensate for the limited AP coverage area by transiting between APs. We call such an AP transition a handover. However, two main issues currently limit mobile users from using community networks in such a mobility-aware scenario. First, operators have not deployed the necessary infrastructure to allow mobile users to perform handovers without being disconnected at the application layer, i.e., after a handover on-going application flows are interrupted. This limitation may be addressed by deploying a Mobile IP [START_REF] Perkins | IP Mobility Support for IPv4[END_REF] infrastructure, in which the application flows may be tunnelled through a Home Agent that belongs to the operator. Second, independently from the first issue, there is still a lack of mechanisms to intelligently manage a layer 2 handover between two APs. In current mobile devices, when a handover occurs, we observe a degradation of on-going flows corresponding to a dramatic reduction of the TCP congestion window (CWND) and of the throughput. In this paper, we focus on this latter issue by analyzing the impact of layer 2 handovers on mobile users. We propose the Kalman-filter-based HAndover Trigger algorithm (KHAT), which succeeds in intelligently triggering handovers and reducing the scanning impact on the mobile device. We provide a complete implementation of our handover mechanism in Android ICS (4.0) and present a comparative study showing that our approach outperforms the handover mechanism that is currently implemented on these devices.
The paper is organized as follows. Section II presents the literature on handover optimization and Section III analyzes the handover impact on on-going communications. Section IV introduces KHAT, which is evaluated indoors and outdoors in Section V. Section VI concludes the paper.
II. HANDOVER PROCESS AND RELATED WORK
The IEEE 802.11 standard defines a handover as a three-step process: scanning, authentication and association. The standard proposes two different scanning algorithms, namely passive and active scanning. In passive scanning, the mobile station (MS) simply tunes its radio to each channel and listens for periodic beacons sent by the APs. In active scanning, the MS proactively sends requests on each channel and waits for responses for a pre-defined time.
Once candidate APs have been found, the MS selects one of the APs and attempts authentication and association. If the association is successful, the MS can send and receive data through the new AP, if this new AP is on the same IP subnet as the previous AP. If the new AP belongs to another IP subnet, the MS needs additional processing to update its IP address and redirect data flows to its new point of attachment. Such a Layer 3 handover may be handled by specific protocols like Mobile IP [START_REF] Perkins | IP Mobility Support for IPv4[END_REF]. Note that in this paper we do not address IP mobility and any layer 3 mobility management protocol can be used on top of our proposal if needed.
In 2012, the IEEE published new amendments for IEEE 802.11 handover optimization, aimed at reducing the handover duration and its impact on higher layers. The IEEE 802.11k amendment proposes mechanisms for radio resource measurement for seamless AP transition, including measurement reports of signal strength (RSSI) and load of nearby APs. Additionally, the IEEE 802.11r amendment contains a Fast Basic Service Set Transition (FT), which avoids exchanging 802.1X authentication signaling under special conditions by caching authentication data.
While these features may enhance the handover performance, they heavily rely on cooperation between APs, which might not always be a viable solution. In addition, users may access various networks operated by different providers. In that case, operators would have to share network information and performance data among themselves, which is quite an unlikely scenario. In this paper, we focus on MS-based solutions, where the MS itself handles the handover without help from the network. Several works have been proposed in the literature so far. In general, those studies cover different aspects of the handover mechanism. We may group them into three main categories:
• Handover triggering: when to decide that a disconnection with the current AP will occur.
• AP discovery: how to search for APs on different channels by minimizing the impact on the higher layers.
• Best AP selection: with which AP to associate, among the discovered ones.
The simplest mechanism to trigger a handover is to monitor the RSSI as an estimation of the link quality and start the handover process if the current RSSI is lower than a pre-established threshold (commonly set at -80 dBm). Fig. 1a shows the relationship between the RSSI measured on an MS and the TCP throughput that we have gathered during more than 600 connections to community networks in an urban area in Rennes, France [START_REF] Castignani | Wi2Me: A Mobile Sensing Platform for Wireless Heterogeneous Networks[END_REF]. We observe that the TCP throughput is extremely variable for high RSSI, but starts degrading for RSSI lower than -70 dBm, and it becomes significantly low around -80 dBm. Some works focus on the anticipation of the handover triggering in order to minimize the impact on ongoing communications. Mhatre et al. [START_REF] Mhatre | Using smart triggers for improved user performance in 802.11 wireless networks[END_REF] propose a set of handover algorithms based on continuously monitoring the wireless link, i.e., listening to beacons from the current and neighboring channels. These approaches give handover latencies varying between 150 and 800 ms. However, since these approaches need to listen to beacons from neighboring channels, it is necessary to modify the firmware of the wireless card, which may not always be possible. Yoo et al. [START_REF] Yoo | LMS predictive link triggering for seamless handovers in heterogeneous wireless networks[END_REF] propose a number of handover triggering mechanisms based on predicting RSSI samples at a given future time using Least Mean Square (LMS) linear estimation. In this algorithm, the device continuously monitors the RSSI and computes the LMS prediction if the RSSI is below a certain threshold (P Pred ). Then, if the predicted RSSI value is lower than a second threshold, P Min , the MS starts a handover. Wu et al. [START_REF] Wu | Proactive scan: Fast handoff with smart triggers for 802.11 wireless LAN[END_REF] propose a handover mechanism aiming at decoupling the AP discovery phase from the AP selection and reconnection phase. The MS alternates between scanning phases and a (normal) data mode where the MS is connected to its current AP. The time interval between two scanning phases is adapted depending on the current signal level and varies between 100 and 300 ms. In each scanning phase, the sequence of channels to scan is selected based on a priority list that is built based on the results of a periodic full scanning (i.e., here all channels are scanned).
As far as Android devices are concerned, Silva et al. [START_REF] Silva | Enabling heterogeneous mobility in android devices[END_REF] present a mobility management solution based on IEEE 802.21. They propose a mapping of IEEE 802.21 primitives for handover initiation, preparation, execution and completion to existent Android OS methods and functions.
III. HANDOVER IMPACT
During an L2 handover, the MS is not able to send or receive application flows. This is because, usually, when an MS triggers a handover, the link quality does not allow exchanging frames anymore, and because the MS is often switching operating channel. In this section we evaluate the handover and scanning impact on application flows, and determine which parameters influence the scanning latency and success rate. Our testbed consists of nine Cisco Aironet 1040 APs installed on the roof of our building at the locations given in Fig. 2. All APs are connected to a dedicated wired LAN. APs broadcast a single SSID, corresponding to an open-system authentication network belonging to a single IP subnet. We also use a dedicated (fixed) server for traffic generation and tracing. iPerf is used to generate TCP downlink traffic to the MS. For each experiment, we walk from AP 1 to AP 6 and then back again to AP 1 .
A. Operating Systems Benchmark
To illustrate how the handover is currently impacting data flows, we have performed a set of experiments to evaluate the degradation of TCP performance for different devices and Operating Systems (OS). Table I shows the number of handovers and the average TCP throughput. As a baseline, we also show the maximum achieved throughput for each device remaining static and connected to a single AP. Using Windows, we observe the best result, since the MS performs up to four handovers, reaching an average throughput of 0.875 MB/s. Additionally, we observe that for Windows, the time in which no data is downloaded (i.e., the disconnected time) is relatively short compared to the other OSs. The netbook running Ubuntu reacts slowly to changing channel conditions: in this case the MS is disconnected for more than 20 s and executes only two handovers, indicating that the MS waits until the quality of the radio link is significantly degraded. Fig. 1b shows the evolution of the downloaded data for each case. Additionally, we have observed that for the Windows device, the average round-trip time (RTT) is the lowest one (103 ms), with a low standard deviation as well. This differs from the other devices, which reach larger RTT values.
B. Scanning Interactions with Data Traffic
We focus on active scanning, where an MS sends Probe Requests on each channel to discover potential APs, instead of just waiting for periodic beacons (passive scanning). We chose active scanning because it allows spending less time in each channel to determine the AP availability. If the handover phases are done one after the other, all packets that arrive during the handover process will be lost. In order to reduce the impact of handovers on application flows, it is possible to introduce a gap between the scanning phase and the other handover steps, i.e., the decision to handover, the authentication and the association, as presented in [START_REF] Wu | Proactive scan: Fast handoff with smart triggers for 802.11 wireless LAN[END_REF]. An MS may use the power saving mode defined in IEEE 802.11 to request its current AP to buffer incoming packets during the time the MS scans other channels. This way, instead of losing packets during the scanning phase, an MS can receive the packets after the scanning phase, albeit with an extra delay. This behavior is illustrated in Fig. 1c, where we plot the sequence number of the received packets of a TCP flow when an MS is performing one scan of the 13 channels with an active timer set at 50 ms. We can see that the scan starts just before time 1 s, after which no more data packets are received from the server. Once the scan is finished, around 850 ms later, the MS comes back to its current AP, and starts receiving TCP packets again.
This technique can also be used to split a scanning phase into several sub-phases where only a subset of channels are scanned. For example, to scan the 13 channels, an MS could sequentially scan three times a subset of 4 (or 5) channels each time, interleaving these sub-phases with the data mode with the current AP to retrieve data packets. The impact of the number of scanned channels, and the timers used in each channel is given in the next subsection.
C. Scanning Parameters
We analyze the scanning performance under different values of timers used to wait for Probe Responses (from 5 ms to 100 ms) and different numbers of scanned channels during a sub-phase (between 1 and 13). In the standard IEEE 802.11 scanning algorithm, the MS is supposed to scan each channel using two timers, namely MinCT and MaxCT (see section II). However, the IEEE 802.11 Android driver uses a single timer, namely the Active Timer (AT), for scanning. AT is defined as the time an MS waits for Probe Responses on a channel. We ran 60 scanning sub-phases for each AT and subset of scanned channels and measured the average number of discovered APs, the RSSI distribution of the discovered APs and the average duration of the scanning (i.e., the scanning latency). Results are presented in Table II. As a baseline, we consider that all the available APs are discovered when scanning the full channel sequence (i.e., 13 channels) using AT=100 ms. In the other cases, the MS discovers only a fraction of the APs, since it either does not wait long enough to receive all AP Probe Responses, or because only a subset of channels is scanned.
We have also observed that when using a short AT, even if the MS discovers a low number of APs, those APs have a high RSSI. On the other hand, when using higher AT values, the MS discovers more APs but a large part of them have a low RSSI. This can be observed in Fig. 3a, where we see that for AT=5 ms the average RSSI of candidate APs is -67 dBm, while for AT=20 ms this decreases to -76 dBm.
IV. KHAT: PROACTIVE HANDOVER ALGORITHM
We propose a handover algorithm called Kalman Filter-based HAndover Triggering (KHAT for short) that provides link going down detection, an optimized scanning strategy, and new AP selection. An MS monitors its link quality with its current AP, and when the signal strength is degrading, it starts alternating between scan periods and data communication with the current AP. The scan periodicity and the timer values are determined according to the current link quality and whether a candidate AP has already been found. Once a candidate AP becomes better than the current AP, the handover is triggered.
A. RSSI modelling
One way of keeping track of the changing radio conditions is to track the RSSI on the MS. While far from being perfect, the RSSI has the advantage of being always available, whether the MS is exchanging data or not, as it is updated not only whenever the MS receives data frames but also when it receives beacon frames, which are typically sent every 100 ms by most APs. As the RSSI can fluctuate rapidly, especially when a user is moving, its instantaneous value is not necessarily representative. At the same time, its local average and trend are more useful in deciding whether the radio channel conditions are improving or not and whether they are reaching the point where communication is no longer possible. Using the well known Kalman filter, it is possible to extract this information from the RSSI measurements. Many authors have already used Kalman filters and other time series techniques in order to model radio channels and the received signal strength, see, for example, the works by Jiang et al. [START_REF] Jiang | Kalman filtering for power estimation in mobile communications[END_REF], by Baddour et al. [START_REF] Baddour | Autoregressive modeling for fading channel simulation[END_REF] and references therein.
More formally, let X(t_i) = X_i be the received signal strength at time t_i. In our case, we sample the RSSI roughly every 100 ms; but, as we rely on software timers, there are no guarantees that the t_i's will be equally spaced. Figure 3b shows the empirical distribution of ∆t_i = t_i - t_{i-1} for a subset of the traces we collected. The average is 96 ms and the standard deviation is 8.2 ms. Given that roughly 90% of the samples are within less than 100 ms of each other, it seems reasonable to "re-sample" the time series with a time-step of 100 ms.
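As an illustration, the following sketch shows one simple way to carry out this re-sampling in Python; the hold-last-sample rule and the function name are our own choices, since the text does not specify the exact interpolation scheme:

```python
def resample_rssi(times_ms, rssi_dbm, step_ms=100):
    """Map irregularly spaced RSSI samples onto a fixed 100 ms grid.

    times_ms -- sample timestamps in milliseconds (sorted)
    rssi_dbm -- RSSI values measured at those timestamps
    Each grid point takes the latest sample seen so far.
    """
    resampled = []
    j = 0
    t = times_ms[0]
    while t <= times_ms[-1]:
        # advance to the last original sample not later than t
        while j + 1 < len(times_ms) and times_ms[j + 1] <= t:
            j += 1
        resampled.append(rssi_dbm[j])
        t += step_ms
    return resampled
```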
In all the traces we have collected, it is often the case that several consecutive samples have the same value, indicating that the received signal strength is often constant during periods that are longer than the average distance between samples. The presence of several samples with exactly the same value is an obstacle when one is trying to estimate the local trend of a signal as, in this case, the estimated slope would be exactly 0. The Kalman filter does not perform well in these circumstances. As we rely on the values reported by the 802.11 driver, we wondered whether these consecutive samples with the same values were caused by the driver not updating the values often enough. [Fig. 3: RSSI analysis - (b) sampling interval CDF for the original time series; (c) CDF of the length of the periods where all the received power samples are equal, for the community networks.] Figure 3c shows the distribution of the length of the periods where the signal strength was constant for several traces collected using a static MS which was not sending or receiving data (note that the distribution was the same whether the test was performed using a laptop running Linux or a smartphone running Android). The median is 305 ms and the standard deviation is 306 ms. In the case of mobile MSs and/or data traffic the median values are smaller (around 110 ms for MSs with data traffic, but the standard deviation is always larger). In order to mitigate the effect of these periods, we pre-process the RSSI samples, using a time-varying exponential average, before applying the Kalman filter. In order to further reduce the lag of the smoothed signal, we use a time-varying weight in the exponential smoothing.
Let Y i be the re-sampled RSSI time series. We construct the smoothed series Z i as:
$$Z_i = \alpha_i Y_i + (1 - \alpha_i)\, Z_{i-1}$$
where Z_1 = Y_1 and α_i = α_up if the RSSI is increasing, whereas whenever the RSSI starts decreasing α_i = α_1. Whenever the RSSI is constant, the value of α_i is determined by the last change before the beginning of the constant samples: if the last change was an increase, α_i = α_up; otherwise α_i = max(0.8 · α_{i-1}, α_min). This corresponds to the pseudo-code in Algorithm 1.
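A compact Python transcription of this smoothing step (our sketch of Algorithm 1, with variable names of our own) could read:

```python
def smooth_rssi(Y, alpha_up=0.5, alpha_1=0.4, alpha_min=0.01):
    """Asymmetric exponential smoothing of the re-sampled RSSI series Y.

    The weight reacts quickly to increases (alpha_up), is reset to alpha_1
    when the signal starts decreasing, and decays by a factor 0.8 (never
    below alpha_min) while the signal stays constant after a decrease.
    """
    Z = [Y[0]]                     # Z_1 = Y_1
    alpha = alpha_1
    increasing = False
    last_value = Y[0]
    for y in Y[1:]:
        if y != last_value:
            increasing = y > last_value
            alpha = alpha_up if increasing else alpha_1
            last_value = y
        elif not increasing:
            alpha = max(0.8 * alpha, alpha_min)
        else:
            alpha = alpha_up
        Z.append(alpha * y + (1.0 - alpha) * Z[-1])
    return Z
```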
We have used α_up = 0.5, so that the smoothed time series will react quickly to upward changes, and α_1 = 0.4 and α_min = 0.01, so that it will, instead, react much more slowly to downward changes. The reason for this asymmetric behavior is that we are interested in having an accurate estimate of the level and, above all, of the trend, only when the signal is decreasing. By using a larger α_i when the signal is increasing we ensure that Z will quickly reach the value of Y, reducing the lag between Y and Z. We have verified that the received power time series of our sample are indeed non-stationary by computing their autocorrelation functions, which were all slowly decreasing (as a function of the lag). We have then decided to use a state-space based model to represent the evolution of the power over time. In particular we have used the local linear trend model (see, for example, Durbin and Koopman [START_REF] Durbin | Time series analysis by state space methods, ser. Oxford statistical science series[END_REF]):
$$
\begin{aligned}
Z_i &= \mu_i + \varepsilon_i, & \varepsilon_i &\sim N(0, \sigma^2_{\varepsilon})\\
\mu_{i+1} &= \mu_i + \nu_i + \xi_i, & \xi_i &\sim N(0, \sigma^2_{\xi})\\
\nu_{i+1} &= \nu_i + \zeta_i, & \zeta_i &\sim N(0, \sigma^2_{\zeta})
\end{aligned}
\qquad (1)
$$
where Z i is the time series under scrutiny, µ i is the level at time i, ν i is the slope at time i and ε i , ξ i , ζ i are independent identically distributed (i.i.d.) Gaussian random variables with 0 mean and variance σ 2 ε , σ 2 ξ , σ 2 ζ respectively. These variances can be obtained by Maximum Likelihood Estimation from sample realizations of Z.
Once the values for the variances are specified, and given a realization of Z i (i = 0, . . . , n), one can use the well known Kalman filter algorithm to compute µ i and ν i for any value of i (again, see, for example, Durbin and Koopman [START_REF] Durbin | Time series analysis by state space methods, ser. Oxford statistical science series[END_REF]). To be more precise, one can solve the "filtering" problem, where the values of µ i and ν i are computed using the samples Z 0 , Z 1 , . . . , Z i . At first, we have used the dlm [10] R [START_REF]R: A Language and Environment for Statistical Computing[END_REF] package to solve the filtering problem. Note that, as the filtering problem uses only the samples between 0 and i, it can be implemented in real time as it depends only on past values of the time series. The Kalman filter can also be used to predict future values. In the case of the local linear trend model ( 1), the prediction algorithm is extremely simple: one can just model the future values of the time series using a straight line with slope ν i , starting at the value µ i at time i.
We have also implemented the Kalman filter on a Samsung Nexus S smartphone, in the WiFiStateMachine module of the Android Java framework. For the sake of simplicity we have used a straightforward implementation of the Kalman recursion. The general form of the Kalman filter is:
$$
\begin{aligned}
Z_i &= F\,\Theta_i + v_i, & v_i &\sim N_n(0, V_i)\\
\Theta_i &= G\,\Theta_{i-1} + w_i, & w_i &\sim N_m(0, W_i)
\end{aligned}
$$
For the local linear trend model Z is a scalar, and so is v, while:
$$
\Theta_i = \begin{pmatrix}\mu_i\\ \nu_i\end{pmatrix},\quad
G = \begin{pmatrix}1 & 1\\ 0 & 1\end{pmatrix},\quad
W = \begin{pmatrix}\sigma^2_{\xi} & 0\\ 0 & \sigma^2_{\zeta}\end{pmatrix},\quad
F = (1\;\;0).
$$
We are interested in computing the two-dimensional vector $m_i = (E[\mu_i]\;\;E[\nu_i])^T$,
containing the expected values of the level (µ i ) and slope (ν i ). It is known [START_REF] Durbin | Time series analysis by state space methods, ser. Oxford statistical science series[END_REF] that one can compute these values using the following equations:
$$
\begin{aligned}
a_i &= G\,m_{i-1}, & R_i &= G\,C_{i-1}\,G^{T} + W,\\
f_i &= F\,a_i, & Q_i &= F\,R_i\,F^{T} + V,\\
m_i &= a_i + R_i F^{T} Q_i^{-1} e_i, & C_i &= R_i - R_i F^{T} Q_i^{-1} F R_i,
\end{aligned}
$$
where $e_i = Z_i - f_i$,
and the following initial values:
$$
C_0 = \begin{pmatrix}\sigma^2_1 & 0\\ 0 & \sigma^2_2\end{pmatrix},\qquad
m_0 = (Z_0\;\;0)^{T}
$$
are used for C and m. The values of σ²_1 and σ²_2 have almost no influence on the computations, as the filter matrices quickly converge to steady-state values which are independent of the initial values.
The local linear trend model (1) is characterized by three parameters: σ²_ε, σ²_ξ and σ²_ζ. It is possible to use Maximum Likelihood Estimation (MLE) methods to estimate the values of these parameters from sample realizations of Z. We have used the MLE functions of the dlm package to this end, but in some cases the optimization algorithm used to compute the MLE did not converge. When it converged, its estimations for σ²_ε and σ²_ζ were not always consistent over all the samples, but the order of magnitude was fairly consistent, with σ²_ε usually smaller than σ²_ζ and with σ²_ε often fairly close to 0. It should be stressed that, in this case, there are no guarantees about the convexity of the optimization problem solved by the MLE procedure, which can very well converge to a local minimum instead of a global one. Also, it is not uncommon to tune the model parameters in order to improve its performance. In our case we have observed that using σ²_ε = 0.5, σ²_ξ = 1 and σ²_ζ = 2.5, we obtain fairly smooth level and slope values, which can be effectively used by the KHAT algorithm.
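To illustrate, the filtering recursion above can be sketched in a few lines of Python with NumPy, here instantiated with the tuned variances just quoted; this is an illustrative sketch of ours, not the authors' Android or R implementation:

```python
import numpy as np

def local_linear_trend_filter(Z, var_eps=0.5, var_xi=1.0, var_zeta=2.5):
    """Kalman filtering of the smoothed RSSI series Z under the local
    linear trend model (1). Returns the level and slope estimates."""
    F = np.array([[1.0, 0.0]])              # observation matrix
    G = np.array([[1.0, 1.0], [0.0, 1.0]])  # state transition
    W = np.diag([var_xi, var_zeta])         # state noise covariance
    V = np.array([[var_eps]])               # observation noise variance
    m = np.array([[Z[0]], [0.0]])           # m_0 = (Z_0, 0)^T
    C = np.diag([1.0, 1.0])                 # C_0 (initial values matter little)
    levels, slopes = [], []
    for z in Z:
        a = G @ m                           # a_i = G m_{i-1}
        R = G @ C @ G.T + W                 # R_i = G C_{i-1} G^T + W
        f = F @ a                           # f_i = F a_i
        Q = F @ R @ F.T + V                 # Q_i = F R_i F^T + V
        e = z - f                           # innovation e_i = Z_i - f_i
        K = R @ F.T @ np.linalg.inv(Q)      # gain
        m = a + K @ e                       # m_i
        C = R - K @ F @ R                   # C_i
        levels.append(m[0, 0])
        slopes.append(m[1, 0])
    return levels, slopes
```

The slope estimates produced in this way are what the triggering logic of the next subsection compares against the -0.2 dBm/s threshold.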
B. Algorithm design
KHAT adapts the scanning strategy, the scanning period and the handover trigger by comparing an estimate of the link quality with the quality of candidate APs, as presented in Fig. 4. The main process consists in continuously monitoring the RSSI of the current link and detecting a link going down event. To achieve this, we use the Kalman filter to obtain the current value of the RSSI (µ) and the slope (ν). After analyzing a large number of RSSI time series, we have estimated that the link going down trigger can be declared if µ < -70 dBm and ν < -0.2 dBm/s. If the link going down condition is satisfied, the MS checks its candidate AP list. If there is no valid candidate AP, the MS will attempt scanning only if there has not been another scanning instance in the last T_Scan seconds. On the other hand, if, after triggering a link going down condition, the MS has a valid candidate, it will attempt a handover only if the difference between the candidate AP RSSI and the current exponentially smoothed RSSI sample (µ) is greater than ∆, where ∆ is defined as follows:
$$
\Delta = \begin{cases}
8 & \text{if } \mu > -70\\
5 & \text{if } -75 \le \mu < -70\\
3 & \text{if } -80 \le \mu < -75\\
2 & \text{if } \mu < -80
\end{cases}
\qquad (2)
$$
After a scan completes, an existing candidate AP would be updated with a new RSSI, or a new candidate may be selected. Additionally, in order to avoid scanning at a very high frequency, we adapt the value of T_Scan depending on the scanning results. Each time the MS triggers a link going down, if a candidate AP exists, we double the current value of T_Scan (up to 1 s), since it is not necessary to scan at a high frequency if there is at least one candidate AP. On the other hand, if no candidate exists at that time, we set T_Scan to its minimum value (250 ms). The scanning strategy itself is also adapted depending on the current link condition. Each scanning strategy consists in determining the number of channels to scan and the time to wait on each channel (AT in the Android system). Based on the results presented in section III-C, we fixed AT as presented in Table III: the better the current link quality, the less time the MS spends scanning, because it still has time to find APs before it disconnects from its current AP. When the signal quality with the current AP is low, we set aside more time for the MS to scan, in order to maximize the probability of finding an AP. In order to contain the scanning duration, we propose to use AT in {5 ms, 10 ms, 20 ms}. The reason is that for smaller scan times, we only find APs with high RSSI (as shown in section III-C), and as we are in fairly good condition, the MS would only be interested in APs with high RSSI.
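Putting the thresholds of this subsection together, the triggering logic can be summarised by the following sketch; the helper names are ours, and the RSSI boundaries used to pick the Active Timer are placeholders, since Table III's exact values are not reproduced here:

```python
def delta_threshold(mu):
    """Hysteresis margin Delta of Eq. (2), given the smoothed RSSI mu (dBm)."""
    if mu > -70:
        return 8
    if mu >= -75:
        return 5
    if mu >= -80:
        return 3
    return 2

def active_timer_ms(mu):
    """Per-channel scan timer in the spirit of Table III: scan longer as the
    link degrades. The -70/-80 dBm boundaries are placeholders of ours."""
    if mu > -70:
        return 5      # good link: quick scan, only strong APs matter
    if mu > -80:
        return 10     # medium link
    return 20         # handover imminent

def khat_step(mu, nu, candidate_rssi, now, last_scan, t_scan):
    """One decision step of KHAT.

    mu, nu         -- filtered RSSI level (dBm) and slope (dBm/s)
    candidate_rssi -- RSSI of the current best candidate AP, or None
    now, last_scan -- timestamps in seconds
    t_scan         -- current minimum interval between scans (s)
    Returns (action, t_scan) with action in {'stay', 'scan', 'handover'}.
    """
    link_going_down = mu < -70 and nu < -0.2
    if not link_going_down:
        return 'stay', t_scan
    if candidate_rssi is None:
        t_scan = 0.25                          # no candidate: minimum period (250 ms)
        if now - last_scan >= t_scan:
            return 'scan', t_scan
        return 'stay', t_scan
    if candidate_rssi - mu > delta_threshold(mu):
        return 'handover', t_scan
    return 'stay', min(2.0 * t_scan, 1.0)      # candidate exists: relax scanning
```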
V. EXPERIMENTATION
A. Methodology and implementation
We have implemented our solution on the Android ICS 4.0.3 system running on a Samsung Nexus S (GT-I9023) smartphone. It involves modifications in the Android Java Framework and the WPA Supplicant. The mobile user walks at a roughly constant speed and each connection lasts for 120 s. The second set of experiments was performed in the city of Luxembourg, using the HOTCITY Wifi deployment (see [START_REF] Castignani | A study of urban ieee 802.11 hotspot networks: towards a community access network[END_REF] for more details). In all cases, we use iperf to generate the TCP traffic for both MSs and generate several connections for more than one hour.
B. General results
Fig. 5, 6, 7 and 8 show the RSSI and the received TCP data for the two considered environments, over one connection duration, while Table IV shows the average over all connections that we made. We can see that KHAT provides a better RSSI (-69 dBm on average) along the connections and allows the smartphone to achieve a better throughput than stock Android (222 kB/s versus 146 kB/s). We can also see that KHAT triggers the handover systematically before stock Android, to avoid suffering from a poor quality with its current AP. Sometimes, as at Time=400s of the outdoor connection, KHAT manages to find an intermediate AP between those chosen by stock Android, which allows it to significantly increase both the RSSI and the TCP download. Looking at the zoom of Fig. 8, starting around Time=200s, the Legacy smartphone is not able to receive any data from the TCP server. On the other hand, KHAT is performing two handovers. The first handover at Time=233s is made prior to the Legacy handover, but still a stagnation in the received data is observed before and after the handover. However, the second handover at Time=275s is smooth and does not impact the data reception.
Fig. 9 shows the list of selected APs from the chosen connection in the outdoor environment, captured by an MS running Wi2me [START_REF] Castignani | Wi2Me: A Mobile Sensing Platform for Wireless Heterogeneous Networks[END_REF]. The APs are shown in their order of appearance along the path. We can see that for the first six APs, the KHAT AP selection is judicious: when the signal strength of one AP is degrading, another AP becomes available. However, between scanning occurrences 140 and 200, there is no ideal choice of AP. Observing that the RSSI is low, KHAT tries to handover to different APs during this period, oscillating between AP 7 , AP 8 , AP 9 and AP 10 . These are the handovers we can see around Time=625s in Fig. 7. While KHAT avoids triggering a handover unless a significantly better AP is available, in areas where all APs offer low RSSI, KHAT may trigger several handovers. It finally finds AP 11 at Time=721s, which was providing a good coverage area. During this period, we can see that the Legacy phone did not trigger any handover and was unable to receive any data for a long period of time.
VI. CONCLUSION
IEEE 802.11 is one of the most popular wireless standards offering high data rate Internet connections. With the vast number of hot-spots and community networks that are deployed today, there is a potential for users to use Wi-Fi networks in mobility scenarios. However, as the AP coverage area is usually limited to a few tens of meters, there is a strong need for optimized mobility support when users move from one AP to another. We have shown in this paper that current devices are able to transit between APs, but the handover performance is quite low.
We proposed a handover algorithm called KHAT that anticipates the signal loss from the current AP to preemptively scan for potential APs. The prediction of the link going down is achieved with a Kalman filter which estimates the slope of the RSSI to determine the link condition. If the estimate is below a given threshold (smoothed RSSI lower than -70 dBm and slope lower than -0.2 dBm/s), we launch a scan. Data packets can be buffered (and retrieved later on) by the AP during the MS scanning by exploiting the power saving mode defined in 802.11. Depending on the scanning results, the MS will either handover to a new (better) candidate AP that has been found, or it will loop on the link quality prediction. The scanning period and strategy are adapted depending on the current link condition.
We have implemented KHAT on the Android ICS 4.0.3 system running on a Samsung Nexus S (GT-I9023). To address the tradeoff between the scanning latency and the AP discovery, the MS scans with AT=20 ms if a handover is imminent, AT=10 ms when the link quality is medium and AT=5 ms when the link quality is good. In two different environments (indoor and outdoor), we compared a Stock Android with a KHAT smartphone. We have shown that KHAT outperforms Stock Android by anticipating handovers and using more of the APs available on the path. The average RSSI is 6 dBm higher in the outdoor environment, and the TCP throughput is 0.22 MB/s compared to 0.12 MB/s for Stock Android. A perspective of this work is to apply the link quality prediction to candidate APs in order to better choose the target AP when a handover is needed.
Fig. 1: Various TCP performance
Fig. 4: Algorithm Flow Chart
Fig. 9: Heatmap of the selected AP in the outdoor environment
Fig. 5: RSSI for Legacy and KHAT smartphones indoor
Fig. 7: RSSI for Legacy and KHAT smartphones outdoor
TABLE I: Handover performance of different OS
Nb. of AT=5 AT=10 AT=20 AT=50 AT=100
channels (%) (%) (%) (%) (%)
1 3.11 5.76 10.62 22.28 25.24
3 6.45 18.28 32.61 58.18 88.24
5 9.28 21.02 38.83 68.94 89.31
8 10.44 23.61 40.46 70.43 96.58
13 11.74 28.62 45.76 79.88 100.00
RSSI -67.16 -70.07 -76.02 -81.28 -83.26
TABLE II: Percentage of discovered APs for different values of AT and number of scanned channels
Algorithm 1 The algorithm used to compute α_i
1: increasing ← FALSE
2: lastValue ← Y_1
3: α_1 ← α_1
4: i ← 2
5: while i ≤ length(Y) do
6:   if Y_i ≠ lastValue then
7:     if Y_i > lastValue then
8:       increasing ← TRUE
9:       α_i ← α_up
10:    else
11:      increasing ← FALSE
12:      α_i ← α_1
13:    end if
14:    lastValue ← Y_i
15:  else
16:    if increasing = FALSE then
17:      if α_{i-1} > α_min then
18:        α_i ← 0.8 · α_{i-1}
19:      else
20:        α_i ← α_{i-1}
21:      end if
22:    else
23:      α_i ← α_up
24:    end if
25:  end if
26:  i ← i + 1
27: end while
TABLE III: Scanning Strategies
TABLE IV: Performance comparison
VII. ACKNOWLEDGMENTS
This work has received French government support under reference ANR-10-LABX-07-01 (Cominlabs).
01759116 | en | ["sdv.bid.spt", "sdv.ba.zi"] | 2024/03/05 22:32:10 | 2017 | https://hal.sorbonne-universite.fr/hal-01759116/file/ARACHNIDA-15-Tityus-cisandinus%20sp.n_%20_%20sans%20marque.pdf | Wilson R Lourenço
email: wilson.lourenco@mnhn.fr
Eric Ythier
email: eythier@syntechresearch.com
with comments on some related species (Scorpiones: Buthidae)
Keywords: Scorpion, Tityus, Atreus, Tityus asthenes, new species, Ecuador.
Description of Tityus (Atreus) cisandinus sp. n.
Introduction
The buthid scorpion Tityus asthenes was originally described by [START_REF] Pocock | Notes on the classification of scorpions, followed by some observations upon synonymy, with descriptions of new genera and species[END_REF] from Poruru in Peru, in a paper devoted to the classification of scorpions in general and including descriptions of several new genera and species. No precision, however, about the collector of the studied specimen was supplied, a situation not uncommon in the publications of Pocock (Lourenço & Ramos, 2004). The description of T. asthenes was brief, based on a single female specimen and not followed by any illustrations. The type locality of Tityus asthenes, supposedly in Peru, remains unclear since no present locality with this name, including in river systems, exists in this country. [START_REF] Francke | Escorpiones y escorpionismo en el Peru VI: Lista de especies y claves para identificar las familias y los géneros[END_REF] suggested that the correct locality is Paruro in southern Peru; however, this corresponds to an arid region not compatible with many species of Tityus placed in the subgenus Atreus. Consequently, the specimen described by Pocock could have been collected in quite different regions in tropical America. Probably in relation to its general morphology, Tityus asthenes was associated with the group of Tityus americanus (= Scorpio americanus Linné, 1754) by [START_REF] Pocock | Notes on the classification of scorpions, followed by some observations upon synonymy, with descriptions of new genera and species[END_REF]. In fact, this group corresponds to scorpions defined by a large size, 80 to 110 mm in total length, presenting long and slender pedipalps, in particular in males, and an overall dark coloration. This group corresponded well to the Tityus asthenes group of species as defined by Lourenço (2002a), at least until the more precise definition of subgenera within Tityus (Lourenço, 2006, 2012), which placed these large scorpions in the subgenus Atreus Gervais, 1843. The subsequent discovery and description of several new species belonging to this group of scorpions in the last 30 years changed the past opinion about their models of distribution and showed that most species could have less extended and much more localised ranges of distribution (Lourenço, 1997, 2002b, 2011, 2017).
A recent study on some scorpions from Ecuador (Ythier & Lourenço, 2017) reopened the question about the true identity of some Tityus from this country, and in particular that of Tityus asthenes. In an earlier paper on the scorpion fauna of Ecuador, Lourenço (1988) suggested that Tityus asthenes was the most common species of the subgenus Atreus, being present in both the cis-Andean and trans-Andean regions of the country. The type specimen of Tityus asthenes was examined by the senior author early in the 1970s, while yet a student, without any final resolution about its true identity (Figs. 1-3). The recent reanalysis of the female holotype of T. asthenes clearly demonstrates that this species does not belong to the subgenus Atreus but rather to the subgenus Tityus and to the group of species of Tityus bolivianus (Lourenço & Maury, 1985). The study of two new specimens collected in the Amazon region of Ecuador, close to the Peru border, and associated with T. asthenes in Lourenço & Ythier (2013), led us to consider this Tityus population as a new species. Until now the cis-Andean and trans-Andean populations of Tityus (Atreus) found in Ecuador were considered as a single one (Lourenço, 1988; [START_REF] Brito | A checklist of the scorpions of Ecuador (Arachnida: Scorpiones), with notes on the distribution and medical significance of some species[END_REF]); however, we now consider these as possibly different and separated by the Andean mountain system. The material cited by Lourenço (1988) from the Napo province was not restudied, but most certainly corresponds to the new species described here. The trans-Andean populations of Tityus subgenus Atreus in Ecuador, in particular those from the Province of Esmeraldas, will require some further studies to have their status clearly redefined, since they may correspond to an undescribed species.
The Amazon region of Ecuador, known as the Oriente, is exceptionally rich in biodiversity of both flora and fauna. Much of the Oriente is tropical rainforest (Fig. 6), starting from the east slopes of the Andean mountains (upland rainforest) and descending into the Amazon basin (lowland rainforest). It is crossed by many rivers rising in the Andean mountains and flowing east towards the Amazon River. The lowlands in the Oriente, where the new species was found (type localities going from 250 m (Kapawi) to 310 m (Yaupi) above sea level), have a warm and humid climate year-round, and typically receive more than 2000 mm (average 3500 mm) of rain each year, April through June being the wettest period. Temperatures vary little throughout the year and average 25° C, with variation between daytime (up to 28° C) and nighttime (about 22° C). Lowland rainforests contain the tallest trees of all types of rainforest, with the largest variety of species. The tree canopy typically sits 20-40 m above the ground, where vegetation is sparse and comprises mainly small trees and herbs that can experience periodical flooding during heavy rains. For several sections of the lowland rainforest such as the canopy, knowledge of the scorpion fauna is still almost nonexistent (Lourenço & Pézier 2002). Consequently, the effective number of species in the Amazon region of Ecuador may be much greater than what is presently estimated.
Material and Methods
Illustrations and measurements were produced using a Wild M5 stereomicroscope with a drawing tube and an ocular micrometer. Measurements follow [START_REF] Stahnke | Scorpion nomenclature and mensuration[END_REF] and are given in mm. Trichobothrial notations follow [START_REF] Vachon | Etude des caractères utilisés pour classer les familles et les genres de Scorpions (Arachnides)[END_REF], while morphological terminology mostly follows [START_REF] Vachon | Etude sur les Scorpions[END_REF] and [START_REF] Hjelle | Anatomy and morphology[END_REF].
Comparative material:
• Tityus obscurus (Gervais, 1843): French Guiana, Réserve de la Trinité, XII/2010 (C. Courtial), 2 males, 1 female.
• Tityus apiacas Lourenço, 2002: Brazil, Pará, Itaituba, Bacia do Rio Jamanxim, 5/XII/2007 (J. Zuanon), 1 male; Amazonas, BR-319, km 350, trilha 1, ponto 500 (5°16'11.28" S 61°55'46.8" W), 25/VII/2001, pitfall (H. Guariento & L. Pierrot), 1 male.
• Tityus dinizi Lourenço, 1997: Brazil, Amazonas, Anavilhanas, X/1999 (J. Adis), 2 males.

Tityus cisandinus sp. n.

Etymology. The specific name refers to the geography of the region where the new species was found, between the Amazon and the eastern Andes in Ecuador.

Diagnosis. A moderate species when compared with the average size of other species in the subgenus Atreus: male 72.8 mm and female 70.1 mm in total length (see Table I). General pattern of pigmentation reddish-brown to brown overall. Basal middle lamella of female pectines dilated, but less conspicuous when compared with that of several other species of the subgenus Atreus. Subaculear tooth moderately long and spinoid. Pectinal tooth count 19-19 in male and 21-20 in female. Fixed and movable fingers of the pedipalp with 15-16 oblique rows of granules. Ventral carinae of metasomal segments II to IV parallel in configuration. Pedipalps, and in particular chela fingers, with a strong chetotaxy. Trichobothriotaxy Aα, orthobothriotaxic. The new species may be an endemic element of the western Amazon region.

Description based on male holotype and female paratype. Measurements in Table I.

Coloration. Basically reddish-brown to brown overall. Prosoma: carapace reddish-brown with some dark pigment on the carinae. Mesosomal tergites reddish-brown with one darker transverse stripe on the posterior edge of tergites I-VI. Metasoma: segments I to V reddish-brown; IV and V darker than the others and with some blackish regions over carinae. Vesicle: dark reddish-brown; aculeus reddish at the base and dark reddish at the tip. Venter reddish-yellow; sternites with dark zones on lateral and posterior edges; sternite V with a white triangular zone on the posterior edge, better marked on male; pectines pale yellow to white. Chelicerae reddish-yellow with a dark thread; fingers blackish with dark reddish teeth. Pedipalps: reddish-brown; fingers dark, almost blackish, with the extremities yellow. Legs reddish-brown to brown.
Morphology. Carapace moderately to strongly granular; anterior margin with a moderate to strong concavity. Anterior median, superciliary and posterior median carinae moderate to strong. All furrows moderately to strongly deep. Median ocular tubercle distinctly anterior to the centre of carapace. Eyes separated by more than one ocular diameter. Three pairs of lateral eyes. Sternum subtriangular. Mesosoma: tergites moderately to strongly granular. Median carina moderate in all tergites. Tergite VII pentacarinate. Venter: genital operculum divided longitudinally; each half with a semi-oval to semi-triangular shape. Pectines: pectinal tooth count 19-19 in male holotype and 21-20 in female paratype; basal middle lamellae of the pectines dilated in the female and inconspicuously dilated in male. Sternites with a thin granulation and elongate spiracles; VII with four carinae, better marked on female. Metasomal segments with 10-8-8-8-5 carinae, crenulated, better marked on female. Dorsal carinae on segments I to IV with one to three spinoid granules, better marked on female. Lateral inframedian carinae on segment I complete, crenulate; represented by 1-3 granules on II; absent from III and IV. Ventrolateral carinae moderate to strong, crenulated on female, smooth on male. Ventral submedian carinae crenulate. Intercarinal spaces weakly granular. Segment V with dorsolateral, ventrolateral and ventromedian carinae crenulated on female, inconspicuous in male. Lateral intercarinal spaces moderately granular on female, smooth on male. Telson granular on female, smooth on male, with a long and strongly curved aculeus in both sexes. Dorsal surface smooth in both sexes; ventral surface weakly granular in the female; subaculear tooth spinoid, shorter in male. Cheliceral dentition characteristic of the family Buthidae [START_REF] Vachon | De l'utilité, en systématique, d'une nomenclature des dents des chélicères chez les Scorpions[END_REF]; movable finger with two well-formed, but reduced, basal teeth; ventral aspect of both fingers and manus with long dense setae. Pedipalps: femur pentacarinate; patella with seven carinae; internal face of patella with several spinoid granules; chela with nine carinae and the internal face with an intense granulation; other faces weakly granular. Femur, patella and chela fingers with a strong chetotaxy. Fixed and movable fingers with 15-16 oblique rows of granules. Trichobothriotaxy: orthobothriotaxy Aα [START_REF] Vachon | Etude des caractères utilisés pour classer les familles et les genres de Scorpions (Arachnides)[END_REF][START_REF] Vachon | Sur l'utilisation de la trichobothriotaxie du bras des pédipalpes des Scorpions (Arachnides) dans le classement des genres de la famille des Buthidae Simon[END_REF]. Legs: tarsus with numerous short fine setae ventrally.
Relationships. Taking into account the fact that previous populations of Tityus (Atreus) from Ecuador and, in particular, those from the Amazon region were associated with Tityus asthenes (Lourenço, 1988; Lourenço & Ythier, 2013), it would be logical to associate the new species with this one. However, the type locality of Tityus asthenes, supposedly in Peru, remains unclear, and the specimen described by Pocock could have been collected in quite different regions of tropical America. Moreover, the reanalysis of the general morphology of the female holotype brings confirmation that T. asthenes does not belong to the subgenus Atreus, but rather to the subgenus Tityus and to the Tityus bolivianus group of species. It has a rather small size, with only 53 mm in total length, and a strongly smooth tegument with weakly marked carinae and granulations; the subaculear tubercle is small and very sharp (see Lourenço & Maury, 1985). These considerations led us to rather associate the new species with other Tityus (Atreus) distributed in the Amazon basin such as Tityus obscurus, Tityus dinizi, Tityus apiacas and Tityus tucurui Lourenço, 1988 (Figs. 23-28). Tityus cisandinus sp. n. can however be distinguished from these cited species by: I) a rather smaller global size (see Table I) with markedly different morphometric values; II) better marked carinae and granulations; III) a stronger chetotaxy on the pedipalps. Moreover, the geographical range of distribution appears quite different (see Lourenço, 2011, 2017). The new species is possibly an endemic element of the Andean/Amazon region of Ecuador and Peru. In our opinion, the material listed by Teruel (2011) from Loreto in Peru and associated with Tityus asthenes corresponds in fact to the new species described here. No confirmation is however possible since the cited material is deposited in the private collection of this author and not accessible.
Figs. 1-3. Tityus asthenes, female holotype. 1-2. Habitus, dorsal and ventral aspects. 3. Detail of ventral aspect, showing coxapophysis, sternum, genital operculum and pectines. Black & white photos taken in 1972.
Figs. 4-5. Tityus asthenes, female holotype. Habitus, dorsal and ventral aspects. Recent colour photos taken in 2017. Labels attest that the type was examined by F. Matthiesen, while in the Muséum in Paris during 1972. (Scale bar = 1 cm).
All specimens are now deposited in the collections of the Muséum national d'Histoire naturelle, Paris, France.
Fig. 6. The natural habitat of Tityus cisandinus sp. n. (rio Capahuari, Pastaza province), covered by rainforest.
Figs. 7-10. Tityus cisandinus sp. n., male holotype and female paratype. Habitus, dorsal and ventral aspects.
Figs. 11-14. Tityus cisandinus sp. n. Male holotype (11-13) and female paratype (14). 11. Chelicera, dorsal aspect. 12. Cutting edge of movable finger showing rows of granules. 13-14. Metasomal segment V and telson, lateral aspect.
Figs. 15-20. Tityus cisandinus sp. n. Male holotype (15-19) and female paratype (20). Trichobothrial pattern. 15-16. Chela, dorso-external and ventral aspects. 17-18. Patella, dorsal and ventral aspects. 19-20. Femur, dorsal aspect.
Fig. 21. Tityus cisandinus sp. n. Female paratype alive.
Figs. 23-24. Tityus obscurus from French Guiana. Habitus of male, dorsal and ventral aspects.
Figs. 25-26. Tityus dinizi from Brazil. Habitus of male, dorsal and ventral aspects.
Figs. 27-28. Tityus apiacas from Brazil. Habitus of male, dorsal and ventral aspects.
Table I. Measurements (in mm) of the male holotype and female paratype of Tityus cisandinus sp. n. and males of Tityus dinizi (Brazil), Tityus apiacas (Brazil) and Tityus obscurus (French Guiana).
Tityus cisandinus sp. n. Tityus dinizi T. apiacas T. obscurus
♂ ♀ ♂ ♂ ♂
Total length: 72.8 70.1 96.1 81.8 90.5
Carapace:
Length 7.6 8.1 9.1 8.4 8.1
Anterior width 5.7 5.6 6.7 6.2 6.3
Posterior width 7.9 9.0 9.5 9.1 8.7
Mesosoma length 16.9 19.2 19.8 20.2 24.4
Metasomal segment I.
length 5.8 5.2 8.7 6.6 7.6
width 3.8 4.2 4.1 4.2 4.2
Metasomal segment II.
Length 7.8 6.3 10.8 8.5 9.2
Width 3.6 4.1 3.8 4.3 4.0
Metasomal segment III.
Length 8.7 7.1 12.2 9.6 10.1
Width 3.6 4.2 3.9 4.4 4.1
Metasomal segment IV
Length 9.4 8.2 13.6 10.5 10.6
Width 3.8 4.1 4.0 4.7 4.4
Metasoma, segment V.
length 9.9 8.8 13.5 10.7 10.9
width 4.0 4.1 4.1 4.8 4.6
depth 3.8 3.9 4.1 4.4 3.9
Telson length 6.7 7.2 8.4 7.3 9.6
Vesicle:
width 3.2 2.9 3.3 3.2 3.2
depth 3.0 2.9 3.2 3.2 3.1
Femur:
length 9.8 8.4 13.3 13.2 13.3
width 2.2 2.4 2.4 2.2 2.3
Patella:
length 10.1 9.1 13.8 13.8 14.0
width 2.6 3.0 3.1 2.8 2.8
Chela:
length 17.4 15.7 21.5 21.8 23.1
width 2.5 2.9 2.8 2.6 2.5
depth 2.3 2.8 2.6 2.5 2.3
Movable finger:
length 11.3 10.8 13.2 13.4 13.5
Acknowledgements
We are most grateful to Janet Beccaloni (NHM, London) for useful information on the type specimen of Tityus asthenes and for providing colour photos of the type. We are also grateful to Elise-Anne Leguin (MNHN, Paris) for the preparation of several photos and plates.
List of the Ecuadorian species of Tityus
Genus Tityus C. L. Koch, 1836
Wilson R. Lourenço
email: wilson.lourenco@mnhn.fr
New species from the 'Serra da Mocidade' National Park in the State of Roraima (Scorpiones: Buthidae, Chactidae)
Keywords: Scorpiones, Buthidae, Chactidae, Broteochactas, Taurepania, new species, Serra da Mocidade National Park, mountain range, State of Roraima, Brazil.
Scorpions belonging to the genera Tityus C. L. Koch, 1836 (subgenus Atreus Gervais, 1843) and Broteochactas Pocock, 1893 (subgenus Taurepania González-Sponga, 1978) are studied and two new species are described: Tityus (Atreus) generaltheophiloi sp. n. and Broteochactas (Taurepania) mauriciodiasi sp. n., based on specimens collected in the mountain range of Serra da Mocidade National Park in the State of Roraima, Brazil. Both new species are most certainly endemic elements of these mountain ranges. Moreover, the discovery of these new elements confirms the poor knowledge available for most of the summits in the Amazon region.
Introduction
As outlined in several previous papers (Lourenço, 2002a, b, 2012; [START_REF] Lourenço | The genus Broteochactas Pocock, 1893 in Brazilian Amazonia, with a description of a new species from the State of Amazonas (Scorpiones: Chactidae)[END_REF]; Lourenço et al., 2010; [START_REF] Lourenço | New species of Chactidae (Scorpiones) from the upper Rio Negro in Brazilian Amazonia[END_REF]), contributions to the knowledge of the Amazonian scorpion fauna and in particular of the elements belonging to the families Buthidae C. L. Koch, 1837 and Chactidae Pocock, 1893 have been the subject of several previous studies (e. g. [START_REF] Lourenço | The distribution of noxious species of scorpions in Brazilian Amazonia: the genus Tityus C. L. Koch, 1836, subgenus Atreus Gervais, 1843 (Scorpiones, Buthidae)[END_REF][START_REF] Lourenço | The genus Brotheas C. L. Koch, 1837 in Brazilian Amazonia, with a description of a new species from the State of Pará (Scorpiones: Chactidae)[END_REF][START_REF] Lourenço | The genus Broteochactas Pocock, 1893 in Brazilian Amazonia, with a description of a new species from the State of Amazonas (Scorpiones: Chactidae)[END_REF][START_REF] Lourenço | The geographical pattern of distribution of the genus Teuthraustes Simon (Scorpiones, Chactidae) in South America and description of a new species[END_REF][START_REF] Lourenço | New species of Chactidae (Scorpiones) from the upper Rio Negro in Brazilian Amazonia[END_REF]). However, the Amazon region remains one of the world's most diverse areas for its scorpion fauna. Inventory of the Amazonian scorpion fauna began in the second half of the 19th century, and was for the first time synthesized in a monograph by [START_REF] Melloleitão | Escorpiões sulamericanos[END_REF]. Since then, other contributions have been published, e. g. [START_REF] Gonzálezsponga | Escorpiofauna de la region oriental del Estado Bolivar, en Venezuela[END_REF][START_REF] Gonzálezsponga | Guía para identificar escorpiones de Venezuela[END_REF] and Lourenço (2002a, b). On account of the diversity and richness of the Amazonian scorpion fauna, the discovery and description of new species is by no means unusual (e. g. Lourenço, 2002a, b, 2012; [START_REF] Lourenço | The genus Broteochactas Pocock, 1893 in Brazilian Amazonia, with a description of a new species from the State of Amazonas (Scorpiones: Chactidae)[END_REF][START_REF] Lourenço | The geographical pattern of distribution of the genus Teuthraustes Simon (Scorpiones, Chactidae) in South America and description of a new species[END_REF]; Lourenço et al., 2010; [START_REF] Lourenço | New species of Chactidae (Scorpiones) from the upper Rio Negro in Brazilian Amazonia[END_REF]).
Since [START_REF] Melloleitão | Escorpiões sulamericanos[END_REF] published his monograph on South American scorpions, the number of known Amazonian species, in particular from Brazil, was much less significant than it is today. In fact, most species known in Amazonia have been described only in recent years (e. g. [START_REF] Gonzálezsponga | Escorpiofauna de la region oriental del Estado Bolivar, en Venezuela[END_REF][START_REF] Gonzálezsponga | Guía para identificar escorpiones de Venezuela[END_REF]; Lourenço, 2002a, b, 2014; Lourenço et al., 2010; [START_REF] Lourenço | New species of Chactidae (Scorpiones) from the upper Rio Negro in Brazilian Amazonia[END_REF]).
In fact, many different regions within the Amazon basin, including those near the Rio Negro, and in particular the various summits of the northern range of Amazonia, were among the last to attract the attention of investigators, because most of the pioneer work in Amazonia was carried out along the Solimões and Amazon rivers, in some cases up to Peru [START_REF] Papavero | Essays on the history of neotropical dipterology[END_REF][START_REF] Papavero | Essays on the history of neotropical dipterology[END_REF]. In so far as scorpions are concerned, only in recent decades has more intensive collecting been possible in most of these summits composed of the Tepuys or Inselbergs (e. g. González-Sponga, 1978; [START_REF] Lourenço | Scorpion biogeographic patterns as evidence for a NeblinaSão Gabriel endemic center in Brazilian Amazonia[END_REF][START_REF] Lourenço | Description of Tityus (Atreus) neblina sp. n. (Scorpiones, Buthidae), from the 'Parque Nacional do Pico da Neblina' in Brazil/Venezuela, with comments on some related species[END_REF][START_REF] Lourenço | New species of Chactidae (Scorpiones) from the upper Rio Negro in Brazilian Amazonia[END_REF]). This has resulted in several new discoveries and descriptions. One exception among the older expeditions is the one performed by an English team composed of Messrs F. V. McConnell and J. J. Quelch at Mount Roraima (British Guiana), in the period from August to October 1898, which resulted in the description of two scorpion species [START_REF] Pocock | Myriopoda and Arachnida. In: Report on a collection made by Messrs F. V. McConnell and J. J. Quelch at Mount Roraima in British Guiana[END_REF], most certainly endemic to this summit. See also [START_REF] Lourenço | The genus Vachoniochactas GonzálezSponga (Scorpiones, Chactidae), a model of relictual distribution in past refugia of the Guayana region of South America[END_REF]. In this contribution, two new species are described from the mountain range of the 'Serra da Mocidade' National Park (Figs. 1-2) in the state of Roraima, Brazil. These belong to the genera Tityus C. L. Koch, 1836 (subgenus Atreus Gervais, 1843) and Broteochactas Pocock, 1893 (subgenus Taurepania González-Sponga, 1978). Both new species are most certainly endemic elements of these summits, which were explored for the first time during the INPA team expedition of 2016.
Methods
Illustrations and measurements were made using a Wild M5 stereomicroscope with a drawing tube and an ocular micrometer. Measurements follow those of [START_REF] Stahnke | Scorpion nomenclature and mensuration[END_REF] and are given in mm. Trichobothrial notations are those developed by [START_REF] Vachon | Etude des caractères utilisés pour classer les familles et les genres de Scorpions (Arachnides)[END_REF] and the morphological terminology mostly follows [START_REF] Hjelle | Anatomy and morphology[END_REF].

Tityus (Atreus) generaltheophiloi sp. n.

Patronym: the name honors the Brazilian Army General Guilherme Cals Theophilo Gaspar de Oliveira, based in Brasilia, Brazil, who strongly supported the field expedition to the Serra da Mocidade National Park.
Diagnosis: a moderate species when compared with the average size of the other species in the subgenus Atreus: female up to 70.5 mm in total length. General pattern of pigmentation blackish-brown to dark blackish overall (a type of velvet blackish). Basal middle lamella of female pectines moderately to strongly dilated. Subaculear tooth moderately long and strongly spinoid. Pectinal tooth count 19/20 in female. Fixed and movable fingers of the pedipalp with 16/16 oblique rows of granules. Ventral carinae of metasomal segments II to IV parallel in configuration. Basal tooth of the cheliceral fixed finger with a trifid (trifidus) morphology. This new species is possibly an endemic element of the mountains of Serra da Mocidade National Park.
Description: based on female holotype. Measurements after the description.

Coloration. Basically blackish-brown to dark blackish overall (velvet blackish). Prosoma: carapace blackish-brown; eyes surrounded by dark blackish pigment. Mesosoma: tergites blackish-brown with reddish-brown confluent stripes on the posterior edges of tergites I-VI. Metasomal segments blackish-brown; V very dark, almost blackish. Vesicle dark, almost blackish; aculeus reddish at the base and blackish at the tip. Ventral aspect blackish-brown with diffused reddish spots; sternite V with a white triangle; pectines pale yellow. Chelicerae dark yellow with a dark thread; fingers blackish with dark reddish teeth. Pedipalps globally blackish with the extremities of fingers reddish. Legs globally blackish.

Morphology. Carapace strongly granular; anterior margin with a moderate concavity. Anterior median, superciliary and posterior median carinae moderate. Furrows moderately to strongly deep. Median ocular tubercle distinctly anterior to the centre of carapace. Eyes separated by a little more than one ocular diameter. Three pairs of lateral eyes. Sternum subtriangular. Mesosoma: tergites strongly granular. Median carina moderate in all tergites. Tergite VII pentacarinate. Venter: genital operculum divided longitudinally; each half with a semi-triangular shape. Pectines: pectinal tooth count 19/20 in female holotype; basal middle lamellae of the pectines moderately to strongly dilated in the female. Sternites weakly granular with elongate spiracles; VII with four carinae. Metasomal segment I with 10 carinae, crenulate; segment II with 8 carinae, crenulate; segment III with 8 carinae, crenulate; segment IV with 8 carinae, crenulate; segment V with 5 carinae, crenulate. Dorsal carinae on segments II to IV with 2-3 strong spinoid granules. Lateral inframedian carinae on segment I complete, strongly crenulate; on II represented by only 3 distal granules; absent from III and IV. Ventrolateral carinae strong, crenulate. Ventral submedian carinae strongly crenulate. Intercarinal spaces weakly to moderately granular. Segment V with dorsolateral, ventrolateral and ventromedian carinae strongly crenulated. Lateral intercarinal spaces moderately granular. Telson moderately to weakly granular, with a long and strongly curved aculeus. Dorsal surface smooth; ventral surface moderately granular in female; subaculear tooth moderately long and strongly spinoid. Cheliceral dentition characteristic of the family Buthidae [START_REF] Vachon | De l'utilité, en systématique, d'une nomenclature des dents des chélicères chez les Scorpions[END_REF]; movable finger with two well-formed basal teeth; fixed finger with a trifid (trifidus) basal tooth; ventral aspect of both fingers and manus with long dense setae. Pedipalps: femur pentacarinate; patella with 7 carinae; chela with 9 carinae; all faces moderately granular. Fixed and movable fingers with 16/16 oblique rows of granules. Trichobothriotaxy: orthobothriotaxy Aα [START_REF] Vachon | Etude des caractères utilisés pour classer les familles et les genres de Scorpions (Arachnides)[END_REF][START_REF] Vachon | Sur l'utilisation de la trichobothriotaxie du bras des pédipalpes des Scorpions (Arachnides) dans le classement des genres de la famille des Buthidae Simon[END_REF]. Legs: tarsus with numerous short fine setae ventrally.

Relationships: from its general morphology, the new species belongs to the subgenus Atreus.
Because of its morphological features and also because of the closest area of distribution, it can be associated with Tityus dinizi Lourenço, 1997, described from the Rio Negro area. The two species can, however, be distinguished from each other by a number of features: I) the new species has a darker blackish coloration overall, particularly on the legs and pedipalps; II) the basal middle lamella of the pectines is less dilated in the new species than it is in T. dinizi; III) body and appendages are less elongated in the new species; IV) the basal tooth on the fixed finger of the chelicera has a particular trifid morphology in the new species. Moreover, the habitat of both species is quite distinct. Tityus dinizi is found in flooding areas (Igapos) in the region nearby the Rio Negro margins, whereas the new species was collected in the mountain range of 'Serra da Mocidade' National Park, at altitudes of 600-700 m.
Morphometric values of the female holotype of Tityus (Atreus) generaltheophiloi sp. n. Total length (including the telson), 70.5. Carapace: length, 8.0; anterior width, 5.9; posterior width, 9.1. Mesosoma length, 19.6. Metasomal segments. I: length, 5.2; width, 4.3; II: length, 6.2; width, 4.3; III: length, 7.1; width, 4.2; IV: length, 8.1; width, 4.1; V: length, 8.9; width, 3.9; depth, 3.7. Telson length, 7.4. Vesicle: width, 2.9; depth, 3.0. Pedipalp: femur length, 8.5, width, 2.2; patella length, 8.8, width, 3.1; chela length, 15.8, width, 2.8, depth, 2.7; movable finger length, 10.7.

Broteochactas (Taurepania) mauriciodiasi sp. n.

Diagnosis. Small to moderate-sized scorpions with respectively 34.7 and 32.9 mm in total length for the biggest female and male examined. Coloration reddish-yellow to reddish-brown. Body and appendages moderately to strongly granulated with some smooth and lustrous zones. Metasomal carinae moderately to strongly marked; segment V with several spinoid granules on ventral aspect; segments I to IV with strong posterior spinoid granules on dorsal carinae. Pectines with 8-9 teeth (mode 8/9). Dentate margins on fixed and movable fingers of pedipalps with 6/7 more or less delimited rows of granules. Rows not separated by clearly larger accessory granules, but rather by a conglomeration of small granules. Trichobothrial pattern of type C, majorante neobothriotaxy.
Description: (based on female holotype and paratypes).

Coloration. Basically reddish-yellow to reddish-brown. Prosoma: carapace reddish-brown. Tergites reddish-brown, as the carapace. Metasomal segments reddish-yellow, paler than tergites; vesicle reddish-yellow; aculeus dark red. Chelicerae yellowish, marked with diffused variegated spots; fingers yellow with diffused variegated spots; teeth reddish. Pedipalps reddish-brown; femur and patella with blackish zones over the carinae; chelae reddish-brown; granulations over the dentate margins of fingers blackish. Legs yellow without spots. Venter yellow to reddish-yellow; sternite V with a pale, almost white triangle; coxapophysis brownish; pectines and genital operculum pale yellow in female to pale brown in male.

Morphology. Carapace moderately granular; furrows shallow. Sternum pentagonal, wider than long. Tergites almost acarinate, moderately granular, with granulations better marked on male. Pectinal tooth count 9/8 for female holotype (see diagnosis for variation), fulcra absent. Sternites weakly granulated, better marked on male, VII acarinate; spiracles with an oval shape. Only metasomal segment V longer than wide; metasomal tegument lustrous with some minute granulations; segment V with conspicuous spinoid granulations ventrally. Dorsal and laterodorsal carinae strongly marked on segments I-IV; other carinae equally strong; ventral carina present on all segments; vestigial on segment I of female. Pedipalps: femur with dorsal internal, dorsal external and ventral internal carinae moderately to strongly marked; ventral external carina weak; tegument with very few granulations, almost smooth; internal aspect very weakly granular. Patella almost smooth; all carinae moderate to strong. Chela with minute granulations; all carinae weakly to moderately developed; internal aspect with a few granules. Dentate margins on fixed and movable fingers of pedipalps with 6/7 more or less delimited rows of granules. Rows not separated by clearly larger accessory granules, but rather by a conglomeration of small granules. Chelicerae with a dentition typical of Chactidae [START_REF] Vachon | De l'utilité, en systématique, d'une nomenclature des dents des chélicères chez les Scorpions[END_REF], and with dense setation ventrally and internally. Trichobothrial pattern of type C, majorante neobothriotaxy [START_REF] Vachon | Etude des caractères utilisés pour classer les familles et les genres de Scorpions (Arachnides)[END_REF].
Relationships: Broteochactas (Taurepania) mauriciodiasi sp. n. can be distinguished from the other species in the subgenus Taurepania, and in particular from Broteochactas (Taurepania) porosa [START_REF] Pocock | Myriopoda and Arachnida. In: Report on a collection made by Messrs F. V. McConnell and J. J. Quelch at Mount Roraima in British Guiana[END_REF], which occurs in the nearby region of Monte Roraima, by the following features: I) some distinct morphometric values; II) carapace and tergites more intensely granulated; III) a slightly lower number of teeth on the pectines; IV) a particular granulation on the chela fingers, randomly arranged and without the presence of conspicuous accessory granules. Moreover, the two cited species are most certainly endemic elements of their respective summits.
Morphometric values of the female holotype and male paratype of Broteochactas (Taurepania) mauriciodiasi sp. n. Total length (including the telson), 34.7/32.9. Carapace: length, 4.8/4.4; anterior width, 3.2/2.8; posterior width, 5.0/4.5. Mesosoma length, 9.8/8.3. Metasomal segments. I: length, 2.0/1.8; width, 3.0/3.1; II: length, 2.2/2.0; width, 2.8/2.9; III: length, 2.3/2.2; width, 2.7/2.9; IV: length, 2.5/3.1; width, 2.6/2.8; V: length, 5.5/5.6; width, 2.5/2.6; depth, 2.2/2.2. Telson length, 5.6/5.5. Vesicle: width, 2.4/2.4; depth, 1.8/1.8. Pedipalp: femur length, 4.0/3.5, width, 1.7/1.5; patella length, 4.6/4.1, width, 1.9/1.7; chela length, 8.6/7.7, width, 2.7/2.8, depth, 3.2/3.2; movable finger length, 5.2/4.4.

Taxonomic comments on the subgenus Taurepania stat. n.
The genus Taurepania was created by [START_REF] Gonzálezsponga | Escorpiofauna de la region oriental del Estado Bolivar, en Venezuela[END_REF] based on the species Broteochactas porosus [START_REF] Pocock | Myriopoda and Arachnida. In: Report on a collection made by Messrs F. V. McConnell and J. J. Quelch at Mount Roraima in British Guiana[END_REF], one of the two species described by this author from Mt. Roraima. [START_REF] Gonzálezsponga | Escorpiofauna de la region oriental del Estado Bolivar, en Venezuela[END_REF] erroneously attributed the type locality of this species to Estado Bolivar in Venezuela. [START_REF] Sissom | Family Chactidae Pocock, 1893[END_REF], in the 'Catalog of the scorpions of the world', confirmed the type locality as Venezuela, but stated that [START_REF] Pocock | Myriopoda and Arachnida. In: Report on a collection made by Messrs F. V. McConnell and J. J. Quelch at Mount Roraima in British Guiana[END_REF] cited Guyana instead. In fact, the field expedition performed by Messrs F. V. McConnell and J. J. Quelch at Mount Roraima was a British expedition and a priori took place in British Guiana. The matter may be problematic since Mt. Roraima is located at a tri-frontier point between Brazil, Guyana and Venezuela.
Since the genus Taurepania and several other genera of Chactidae are very closely related to the genus Broteochactas, their status has changed during the last decades. [START_REF] Lourenço | Diversité de la faune scorpionique de la region amazonienne; centres d'endémisme; nouvel appui à la théorie des refuges forestiers du Pléistocène[END_REF] considered Taurepania and several other genera as only groups of species within Broteochactas. In subsequent years some of these genera were rehabilitated as valid genera [START_REF] Lourenço | Scorpion biogeographic patterns as evidence for a NeblinaSão Gabriel endemic center in Brazilian Amazonia[END_REF]. Taurepania was maintained as a valid genus by [START_REF] Sissom | Family Chactidae Pocock, 1893[END_REF], but again invalidated by [START_REF] Soleglad | Highlevel systematics and phylogeny of the extant scorpions (Scorpiones: Orthosterni)[END_REF]. In reality, the species associated with Taurepania remain extremely poorly studied. All these species apparently are associated with several summits of Northern Amazonia. For this reason, and until better studies may be possible on these summit populations, Taurepania is considered here as a subgenus of Broteochactas.
Biogeographic considerations
Because of their low vagility, scorpions have frequently been used as biogeographic tools [START_REF] Lourenço | The biogeography of scorpions[END_REF]. The two new species described here were collected in the summits of the Serra da Mocidade National Park, an area located east of the Imeri Endemic Centre (Figs. 25-26) as defined by [START_REF] Lourenço | Diversité de la faune scorpionique de la region amazonienne; centres d'endémisme; nouvel appui à la théorie des refuges forestiers du Pléistocène[END_REF][START_REF] Lourenço | Scorpion biogeographic patterns as evidence for a NeblinaSão Gabriel endemic center in Brazilian Amazonia[END_REF]. These new descriptions provide further evidence of the very high biodiversity and the important levels of endemism around the Rio Negro region [START_REF] Lourenço | Scorpion diversity and endemism in the Rio Negro region of Brazilian Amazonia, with the description of two new species of Tityus C. L. Koch (Scorpiones, Buthidae)[END_REF][START_REF] Lourenço | A new species of Brotheas (Scorpiones, Chactidae) from the Rio Negro region in the State of Amazonas, Brazil[END_REF]. The Serra da Mocidade National Park may be considered as a subcentre of endemism related both to the Imeri and Imataca centres.
Although several endemic centres of scorpions have been established within Amazonia, only a few can be considered to be well known. The best known centre is unquestionably Manaus in Brazil, but even in this area new species are continuously being discovered and described (e. g. [START_REF] Lourenço | Addition to the scorpion fauna of the Manaus region (Brazil), with a description of two species of Tityus from the canopy[END_REF]Lourenço & Araujo, 2004;[START_REF] Lourenço | A new synopsis of the scorpion fauna of the Manaus region in Brazilian Amazonia, with special reference to the 'Tarumã Mirim' area[END_REF][START_REF] Monod | A new species of Broteochactas Pocock, 1890 from Brazilian Amazonia (Scorpiones, Chactidae)[END_REF][START_REF] Pinto Da Rocha R | Broteochactas fei, a new scorpion species (Scorpiones, Chactidae) from Brazilian Amazonia, with notes on its abundance and association with termites[END_REF].
Since the region around the Rio Negro is much vaster than that of Manaus, many new taxonomic elements, mainly at the level of species but even of genera, may be expected to be discovered and described in coming years. Any final conclusions regarding the actual composition of the scorpion faunas of both Manaus and, specifically, of the Rio Negro region should be interpreted with caution, because the results obtained may well be biased as a consequence of insufficient collecting and field work. The inventory of scorpions may present difficulties because these animals are often extremely cryptic and some species remain known only from a single locality. Until better methods of sampling are used, caution must be exercised in the interpretation of all biogeographical results [START_REF] Prance | Forest Refuges: evidence from Woody Angiosperms[END_REF][START_REF] Lourenço | Diversité de la faune scorpionique de la region amazonienne; centres d'endémisme; nouvel appui à la théorie des refuges forestiers du Pléistocène[END_REF][START_REF] Lourenço | The biogeography of scorpions[END_REF].
The new evidence presented in this paper, based on scorpion studies, supports the conclusion that the upper Rio Negro and surrounded areas such as the 'Serra da Mocidade' National Park may represent important endemic centres.
Fig. 1. General view of the 'Serra da Mocidade' National Park, showing the typical vegetation (photo by T. Laranjeiras).
Fig. 2. A typical Inselberg formation in the 'Serra da Mocidade' National Park (photo by T. Laranjeiras).
Figs. 3-7. Tityus (Atreus) generaltheophiloi sp. n. Female holotype. 3. Chelicera, dorsal aspect. 4. Idem, fingers and teeth in detail. 5. Metasomal segment V and telson, lateral aspect. 6. Pecten. 7. Cutting edge of movable finger showing series of granules.
Figs. 8-12. Tityus (Atreus) generaltheophiloi sp. n. Female holotype. Trichobothrial pattern. 8-9. Chela, dorso-external and ventral aspects. 10-11. Patella, dorsal and external aspects. 12. Femur, dorsal aspect.
Fig. 13. Habitat of Tityus (Atreus) generaltheophiloi sp. n., showing typical vegetation (photo F. F. Xavier Filho).
Figs. 14-17. Broteochactas (Taurepania) mauriciodiasi sp. n. Female holotype (14, 16) and male paratype (15, 17). 14. Chelicera, dorsal aspect. 15-16. Metasomal segments III-V and telson, lateral aspect. 17. Cutting edge of movable finger, showing granulations.
Figs. 18-23. Broteochactas (Taurepania) mauriciodiasi sp. n. Female holotype. Trichobothrial pattern. 18-19. Chela, dorso-external and ventral aspects. 20-22. Patella, dorsal, external and ventral aspects. 23. Femur, dorsal aspect.
Fig. 24. Habitat of Broteochactas (Taurepania) mauriciodiasi sp. n., showing typical vegetation (photo F. F. Xavier Filho).
Fig. 25. Map of Northern South America Amazonian and Guayana regions showing the known distribution of the most important Tityus (Atreus) species, including the type locality of Tityus (Atreus) generaltheophiloi sp. n.
Fig. 26. Map of Northern South America Amazonian and Guayana regions showing the type localities of Broteochactas (Taurepania) mauriciodiasi sp. n. and Broteochactas (Taurepania) porosa (Mount Roraima).
Acknowledgements
I am most grateful to Marcio Oliveira from INPA, Manaus, Brazil, who arranged facilities for the study of the material related to the new species described here, to Thiago Laranjeiras and Francisco Felipe Xavier Filho, INPA, Manaus, Brazil, for the permission to use several personal photos in the article, to Lucienne Wilmé (Missouri Botanical Garden, Madagascar) for preparing the maps and to Michael M. Webber, University of Nevada, Las Vegas, USA, for her review of an earlier version of the manuscript.
The Expedition 'Biodiversity of the Serra da Mocidade' was the result of a collaboration between the Instituto Nacional de Pesquisas da Amazônia (INPA), Instituto Chico Mendes de Conservação da Biodiversidade (ICMBio), Comando Militar da Amazônia (CMA), and Grifa Filmes. We extend our acknowledgements to all these partners.
Paul Carmignani
CRANE: THE RED BADGE OF COURAGE. Master Paul Carmignani S Stephen
Crane
STEPHEN CRANE: THE RED BADGE OF COURAGE
établissements d'enseignement et de recherche français ou étrangers, des laboratoires publics ou privés.
1 PR. PAUL CARMIGNANI Université de Perpignan-Via Domitia
STEPHEN CRANE
THE RED BADGE OF COURAGE
A SLICE OF LIFE
When Crane published The Red Badge of Courage in 1895, his second attempt at novelwriting met vith triumphant success. His first novel or rather novelette, Maggie : A Girl of the Streets, dealt with the life of the underprivileged in their ordinary setting; it was published under a pseudonym with little or no success in 1893. With his Civil War story, Crane became famous almost overnight; at the time of publication he was chiefly a picturesque figure of the world of the press, the breeding-ground of many American literary talents.
Crane was born in Newark, New Jersey, on November 1st, 1871. He was the 14th child of Jonathan Townley Crane (D. D.), a clergyman who died in 1880 leaving the young Stephen under the care of his mother and elder brothers. Crane was brought up in a religious atmosphere but he did not follow in his father's footsteps and kept himself aloof from religion. However, he remained dominated by fundamental religious patterns and preceptscharity, fraternity, redemption and salvation which he kept at an earthly level.
In 1892, he became a free-lance journalist in New York after attending many schools and colleges where he evinced little taste for academic life. In New York he began his apprenticeship in bohemianism and learnt the value of careful on-the-spot study and observation of what he wanted to write about. His experience with life in the slums found expression in Maggie. In this novel, Crane showed himself to be impelled by the spirit of religious and social rebellion. Two years later the RBC saw print first in serial form and then as a book in New York and London where it was warmly received. Celebrity did not divest him of his feeling for the "underdog" for in that same year he ran into trouble with the metropolitan police force on account of a prostitute unjustly accused and bullied. Crane had to leave New York and accepted a commission to report the insurrection in Cuba against Spanish rule.
In 1897 he met Cora Howorth, the proprietress of a house of ill fame. In spite of the scandal raised by such an association, they were to live together for the rest of his life. Crane deliberately sought experiences where he could pit himself against danger; he covered the Greco-Turkish War for the New York Journal and the Westminster Gazette and decided to stay on in England at the end of his commission. In 1898, he published "The Open Boat and Other Tales of Adventure". In England he associated with such writers as Conrad, Wells, James etc., but soon got tired of country life and itched for action again. He left England to report the Spanish-American conflict, but this new experience seriously impaired his health. He was back in England one year later and turned out an impressive body of fiction to avoid bankruptcy (one volume of verse: War is Kind; Active Service, etc.). Meanwhile his body and brain gradually weakened but he went on writing to the end. S. Crane died on June 5, 1900, in Badenweiler (Germany), where Cora had taken him in the hope he would miraculously recover from tuberculosis.
COMPOSITION AND PUBLICATION
The novel was composed between April 1893 and the Fall of 1894; it relates a three-day battle which took place at Chancellorsville (Va) from May 1st to May 3rd, 1863. The name is never mentioned in the story, but critics and historians have been able to detect numerous allusions to the actual troop movements and setting. The Red Badge of Courage was first published in an abbreviated form as a newspaper serial in 1894. The text was shortened to 16 chapters out of 24. In June 1895 D. Appleton and Company agreed to publish the complete text and thus The Red Badge of Courage came out in book form on October 1st, 1895. From the start it became a bestseller owing to the favorable reviews published in the British press.
Legend has it that Crane wrote the RBC on the dare of a friend to do better than Zola's La Débâcle. This is a moot point; Crane never referred to such a wager, though he stated in a letter: "I deliberately started in to do a potboiler [...] something that would take the boarding-school element - you know the kind. Well I got interested in the thing in spite of myself. [...] I had to do it my own way."
Though Crane was later to declare that he had got all his knowledge of war on the football pitch as a boy, he read all the factual material he could get about the Civil War before composing the RBC. The most obvious source was the important series of military reminiscences, Battles and Leaders of the Civil War used ever since it was published by all writers on the Civil War. Crane's history teacher at Claverack, General John B. Van Petten, a veteran, is often mentioned as a possible source of information on the conflict between the States. Crane had also read Civil War periodicals and studied Winslow Homer's drawings and Matthew Brady's photographs.
Crane's novel was published at a time when America was undergoing great changes. The most prominent feature on the intellectual landscape of the age was disillusion, and Crane's RBC is a faithful reflection of that crisis of ideals, the failure to recover from which marks the end of an era as well as a century and the beginning of modernity. The RBC represents a deliberate departure from the conventions of the "genteel tradition" of culture (the expression was coined by the American philosopher George Santayana and has been used since the late 19th century to refer to a group of writers who had established literary, social and moral norms insisting on respect for forms and conventions). Crane breaks with the conventions of his time and ruthlessly debunks all the traditional views about heroism or the popular glorification of military courage. As a Crane scholar said, "Crane meant to smash icons" in writing the RBC. Emerson is often cited to define Crane's purpose in art: "Congratulate yourself if you have done something strange and extravagant and broken the monotony of a decorous age". Although essentially American in his stance, Crane was also a rebel against many things American, and the RBC may be considered as a tale of deflation aiming at the reduction of idealism.
STRUCTURE OF THE NOVEL
The novel consists of 24 chapters relating, as the original title for the novel pointed out - Private Fleming: His Various Battles -, the war experiences of a young boy leaving his mother's farm to enlist in the Union forces. The whole story mainly deals with the protagonist's evolution from a country boy to a veteran-like soldier. It is a chronicle of the boy's anxieties, moods and impressions as he progresses through the series of episodes making up the novel.
The formal structure of the book is rather simply Aristotelian. It has a beginning (Chapters I-IV) which gets the youth to real battle; a middle (Ch. V-XIII) which witnesses his flight and return; and an end (Ch. XIV-XXIII) which displays his achievement of "heroism" at its climax, followed by a certain understanding of it in a coda-like final chapter. The middle and end sections are replete with notations of Fleming's psychological responses to fear, stress and courage.
The narrative is conducted on two levels, the physical setting, the outer world where the action takes place, as opposed to the emotional plane or the mind of Henry Fleming. As the story is told from H. Fleming's point of view, the two planes are fused into one unified impression. The action can be divided into five parts:
1. Henry before the battle: Chapters I-IV
2. The youth sees real fighting: Chapters V-VI
3. Chapters VII-XI relate his flight
4. Chapters XII-XIV relate how H. F. got wounded and his return to the regiment
5. Chapters XV-XXIII depict his achievement of heroism and are followed by a coda-like final chapter XXIV where H. F. reaches a certain degree of self-awareness.
The general pattern of the book is that of a journey of initiation or a spiritual journey reflecting the mental condition of one of the men in the ranks. Crane's aimas stated beforewas to carry out a detailed observation of "the nervous system under fire".
A Summary:
Chapt. I-II: the first two chapters are merely introductory chapters presenting the main characters in an anonymous way. We make the acquaintance of "the tall soldier" (2), later to be given his full identity, Jim Conklin (11); "the blatant soldier" (16), whose full name - Wilson - we do not learn until later (18); and "a youthful private" or "the youth" whose name is also revealed later on (72).
The young soldier hearing that the regiment is about to be sent into battle puts the central question:
"How do you think the reg'ment'll do?" (11). Both the hero and the regiment are untried and thus will have to face the same fate.
Ch. III: The regiment crosses the river. Henry is "about to be measured" (23). He experiences mixed feelings: he sometimes feels trapped by the regiment ("he was in a moving box", 23); the landscape seems hostile to him but his curiosity about war is stronger than his fears. Yet after his first encounter with the "red animal" (25) and a dead soldier (24), the hero begins to realize that war is not kind and more like "a trap" (25). The loud soldier gives Henry a "little packet" of letters (29).
Ch. IV: Rumors of an imminent advance run through the ranks. H and his comrades witness the retreat ("stampede", "chaos") of a first wave of Union soldiers; H. resolves "to run better than the best of them" (33) if things turn out badly.
Ch. V: This is the first picture of an actual battle in the book ("We're in for it", 35) and the protagonist goes through his baptism of fire in a kind of "battle sleep" (37). H. "becomes not a man but a member" (36) and experiences a "subtle battle brotherhood" (36) though he is somewhat disappointed by "a singular absence of heroic poses" (37). H. is compared to a "pestered animal" and "a babe" (37). Indifference of Nature (40).
Ch. VI: H. feels he has passed the test but when the enemy launches a 2 nd attack "self-satisfaction" (40) gives way to self-preservation. This is a turning-point in the novel; when H. is cut off from the community which the regiment symbolizes, he feels alone and panics. He flees from the battlefield and pities his comrades as he runs ("Methodical idiots! Machine-like fools", 45).
Ch. VII: H.'s disappointment: "By heavens, they had won after all! The imbecile line had remained and become victors" (47). Feeling "self-pity" (48) and "animal-like rebellion", Henry "buries himself into a thick woods" (48) and in the chapel-like gloom of the forest there takes place the second encounter with a dead man ("the dead man and the living exchanged a long look", 50). He believes that Nature can justify his running away (episode of the squirrel, 49).
Ch. VIII: H. meets "the spectral soldier" (54) who turns out to be Jim Conklin and "the tattered soldier" who plagues him with embarrassing questions ("Where yeh hit?" 56). The tattered soldier seems to be the embodiment of H's early idea of himself as a war hero. At this stage, Nature seems no longer comforting but hostile.
Ch. IX: Essentially concerned with the death of J. Conklin and its effect on Henry ("letters of guilt"/"He wished that he, too, had a red badge of courage", 57). Crane's realism is at its best in this chapter which comes to a close with one of the most famous images in all American literature: "The red sun was pasted in the sky like a wafer".
Ch. X: H. deserts the tattered soldier who keeps "raising the ghost of shame" (64) and comes closer to the battlefield. H. envies the dead soldiers.
Ch. XI: The men eventually retreat, which Henry interprets as a vindication of "his superior powers of perception" (70) but he nonetheless feels excluded from "the procession of chosen beings" (67) and stigmatized by "the marks of his flight" (68), hence "self-hate" (69) = "he was their murderer" (71).
Ch. XII: During the helter-skelter retreat of his comrades, H. gets struck with a rifle-butt and finally gets a wound not from the enemy but ironically from one of his fellow soldiers who is also fleeing in a panic.
Ch. XIII: Traces H's journey back to his regiment. Appearance of "the cheerful soldier" who takes him back into the fold of the community. H., who lies: "I got separated from th' reg'ment" (80), is taken care of by Wilson.
Ch. XIV: H. wakes up to "an unexpected world" (85). Wilson has meanwhile undergone a most spectacular change: he is "no more a loud young soldier" (87).
Ch. XV: H toys with the idea of knocking his friend "on the head with the misguided packet" of letters but refrains from doing so ("It was a generous thing", 93). He builds up "a faith in himself" (92); "he had fled with discretion and dignity" and fancies himself "telling tales to listeners" (93).
Ch. XVI: H. assumes the rôle of Wilson, the former "loud soldier"; he rants and raves about the officers and the army and soon gains a measure of self-confidence. But the words of a "sarcastic man" turn him into "a modest person" again (97).
Ch. XVII: The enemy advances. H., in a state of frenzy, fires his gun like an automaton; an officer has to stop him and his comrades look upon him as a "war-devil" (103). Yet his achievement is diminished by the fact that "he had not been aware of the process. He had slept and awakening, found himself a knight".
Ch. XVIII: A short pause. H and his friend on an eminence overhear a conversation between two officers who call the regiment "mule-drivers" (106).
Ch. XIX: Put on their mettle by this sarcasm, H and Wilson lead the charge and even take hold of the regiment's flag. H. eventually experiences "a temporary absence of selfishness" (110).
Ch. XX: The regiment hesitates and retreats ("the retreat was a march of shame to him", 115); the enemy is as confused as the attackers (118). H. takes a stand and his comrades eventually drive back the enemy. Enthusiasm: "And they were men" (119).
Ch. XXI: The veterans make fun of the "fresh fish" who "stopped about a hundred feet this side of a very pretty success" (122). H. and Wilson are congratulated for their gallantry, hence "the past held no pictures of error and disappointment" (125).
Ch. XXII: Self-confidence; H. bears the colors and resolves "not to budge whatever should happen" (129).
Ch. XXIII: H. and the other men charge "in a state of frenzy" (132). Wilson captures the enemy flag.
Ch. XXIV: Ironic overtones / the men must go back to where they started from; all the fighting has been to no avail. H's "brain emerges" (137) after a momentary eclipse. H. reviews "the delightful images of memory" (138), "puts the sin at a distance" (139) and feels "he was a man". By the end of the novel he seems to have acquired a new awareness of his own powers and limitations, though Crane's irony makes it hard to form a definitive opinion on the hero's progress.

Once brought down to its essentials, the plot appears as a mere journey of initiation following the ternary pattern of any basic narrative process. Some sort of formalist approach could be applied to the story. According to T. Todorov, quoting Alan Dundes, "the basic narrative process consists of an action that develops in three stages: an initial state, the process proper, and the result" (p. 85).
In his Logique du récit, Claude Brémond substitutes for this sequence a rather similar triad corresponding to the three stages marking the development of a process: virtuality, actualization, completion. He further specifies that "in this triad, the later term implies the earlier one: there can be no completion if there has been no actualization, and no actualization if there has been no virtuality. 2° the actualization of this virtuality (for instance the behaviour that responds to the incitement contained in the 'opening' situation); 3° the outcome of this action, which 'closes' the process with a success or a failure."
We could propose the following diagram for the RBC, even though it accounts for only one aspect of the work:

MOTHER ------------------------------------------------ FLAG
BOY/YOUTH ---------------------------------------------- MAN
IGNORANCE/INNOCENCE (impunity) ------------------------- KNOWLEDGE/GUILT
THE CHARACTERS
The characters in the RBC fall into two categories based on the opposition between the group or community on the one hand - say the Army and the Regiment - and the individual or rank and file on the other. As Henry's mother tells him: "Yer jest one little feller amongst a hull lot of others..." (6)
The Army/The Regiment
The image of the "monster" is the most frequently associated with the army which is first described as a "crawling reptile" (15) and compared to "one of those moving monsters wending with many feet" (15). This "composite monster" (33) is given "eyes", "feet" (1) and "joints" (42).
The army is indeed an anonymous mass of men, "a vast blue demonstration" (8), a kind of arrested or ordered chaos ready to turn loose at the 1 st word of command. This accounts for the depiction of the regiment as a "mighty blue machine" (71) which of course works "mechanically" and sometimes "runs down" (116). This idea is also conveyed by the image of the box cropping up p. 23: "But he instantly saw that it would be impossible for him to escape from the regiment. It enclosed him.
And there were iron laws of tradition and law on four sides. He was in a moving box". It is in its fold that Henry experiences a mysterious fraternity "the subtle battle brotherhood" that comes from sharing danger with other people (36):
"He suddenly lost concern for himself and forgot to look at a menacing fate. He became not a man but a member. He felt that something of which he was a part -a regiment, an army, a cause or a country -was in a crisis. He was welded into a common personality which was dominated by a single desire. For some moments he could not flee no more than a little finger can commit a revolution from a hand"
The annihilation of personality in the course of a trance-like submersion in the group-will is to be opposed to the image of separation or even amputation from the same group:
"If he had thought the regiment was about to be annihilated perhaps he could have amputated himself from it" (36)
"He felt that he was regarding a procession of chosen beings. The separation was a threat to him as if they had marched with weapons of flame and banners of sunlight. He could never be like them."
In fact, the regiment sometimes appears as a sort of mother-substitute ("His [the tattered soldier's] homely face was suffused with a light of love for the army which was to him all things beautiful and powerful" 56) and Henry's return to the regiment may be likened to a return to the womb (see ch. 13 last section).
THE PRIVATES
The RBC focuses only on three soldiers: Henry Fleming, Jim Conklin and Wilson. Although the theme of the novel is the baptism of fire of a Union private, the tone is psychological rather than military. Its main characters are neither fully representative nor yet particularly individual and are, most of the time, designated as figures in an allegory. Crane aims at a certain depersonalization of his characters: they represent "Everyman". He tried to turn the battle he described into a type, and to that effect he erased all names in the process of revision so as to give an allegorical tone to his war tale. According to M. H. Abrams's A Glossary of Literary Terms, an allegory "is a narrative in which the agents and action, and sometimes setting as well, are contrived not only to make sense in themselves but also to signify a second, correlated order of persons, things, concepts or events. There are two main types:
(1) Historical and political allegory, in which the characters and the action represent or allegorize, historical personages and events. [...]
(2) The allegory of ideas, in which the characters represent abstract concepts and the plot serves to communicate a doctrine or thesis. A well-known example of allegory is Bunyan's The Pilgrim's Progress (1678), which personifies such abstract entities as virtues, vices, states of mind, and types of character. This book allegorizes the doctrines of Christian salvation by telling how Christian, warned by Evangelist, flees the City of Destruction and makes his way laboriously to the Celestial City; on his way he encounters such characters as Faithful, Hopeful and the Giant Despair and passes through places like the Slough of Despond, the Valley of the Shadow of Death and Vanity Fair." In its development the RBC bears strong resemblance to The Pilgrim's Progress, from which it borrowed one of the oldest and most universal plots in literature: that of the Journey, by land or water. In the case of the RBC the pattern is that of a journey leading through various experiences to a moral or spiritual change. This explains why the personages are at first mere labels, archetypal characters slowly acquiring some sort of identity. Thus the novel refers to "the youth" or "youthful private" (25); "the tall soldier" (Jim Conklin, 11); "the blatant soldier" (Wilson, 165); there also appears "a piratical private" (17), "a sarcastic man" (96) and lastly "the man of the cheery voice" (78). Our analysis will be limited to the first three.
HENRY FLEMING/THE YOUTH
When Henry Fleming, motivated by dreams of glory and stirred by voices of the past and visions of future deeds of bravery, enlists in the Union forces, he severs the bonds tying him to his mother and his rural life. Once in the army he is left to himself, stranded in a new environment where he must find new bearings: "Whatever he had learned of himself was here of no avail. He was an unknown quantity. He saw that he would again be obliged to experiment as he had in early youth. He must accumulate information of himself..." (9). From the outset, Henry is confronted with the question of self-knowledge; all he cares to know is whether he is the stuff that heroes are made from: "He was forced to admit that as far as war was concerned he knew nothing of himself" (9).
Henry must in other words prove himself in his own eyes as well as in other people's and he sets about it accountant-fashion; he is forever trying to square accounts with himself and to prove by dint of calculations that he can't be a coward:
"For days he made ceaseless calculations, but they were all wondrously unsatisfactory. He found that he could establish nothing" ( 12) "He reluctantly admitted that he could not sit still and with a mental slate and pencil derive an answer." (13)
Henry is holding an eternal debate with himself and this excruciating self-examination bespeaks a Puritan conscience, beset with self-doubts and the uncertainty of Salvation: "the youth had been taught that a man becomes another thing in a battle. He saw salvation in such a change" (29).
Fleming is afraid of being weighed in the balance and found wanting and ironically enough the more he compares himself with his comrades the more puzzled he is:
"The youth would have liked to have discovered another who suspected himself. A sympathetic comparison of mental notes would have been a joy to him. [...] All attempts failed to bring forth any statement which looked in any way like a confession to those doubts which he privately acknowledged in himself" (13)
Soon Henry comes to experience a feeling of seclusion: "He was a mental outcast" (20) and suffers from lack of understanding on the part of the others: "He would die; he would go to some place where he would be understood" (29). In fact Henry has only himself to blame; he is not kept apart, he does keep himself apart from the others. Characteristically, Henry turns the tables and rationalizes his situation by convincing himself that he is endowed with superior powers: "It was useless to expect appreciation of his profound and fine senses from such men as the lieutenant" (29)
or again: "He, the enlightened man who looks afar in the dark, had fled because of his superior perceptions and knowledge".
This accounts for the repeated theme of Fleming's prophetic rôle towards his comrades and the world: "How could they kill him who was the chosen of gods and doomed to greatness?" (92).
One is often under the impression that Henry suffers from delusions of grandeur and becomes a prey to his imaginings; Henry's imagination is both an asset and a liability; it sometimes goads him into a spurious sort of heroism:
"Swift pictures of himself apart, yet in himself, came to him -a blue desperate figure leading lurid charges with one knee forward and a broken blade high a blue determined figure standing before a crimson and steel assault, getting calmly killed on a high place before the eyes of all" (67-68)
or, on the contrary, paralyses him with groundless fears:
"A little panic-fear grew in his mind. As his imagination went forward to a fight, he saw hideous possibilities. He contemplated the lurking menaces of the future, and failed in an effort to see himself standing stoutly in the midst of them" (95).
Henry is actually stalemated by his imagination; the only way out is for him to take the plunge i.e. to find out in action what he cannot settle in his imagination: "He finally concluded that the only way to prove himself was to go into the blaze, and then figuratively to watch his legs and discover their merits and faults" (12)
After a trying period of waiting, Henry sees action for the first time in his life; in the furnace of battles the youth loses all concern for himself and for a brief moment dismisses all his qualms: "He became not a man but a member" [...] "He felt the subtle battle brotherhood more potent even than the cause for which they were fighting." (36). After his short brush with the enemy, the youth goes into an ecstasy of self-satisfaction; his appreciation of his own behaviour is out of proportion with the importance of the encounter:
"So it was all over at last! The supreme trial had been passed. The red, formidable difficulties of war had been vanquished. [...] He had the most delightful sensations of his life. Standing as if apart from himself, he viewed the last scene. He perceived that the man who had fought thus was magnificent."
His self-confidence, however, is short-lived; a second attack launched against his line causes his sudden panic and flight: "There was a revelation. He, too, threw down his gun and fled. There was no shame in his face. He ran like a rabbit." (41). As befits a country boy, Henry seeks refuge in Nature and even tries to enlist her sympathy:
"Nature had given him a sign. The squirrel, immediately upon recognizing danger, had taken to his legs without ado. [...] The youth wended, feeling that Nature was of his mind. She re-enforced his argument with proofs that lived where the sun shone". ( 49)
The dialogue he carries on with his own conscience often contains overtones of legalistic chicanery: it is a constant search for excuses to justify his cowardly conduct:
"His actions had been sagacious things. They had been full of strategy. They were the work of a master's legs." (48) "He felt a great anger against his comrades. He knew it could be proved that they had been fools."
Fleming even goes to the length of wishing the army were defeated; that would provide him with "a means of escape from the consequences of his fall." (71) The use of the term "fall" bears witness to the religious undertone of his experience and the protagonist's preoccupation with moral responsibility ("He denounced himself as a villain" [...] he was their [the soldiers'] murderer" 71)
and personal redemption has led many critics to compare The Red Badge of Courage with Hawthorne's The Scarlet Letter. The same image lies at the core of the two novels; we find the "scarlet letter" on the one hand and reference to "a red badge" or "the letters of guilt [...] burned into [the hero's] brow" (57) on the other hand. Both novels deal with the discovery, through sin, of the true and authentic self, the isolation which is the consequence of that discovery, and the struggle to endure which is the consequence of that isolation. Henry knows "he will be compelled to doom himself to isolation" (71) because the community represents a possible danger from which he must take refuge ("The simple questions of the tattered man had been knife thrusts to him. They asserted a society that probes pitilessly at secrets until all is apparent" 65). The price Henry has to pay to be counted again in the ranks of his comrades is that of public acknowledgement of private sin. The price is too high and the youth will go on wandering about the battlefield.
The episode related in chapter 12 is one of the turning-points in Henry's progress. After being ironically awarded the red badge of courage he had been longing for, Henry can return to his regiment and try to achieve salvation not through introspection or public confession but simply through action. From then on the story assumes the form of a journey of expiation. Henry is described as a daredevil fighting at the head of his unit during a charge and even catching hold of the colours of the regiment. Yet these deeds of bravery are motivated by his feeling of guilt and his spite over having been called a mule driver (106). Moreover his actions are performed in a sort of battle-frenzy which leaves little room for the expression of any conscious will and determination. War teaches Henry how to be brave but also blunts his moral sense; Henry becomes an efficient soldier but one may wonder if this is a desirable achievement for a country boy. Crane's irony also casts some doubts on the very value of Henry's heroism, which appears not so much as a predictable possession - something one can just fight for - but as an impersonal gift thrust upon him by a capricious if not absurd Providence. Why was Henry, of all soldiers, spared and given the opportunity to "[...] rid himself of the red sickness of battle" (140) and to turn "with a lover's thirst to images of tranquil skies, fresh meadows, cool brooks - an existence of soft and eternal peace"? There is some irony in considering that the vistas opening up before Henry are little different from what they would have been had he stayed on his mother's farm. Is that to say that after so many tribulations Henry is back to where he started from? One may think so. Fleming went off to war entertaining delusions about its very nature and his own capacity for heroism and ends up with a naive vision of harmony with nature. Henry has undoubtedly grown more mature but he can still fool himself about the cause of his wound or the fact that he has fled from battle as much as he fooled other people.
("He had performed his mistakes in the dark, so he was still a man" 91). The fundamental question at issue is the following: does Henry Fleming develop in the course of the novel ? A quotation from p. 139 will provide us with a valuable clue:
"Yet gradually he mustered force to put the sin at a distance. And at last his eyes seemed to open to some new ways. He found that he could look back upon the brass and bombast of his earlier gospels and see them truly. He was gleeful when he discovered that he now despised them. With this conviction came a store of assurance. He felt a quiet manhood, nonassertive but of sturdy and strong blood. He knew that he would no more quail before his guides wherever they should point. He had been to touch the great death. He was a man."
The youth in his baptism of fire has acquired self-knowledge and experience but a radical change has not taken place within him; he remains in his heroic pose at the close of the novel just as grotesque as the fearful "little man" he was at the beginning. As J. Cazemajou puts it in his study of the novel: "his itinerary led him not to eternal salvation but to a blissful impasse". Henry has become a man, true enough, but a man with a guilty conscience; "the ghost of his flight" and "the specter of reproach" born of the desertion of the "tattered man" in his sore need keep haunting Fleming. His manliness has been acquired at the expense of his humanity.
The Red Badge of Courage contains the account of a half-completed redemption. It is only in a satellite story entitled "The Veteran" that Henry drinks the bitter cup to the dregs and purges himself of his former lie by confessing his lack of courage on the battlefield. The other two characters the "tall soldier" and the "blatant soldier" are mere foils for Henry Fleming.
THE TALL SOLDIER
This character is doubtless one of the most controversial figures in the novel. A small critical warfare has been waged over his rôle and function, and many interpretations have been put forward since the book saw print. One hardly expects to see the blustering, self-opinionated, smug news-monger of the introductory chapter turn into the tragic, poignant sacrificial victim of Chapters 8-9. Two critics, Robert W. Stallman and Daniel Hoffman, contend that Jim Conklin, whose initials are the same as those of Jesus Christ, is intended to represent the Saviour, whose death in a symbolic novel richly laden with Christian references somehow redeems Henry, the sinner. The key to the religious symbolism of the whole novel is, according to the former, the reference to "the red sun pasted in the sky like a wafer" (62). For Stallman, the "wafer" means the host, the round piece of bread used in the celebration of the Eucharist, and thus Jim Conklin becomes a Christ-figure. The likeness is reinforced by the fact that Jim's wounds conjure up the vision of the stigmata; he is wounded in the side ("the side looked as if it had been chewed by wolves," 62) and his hands are bloody ("His spare figure was erect; his bloody hands were quiet at his side," Ibid.). During the solemn ceremony of his death agony, Henry "partakes of the sacramental blood and body of Christ, and the process of his spiritual rebirth begins at the moment when the waferlike sun appears in the sky. It is a symbol of salvation through death" (An Omnibus, 200). It goes without saying that such a highly personal Christian-symbolist reading of the novel is far from being unanimously accepted by other Crane scholars. Edwin H. Cady, among others, regards Conklin as a mere "representation of the sacrificed soldier [...] occupying in the novel a place equivalent to that of the Unknown Soldier in the national pantheon" (140). His death simply provides Crane with a new occasion to expose the savagery of war and furnishes a dramatic counterpoint to Henry's absorption in his own personal problems.
Another interpreter of the novel, Jean Cazemajou, rejects Stallman's thesis and suggests a more matter-of-fact reading. The "tall soldier", he argues, does not undergo any significant change as the story unfolds. Unlike Henry, Jim Conklin is not tormented by fears and self-doubts; he adjusts to his soldier's life with much philosophy:
"He accepted new environment and circumstance with great coolness, eating from his haversack at every opportunity. On the march he went along with the stride of a hunter, objecting to neither gait, nor distance." (28)
Even if Henry often looks to him for advice and guidance J. Conklin is no embodiment of the Redeemer. The French critic sees in this backwoodsman, led by instinct and immune to unpleasantness, a kind of ironic version of the traditional hero, some sort of natural man plunged into the furnace of war. Jim Conklin's messianic rôle is rather played down in Cazemajou's interpretation; J. Conklin's example has little impact on Henry's behaviour. Far from confessing his cowardice after Jim's death, Henry merely tries to conceal it from his comrades and the sight of the gruesome danse macabre of the spectral soldier triggers off in the youth a certain aggressiveness which is but fear or panic in a different guise. As for the wafer image, it is just an evocation of primitive rites to a blood thirsty God reminiscent of the Aztec religion.
Whatever the significance of "the tall soldier" may be, The Red Badge of Courage would lose much of its uniqueness without the haunting image of the "spectral soldier" stalking stonily to the "rendez-vous" where death is awaiting him.
THE LOUD SOLDIER : WILSON
Wilson is a much less difficult character to deal with. His evolution from a bragging swashbuckler to a clear-sighted, modest combatant is almost the exact opposite of Henry's progress. In chapter 3 Wilson appears as a soldier spoiling for a fight:
"I don't mind marching, if there's going to be fighting at the end of it. What I hate is this getting moved here and moved there, with no good coming of its as far as I can see, excepting sore feet and damned short rations." (185).
He evinces great confidence in himself: "[...] I'm not going to skedaddle. The man that bets on my running will lose his money that's all." (19) Yet on hearing the first noises of battle, the loud soldier's confidence crumbles away and in a moment of weakness and self-pity he confesses his forebodings to Henry and entrusts him with a packet of letters: "It's my first and last battle, old boy.
[...] I'm a gone coon this first time and-and I w-want you to take these here things-to-my-folks". ( 29)
Unlike Henry, however, Wilson bears up well under the strain and passes his initiation successfully and acquires in the process of time a quiet manhood which Fleming is to attain only at the end of the battle:
"He was no more a loud young soldier. There was about him now a fine reliance. He showed a quiet belief in his purpose and his abilities. And this inward confidence evidently enabled him to be indifferent to little words of other men aimed at him" (87).
In the same way, the realisation of his own insignificance is sooner borne in upon him than is the case with Henry: "Apparently, the other had now climbed a peak of wisdom from which he could perceive himself as a very wee thing" 87). Such transformation is emphasized by the reference to the chasm separating Wilson's former self from his present condition ("He spoke as after a lapse of years" 88). In spite of the fact that Henry takes a kind of sadistic pleasure in his friend's embarrassment over getting back his letters, the two characters are drawn together and share the same experiences. They form one of those countless couples of males which are to be found in American Literature (Huck and Jim; George Milton and Lennie Small, etc.); their relations are always slightly tinged with homosexuality (cf. 83). Wilson's bombast and fiery spirits vanish for a time and when he is seen to make pacific motions with his arms and to assume the rôle of a peacemaker among his comrades (89) it comes as something of a surprise. But that peaceful mood does not last long and just like Henry Wilson is carried away by the battle-fury that seizes the regiment.
His daring reaches a climax with the capture of the enemy's flag ("The youth's friend went over the obstruction in a tumbling heap and sprang at the flag as a panther at prey" 134). The admiration of their comrades after their courageous attitude under enemy fire kindles the same feeling of elation in both of them: "they knew that their faces were deeply flushing from thrills of pleasure. They exchanged a secret glance of joy and congratulation" (125). Although they are running in opposite directions, Henry's and Wilson's itineraries lead up to the same ambiguities, for their heroism is nothing but the ordinary stock of courage among fighting men and is therefore of uncertain value or meaning. The ease with which they forget the conditions under which they acquired their courage diminishes them: "They speedily forgot many things. The past held no pictures of error and disappointment. They were very happy, and their hearts swelled with grateful affection for the colonel and the youthful lieutenant." (125)
As is often the case with Crane a strong ironic coloring can easily be detected here and he remains faithful to his intention never to point out any moral or lesson in his stories.
THEMES OF THE RED BADGE OF COURAGE
On the surface the RBC is a simple tale of warfare yet the scope of the story is greatly enlarged by the themes which run through the narrative and deepen its implications. This attempted survey of the themes underlying the novel is far from being comprehensive and will be limited to four of them, namely: the theme of War, the theme of Nature, the theme of the Sacred and the motif/ theme of Vision.
1) THE THEME OF WAR
The novel describes not only the war waged by the Yankees and the Rebs but above all "the self-combat of a youth" who must prove himself in battle. Henry the protagonist is involved in more than one battle; he fights against the enemy, true enough, but as the original title for the book suggests, he also fights private battles with himself. In the author's own words: "Doubts and he were struggling." (68)
Henry's fight sometimes achieves cosmic reaches when he comes to feel that "he was engaged in combating the universe" (73) or again ("Yesterday, when he had imagined the universe to be against him, he had hated it" 100). Thus we have war at every level; even Nature proves to be a vast field of battle where one has to fight for survival, and this is quite in keeping with Crane's deep-rooted conviction - which lies at the heart of all his fiction - that the essence of life is warfare. The RBC is holding no brief for war; Crane wanted not only to give an account of an actual battle but also to deal with war in the abstract, i.e. to debunk the concept of war that had gained currency in his time. War was then considered as a kind of social phenomenon partaking of the Sacred, a ritual game with fixed rules and much glamour. Whether one was waging or depicting war, decorum was always to be preserved. By constantly resorting to irony or the grotesque Crane exposes the seamy side of war and deglamorizes it. The soldiers don't even know why the war is fought; they are led into battle like a flock of sheep to the slaughterhouse; the leaders treat their men like animals. In short, the soldiers are just material that can be used without regard for their lives, mere cannon-fodder.
The main feature in this study of war is disillusionment; after two days' battle the men march back to their starting-place, all their sacrifices have been to no avail. War is meaningless even if it sometimes appears as a testing-ground of man's courage and stamina but it is not purposeless. Crane anticipated modern theories on the nature of war when he described it as a periodically organized wastage, a process meant to dispose of surplus goods and human lives; in a former version of the text Henry declared: "War, he said bitterly to the sky, was a makeshift created because ordinary processes would not furnish deaths enough" (Cady 128) Despite these strictures, the narrative sometimes lifts war to the plane of the cosmic and the mythic: the battle sometimes appears as a re-enactment of Greek or Biblical struggles. Thus Crane reactivates the myth of Cadmos through references to Homeric struggles and images of dragons or of men springing fully armed from the earth: "The sun spread disclosing rays, and, one by one, regiments burst into view like armed men just born of the earth" ( 23).
The "red and green monster" looming up (43) bears close kinship to Appollyon the Biblical archetype described in Revelations. Yet, in spite of those references to bygone battles and heroic times, Henry's adventure ends neither in victory nor even in the sacrifice of his life; Henry's stature never exceeds that of a man. In fact, it might be argued that Henry develops into some sort of ironic anti-hero figure since the spirit of rebellion he evinces at the beginning of the story is by and by replaced by a more pliant disposition, a readiness to accept things as they are: "He knew that he would no more quail before his guides wherever they should point" (139). Linked to the theme of war are other attendant motifs such as the theme of courage; the theme of fear and last but not least the wound-motif. As these have already been dealt with, we'll just skip them here and proceed to a discussion of the other three.
2) THE THEME OF NATURE
Nature images pervade the narrative and a reference to Nature ends the book as one had begun it. Throughout the story, Nature appears as a kind of "objective correlative" of Henry's emotions. All that Henry can find in Nature is a reflection of his own psychological state, continually wavering between enthusiasm and anxiety. Henry will be slow in realising the indifference or even hostility of Nature to Man. Meanwhile he constantly appeals to her for comfort, proofs and justifications of his behaviour: "This landscape gave him assurance. A fair field holding life. It was the religion of peace. [...] He conceived Nature to be a woman with a deep aversion to tragedy." (49)
Yet even if Nature "is of his mind" or "gives him a sign", there is no affinity, let alone complicity, between Man and Nature. Whatever Henry learns from Nature at one moment is contradicted in the next by the selfsame Nature (cf. the squirrel and gleaming fish episode 50). Much as he should like to become a prophet of that "religion of peace" that Nature apparently advocates, Henry is forced to admit that he's a victim of false appearances -Nature does not care about Man's fate:
"As he gazed around him the youth felt in a flash of astonishment at the blue, pure sky and the sun gleamings on the trees and fields. It was surprising that Nature had gone tranquilly on with her golden process in the midst of so much devilment" (40)
This astonishing revelation culminates in Henry's realization of his own littleness and insignificance: "New eyes were given to him. And the most startling thing was to learn suddenly that he was very insignificant" (107). Nature can do without man even if the opposite isn't true; this startling discovery is most vividly expressed by one of the characters in "The Open Boat" (An Omnibus 439):
"When it occurs to a man that nature does not regard him as important, and that she feels she would not maim the universe by disposing of him, he at first wishes to throw bricks at the temple, and he hates deeply the fact that there are no bricks and no temples."
Man is thus made the plaything of Nature; his actions are performed under the pressure of the natural environment and the physical forces surrounding him. Nature is no veiled entity holding any revelation for man; she does not manifest any divine Presence. In its midst man is just a pawn to multiple compulsions; this aspect justifies the reference to naturalism (a point we will discuss later on) which has been made in connection with Crane's novel.
3) THE THEME OF THE SACRED
Though the protagonist never seeks comfort in religion, the RBC can't be read outside a religious tradition. The narrative abounding in Biblical references and religious symbols describes the sufferings of an unquiet soul assailed by the classic doubt of the Calvinist: "Am I really among the elect ?" That question is of course transposed in different terms but one is struck by the parallelism between Henry's situation and his Puritan forebears':
"He felt that he was regarding a procession of chosen beings. The separation was as great to him as if they had marched with weapons of flame and banners of sunlight. He could never be like them. He could have wept in his longings." (67) This similarity is also reinforced by the fact that, since the Middle Ages, the spiritual life of man has often been likened to a pilgrimage or a battle ("[...] a man becomes another thing in a battle. He saw his salvation in such a change". 29 / "[...] with a long wailful cry the dilapidated regiment surged forward and began its new journey !" 113)
As is seen in the last quotation the progress of the group parallels that of the solitary hero, H.
Fleming. The RBC presents a new version of the Fall of man entrapped by his curiosity and vanity and a new process of Election through suffering and wounds. Henry's moral debate also points up his religious upbringing. His conscience is racked by remorse and he considers himself as a murderer because he deserted his friends and thus holds himself responsible for the death of his comrades:
"His mind pictured the soldiers who would place their defiant bodies before the spear of the yelling battle fiend and as he saw their dripping corpses on an imagined field he said that he was their murderer." 71
Henry awaits retribution because if he sometimes finds some comfort in the fact that he did his misdeed in the dark ("He had performed his mistakes in the dark, so he was still a man." 91), he knows in his heart of hearts that all his actions are performed under the gaze of a dreadful God whose attention nothing escapes. This supreme Judge is symbolized by the haunting memory of Henry's mother ("I don't want yeh to ever do anything, Henry that yeh would be ashamed to let me know about. Jest think as if I was a-watching yeh." 6) and the collective consciousness of the regiment ("an encounter with the eyes of judges" 91).
4) THE THEME OF VISION
Henry's experiences are essentially expressed in terms of vision and one might say quoting
Emerson that in the RBC the hero undergoes "a general education of the eye"/"I". The novel traces the protagonist's evolution from a certain way of world-watching to a more active stance; the visions of a naive, innocent eye will be replaced by those of a more conscious eye. Henry goes to war out of curiosity, because in the author's own words "he ha[s] longed to see it all." (4) and he is stirred by visions of himself in heroic poses. The motif of the "eye" is stressed from the outset; the opening description of the novel sets going a train of images that will run through the narrative and stresses a fundamental opposition between "seeing" and "being seen"; the active and passive poles:
"From across the river the red eyes were still peering." ( 14)
"The mournful current moved slowly on, and from the water, shaded black, some white bubble eyes looked at the men." ( 23)
Henry is at first some sort of onlooker. After the short encounter with the enemy, he sees himself in a new light (41) and comes to believe he is endowed with superior powers of perception, naturally translated in visual terms: "There was but one pair of eyes in the corps" (25) / "This would demonstrate that he was indeed a seer." (70) This accounts for the prophetic rôle which he believes to be his. In the chapel-like forest the perspective changes; there is a reversal of situations: the onlooker is now looked at by somebody else: "He was being looked at by a dead man" (50). From now on Henry will have to face his comrades' probing eyes: "In imagination he felt the scrutiny of his companions as he painfully laboured through some lies" (68) / "Wherever he went in camp, he would encounter insolent and lingeringly-cruel stares." (72) Quite characteristically, Henry puts all his fears out of sight and indulges in daydreaming and gratifying pictures of himself (69) where he is seen to advantage: "Swift pictures of himself, apart, yet in himself, came to him. [...] a blue determined figure [...] getting calmly killed on a high place before the eyes of all." (68) Henry's hour of glory comes after his capture of the flag; he is then the cynosure of every eye: "[...] they seemed all to be engaged in staring with astonishment at him. They had become spectators." (102); "He lay and basked in the occasional stares of his comrades" (103). The roles are now reversed; the spectator has himself become an object of admiration; Henry is now looked upon as a hero.
By going through fire and water Henry gains a new awareness, undergoes a radical change which is of course expressed in a new angle of vision: "New eyes were given to him. And the most startling thing was to learn suddenly that he was very insignificant" (106).
The youth develops a new way of observing reality; he becomes a spectator again but can see everything in the right perspective, in its true light:
"From this present view point he was enabled to look upon them (his deeds, failures, achievements) in spectator fashion and to criticise them with some correctness, for his new condition had already defeated certain sympathies" (137)
So, the wheel has come full circle: "He found that he could look back upon the brass and bombast of his earlier gospels and see them truly." (139)
Nevertheless the hero remains in the end as susceptible as ever to the lure of images and the circular structure of the novel is brought out by the last vision Henry deludes himself with: "He turned now with a lover's thirst to images of tranquil skies, fresh meadows, cool brooks - an existence of soft and eternal peace."
IMAGES AND SYMBOLS IN THE RED BADGE OF COURAGE
One of the most original facets of Crane's war novel is its consistent use of manifold images and symbols which sometimes give this story the style of a prose poem. The overall tonality of the novel owes much of its uniqueness to the patterns of natural, religious or even mechanistic imagery created by the author. Our study will be limited to these three sets of images.
Nature and animal images
The protagonist's rural background is made manifest by the impressive number of scenes or similes referring to animals and forming a conventional bestiary by the side of a Christian demonology swarming with monsters directly borrowed from Greek or Biblical literatures. The RBC evinces a truly apocalyptic quality in the description of War which is compared to "a red animal" (75)
"gulping [thousands of men] into its infernal mouth" (45). The two armies are, more often than not, associated with images of "monster" (14/33), "serpent" (15) and "dragons" (44); their encounter is likened to that of two panthers ("He conceived the two armies to be at each other panther fashion." 51). The panther and the eagle seem to be the only two animals suggestive of some war-like qualities; the other animals appearing in the novel are more tame (mainly farm animals) and as a general rule carry different connotations: the soldiers are led into battle like "sheep" (108-111), "they are chased around like cats" (98), to protect themselves from the bullets "they dig at the ground like terriers" (26) and after charging like "terrified buffaloes" (72) they are eventually "killed like pigs" (25). The hero's lack of courageand staturein the face of danger is suggested by comparisons referring to even smaller animals: the rabbit (43), the chicken (44) and the worm, the most despicable of all ( "He would truly be a worm if any of his comrades should see him returning thus, the marks of his flight upon him.", 68). Henry's companions are also likened to animals: Wilson to a "coon" (29) and then to a "panther" (134); Jim Conklin is once associated to the "lamb" during the death scene -a fitting symbol of peacefulness for the most philosophical of characters ("[...] the bloody and grim figure with its lamblike eyes" 55).
Most of these images illustrate the fact that war brings out the most primitive impulses and instincts in man and reduces him to the level of a beast ("he [the youth] had been an animal blistered and sweating in the heat and pain of war." 140) or again, (103) "he had been a barbarian, a beast". In the midst of the fight Henry experiences "an animal like rebellion" (48) and in a regressive response to fear and shame he develops "teeth and claws" to save his skin. Death is even described as an animal lurking within the soldiers; when Jim Conklin dies "it was as if an animal was within and was kicking and tumbling furiously to be free." (61)
The flag, of all things pertaining to the army, is the only one to be endowed with positive connotations; it is evocative of a "bird" (40), of a "woman" (113) and of a "craved treasure of mythology" (133).
The same feminine qualities are sometimes attributed to Nature but this is not consistent throughout the novel as we've already seen. The main characteristic of Nature whether in the shape of the Sun or the Sky, the Forest or the Fields is indifference pure and simple. The only revelation in store for Henry when he yields to Nature's "beckoning signs" is that of a "charnel place" (85).
Religious imagery
As a P. K. (preacher's kid) brought up in a religious atmosphere, Stephen Crane commanded an impressive stock of religious and Biblical references. As J. Cazemajou points out in his short but thought-provoking study of Stephen Crane:
He was deeply conscious of man's littleness and of God's overbearing power. Man's wanderings on the earth were pictured by him as those of a lonely pilgrim in a pathless universe. Crane's phraseology comes directly from the Bible, the sermons, and the hymns which had shaped his language during his youth. The topography of his stories, where hills, mountains, rivers, and meadows appear under symbolic suns or moons, is, to a large extent, an abstraction fraught with religious or moral significance. [...] In Crane's best work the imagery of the journey of initiation occupies a central position and reaches a climactic stage with some experience of conversion. He did not accept, it is true, the traditional interpretation of the riddle of the universe offered by the Methodist church.
Nevertheless he constantly used a Christian terminology, and the thoughts of sin inspired his characters with guilty fears and stirred up within them such frequent debates with a troubled conscience that it is impossible to study his achievement outside a religious tradition (37). This goes a long way toward explaining the following images:
- Nature = Church
The narrator mentions "the cathedral light of the forest" (26), then its "aisles" (27); "a mystic gloom" (14) enshrouds the landscape; the trees begin "softly to sing a hymn of twilight" (51); the hush after the battle becomes "solemn and churchlike" (127) and "the high arching boughs made a chapel [...] where there was a religious half-light" (50).
- Henry's experiences = a spiritual itinerary
Henry is "bowed down by the weight of a great problem" (14); later he is crushed by "the black weight of his woe" (67). The change that war is going to bring about in him is tantamount to salvation ("He saw his salvation in such a change" 27). The reader often comes across such telling words as: "revelation" (43); "guilt" (139); "sin" (Ibid.); "retribution" (91); "letters of guilt" (57); "shame" (64); "crime" (65); "ceremony" (61); "chosen beings" (67); "prophet" (70) and so on.
- War = a blood-swollen God
"The war-god" (45) is one of the central figures in the RBC and the soldiers often appear as "devotee(s) of a mad religion, blood-sucking, muscle-wrenching, bone-crushing" (60).
The list could be lengthened at will; it is however abundantly clear that the religious vein is one of the most important in the novel. It is nevertheless worth noting that there is no revelation in the RBC, no "epiphany" in its usual sense of a manifestation of God's presence in the world.
Mechanistic images
The RBC is a faithful, though oblique, reflection of the era in which it was written; it expresses certain doubts about the meaning of individual virtue in a world that has suddenly become cruel and mechanical. The Civil War brought about in both armies "a substitution of mechanical soldierly efficiency for an imaginary chivalric prowess" (Crews, XVIII). This evolution is made manifest by the numerous mechanistic images interspersed in the narrative: there is first the image of the "box" (23), then the "din of battle" is compared to "the roar of an oncoming train" (29); the enemy is described in terms of "machines of steel. It was very gloomy struggling against such affairs, wound up perhaps to fight until sundown" (43). One sometimes gets the impression that war is described as if it were a huge factory; this is apparent on p. 127: "[...] an interminable roar developed. To those in the midst of it it became a din fitted to the universe. It was the whirring and thumping of gigantic machinery." Or again p. 53: "The battle was like the grinding of an immense and terrible machine to him. Its complexities and powers, its grim processes, fascinated him. He must go close and see it produce corpses."
Even the bullets buff into the soldiers "with serene regularity, as if controlled by a schedule" (117). In "the furnace roar of battle" men are turned into "methodical idiots! Machine-like fools!" (45) and the regiment, after the first flush of enthusiasm, is like "a machine run down" (116). All these images contribute to a thorough debunking of war, which no longer appears kind and lovely.
STYLE AND TECHNIQUE
The RBC is notable for its bold innovations in technique and the fact that its method is all and none. Never before had a war tale been told in this way; as a consequence, Crane's art has stirred up endless debates about whether the author was a realist, a naturalist or an impressionist. All possible labels have, at one time or another, been stamped upon Crane as an artist; we'll use them as so many valuable clues in appraising Crane's style and technique.
Point of view
According to the famous critic Percy Lubbock (The Craft of Fiction), "point of view" means "the relation in which the narrator stands to the story". In the RBC, the narrator's point of view is through Fleming's eyes, i.e. the reader sees what and as Henry does, though he is never invited to identify wholly with him. In fact, the narrative in the novel is conducted in a paradoxical way because if the voice we hear is that of some sort of third-person objective narrator, the point of view is located at almost the same place as if this were a first-person narrative: just behind the eyes of Henry Fleming. The description of the hut on page 3 is a case in point:
"He lay down on a wide bunk that stretched across the end of the room. In the other end, cracker boxes were made to serve as furniture. They were grouped about the fireplace. A picture from an illustrated weekly was upon the log walls and three rifles were paralleled on pegs."
The scene is described as if Henry were taking it in, whereas the panorama depicted in the opening section of the novel is attributable to a detached narrator standing in a godlike position above the landscape: "The cold passed reluctantly from the earth, and the retiring fogs revealed an army stretched out on the hills, resting. As the landscape changed from brown to green, the army awakened, and began to tremble with eagerness at the noise of rumors... etc." (1)
The reader, however, is not just shown things from the outside as they impinge upon the senses of an unknown observer; he sometimes goes "behind the scenes" and even penetrates into the inner life of the protagonist: "A little panic-fear grew in his mind. As his imagination went forward to a fight, he saw hideous possibilities. He contemplated the lurking menaces of the future, and failed in an effort to see himself standing stoutly in the midst of them. He recalled his visions of broken-bladed glory, but in the shadow of the impending tumult he suspected them to be impossible pictures." (9)
Here the character is interiorized; the author allows himself the liberty of knowing and revealing what the hero is thinking and feeling. This is quite in keeping with the author's intention to study the reactions of "a nervous system under fire".
Another interesting and arresting aspect of Crane's utilization of point of view is his "cinematic" technique. Crane sometimes handles point of view as if it were a movie camera, thus anticipating the devices of this new art form. A certain number of particular effects can be best described
in terms of shooting (camera-work) e.g. after the panning shot setting the stage for future action (beginning of the novel) we have a close-up (Henry is picked out in the mass of soldiers p. 2: "There was a youthful private, etc.") and then a flashback when the story shifts from the present to an earlier part of the story (parting scene with Henry's mother, 4). Numerous examples are to be found throughout the narrative.
The above remarks refer to the narrator's point of view, but a second point of view functions in the narrative: the author's (four points of view interact in a novel: the author's, the narrator's, the character's or characters', and last but not least the reader's; their importance varies from one novel to another). The author's point of view manifests itself in the use of irony, which constantly betrays an ambiguous presence. Who are the following appreciations ascribable to?
"The music of the trampling feet, the sharp voices, the clanking arms of the column near him made him soar on the red wings of war. For a few moments he was sublime." ( 68)
"He drew back his lips and whistled through his teeth when his fingers came in contact with the splashed blood and the rare wound," (
"And for this he took unto himself considerable credit. It was a generous thing," (93)
As one of Crane's critics rightly points out, "Irony is not an ideal but a tactic. It's a way of taking the world slantwise, on the flank" (Cady, 90), in order to better expose its weaknesses and its flaws. Crane's irony was the basis of his technique; he levelled it at all abstract conventions and pomposities; at God, then at Man and at the Nation. There are, of course, several types of irony, but according to M. H. Abrams (A Glossary of Literary Terms): "In most of the diverse critical uses of the term 'irony', there remains the root sense of dissimulation, or of a difference between what is asserted and what is actually the case". The duplicity of meaning is the central feature of irony. The RBC sometimes exhibits structural irony, i.e. it uses a naive hero or narrator "whose invincible simplicity leads him to persist in putting an interpretation on affairs which the knowing reader - who penetrates to, and shares, the implicit point of view of the authorial presence behind the naive person - just as persistently is able to alter and correct." Cf. page 64:
"Yeh look pretty peek-ed yerself," said the tattered man at last. "I bet yeh "ve got a worser one than yeh think. Ye'd better take keer of yer burt. It don't do t'let sech things go. It might be inside mostly, an" them plays thunder. Where is it located ?"
The reader knows at this stage that the hero has escaped unscathed and does not deserve such eager care.
Cosmic irony is naturally at work in the RBC; it involves a situation "in which God, destiny, or the universal process, is represented as though deliberately manipulating events to frustrate and mock the protagonist" (83). Cf. p. 47:
The youth cringed as if discovered in a crime. By heavens, they had won after all ! The imbecile line had remained and become victors. He could hear the cheering [...] He turned away amazed and angry. He felt that he had been wronged. [...] It seemed that the blind ignorance and stupidity of those little pieces had betrayed him. He had been overturned and crushed by their lack of sense in holding the position, when intelligent deliberation would have convinced them that it was impossible
Irony is inherent in the very structure of the narrative since at the end of the story, after three days' battle, the army must recross the river from where the attack was launched at the beginning.
The whole tumult has resulted in no gain of ground for the Union forces: "'I bet we're goin' t' git along out of this an' back over th' river,' said he." (136)
As is customary with Crane, man is involved in an absurd and tragic situation which highlights his insignificance and the ridiculousness of his efforts.
Impressionism
Although J. Conrad defined criticism as "very much a matter of vocabulary very consciously used", one is hard put to it to give a satisfactory definition of the terms "impressionism" and "impressionistic". It's nevertheless a fact that the epithet "impressionistic" was often applied to Crane's work in the author's lifetime. The term comes from the French impressionist painters who rebelled against the conventions and the official conservatism of their time. They determined to paint things as they appeared to the painter at the moment, not as they are commonly thought or known to be.
Such a tendency was of course at variance with the tenets of realism, for realism demanded responsibility to the common view whereas impressionism demanded responsibility only to what the unique eye of the painter saw. As one critic pointed out, "the emphasis on the importance of vision is perhaps the only common denominator between this style of painting and Crane's art" (Cady). The credo of literary impressionists is, if we are to believe Conrad, "by the power of the written word to make you hear, to make you feel - it is, before all, to make you see." The best account of the so-called impressionistic technique of the author has been given by the critic Robert W. Stallman, whom I'll quote extensively:
Crane's style is prose pointillism. It is composed of disconnected images, which coalesce like the blobs of colour in French impressionist paintings, every word-group having a cross-reference relationship, every seemingly disconnected detail having interrelationship to the configurated whole.
The intensity of a Crane work is owing to this patterned coalescence of disconnected things, everything at once fluid and precise. A striking analogy is established between Crane's use of colors and the method employed by the impressionists and the neo-impressionists or divisionists, and it is as if he had known about their theory of contrasts and had composed his own prose paintings by the same principle. [...] Crane's perspectives, almost without exception, are fashioned by contrasts - black masses juxtaposed against brightness, colored light set against gray mists. At dawn the army glows with a purple hue, and "In the eastern sky there was a yellow patch like a rug laid for the feet of the coming sun; and against it, black and pattern-like, loomed the gigantic figure of the colonel on a gigantic horse" (239). (An Omnibus, 185-86).
Crane's striving after effects of light and shade and his concern for the effect of light on color are evidenced by the way he uses adjectives (or nouns used adjectivally). Crane actually paints with words and adjectives just as artists paint with pigments and this brings about most astounding effects and associations. Thus "red", "blue", "black" and "yellow" are the dominant hues of the novel: the regiment is "a vast blue demonstration" (8); the youth must confront "the red formidable difficulties of war" (41); he labours under "the black weight of his woe" (67) or is carried away "on the red wings of war" (68) then he walks "into the purple darkness" (76) before traversing "red regions" (134) resonant with "crimson oaths" (138). Some descriptions are saturated with visual impressions:
There was a great gleaming of yellow and patent leather about the saddle and bridle. The quiet man astride looked mouse-colored upon such a splendid charger. (46)
The clouds were tinged an earthlike yellow in the sunrays and in the shadow were a sorry blue. The flag was sometimes eaten and lost in this mass of vapour, but more often it projected sun-touched, resplendent. ( 42)
The reader also comes across a few examples of synaesthesia (association of sensations of different kinds; description of one kind of sensation in terms of another; color is attributed to sounds, odor to colors, sound to odors, and so on): "shells exploding redly" (31) / "a crimson roar" (51) / "red cheers" (53)
Pathetic fallacy
Another feature of Crane's stylea possible outgrowth of his interest in visionis the use of pathetic fallacy or ascription of human traits to inanimate nature. There's in Crane's work a strong tendency towards personification or anthropomorphism. The narrative teems with notations giving a dynamising, anthropomorphic quality to the object described. Here are a few telling examples:
"The guns squatted in a row like savage chiefs. They argued with abrupt violence." (40)
"The cannon with their noses poked slantingly at the ground grunted and grumbled like stout men, brave but with objections to hurry." (46)
"Trees confronting him, stretched out their arms and forbade him to pass" (52)
"Some lazy and ignorant smoke curled slowly." (118)
The youth listens to "the courageous words of the artillery and the spiteful sentences of the musketry" (53); the waggons are compared to "fat sheep" (66) and the bugles "call to each other like brazen gamecocks." (86).
Crane rarely aims at giving the reader a direct transcript of experience; reality is always pervaded by a strong subjective coloring which gives it unexpected and sometimes disquieting dimensions.
Realism and naturalism
There now remains for us to examine, however briefly, the realistic and naturalistic interpretations of Crane's work. Realism is perhaps, of all the terms applied to Crane, the most suitable.
In a letter to the editor of Leslie's Weekly Crane wrote that:
"I decided that the nearer a writer gets to life the greater he becomes as an artist, and most of my prose writings have been toward the goal partially described by that misunderstood and abused word, realism."
What is meant by realism ? According to M. H. Abrams it is a special literary manner aiming at giving:
"the illusion that it reflects life as it seems to the common reader. [...] The realist, in other words, is deliberately selective in his material and prefers the average, the commonplace, and the everyday over the rarer aspects of the contemporary scene. His characters therefore, are usually of the middle olass or (less frequently) the working-class people without highly exceptional endowments, who live through ordinary experiences of childhood, adolescence, love, marriage, parenthood, infidelity, and death; who find life rather dull and often unhappy, though it may be brightened by touches of beauty and joy; but who may, under special circumstances, display something akin to heroism."
The RBC is obviously not consistently realistic yet one cannot read far into it without discovering a few distinctly realistic touches whether it is in the depiction of life in the army with its constant drilling and reviewing or in the emphasis on the commonplace and unglamorous actions.
Realism is also to be found in the style of the book when S. Crane reproduces the language heard at the time with its numerous elisions and colloquial or even corrupt expressions:
"'What's up, Jim?' 'Th' army's goin' t'move.'
'Ah, what yeh talkin' about? How yeh know it is?' 'Well, yeh kin b'lieve me er not, jest as yeh like. I don't care a hang.'" (2) This is quite different from the style used for the narrative and the descriptions; this second style is characterized by its original figures of speech, its use of adjectives and its impressionistic technique (see examples in the above section). In fact, Crane's work manages a transition from traditional realism (with the extended and massive specification of detail with which the realist seeks to impose upon one an illusion of life) to an increasingly psychological realism. The emphasis on detail is characteristic of the realistic method of writing, as was demonstrated by R. Jakobson in Fundamentals of Language (77-78): "Following the path of contiguous relationships, the realistic author metonymically digresses from the plot to atmosphere and from the characters to the setting in space and time." Crane's originality however lies in the fact that he held realism to be a matter of seeing, a question of vision. The novelist, he wrote, "must be true to himself and to things as he sees them."
What about Crane's alleged naturalism ? Naturalism is "a mode of fiction that was developed by a school of writers in accordance with a special philosophical thesis. This thesis, a product of post-Darwinian biology in the mid-nineteenth century, held that man belongs entirely in the order of nature and does not have a soul or any other connection with a religious or spiritual world beyond nature; that man is therefore merely a higher-order animal whose character and fortunes are determined by two kinds of natural forces, heredity and environment." (Abrams, 142)
True, Crane makes use of animal imagery (especially to describe Henry Fleming's panic syndrome) but he ignores the last two factors mentioned in the above definition. The only naturalistic element studied by Crane is aggressive behaviour and a certain form of primitiveness, but his utilization of such material is quite different from Zola's; Crane's naturalism is more descriptive than illustrative and conjures up a moral landscape where the preternatural prevails.
The least one can say at the close of this survey of the many and various responses The RBC has aroused in critics is that the novel is much more complex than one would imagine at first glance. Such a multiplicity of widely diverging critical appreciations points to one obvious conclusion, namely, that "in art, in life, in thought, [Crane] remained an experimenter, a seeker of rare, wonderful gifts, an apprentice sorcerer." (Cady, 46)
- Q/W/E/R/T/Y, n° 4, octobre 1994 (PU Pau)
BIBLIOGRAPHY
- CADY, Edwin H. Stephen Crane. New Haven: College & University Press, 1962.
- CAZEMAJOU, Jean. Stephen Crane. University of Minnesota Pamphlets on American Writers, n° 76. Minneapolis, 1969.
- STALLMAN, Robert W. Stephen Crane: An Omnibus. New York: Alfred A. Knopf, 1970.
- GONNAUD, M., J. M. Santraud and J. Cazemajou. Stephen Crane: Maggie, A Girl of the Streets & The Red Badge of Courage. Paris: Armand Colin, 1969.
- PROFILS AMÉRICAINS: Les Écrivains américains face à la Guerre de Sécession, n° 3, 1992, CERCLA, Université P. Valéry, Montpellier III.
-Nature = Church
The narrator mentions "the cathedral light of the forest" (26) then its "aisles" (27); "a mystic gloom" (14) enshrouds the landscape; the trees begin "softly to sing a hymn of twilight" (51); the hush after the battle becomes "solemn and churchlike" (127) and "the high arching boughs made a chapel [...] where there was a religious half-light (50).
-Henry's experiences = a spiritual itinerary
Henry is "bowed down by the weight of a great problem" (14) later he is crushed by "the black weight of his woe" (67). The change that war is going to bring about in him is tantamownt to salvation ("He saw his salvation in such a change" 27). The reader often comes across such telling words as: "revelation" (43) "guilt" (139); "sin" (Ibid.) "retribution" (91); "letters of guilt" (57); "shame" (64); "crime" (65); "ceremony" (61); "chosen beings" (67); "prophet" (70) and so on.
-War = A blood-swollen God "The wargod" ( 45) is one of the central figures in the RBC and the soldiers often appear as "devotee(s) of a mad religion, blood-sucking, muscle-wrenching, bone-crushing" (60).
The list could be lengthened at will; it is however abundantly clear that the religious vein is one of the most important in the novel. It is worth noting yet that there is no revelation in The RBC, no "epiphany" in its usual sense of a manifestation of God's presence in the world.
Mechanistic images
The RBC is a faithful, though oblique, reflection of the era in which it was written; it expresses certain doubts about the meaning of individeal virtue in a world that has suddenly become cruel and mechanical. The Civil War brought about in both armies "a substitution of mechanical soldierly efficiency for an imaginary chivalric prowess" (Crews, XVIII). This evolution is made manifest by the numerous mechanistic images interspersed in the narrative; there is first the image of the "box" (23), then the "din of battle" is compared to "the roar of an oncoming train" (29); the enemy is described in terms of "machines of steel. It was very gloomy struggling against such affairs, wound up perhaps to fight until sundown" (43). One sometimes gets the impression that war is described as if it were a huge factory; this is apparent (127): [...] an interminable roar developed. To those in the midst of it it became a din fitted to the universe. It was the whirring and thumping of gigantic machinery." Or again p. 53: "The battle was like the grinding of an immense and terrible machine to him. Its complexities and powers, its grim processes, fascinated him. He must go close and see it produce corpses."
Even the bullets buff into the soldiers "with serene regularity, as if controlled by a schedule" (117). In "the furnace roar of battle" men are turned into "methodical idiots! Machine-like fools!" (45) and the regiment after the first flush of enthusiasm is like "a machine run down" (116). All these images contribute to a thorough debunking of war which no longer appears kind and lovely.
STYLE AND TECHNIQUE
The RBC is notable for its bold innovations in technique and the fact that its method is all and none. Never before had a war tale been told in this way; as a consequence, Crane's art has stirred up endless debates about whether the author was a realist, a naturalist or an impressionist. All possible labels have, at one time or another, been stamped upon Crane as an artist; we'll use them as so many valuable clues in appraising Crane's style and technique.
Point of view
According to the famous critic Percy Lubbock (The Craft of Fiction) "point of view" means "the relation in which the narrator stands to the story". In the RBC, the narrator's point of view is through Fleming's eyes, i.e. the reader sees what and as Henry does, though he is never invited to identify wholly with him. In fact, the narrative in the novel is conducted in a paradoxical way because if the voice we hear is that of some sort of third-person objective narrator, the point of view is located at almost the same place as if this were a first-person narrative: just behind the eyes of Henry Fleming. The description of the hut on page 3 is a case in point:
"He lay down on a wide bunk that stretched across the end of the room. In the other end, cracker boxes were made to serve as furniture. They were grouped about the fireplace. A picture from an illustrated weekly was upon the log walls and three rifles were paralleled on pegs."
The scene is described as if Henry were taking it in whereas the panorama depicted in the opening section of the novel is attributable to a detached narrator standing in a godlike position above the landscape: "The cold passed reluctantly from the earth, and the retiring fogs revealed an army stretched out on the hills, resting. As the landscape changed from brown to green, the army awakened, and began to tremble with eagerness at the noise of rumors... etc." (1)
The reader, however, is not just shown things from the outside as they impinge upon the senses of an unknown observer; he sometimes goes "behind the scenes" and even penetrates into the inner life of the protagonist: "A little panic-fear grew in his mind. As his imagination went forward to a fight, he saw hideous possibilities. He contemplated the lurking menaces of the future, and failed in an effort to see himself standing stoutly in the midst of them. He recalled his visions of broken-bladed glory, but in the shadow of the impending tumult he suspected them to be impossible pictures." (9)
Here the character is interiorized; the author allows himself the liberty of knowing and revealing what the hero is thinking and feeling. This is quite in keeping with the author's intention to study the reactions of "a nervous system under fire".
Another interesting and arresting aspect of Crane's utilization of point of view is his "cinematic" technique. Crane sometimes handles point of view as if it were a movie camera thus anticipating the devices of this new art form. A certain number of particular effects can be best described |
00175917 | en | [
"shs.eco",
"sde.es"
] | 2024/03/05 22:32:10 | 2006 | https://shs.hal.science/halshs-00175917/file/Flachaire_Hollard_05.pdf | Emmanuel Flachaire
Guillaume Hollard
Jason Shogren
Stéphane Luchini
Controlling starting-point bias in double-bounded contingent valuation surveys by
Keywords: starting point bias, contingent valuation JEL Classification: Q26, C81
In this paper, we study starting point bias in double-bounded contingent valuation surveys. This phenomenon arises in applications that use multiple valuation questions. Indeed, responses to follow-up valuation questions may be influenced by the bid proposed in the initial valuation question. Previous research has been conducted in order to control for such an effect. However, it finds that efficiency gains are lost, relative to a single dichotomous choice question, when we control for undesirable response effects. Contrary to these results, we propose a way to control for starting point bias in double-bounded questions while preserving gains in efficiency.
Introduction
There exist several ways to elicit individuals' willingness to pay for a given object or policy. Contingent valuation, or CV, is a survey-based method to measure nonmarket values, among the important literature see [START_REF] Mitchell | Using Surveys to Value Public Goods : The contingent Valuation Method[END_REF], [START_REF] Hausman | Contingent valuation : A critical assessment[END_REF], [START_REF] Bateman | Valuing Environmental Preferences: Theory and Practice of the Contingent Valuation Method in the US, EU, and Developing Countries[END_REF]. To elicit the individual maximum willingness to pay, participants are given a scenario that describes a policy to be implemented. They are then asked to report the amount they are ready to pay for it.
In order to elicit WTPs, the use of discrete choice format in contingent valuation surveys is strongly recommended by the work of the NOAA panel [START_REF] Arrow | Report of the NOAA panel on contingent valuation[END_REF]. It consists of asking a bid to the respondent with a question like if it costs $x to obtain . . . , would you be willing to pay that amount? Indeed, one advantage of the discrete choice format is that it mimics the decision making task that individuals face in everyday life since the respondent accepts or refuses the bid proposed. However, one drawback of this format is that it leads to a qualitative dependent variable (the respondent answers yes or no) which reveals little about individuals' WTP.
In order to gather more information on respondents' WTP, [START_REF] Hanemann | Some issues in continuous and discrete response contingent valuation studies[END_REF] and [START_REF] Carson | Three essays on contingent valuation[END_REF] proposed to add a follow-up discrete choice question to improve efficiency of discrete choice questionnaires. This mechanism is known as the double-bounded model. This basically consists of asking a second bid to the respondent, greater than the first bid if the respondent answers yes to the first bid and lower otherwise. A key disadvantage of the double-bounded model is that subject's responses to the second bid may be influenced by the first bid proposed. This is the so called starting-point bias.
Several studies document that iterative question formats produce anomalies in respondent behavior. Empirical results show that inconsistent results may appear, that is, the mean WTP may differ significantly if it is implied by the first question only or by the follow-up question. Different interpretations have been proposed -the first bid can be interpreted as an anchor, a reference point1 or as providing information about the cost -as well as different models to control for these anomalies (see [START_REF] Cameron | Estimation using contingent valuation data from a dichotomous choice with follow-up questionnaire[END_REF][START_REF] Herriges | Starting point bias in dichotomous choice valuation with follow-up questioning[END_REF][START_REF] Alberini | Modeling response incentive effects in dichotomous choice valuation data[END_REF][START_REF] Whitehead | Incentive incompatibility and starting-point bias in iterative valuation questions[END_REF][START_REF] Deshazo | Designing transactions without framing effects in iterative question formats[END_REF]. However, these studies suggest that when we control for such undesirable response effects, efficiency gains can be lost relative to a single dichotomous choice question.
At the moment, it is still difficult to control for such effects in an effective manner. The purpose of this paper is to address this issue. We present and compare different models previously proposed in the literature. We also develop new econometric models that combine the main feature of existing models. Our empirical results provide strong evidence that we can obtain a gain in efficiency by taking into account the follow-up question. They give a better understanding of how subjects form their responses to the payment questions.
The paper is organized as follows. In section 2, we review the econometric models proposed in the literature and we propose new models. In section 3, we compare these different models with an application. Conclusions are drawn in section 4.
Econometric models
In this section, we review different models proposed in the literature to control for the anchoring effect, shift effect and framing effect. Then, we propose new models that combine all these effects.
Let us first consider W 0i , the true willingness to pay of individual i, which is defined as follows
W 0i = x i (β) + u i ,   u i ∼ N (0, σ 2 )   (1)
where the unknown parameters β and σ 2 are respectively a k × 1 vector and a scalar, where x i is a non-linear function depending on k independent explanatory variables. The number of observations is equal to n and the error terms u i are normally distributed with mean zero and variance σ 2 . The willingness to pay (WTP) of the respondent i is not observed but his answer to a bid b i is. The subject's answers are defined as
r i = 1 if W 0i > b i and r i = 0 if W 0i ≤ b i (2)
where r i = 1 if the respondent i answers yes to the first question and r i = 0 if the respondent i answers no to the first question.
The double bounded model, proposed by [START_REF] Hanemann | Some issues in continuous and discrete response contingent valuation studies[END_REF] and [START_REF] Carson | Three essays on contingent valuation[END_REF], consists of asking a second bid (follow-up question) to the respondent. If the respondent i answers yes to the first bid, b 1i , the second bid b 2i is higher and lower otherwise. The standard procedure, [START_REF] Hanemann | Some issues in continuous and discrete response contingent valuation studies[END_REF] and [START_REF] Carson | Three essays on contingent valuation[END_REF], assumes that respondents' WTPs are independent of the bids and deals with the second response in the same manner as the first discrete choice question:
W 1i = W 0i and W 2i = W 0i . (3)
An individual answers yes to the first bid if W 1i > b 1i and yes to the second bid if W 2i > b 2i . Thus, the double bounded model assumes that the same random utility model generates both responses to the first and the second bid.
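To make these answer rules concrete, the sketch below (not part of the original paper) simulates the standard double-bounded question sequence of equations (2)-(3); the WTP distribution, bid values and function name are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def double_bounded_responses(wtp, b1, b2_up, b2_down):
    """Simulate answers under the standard double-bounded model.

    wtp     : latent willingness to pay W0 (one value per respondent)
    b1      : first bid shown to each respondent
    b2_up   : follow-up bid used after a 'yes' (b2 > b1)
    b2_down : follow-up bid used after a 'no'  (b2 < b1)
    """
    r1 = wtp > b1                      # answer to the first question, eq. (2)
    b2 = np.where(r1, b2_up, b2_down)  # the second bid depends on the first answer
    r2 = wtp > b2                      # the same WTP answers both questions, eq. (3)
    return r1.astype(int), r2.astype(int), b2

# illustrative example: WTP ~ N(100, 40), 218 respondents as in the application below
wtp = rng.normal(100, 40, size=218)
b1 = rng.choice([20.0, 50.0, 80.0], size=218)
r1, r2, b2 = double_bounded_responses(wtp, b1, 2 * b1, b1 / 2)
```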
However, introduction of follow-up questioning can generate inconsistency between answers to the second and first bids. To deal with inconsistency of responses, several models have been proposed in the literature.
Anchoring effect
Herriges and Shogren's (1996) approach considers a model in which the follow-up question can modify the willingness to pay. According to them, respondents combine their prior WTP with the value provided by the first bid; this anchoring effect is then defined as follows
W 1i = W 0i and W 2i = (1 -γ) W 1i + γ b 1i (4)
where 0 ≤ γ ≤ 1. An individual answers yes to the first bid if W 1i > b 1i and yes to the second bid if W 2i > b 2i . From (4), it follows that,
r 1i = 1 ⇔ W 0i > b 1i and r 2i = 1 ⇔ W 2i > b 2i (5)
The economic interpretation is rather simple. Individuals are supposed to adjust their initial WTP by taking a weighted average of this WTP and the proposed amount. Thus, γ measures the importance of anchoring. It ranges from γ = 0, which means that no anchoring is at work, to γ = 1, which means that subjects ignore their prior WTP and replace it with the proposed bid. This model is thus a simple and efficient way to test the importance of anchoring. The wider the anchoring effect, the less information is provided by the follow-up question.
A more general model
This last model assumes that only the follow-up question gives rise to anchoring effects and that only the first bid has an influence on the second answer. These last two hypotheses are quite restrictive, and we can show that the model is still valid if we consider a more general anchoring effect, that is, one in which both bids can influence the subject's responses.
Let us assume that individuals can combine their prior WTP with the values provided by the current and by the past bid offers. This leads us to consider the following model
W 1i = (1 -γ) W 0i + γ b 1i and W 2i = (1 -δ) W 1i + δ b 2i (6)
where 0 ≤ γ ≤ 1 and 0 ≤ δ ≤ 1. An individual answers yes to the first bid offer if:
r 1i = 1 ⇔ W 1i > b 1i ⇔ (1 -γ) W 0i + γ b 1i > b 1i ⇔ W 0i > b 1i (7)
This last condition suggests that a potential anchoring effect of the first bid offer does not influence the subject's response to the initial question. An individual answers yes to the second bid offer if:
r 2i = 1 ⇔ W 2i > b 2i ⇔ (1 -δ) W 1i + δ b 2i > b 2i ⇔ W 1i > b 2i (8)
This last condition suggests that a potential anchoring effect of the second bid offer does not influence the subject's response to the follow-up question. Moreover, we can see that the first bid offer can influence the second answer, because W 1i is a combination of the prior WTP and of the value provided by the first bid.
Finally, these results show that the current bid offer can have an impact on the WTP but does not affect the subject's responses. Only the first bid offer can influence the answer to the follow-up question. It follows that the parameter δ cannot be estimated. This suggests the remarkable conclusion that, when we use the model proposed by [START_REF] Herriges | Starting point bias in dichotomous choice valuation with follow-up questioning[END_REF] in practice, we can assume a potential anchoring effect of both bids.
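As a quick algebraic check of equations (7)-(8) (not in the original paper), the anchored decision index can be factored symbolically to show that the current bid drops out of the decision rule:

```python
import sympy as sp

W0, b1, gamma = sp.symbols('W0 b1 gamma', real=True)

# W1 = (1 - gamma)*W0 + gamma*b1; the respondent compares W1 with b1
index = (1 - gamma) * W0 + gamma * b1 - b1
# The factored form equals (1 - gamma)*(W0 - b1), so for gamma < 1 the sign of the
# index (and hence the answer) depends only on W0 - b1, as claimed in the text.
print(sp.factor(index))
```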
Shift effect
Alberini, Kanninen, and [START_REF] Alberini | Modeling response incentive effects in dichotomous choice valuation data[END_REF] assume that the proposition of two bids may cause the individual WTP amount to shift systematically between the two responses. Thus, the two answers are not based on the same WTP, and this can explain inconsistency of responses. A shift in WTP is defined as follows:
W 1i = W 0i and W 2i = W 1i + δ (9)
where the parameter δ represents the structural shift.
Such a model is inspired by the following intuition. The first bid may be interpreted as providing information about the cost of the object. Thus, an individual who accepts the first bid offer may understand the second bid as a proposition to pay an additional amount for the same object. It follows that this individual may cut down their answer to take that phenomenon into account. Symmetrically, when an individual rejects the first bid offer, the follow-up question could be interpreted as a proposition for a lower quality level of the object. Again, it may lead individuals to cut down their answers. In such a case, the parameter δ is expected to be negative. A positive δ is however possible and could be interpreted as "yea-saying" behavior: an individual overestimates their WTP in order to acknowledge the interviewer's proposition. But we are not aware of data supporting this interpretation, i.e. estimated values of δ are negative.
Note that a model with shift effects assumes that only the follow-up question gives rise to a shift effect and that the shift is independent of the bids proposed. These last two hypotheses are quite restrictive. Indeed, it could be difficult to believe that the respondent answers the first question truthfully, and that the behavioral response is not the same if the proposed bid is close to the individual's true WTP or if it is far from it. However, these hypotheses are required by an identification condition and we cannot relax them as we have done in the anchoring model.

Anchoring and Shift effects

[START_REF] Whitehead | Incentive incompatibility and starting-point bias in iterative valuation questions[END_REF] modifies the Herriges and Shogren anchoring model to allow both anchoring and shift effects,

W 1i = W 0i and W 2i = (1 -γ) W 1i + γ b 1i + δ   (10)
The interpretation is simply a certain combination of both the anchoring and the shift effect explanations. Indeed, we can rewrite
W 2i = W 1i + γ (b 1i -W 1i ) + δ,
that is, an individual may update its prior WTP with a constant term (shift) and a multiplicative factor of the distance between the prior WTP and the first bid offer (anchoring). See [START_REF] Aadland | Incentive incompatibility and starting-point bias in iterative valuation questions: comment[END_REF] and [START_REF] Whitehead | Incentive incompatibility and starting-point bias in iterative valuation questions: reply[END_REF] for estimation details.

Framing effect

DeShazo (2002) proposes decomposing iterative questions into their ascending and descending sequences. His empirical results suggest that inconsistency of responses occurs only in ascending sequences. It leads him to recommend using in practice the double-bounded model with only decreasing follow-up questions. This last model can be written,

W 1i = W 0i and W 2i = W 0i if r 1i = 0   (11)
The distinction between ascending and descending sequences leads Deshazo to attribute the parameter inconsistency to a framing effect rather than an anchoring effect. Indeed, using prospect theory [START_REF] Kahneman | Prospect theory: an analysis of decisions under risk[END_REF], he argues that if the subject's first response is "yes", the first bid offer is interpreted as a reference point: compared to it, the follow-up question is framed as a loss and thus individuals are more likely to answer "no" to the second offer. In contrast, if the subject's first response is "no", the first bid offer is not interpreted as a reference point. Thus, the behavioral responses to ascending versus descending iterative questions are different.
New models
Empirical results based on all the previous models show that in the presence of an anchoring effect, shift effect or framing effect, the estimated mean and the estimated dispersion of WTP can be significantly biased. [START_REF] Herriges | Starting point bias in dichotomous choice valuation with follow-up questioning[END_REF] conclude that the efficiency gains from the follow-up question are lost once the anchoring effect is controlled for. They suggest using the single-bounded model in the presence of a significant anchoring effect and thus removing the follow-up questions. [START_REF] Deshazo | Designing transactions without framing effects in iterative question formats[END_REF] shows that most of the biases are more likely to occur in the ascending sequence. It leads him to recommend keeping the follow-up questions from the descending sequences and removing the follow-up questions from the ascending sequences only.
In order to get information from the ascending sequences back, we propose to correct biases in the follow-up questions from the ascending sequences2 . We consider three different models, with W 1i = W 0i :
Framing & Anchoring effects -
W 2i = W 1i + γ (b 1i -W 1i ) r 1i (12)
If the subject's response is "no" to the first bid offer r 1i = 0 and W 2i = W 1i , otherwise the WTP is updated with an anchoring effect, as defined in the model proposed by [START_REF] Herriges | Starting point bias in dichotomous choice valuation with follow-up questioning[END_REF].
Framing & Shift effects -
W 2i = W 1i + δ r 1i (13)
If the subject's response is "no" to the first bid offer r 1i = 0 and W 2i = W 1i , otherwise the WTP is updated with a shift effect, as defined in the model proposed by [START_REF] Alberini | Modeling response incentive effects in dichotomous choice valuation data[END_REF].
Framing & Anchoring & Shift effects -
W 2i = W 1i + γ (b 1i -W 1i ) r 1i + δ r 1i (14)
If the subject's response is "no" to the first bid offer, r 1i = 0 and W 2i = W 1i ; otherwise the WTP is updated with both anchoring and shift effects, as defined in the model proposed by [START_REF] Whitehead | Incentive incompatibility and starting-point bias in iterative valuation questions[END_REF]. Implementation of this model can be based on a probit model, with the probability that individual i answers yes to the j th question, j = 1, 2, equal to:
P (W ji > b ji ) = Φ ( X i α -(1/σ) b ji + θ (b 1i -b ji ) D j r 1i + λ (D j r 1i ) )   (15)
where D 1 = 0 and D 2 = 1, α = β/σ, θ = γ/(σ -γσ) and λ = δ/(σ -γσ). Based on this equation, the parameters are interrelated according to:
β = ασ, γ = θσ/(1 + θσ) and δ = λσ(1 -γ). (16)
The models proposed in (12) and (13) are two special cases of the model proposed in (14), respectively with δ = 0 and γ = 0. It follows that they can be implemented based on the probability (15), respectively with λ = 0 and θ = 0.
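As an illustration of how equations (15)-(16) can be taken to data, here is a minimal sketch (not from the original paper) of a pooled probit log-likelihood in which both answers of each respondent enter as binary observations; the parameterisation, variable names and optimiser settings are assumptions made for the example. Constraining λ = 0 or θ = 0 gives the Fram & Anch and Fram & Shift special cases.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def neg_log_likelihood(params, X, b1, b2, r1, r2):
    """Pooled probit for the Framing & Anchoring & Shift model, equation (15)."""
    k = X.shape[1]
    alpha = params[:k]        # alpha = beta / sigma
    inv_sigma = params[k]     # 1 / sigma
    theta = params[k + 1]     # theta = gamma / (sigma - gamma*sigma)
    lam = params[k + 2]       # lambda = delta / (sigma - gamma*sigma)
    ll = 0.0
    for D, b, r in ((0.0, b1, r1), (1.0, b2, r2)):     # D1 = 0, D2 = 1
        z = X @ alpha - inv_sigma * b + theta * (b1 - b) * D * r1 + lam * D * r1
        p = np.clip(norm.cdf(z), 1e-12, 1 - 1e-12)
        ll += np.sum(r * np.log(p) + (1 - r) * np.log(1 - p))
    return -ll

def structural_parameters(params, k):
    """Recover (beta, sigma, gamma, delta) from the probit coefficients, equation (16)."""
    alpha, inv_sigma, theta, lam = params[:k], params[k], params[k + 1], params[k + 2]
    sigma = 1.0 / inv_sigma
    beta = alpha * sigma
    gamma = theta * sigma / (1.0 + theta * sigma)
    delta = lam * sigma * (1.0 - gamma)
    return beta, sigma, gamma, delta

# usage sketch:
# res = minimize(neg_log_likelihood, x0, args=(X, b1, b2, r1, r2), method="BFGS")
# beta, sigma, gamma, delta = structural_parameters(res.x, X.shape[1])
```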
Application
To test our model empirically, we use the main results of a contingent valuation survey which was carried out within a research program that the French Ministry in charge of environmental affairs started in 1995. It is based on a sample of users of the natural reserve of Camargue 3 . The purpose of the contingent valuation survey was to evaluate how much individuals were willing to pay to preserve the natural reserve using an entrance fee. The survey was administered to 218 recreational visitors during the spring of 1997, using face-to-face interviews. Recreational visitors were selected randomly in seven sites all around the natural reserve. The WTP question used in the questionnaire was a dichotomous choice with follow-up 4 . There was a high response rate (92.6%). For a complete description of the contingent valuation survey, see [START_REF] Claeys-Mekdade | Quelle valeur attribuer à la camargue? une perspective interdisciplinaire économie et sociologie[END_REF].
Means of the WTPs were estimated using a linear model [START_REF] Mcfadden | Issues in the contingent valuation of environmental goods: Methodologies for data collection and analysis[END_REF]. Indeed, [START_REF] Crooker | Parametric and semi-nonparametric estimation of willingness-to-pay in the dichotomous choice contingent valuation framework[END_REF] show that the simple linear probit model is often more robust in estimating the mean WTP than other parametric and semiparametric models. Table 1 presents estimated means μ and estimated dispersions σ of the WTPs for all models, with standard errors given in parentheses. The mean of WTPs is a function of parameters: its standard error and its confidence interval cannot be obtained directly from the estimation results. Confidence intervals of μ are presented in brackets; they are obtained by simulation with the Krinsky and Robb procedure, see Haab and McConnell (2003, pp. 106-113) for more details.
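For completeness, here is a minimal sketch (not from the original paper) of the Krinsky and Robb simulation: parameter vectors are drawn from the estimated asymptotic normal distribution and the mean WTP is recomputed for each draw; the function names and the number of draws are illustrative, and the mapping from parameters to mean WTP depends on the model used.

```python
import numpy as np

def krinsky_robb_ci(theta_hat, cov_hat, mean_wtp_fn, n_draws=5000, level=0.95, seed=0):
    """Simulated confidence interval for a function of the estimated parameters.

    theta_hat   : estimated parameter vector
    cov_hat     : its estimated covariance matrix
    mean_wtp_fn : maps a parameter draw to the implied mean WTP (model specific)
    """
    rng = np.random.default_rng(seed)
    draws = rng.multivariate_normal(theta_hat, cov_hat, size=n_draws)
    wtp = np.array([mean_wtp_fn(d) for d in draws])
    lo, hi = np.percentile(wtp, [100 * (1 - level) / 2, 100 * (1 + level) / 2])
    return lo, hi
```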
From table 1, it is clear that the standard errors (in parentheses) and the confidence intervals (in brackets) decrease considerably when one uses the usual double-bounded model (Double) instead of the single bounded model (Single). This result confirms the expected efficiency gains provided when the second bid is taken into account [START_REF] Hanemann | Statistical efficiency of doublebounded dichotomous choice contingent valuation[END_REF]. However, estimates of the mean of WTPs in both models are very different (113.52 vs. 81.78). Moreover, the mean of WTPs of the single bounded model, μ = 113.52, does not belong to the 95% confidence interval of the mean of WTPs in the double bounded model, [78.2; 85.5]. Such inconsistent results lead us to consider other models, as presented in the previous section.
At first, we estimate a model with an anchoring effect (Anchoring), as defined in (4) by [START_REF] Herriges | Starting point bias in dichotomous choice valuation with follow-up questioning[END_REF]. From table 1, we can see that the anchoring parameter, γ = 0.52, is significant. Indeed, a Likelihood Ratio test, equal to twice the difference of the log-likelihood estimates (LR = 5.78, P-value = 0.016), rejects the null hypothesis γ = 0. This test confirms the presence of an anchoring effect in the respondents' answers. When correcting for the anchoring effect, results are consistent, in the sense that the mean of WTPs of the single-bounded model, μ = 113.52, belongs to the 95% confidence interval of the anchoring model, [98.2; 155.4]. However, standard errors and confidence intervals increase significantly, so that even if follow-up questioning increases the precision of parameter estimates (see Double), efficiency gains are completely lost once the anchoring effect is taken into account (see Anchoring). According to this result, "the single-bounded approach may be preferred when the degree of anchoring is substantial" (Herriges and Shogren, 1996, p. 124).
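The LR statistic quoted here can be reproduced from the log-likelihoods reported in Table 1; a short sketch (a chi-squared distribution with one degree of freedom is assumed, values rounded):

```python
from scipy.stats import chi2

def lr_test(ll_restricted, ll_unrestricted, df=1):
    lr = 2.0 * (ll_unrestricted - ll_restricted)
    return lr, chi2.sf(lr, df)

# Double (gamma = 0) against Anchoring: about 2*(-176.7 - (-179.6)) = 5.8, p ~ 0.016
print(lr_test(-179.6, -176.7))
```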
Then, we estimate a model with a shift effect (Shift), as defined in (9) by [START_REF] Alberini | Modeling response incentive effects in dichotomous choice valuation data[END_REF]. Results lead to conclusions similar to those of the double-bounded model. Indeed, we can see a large gain in efficiency: standard errors and confidence intervals are more precise. Moreover, results are inconsistent: the mean of WTPs of the single-bounded model, μ = 113.52, does not belong to the 95% confidence interval of the shift model, [85.6; 93.8].
Parameter estimates of a model with both anchoring bias and shift effects, as defined in (10) by [START_REF] Whitehead | Incentive incompatibility and starting-point bias in iterative valuation questions[END_REF], are given in the line named Anchoring & Shift. Based on the criterion of maximum likelihood, this model is better than the others ( l = -172.82). Results are consistent, in the sense that, μ = 113.52 belongs to the 95% confidence interval of the model with anchoring and shift effects [107.3; 176.0]. However, we can see a loss of precision compared to the single bounded model.
The only model previously presented in the literature which gives results consistent with the single-bounded model together with a gain in efficiency is the model proposed by [START_REF] Deshazo | Designing transactions without framing effects in iterative question formats[END_REF], as defined in (11). Results are presented in the line named Framing. The 95% confidence interval [90.9; 121.8] includes the mean of WTPs μ = 113.52 and is narrower than the 95% confidence interval of the single-bounded model [87.9; 138.7]. In his conclusion, Deshazo recommends removing all the answers which could be influenced by the framing effect, that is, the answers to the second bids if the respondents answer yes to the first bids.
From the previous results, it is clear that there is no way to handle the problem of starting point bias in an effective manner. This suggests that the best we can do in practice is to remove the answers which could be subject to starting point bias. Nevertheless, the use of iterative questions should provide more information about the distribution of WTPs. Then, better results should be expected if all the answers to iterative questions are used and if a correct model of starting-point bias is used. To go further, we consider the three new models proposed in the last section, which consider all the answers to the second bids:
• Line Fram & Anch presents estimation results of the model defined in (12), that is, a model with an anchoring bias in the ascending sequence only. We can see that the 95% confidence interval of the mean of WTPs is equal to [93.6; 119.3].

• Line Fram & Shift presents estimation results of the model defined in (13), that is, a model with a shift effect in the ascending sequence only. We can see that the 95% confidence interval of the mean of WTPs is equal to [103.9; 129.7].

• Line Fram & Anch & Shift presents estimation results of the model defined in (14), that is, a model with an anchoring bias and a shift effect in the ascending sequence only. We can see that the 95% confidence interval of the mean of WTPs is equal to [101.4; 131.1].

It is clear that for these three models, results are consistent with the single-bounded model: the mean of WTPs μ = 113.52 belongs to the three confidence intervals. Furthermore, results are more precise: the standard errors (in parentheses) are smaller and the confidence intervals (in brackets) are narrower than those of the single-bounded model.
In addition, we can remark that the two models Fram & Anch and Fram & Shift are special cases of the model Fram & Anch & Shift, respectively with δ = 0 and γ = 0. From this last, more general model, we cannot reject γ = 0 (LR = 0.004, P-value = 0.99), but we can reject δ = 0 (LR = 10.31, P-value = 0.001). These results lead us to select the model Fram & Shift as the one which best fits our contingent valuation data, that is, a model with a shift effect in the ascending sequences only.
Table 2 presents full econometric results of several models with consistent results: the single-bounded model (Single), the model of Deshazo (Framing) and our selected model (Fram & Shift). The estimates of the vector of coefficients β (rather than β/σ), the standard deviation σ and the shift parameter δ are directly presented, see equations (1), (11) and (13). It is clear from this table that the standard errors in the Fram & Shift model are nearly always significantly reduced compared to the standard errors in the other models. Indeed, only one parameter is significant in the Single model whereas eight parameters are significant in the Fram & Shift model. In other words, efficiency gains are still present in our selected model (which takes into account all the answers) compared to the other models (which remove answers that could be influenced by the first bid offer).
Conclusion
Follow-up questions in the double-bounded model are expected to give more information on the willingness to pay of respondents. Many economists have therefore favored this model to obtain gains in efficiency over the single-bounded model. However, recent studies show that this model can be inadequate and can give inconsistent results. Many different models have been considered in the literature to correct anomalies in respondent behavior that appear in dichotomous choice contingent valuation data. However, the corrections proposed by these models show that the efficiency gains given by the iterative questions are lost when inconsistency of responses is controlled for.
The main contribution of this paper is to propose a model to control for starting-point bias in the double-bounded model and, contrary to previous research, still obtain gains in efficiency relative to a single dichotomous choice question. [START_REF] Deshazo | Designing transactions without framing effects in iterative question formats[END_REF] shows that descending and ascending sequences produce different behavioral responses and recommends restricting the follow-up questions to cases where the answers to the first bids are no. To benefit from more information, rather than discarding the answers which could be influenced by the first bid offer, we propose different models of starting-point bias in ascending iterative questions only. Our empirical results show that a model with shift effects in the ascending questions gives results consistent with the single-bounded model and provides large efficiency gains. This supports the idea that framing, anchoring and shift effects can be combined in an efficient manner.
Table 1: Mean and dispersion of WTPs in French Francs
Model µ σ γ δ ℓ
Single 113.52 [87.9;138.7] 45.38 (23.61) - - -53.3
Double 81.78 [78.2;85.5] 42.74 (5.23) - - -179.6
Anchoring 126.38 [98.2;155.4] 82.11 (40.83) 0.52 (0.23) - -176.7
Shift 89.69 [85.6;93.8] 44.74 (5.77) - -8.10 (2.90) -175.3
Anchoring & Shift 141.38 [107.3;176.0] 85.50 (43.78) 0.52 (0.24) -7.81 (2.91) -172.8
Framing 106.72 [90.9;121.8] 40.39 (11.91) - - -68.8
Fram & Anch 106.71 [93.6;119.3] 60.19 (14.77) 0.40 (0.16) - -176.9
Fram & Shift 116.98 [103.9;129.7] 65.03 (14.40) - -30.67 (14.33) -171.8
Fram & Anch & Shift 116.39 [101.4;131.1] 64.63 (16.34) -0.02 (0.42) -31.60 (21.77) -171.8
Table 2: Parameter estimates, standard errors in parentheses (⋆: significant at 95%)
[START_REF] Kahneman | Reference points, anchor norms, and mixed feelings[END_REF] proposes clear definitions of anchoring and framing effects and emphasizes the difference in the underlying mental processes.
As long as the model(11), proposed by DeShazo (2002), provides consistent results with the singlebounded model, biases occur in ascending sequences only. Thus, there is no need to consider more complicated models where biases occur in both ascending and descending sequences.
The Camargue is a wetland in the south of France covering 75 000 hectares. The Camargue is a major wetland in France and is host to many fragile ecosystems. The exceptional biological diversity is the result of water and salt in an "amphibious" area inhabited by numerous species. The Camargue is the result of an endless struggle between the river, the sea and man. During the last century, while the construction of dikes and embankments salvaged more land for farming to meet economic needs, it cut off the Camargue region from its environment, depriving it of regular supplies of fresh water and silt previously provided by flooding. Because of this problem and to preserve the wildlife, the water resources are now managed strictly. There are pumping, irrigation and draining stations and a dense network of channels throughout the river delta. However, the costs of such installations are quite large.
The first bid b 1i was drawn randomly from {5, 10, 15, 20, 25, 30, 35, 40, 45, 50, 60, 70, 80, 90, 100}. The second bid b 2i was drawn randomly from the same set of values with b 2i < b 1i and with the additional amount 3 (resp. b 2i > b 1i and 120) if the answer to the first bid was no (resp. yes). The number of answers (no,no), (no,yes), (yes,no) and (yes,yes) are respectively equal to 20, 12, 44 and 121.
01759239 | en | [
"phys.cond.cm-s"
] | 2024/03/05 22:32:10 | 2017 | https://theses.hal.science/tel-01759239/file/RATTER_2017_diffusion.pdf | Monica Bengt
Miriam Derek
Keywords: 4, 4 Critical current fluctuations in SQUIDs
I thank the support of my brother Patrik and his new family Virág and Janka, and also my brothers-in-
Acknowledgement
It is not a secret that this journey was not easy. That this work is finished is not only my achievement. It would not have been possible without the support of my supervisors, colleagues, friends and family. I would like to express my gratitude to them in the following paragraphs.
I would like to start by thanking Dominique Mailly, Grégory Abadias, Alain Marty and François Lefloch for accepting to be part of my jury, and taking interest in my research.
I owe thanks to the Nanosciences Foundation, particularly Alain Fontaine, Marie-Anne Carré and Fériel Kouiten-Sahraoui for funding the project my PhD was part of. I am especially grateful for the one month (July 2017) I could spend in Grenoble to conclude the writing of this manuscript.
My sincere thanks go to my supervisors, Klaus Hasselbach and Bruno Gilles, for proposing a great project. The scope of it, encompassing two fields of research, is what attracted me to it in the first place. Admittedly, this made my work difficult at times, but now I know that it was good that I was constantly challenged, and never quite comfortable. After all, how boring life would be if there was nothing left to learn. I am also grateful for my supervisors' kindness, guidance, their patient discussions with me and the encouragement they provided over the years. I consider myself lucky for having had supervisors who were generous with their time, and were available when I asked them to discuss. Above all, I am incredibly thankful that they did not give up on me during my long absences in the past year. I owe thanks to Olivier Buisson and his group. A large part of my work was done with his group, and although I was formally not part of it, I felt like I belonged. I had several interesting discussions with Olivier, and he kindly advised me and guided me during my time at Institut Néel. I am grateful to Cécile Naud for introducing me to cryogenics and lithography fabrication. It is thanks to her dedication that we succeeded in fabricating rhenium SQUIDs.
I would like to thank the whole NanoFab team for the training they gave me, and for their help and advice on the fabrication of rhenium. I thank Guillaume Beutier and Marc Verdier for introducing me to synchrotron experiments. I count our time in Diamond Light Source among the most exciting experiences during my PhD.
I would like to thank Stéphane Coindeau, Frédéric Charlot, Florence Robaut and Sabine Lay for contributing to my work with measurements, and for the help they gave me in the analysis of the data. I thank Pierre Rodière for his measurement on one of my samples, and for his input on my defence presentation. I thank Virginie Simonet for taking me on my first tour of Institut Néel, and introducing me to Klaus.
During my time in Grenoble I made many friends.
Eva and Rosen were not only my office mates, but were my neighbours, and we quickly became good friends. We shared dinners and games, sometimes several times a week. With Eva we learned to sew and perfected our knitting skills together. Eva, we have yet to make our big skirts and wear them on top of a windy hill.
I would like to thank again the hospitality of Andrew, Nat, Sarah and Amy. I spent a week with them before my defence, and had a really excellent time. I hope to visit them again soon. I thank Markus for helping me out, and for arranging my inscription for the last year of doctoral school. I had lots of beers and pizza at Brasserie du Carré with Kristijan, Daria, Marek, Hannu, Tilo, Ingo, Andrew and Markus. These were really great times, and I miss our dinners and their company a lot. I had way too much coffee with Rémy, Javier, Luca, Etienne, Jorge and Farshad during my time at Institut Néel. I really did not need to drink all that coffee; I just enjoyed spending time with them. I am also grateful to have met and spent time with John, Marian, Dipankar, Gyuri, Márti, Alex, Alexadre, Dibyendu, Ivy, Clément, Farida, Hanno, Benjamin, Hasan, Solène and Maxime.
I owe the greatest gratitude to my parents, Marianna and József Ratter. I only have the abilities which allowed me to pursue a PhD because they provided me with the opportunity to learn. They supported all my endeavours, and allowed me to make my own mistakes, for which I am grateful. Köszönet mindenért! My parents-in-law Klazien and Dennis W. T. Nilsen were already familiar with the process of getting a PhD. They told me that it was hard for Dennis and for Gøran as well. It gave me a lot of comfort knowing that I was not alone. I also thank them for all of their encouragement and support and for their belief in me.
Introduction
The foundations of today's computers were laid down by Alan Turing in 1936, who developed a model for a programable machine, now known as the Turing machine. The first electronic computers appeared shortly after. With the invention of the transistor in 1947, hardware development took off, and computer power has been growing exponentially since, changing the world at an unprecedented pace and scale [START_REF] Michael | Quantum Computation and Quantum Information[END_REF].
However, it is argued that conventional computers will not be able to keep up with this established trend much longer. Due to the decreasing size of the electronic components, quantum effects are beginning to interfere with their operation. Furthermore, the time required to solve a problem with a conventional algorithm grows exponentially with the number of operations. This puts constraints on the finesse in a simulation [START_REF] Michael | Quantum Computation and Quantum Information[END_REF].
One path proposed is to redefine computation as we know it, and use quantum computers. In a quantum computer, bits are replaced by quantum bits or qubits. Unlike a bit, which can either be '0' or '1', a qubit can have a state which is the superposition of '0' and '1':
|ψ⟩ = α |0⟩ + β |1⟩,

where |α|² + |β|² = 1. N bits in both a quantum and a conventional computer can have a total of 2^N states. The qubits can occupy all these states simultaneously. Algorithms that can exploit the superposition of states already exist. An example of this is Shor's algorithm, which demonstrated that factorisation of large integers can be solved efficiently. It is believed that this problem has no efficient solution on a conventional computer [START_REF] Michael | Quantum Computation and Quantum Information[END_REF][START_REF] Dumur | A V-shape superconducting artificial atom for circuit quantum electrodynamics[END_REF].
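As a purely illustrative aside (not from the thesis), the state-vector picture described above can be written down numerically in a few lines; the amplitudes chosen here are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# One qubit: |psi> = alpha|0> + beta|1>, stored as a complex vector of amplitudes.
alpha, beta = 1 / np.sqrt(2), 1j / np.sqrt(2)
psi = np.array([alpha, beta])
assert np.isclose(np.vdot(psi, psi).real, 1.0)     # |alpha|^2 + |beta|^2 = 1

# N qubits: the state is a vector of 2**N complex amplitudes.
N = 3
state = np.zeros(2**N, dtype=complex)
state[0] = 1.0                                     # the basis state |000>

# Measuring in the computational basis returns outcome k with probability |amplitude_k|^2.
outcome = rng.choice(2, p=np.abs(psi)**2)
```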
Bits in a computer are expected to preserve their states for a period of time. This is not different for a quantum computer either. However, the loss of coherence in qubits makes developing a quantum computer challenging.
One of the candidates to realise a qubit is based on Josephson junctions. Josephson junctions consist of two superconductors separated by a thin (∼ 1 nm) insulator barrier.
They are described in more detail in section 3.3.1. The time scales over which a qubit preserves information are called the coherence times. These coherence times are limited by noise, which is ascribed to fluctuating charges in the insulator barrier. A frequently used material for the barrier is aluminium oxide. It is prepared by the subsequent deposition and oxidation of aluminium. The result is an amorphous layer, which is noted with the chemical formula AlO x . It is suspected that the aluminium is not fully oxidised in this form, and that this is the origin of the two-level fluctuations that lead to decoherence [START_REF] Grigorij | Strain tuning of individual atomic tunneling systems detected by a superconducting qubit[END_REF].
Consequently, the path to the quantum computer goes via employing new or unconventional materials and exploring the parameter space of deposition and growth conditions, in order to obtain high quality superconductor-insulator-superconductor junctions, which are a prerequisite for qubits.
The aim of this project was to eliminate disorder by growing epitaxial films. For epitaxial growth, the lattice parameter match is an important criterion. The lattice of rhenium, which is superconducting below 1.7 K, has an excellent match with the lattice of Al 2 O 3 . Furthermore, crystalline rhenium is very stable, and does not oxidise.
Rhenium has been the subject of a few studies in the recent past. Oh et al. grew rhenium thin films onto Al 2 O 3 using DC and RF sputtering. They observed the growth of epitaxial islands with spiral structure [START_REF] Oh | Epitaxial growth of rhenium with sputtering[END_REF].
Welander grew rhenium films on niobium surfaces [START_REF] Paul | Structural evolution of re (0001) thin films grown on nb (110) surfaces by molecular beam epitaxy[END_REF]. Niobium was first grown onto an Al 2 O 3 substrate, and this growth has been shown to be epitaxial [START_REF] Wildes | The growth and structure of epitaxial niobium on sapphire[END_REF]. Rhenium films grown this way were smooth, and fully relaxed by 20 nm thickness.
The molecular beam epitaxy growth of rhenium onto Al 2 O 3 substrate was started by B. Delsol in SIMaP [START_REF] Delsol | Elaboration et caractérisation de couches minces supraconductrices épitaxiées de rhénium sur saphir[END_REF]. These films were used to fabricate microwave resonators [START_REF] Dumur | A V-shape superconducting artificial atom for circuit quantum electrodynamics[END_REF][START_REF] Dumur | Epitaxial rhenium microwave resonators[END_REF], and to study the proximity effect with graphene [START_REF] Tonnoir | Induced superconductivity in graphene grown on rhenium[END_REF]. Rhenium, indeed, grows epitaxially onto single crystal Al 2 O 3 . In the next step , crystalline aluminium would be deposited onto the flat surface of epitaxial rhenium, and oxidised. Our aim was that this would result in a crystalline, fully oxidised barrier. However, as is shown throughout this work, it is not so easy to produce a flat rhenium surface. The surfaces of our epitaxial films are decorated with spirals and deep holes. Such a topography is not adequate for the deposition of a second layer. Consequently, we achieved an understanding of the processes driving the growth mechanism, as we successfully identified dewetting as the culprit for the presence of holes.
The epitaxial growth of rhenium thin films onto single crystal Al 2 O 3 substrates was realised using molecular beam epitaxy. Following the characterisation of the films, wires and SQUIDs (superconducting quantum interference device) were fabricated using laser and electron lithography. The transport properties of these devices were studied at low temperatures.
Chapter 1 starts with a short description of the materials used in this work, rhenium and the single crystal Al 2 O 3 substrate. Following that, the requirement of ultra-high vacuum in epitaxial growth is explained. Sections 1.3 and 1.4 describes the molecular beam epitaxy setup, and the characterisation techniques that were used to prepare and study the films presented in this work. The final section of this chapter deals with the theoretical background of crystal growth, spiral growth, and thermal grooving. These theories are referred to in chapter 2.
The first section of chapter 2 discusses several aspects of rhenium growth on Al 2 O 3 specifically. The preparation procedure of the substrate is described first, then the temperature of the evaporating rhenium is estimated from the observed deposition rate. The critical thickness, above which dislocations are expected to appear, is also given here.
The following section studies how the temperature of the sample during growth influences the surface topography and the crystallography of the film. Section 2.3 shows that rhenium undergoes dewetting during growth when approximately 10 nm thickness is reached. Finally, the thermal transfer model is described, which was developed to calculate the temperature of the growing film.
The first section of chapter 3 presents the phenomenon of superconductivity, and explains the basic theories that were developed to describe it. In the following section superconducting devices, namely the Josephson junction and the SQUID, are described. The final section of this chapter gives a short description of the two refrigerators that were used to reach temperatures below 1 K and measure the transport properties of our samples.
In the first two sections of chapter 4 the lithography process and the fabricated circuit designs are presented. Section 4.3 discusses the transport measurements on the wires. The shape and width of the superconducting transition with respect to the topography and crystallography is studied. The final section presents the critical current oscillations of two low-noise SQUIDs.
1
Molecular beam epitaxy
In this chapter, first rhenium and the substrate material are introduced, then the theoretical and experimental basis for molecular beam epitaxy and crystal growth is given. In the second section the motivation for the use of an ultra-high vacuum environment is explained. After that, the molecular beam epitaxy setup and the available characterisation techniques are described. The following section deals with the basic theories of crystal growth and the role of misfit strain in dislocation formation. Finally, Burton, Cabrera and Frank's theory of spiral growth and Mullins' theory of thermal grooving are introduced. These theories will be referred to in chapter 2, where the experimental results are discussed.
Materials
Rhenium
History and occurrence
The existence of rhenium was predicted by Mendeleev. It is the last discovered element that has a stable isotope, and was first detected in platinum ore in 1925 by Walter Noddack, Ida Tacke, and Otto Berg in Germany [START_REF] Noddack | Die ekamangane[END_REF]. It was named after the river Rhine.
In 1928 Noddack et al. were able to extract 1 g of pure rhenium by processing 660 kg of molybdenite (MoS 2 ) [START_REF] Noddack | Die herstellung von einem gram rhenium[END_REF].
Rhenium is among the rarest elements in the Earth's crust. It has not yet been found in pure form, and the only known rhenium mineral, ReS 2 called rheniite, was described as recently as 1994. It was discovered condensing on the Kudriavy volcano, on the Iturup island, disputed between Japan and Russia [START_REF] Korzhinsky | Discovery of a pure rhenium mineral at kudriavy volcano[END_REF]. The volcano discharges 20-60 kg rhenium per year mostly in the form of rhenium disulfide.
The primary commercial source of rhenium is the mineral molybdenite (MoS 2 ) which contains about 3% Re. Chile has the world's largest rhenium reserves and was the leading producer as of 2005. The total world production of rhenium is between 40 and 50 tons/year, and the main producers are in Chile, the United States, Peru, and Poland.
Physical and chemical properties
Rhenium is a silvery-white heavy metal from the third row of the transition metal block, with atomic number 75. Its melting point (3186 • C) and boiling point (5630 • C) are among the highest among the elements.
It crystallises in the hexagonal close-packed structure, shown in figure 1.1, with lattice parameters a = 0.2761 nm and c = 0.4456 nm [START_REF] Liu | Effect of pressure and temperature on the lattice parameters of rhenium[END_REF]. The density of rhenium is also among the highest, with 21.02 g/cm 3 measured at room temperature.
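As a side note (not in the original text), the axial ratio implied by these lattice parameters can be compared with the ideal hard-sphere hcp value:

```python
a, c = 0.2761, 0.4456   # rhenium lattice parameters [nm]
print(c / a)            # ~1.614, slightly below the ideal hcp ratio sqrt(8/3) ~ 1.633
```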
Rhenium has one stable isotope, 185 Re, which is in the minority. Naturally occurring rhenium is composed of 37.4% 185 Re and 62.6% 187 Re. 187 Re is unstable but its half-life is very long, about 10^10 years [START_REF] Audi | The nubase evaluation of nuclear and decay properties[END_REF].
The electron configuration of rhenium is [Xe] 4f 14 5d 5 6s 2 . Its oxidation state is known to vary between -3 and +7, skipping over -2, the most common being +7, +6, +4, and +2. There are several rhenium oxides; the most common, Re 2 O 7 , is colourless and volatile. Other oxides include ReO 3 , Re 2 O 5 , ReO 2 , and Re 2 O 3 [START_REF] Greenwood | Chemistry of the Elements[END_REF].
Pure rhenium is a superconductor, and its first recorded transition temperature was 2.42 K [START_REF] Daunt | Superconductivity of rhenium[END_REF]. Rhenium alloys show higher transition temperatures: rhenium-molybdenum is superconductive under 10 K [START_REF] Lerner | Magnetic properties of superconducting mo-re alloys[END_REF], and tungsten-rhenium at around 4-8 K [START_REF] Neshpor | Superconductivity of some alloys of the tungsten-rhenium-carbon system[END_REF].
Application
As a refractory metal, rhenium shows extraordinary resistance against heat and wear. Most of its applications are centred around this property.
Nickel based alloys that contain up to 6% of rhenium are used in jet engine parts or in industrial gas turbine engines. 70% of the worldwide rhenium production is used in this field. Tungsten-rhenium alloys are used as X-ray sources and thermocouples for temperatures up to 2200 • C.
The low vapour pressure of rhenium makes it suitable to be used as filaments in mass spectrometers, gauges and photoflash lamps.
Alloyed with platinum, it is used as a catalysts in the production of lead-free, highoctane gasoline.
Al 2 O 3 substrate
Rhenium thin films were deposited onto single crystal α-Al 2 O 3 substrates. Several polymorphs of Al 2 O 3 exist; α-Al 2 O 3 is the most stable, and is the only phase that occurs naturally. α-Al 2 O 3 is called corundum. Corundum is a rock-forming mineral. It is transparent, and in its chemically pure form has a white hue. In nature, corundum is rarely pure, and can appear in many different colours depending on the impurities. Coloured corundum is frequently used as a gemstone; the best known varieties of it are ruby and sapphire.
Synthetic Al 2 O 3 crystals are prepared with the Czochralski growth process. A precisely oriented seed crystal is introduced into the molten Al 2 O 3 , and slowly pulled. The melt crystallises onto the seed matching its orientation.
Al 2 O 3 crystallises in the trigonal crystal system, in the R3c space group. Its lattice parameters are a = 0.476 nm and c = 1.2993 nm [20]. In the lattice, six oxygen ions form a slightly distorted octahedron around an aluminium ion. Two octahedra are shown in two different orientations in figure 1.2(a) and 1.2(b). In figure 1.2(c) the stacking of the octahedra is shown, as they form the lattice.
Along the c-axis the structure is an alternation of one oxygen and two aluminium layers, shown in figure 1.3(a). The two neighbouring Al layers are separated by approximately 0.06 nm and shifted laterally. The separation between two layers of oxygen is one sixth of the c lattice parameter, 0.22 nm. The double Al layers also have an average spacing of 0.22 nm.
In figure 1.3(b) the view of the c plane is shown, terminated by oxygen ions, that form triangles over the Al ions.
All our substrates were c-plane: they were cut perpendicular to the crystallographic c-axis, along the (001) plane.
The basics of molecular beam epitaxy
Epitaxy is a Greek composite word, epi meaning 'above', and taxis meaning 'an ordered manner'. It roughly translates as 'arranging upon'. Epitaxy occurs when a metastable material nucleates onto a crystalline substrate in registry with its crystalline order, as shown in figure 1.4 [START_REF] Arthur | Molecular beam epitaxy[END_REF]. This process allows the preparation of single crystal thin films.
Depending on the phase of the metastable material, the epitaxy can be solid phase, liquid phase or vapour phase epitaxy. In chemical vapour deposition volatile precursors decompose onto or react with the substrate to produce the layer. In other vapour epitaxy techniques the source is sputtered or ablated. These techniques allow fast growth of thin films; they are therefore reliably used in the semiconductor industry and in research. Molecular beam epitaxy utilises beams of atoms or molecules in an ultra-high vacuum environment (10^-10 Torr) that are incident upon a heated crystal whose surface is atomically flat and clean [START_REF] Arthur | Molecular beam epitaxy[END_REF]. Deposition rates are much lower than in the above mentioned techniques, around 1 monolayer/minute, allowing the growth of single crystals and sub-monolayer composition control. The ultra-high vacuum conditions make it possible to incorporate characterisation techniques, such as electron diffraction and X-ray photoemission spectroscopy, and sample preparation techniques such as ion etching. All these make MBE the ideal research tool for developing new materials.
The development of MBE was driven by the decreasing dimensions of semiconductor devices [START_REF] Cho | 20 years of quantum cascade lasers (QCLs) anniversary workshop[END_REF], and by the interest in heterostructures made out of semiconductors with different energy gaps [START_REF] Mccray | MBE deserves a place in the history books[END_REF]. Several unsuccessful attempts were made to grow such structures [START_REF] Collins | Electrical and optical properties of GaSb films[END_REF][START_REF] Günther | Aufdampfschichten aus halbleitenden III-V-verbindungen[END_REF]. Breakthrough came from the field of surface sciences in 1968, when Arthur observed that growth rate is not only the function of vapour pressure, but is strongly influenced by vapour-surface interactions [START_REF] Arthur | Interaction of Ga and As 2 molecular beams with GaAs surfaces[END_REF][START_REF] Arthur | GaAs, GaP , and GaAs x P 1 -x epitaxial films grown by molecular beam deposition[END_REF]. His discovery paved the way for the stoichiometric growth of compounds where the components have very different vapour pressures. 1968 marks the birth of MBE.
The advance of the supporting techniques was essential to the rapid evolution of MBE. Quadrupole mass spectrometry was used in the study of surface-vapour interactions by Arthur [START_REF] Arthur | Interaction of Ga and As 2 molecular beams with GaAs surfaces[END_REF], and it remains a key component of MBE chambers to ensure a clean UHV environment. In 1969 A. Y. Cho was the first to use a reflection high energy electron diffraction (RHEED) setup in the MBE chamber to investigate the growth process in situ. He showed that MBE is capable of producing atomically flat, ordered layers [START_REF] Cho | Morphology of epitaxial growth of GaAs by a molecular beam method: The observation of surface structures[END_REF][START_REF] Cho | GaAs epitaxy by a molecular beam method: Observations of surface structure on the (001 ) face[END_REF]. During these years compact electron guns became available, which made it possible to routinely combine MBE with RHEED, allowing the study of a wide range of materials. From then on MBE was an essential part of several important studies and discoveries: the fractional Hall effect [START_REF] Tsui | Two-dimensional magnetotransport in the extreme quantum limit[END_REF], band-gap engineering [START_REF] Capasso | Band-gap engineering via graded gap, superlattice, and periodic doping structures: Applications to novel photodetectors and other devices[END_REF], the quantum cascade laser [START_REF] Faist | Quantum cascade laser[END_REF], zero-dimensional structures [START_REF] Goldstein | Growth by molecular beam epitaxy and characterization of InAs/GaAs strained-layer superlattices[END_REF], quantum dot lasers [START_REF] Ledentsov | Optical properties of heterostructures with InGaAs -GaAs quantum clusters[END_REF], and giant magnetoresistance [START_REF] Fert | The origin, development and future of spintronics[END_REF][START_REF] Grünberg | From spinwaves to giant magnetoresistance (GMR) and beyond[END_REF].

[Figure 1.5(b) caption: Time required for the formation of a monolayer as a function of pressure. In ultra-high vacuum, the mean free path is so long that collisions can be neglected, and it takes several hours for a monolayer to form from the residual molecules.]

In the following sections the operational principles of MBE and the supporting techniques are detailed, starting with the importance of ultra-high vacuum conditions.

MBE operates in ultra-high vacuum. To reach 10^-10 Torr from atmospheric pressure, the chamber has to be evacuated by running high performance pumps for several days. Afterwards, the chamber walls and all the instruments and surfaces are heated to aid the desorption of molecules that were adsorbed from the air. At the end of this procedure, in the ultra-high vacuum regime, the residual gas mainly consists of hydrogen molecules and methane. To maintain the low pressure, continuous pumping is necessary, using ion pumps.
The kinetic gas theory demonstrates the necessity of ultra-high vacuum. The residual molecules are moving rapidly around the chamber, occasionally colliding with the wall, instruments, samples, or with each other. From the kinetic gas theory, the mean free path of the particles ($\lambda$) and the rate of collisions with a surface ($N_{\mathrm{coll}}$) at pressure $P$ can be calculated:
$$\lambda = \frac{k_B T}{\sqrt{2}\,\pi d^2 P}, \qquad N_{\mathrm{coll}} = \frac{P}{\sqrt{2\pi k_B T m}},$$
where $k_B$ is the Boltzmann constant, $T$ is the temperature inside the chamber in kelvin, $d$ is the diameter, and $m$ is the mass of a molecule [START_REF] Henini | Molecular beam epitaxy: from research to mass production[END_REF].
In figure 1.5(a) the mean free path and the rate of collisions are shown as functions of pressure. In the calculation a hydrogen molecule was considered; the values are in the same range for the other residual molecules commonly found in ultra-high vacuum. At atmospheric pressure the mean free path is in the range of nanometres, but at the pressures where MBE operates it is around 100 km. This means that in a chamber with dimensions of 1-2 metres the particles can move without collisions, so the beam of molecules/atoms can reach the substrate without reacting with other species on the way. Another advantage is that ultra-high vacuum allows the use of electron beams at high or low energy; the beam will not be scattered even over long distances (∼1 metre).
To calculate the time it takes for a monolayer to form from the residual molecules, the following is considered: on a surface with an area of 1 m$^2$ there are approximately $10^{19}$ atoms. Using the collision rate and assuming that every colliding molecule sticks to the surface, the time needed for a monolayer to form from the residual particles is $\tau\,[\mathrm{s}] = 10^{19}/N_{\mathrm{coll}}$. This time is plotted as a function of pressure in figure 1.5(b). In ultra-high vacuum $\tau$ is measured in hours. In MBE the deposition rates are low, therefore the deposition of a sample can take hours. Keeping the rate of collisions low, by keeping the pressure in the ultra-high vacuum range, ensures the purity of the sample.
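These estimates are easy to reproduce numerically. The sketch below is only illustrative: it assumes a hydrogen molecule with a kinetic diameter of about 2.9 Å and a mass of 3.35×10⁻²⁷ kg, unity sticking probability, and room temperature; the exact numbers depend on these assumed values.

```python
import numpy as np

K_B = 1.380649e-23          # Boltzmann constant [J/K]

def mean_free_path(p, T=300.0, d=2.9e-10):
    """Mean free path lambda = k_B T / (sqrt(2) pi d^2 P), in metres."""
    return K_B * T / (np.sqrt(2.0) * np.pi * d**2 * p)

def collision_rate(p, T=300.0, m=3.35e-27):
    """Rate of collisions with a surface, N_coll = P / sqrt(2 pi k_B T m), in m^-2 s^-1."""
    return p / np.sqrt(2.0 * np.pi * K_B * T * m)

def monolayer_time(p, T=300.0, m=3.35e-27, sites=1e19):
    """Time to form one monolayer, tau = 10^19 / N_coll, assuming every molecule sticks."""
    return sites / collision_rate(p, T, m)

if __name__ == "__main__":
    for p_torr in (760.0, 1e-6, 1e-10):           # atmosphere, high vacuum, ultra-high vacuum
        p_pa = p_torr * 133.322                   # Torr -> Pa
        print(f"P = {p_torr:8.0e} Torr: lambda = {mean_free_path(p_pa):9.3e} m, "
              f"tau = {monolayer_time(p_pa):9.3e} s")
```

With these assumptions the monolayer formation time at $10^{-10}$ Torr comes out in the range of hours, consistent with the statement above.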
MBE instrumentation
The MBE setup used for this work is shown in figure 1.6. It consists of four interconnected chambers.
The introduction chamber, denoted by label 1 in figure 1.6, is the only chamber that is brought to atmospheric pressure regularly, as it is used for the introduction of the substrates. It is pumped to $10^{-7}$ Torr before it is opened towards the other chambers with higher vacuum levels; otherwise it is kept at static vacuum. Before opening it towards the atmosphere, it is flooded with nitrogen gas. The intermediate chamber, denoted by label 2 in figure 1.6, connects the other three chambers together.
Deposition chamber
The deposition chamber, labelled as 'Dep. chamber' in figure 1.6, is where the thin films are deposited. It is equipped with a Leybold quadrupole mass spectrometer that is used to monitor the composition of the residual gas inside the chamber. The schematic of the spectrometer is shown in figure 1.7. A quadrupole mass spectrometer has three parts. The first part is an ioniser, which ionises the molecules passing through it by electron bombardment. The second part is a mass-to-charge ratio filter, and the third part is the detector. The mass-to-charge ratio filter consists of two pairs of cylindrical electrodes in a quadrupolar arrangement, as shown in figure 1.7. A potential of $\pm(U + V\sin(\omega t))$ is applied between them, where $U$ is a DC voltage and $V\sin(\omega t)$ is an AC voltage. The trajectory of ions travelling between the four rods is affected by the field, so that only ions with the set mass-to-charge ratio reach the detector (red path in figure 1.7). The others are thrown off course (blue path in figure 1.7). A mass-to-charge ratio spectrum is obtained by changing the voltages applied to the electrodes. From the spectrum, the composition of the residual gas can be determined.
The deposition chamber is also equipped with two Riber evaporation systems, which consist of an electron gun, a bending magnet, metal charges, and controlling electronics. The schematic of the evaporation system is shown in figure 1.8. The metal charge is heated with a 10 kV electron beam extracted from a tungsten filament. The beam scans the charge to ensure uniform heating. To adjust the heating power, and thus the rate of deposition, the current of the beam can be adjusted. The metal charge used in the present studies was 99.95% rhenium supplied by Neyco. To achieve a deposition rate of 0.1-0.2 Å/s of rhenium, the beam current was set to approximately 200 mA.
Figure 1.8: Schematics of the evaporation system: electrically heated tungsten wire biased by 10 kV ejects electrons that are directed onto a metal charge using a magnetic field.
The substrate is placed horizontally on a manipulator above the charge. At its position the flux of atoms arriving at the surface is homogeneous. The deposition can be turned on and off with the use of a shutter located below the substrate. The manipulator is equipped with a furnace that consists of a tungsten filament, shown in figure 1.9. The substrate can be heated in two ways using this furnace: either by thermal radiation, or by electron bombardment. Infrared radiation is emitted by the tungsten filament when it is heated by a current running through it (up to approximately 10 A). Increasing the current increases the temperature of the substrate. We can reach around 900 °C this way. When applying a voltage (400-800 V) between the sample and the filament, electrons are emitted. The temperature of the substrate is then adjusted by the emission current (up to approximately 100 mA). We can reach around 1000 °C by electron bombardment.
The manipulator head is shown in figure 1.9. The temperature of the substrate is measured by a thermocouple that is located in the middle of the manipulator head and is pressed against the back side of the substrate. There is an uncertainty in the contact between the thermocouple and the substrate, thus the value measured this way is only an approximation of the real surface temperature. Also, the thermal and optical properties of the sample can change during growth, which affects the surface temperature. This change cannot be detected with the thermocouple.

Figure 1.9: Furnace and thermocouple in the manipulator: the substrate is placed on top of the tungsten filament, the thermocouple is pressed against its back side.
Another way of measuring the temperature in an ultra-high vacuum environment is to use a pyrometer. The pyrometer is located outside of the chamber, looking at the sample through a viewport. It measures the thermal radiation emitted by the material. For this method to give a reliable result, the viewport has to be made out of a material whose transmission as a function of wavelength is well known (usually Al$_2$O$_3$). Also, the sample surface has to be aligned parallel with the window of the pyrometer. In the deposition chamber we cannot fulfil these requirements due to geometric constraints. The only way to measure the temperature of the sample during growth is with the thermocouple.
Using molecular beam epitaxy, films with thicknesses ranging from a few Å to 100 nm are routinely deposited. To be able to prepare samples in this wide range of thickness, precise measurement of the deposition rate is necessary. A microbalance made out of a quartz single crystal is the most commonly used tool to monitor the deposited thickness. The quartz microbalance consists of a quartz crystal, cut along a specific crystallographic orientation, with an alternating voltage applied to it. Due to the piezoelectric effect this voltage generates a standing wave in the crystal at a well defined frequency (resonance frequency) in the MHz range. When the mass of the crystal increases, the resonance frequency decreases. From the frequency shift the deposited mass and the thickness can be calculated. There are two Leybold quartz balances in our deposition chamber, located close to the sample, shown in figure 1.6.
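The conversion from frequency shift to deposited thickness is commonly done with the Sauerbrey relation. The snippet below is a minimal sketch: the quartz material constants are the usual textbook values for an AT-cut crystal, while the 6 MHz resonance frequency, the frequency shift and the film density are assumed, illustrative parameters.

```python
import numpy as np

RHO_Q = 2648.0      # density of quartz [kg/m^3]
MU_Q  = 2.947e10    # shear modulus of AT-cut quartz [Pa]

def areal_mass_from_shift(delta_f, f0):
    """Sauerbrey relation: areal mass [kg/m^2] from the frequency shift delta_f [Hz] (negative for a deposit)."""
    return -delta_f * np.sqrt(RHO_Q * MU_Q) / (2.0 * f0**2)

def film_thickness(delta_f, f0, rho_film):
    """Deposited thickness [m] assuming a uniform film of density rho_film [kg/m^3]."""
    return areal_mass_from_shift(delta_f, f0) / rho_film

if __name__ == "__main__":
    # Illustrative numbers: a 6 MHz crystal, a -10 Hz shift, and bulk rhenium density (~21020 kg/m^3).
    t = film_thickness(delta_f=-10.0, f0=6.0e6, rho_film=21020.0)
    print(f"thickness ~ {t*1e10:.2f} Angstrom")
```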
The lifetime of a quartz microbalance is limited due to the deposit building up on it. To lengthen its lifetime, we are able to turn the measurement on and off using a shutter placed in front of the quartz crystal.
Characterisation chamber
There is a second ultra-high vacuum chamber connected to the deposition chamber. This is the characterisation chamber, labelled as 'Char. chamber' in figure 1.6. It is equipped with instruments that allow the investigation and preparation of the sample before or after the deposition, without exposing it to air. The instruments available in the chamber are the following: X-ray photoelectron spectroscopy (XPS), an argon ion gun, low energy electron diffraction, a furnace that can reach over 2000 °C, and a pyrometer to measure the temperature. In this work only the XPS was used, so only that technique is discussed in detail in the following section.
Thin films characterisation techniques
In situ characterisation techniques
Some investigative techniques are available without having to remove the sample from the vacuum chamber. XPS is used to check the chemical composition of the surface of the substrate or the deposited film before or after deposition. RHEED can be used before, after, or during deposition to monitor the crystallographic properties of the film.
X-ray photoelectron spectroscopy
XPS is used to study the chemical composition of the surface. The principle of the technique, shown in figure 1.10(a), is the following: the sample is irradiated with an X-ray beam of known energy, and the electrons (mostly photoelectrons) that escape the material are sorted by their kinetic energies and counted.
The setup consists of an X-ray tube, shown by the upper arrow in figure 1.6, and a detector, shown by the lower arrow. The anode material in the X-ray tube is magnesium, and the radiation corresponding to its Kα line with an energy of 1253.6 eV is used. The detector has two parts: an energy analyser with an energy window that is scanned over a given voltage range, and an electron multiplier for amplifying the current of the electrons.
The binding energies of the electrons are characteristic of an atom or a molecule. From the kinetic energy of an emitted electron its binding energy can be calculated as follows:
$$h\nu = E_B + \Phi + E_k \quad\rightarrow\quad E_B + \Phi = h\nu - E_k, \qquad (1.1)$$
where $h$ is the Planck constant, $\nu$ is the frequency of the exciting X-ray beam, $E_B$ is the binding energy of an electron, $\Phi$ is the work function that depends on the material and the instrument, and $E_k$ is the kinetic energy of the electron. The precise value of $\Phi$ is not known but it is small [START_REF] Moulder | Handbook of X-ray photoelectron spectroscopy[END_REF].
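In practice equation 1.1 is a one-line conversion. The sketch below applies it for the Mg Kα excitation used here; the work function is left as a placeholder since its precise value is not known, and the example kinetic energy is purely illustrative.

```python
def binding_energy(e_kinetic_eV, h_nu_eV=1253.6, work_function_eV=0.0):
    """Equation 1.1 rearranged: E_B = h*nu - E_k - Phi, all energies in eV.
    Phi is small and instrument dependent; it is set to zero here as a placeholder."""
    return h_nu_eV - e_kinetic_eV - work_function_eV

if __name__ == "__main__":
    # An electron detected at 1213 eV kinetic energy under Mg K-alpha excitation
    # corresponds to a binding energy of ~40.6 eV, roughly where the Re4f peaks sit in figure 1.11.
    print(binding_energy(1213.0))
```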
A typical XPS spectrum is shown in figure 1.11. Most of the peaks indeed correspond to photoelectrons that were excited from the core shells of the atoms. There is, however, another process, called the Auger effect, which can result in peaks: a photoelectron leaves a vacancy on an inner shell that is filled by an electron from a higher shell. Then a second electron, an Auger electron, is emitted, carrying off the excess energy and leaving behind a doubly-charged ion. In the example marked in the spectrum, the first electron, the photoelectron, originated from the K level, its place was taken by an electron from the valence level, and the Auger electron was also emitted from the valence level [START_REF] Moulder | Handbook of X-ray photoelectron spectroscopy[END_REF].
A ghost peak is also noted in figure 1.11 in blue. This is the result of copper contamination in the anode. The energies of the X-ray photons emitted by the copper contamination are different, therefore the kinetic energies of the electrons they excite from the same shells are different too. When calculating the binding energy, only the Kα line of magnesium is considered in equation 1.1. This gives small-intensity peaks in the spectrum at the wrong binding energy [START_REF] Moulder | Handbook of X-ray photoelectron spectroscopy[END_REF].
Even though the penetration depth of the X-rays is relatively large (1-10 µm), the mean free path of electrons at these energies is restricted to a few nanometres due to strong electron-electron scattering. Thus we only get information from the top few atomic layers. A significant number of electrons undergo inelastic scattering processes, losing some of their kinetic energy, and thus add to the background. This is the reason for the step-like structure of the graph, which can most clearly be observed between the Re4d and Re4f peaks.
From the intensity of the XPS peaks, the surface monolayer coverage can be calculated. We used this method in other projects, and the detailed derivation is given in appendix B.
Reflection high energy electron diffraction
The deposition chamber is equipped with a Staib RHEED setup. The technique has been widely used to monitor the surface structure of films during growth since the '70s. The setup consists of an electron gun, shown in figure 1.6 by an arrow labelled RHEED, and a phosphor screen on the opposite side. The electron gun produces an electron beam with an energy of 20 keV, which is directed onto the surface of the growing crystal at a grazing angle (1°-3°). The geometry of the RHEED setup is shown in figure 1.12. From the diffraction pattern the physical state of the surface can be determined: in-plane lattice parameter, orientation, and symmetry of reconstruction.
Figure 1.12: Geometry of RHEED: monochromatic electron beam is directed onto the growing crystal surface, the diffraction pattern is detected by a phosphor screen.
In an elastic scattering process the energy of the scattered particle is conserved:
$$E_I = E_F = \frac{\hbar^2 k_I^2}{2m} = \frac{\hbar^2 k_F^2}{2m} \quad\rightarrow\quad k_F = k_I = k, \qquad (1.2)$$
where $E_I$ and $E_F$ are the energies of the incident and scattered electrons, $\hbar$ is the reduced Planck constant, $k_I$ and $k_F$ are the magnitudes of the wave vectors of the incident and scattered electrons, and $m$ is the electron mass. Laue's condition of diffraction states that the wave vector in diffraction can only change by a vector that is a reciprocal lattice vector ($\mathbf{g}_{hkl}$) of the scattering crystal:
$$\mathbf{k}_F - \mathbf{k}_I = \mathbf{g}_{hkl}, \qquad (1.3)$$
where
$$\mathbf{g}_{hkl} = h\mathbf{a}_1^* + k\mathbf{a}_2^* + l\mathbf{a}_3^* \quad\text{and}\quad \mathbf{a}_i^* = 2\pi\,\frac{\mathbf{a}_j \times \mathbf{a}_k}{\mathbf{a}_i \cdot (\mathbf{a}_j \times \mathbf{a}_k)}. \qquad (1.4)$$
$\mathbf{a}_{i,j,k}$ and $\mathbf{a}_{i,j,k}^*$ are real and reciprocal lattice vectors respectively, and $h$, $k$, and $l$ are integers [START_REF] Sólyom | Fundamentals of the Physics of Solids, volume I. Structure and Dynamics[END_REF][START_REF] Ichimiya | Reflection high energy electron diffraction[END_REF]. A more detailed discussion on diffraction can be found in section 1.4.2.
The solutions of equations 1.2 and 1.3 can be obtained geometrically by the Ewald construction, where the vector $\mathbf{k}_I$ is placed in the reciprocal lattice of the diffracting volume so that its tail end is on a reciprocal lattice point. Then a sphere with radius $k$ is drawn around the head of the vector $\mathbf{k}_I$. Diffraction occurs in all the directions where the sphere intersects a reciprocal lattice point [START_REF] Sólyom | Fundamentals of the Physics of Solids, volume I. Structure and Dynamics[END_REF][START_REF] Ichimiya | Reflection high energy electron diffraction[END_REF]. The detector is placed in the forward direction, as shown in figure 1.12, thus we can only observe waves that are diffracted forward.
The radius of the Ewald sphere can be calculated from the de Broglie wavelength of the electrons:
$$\lambda = \frac{h}{p}, \qquad (1.5)$$
where h is the Planck constant, and p is the momentum of the electrons.
In case of high energy electron beams (>50 keV), relativistic effects have to be taken into account. For a 20 keV electron beam the relativistic correction in the wavelength is only 1%, but for the sake of completeness the relativistic calculation is shown here [START_REF] Ichimiya | Reflection high energy electron diffraction[END_REF].
The energy ($E$) of a particle with rest mass $m_e$ (electron mass) is
$$E = \sqrt{p^2c^2 + m_e^2c^4} = T + m_ec^2, \qquad (1.6)$$
where $c$ is the speed of light, and $T$ is the kinetic energy. From equation 1.6 the momentum can be expressed as follows:
$$pc = \sqrt{T^2 + 2Tm_ec^2}. \qquad (1.7)$$
The kinetic energy of a particle with charge $e$ (electron charge) accelerated by a voltage $U$ is the following:
$$T = \frac{1}{2}m_ev^2 = Ue. \qquad (1.8)$$
Using equations 1.5, 1.7, and 1.8, and choosing an accelerating voltage of 20 kV, the wavelength and the magnitude of the wave vector are:
$$\lambda = 0.09\ \text{Å} \quad\rightarrow\quad k = \frac{2\pi}{\lambda} = 73\ \text{Å}^{-1}. \qquad (1.9)$$
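The numbers in equation 1.9 can be reproduced with a few lines of code; the sketch below evaluates the relativistic expressions 1.5-1.8 for an assumed accelerating voltage.

```python
import numpy as np

H   = 6.62607015e-34     # Planck constant [J s]
M_E = 9.1093837015e-31   # electron rest mass [kg]
C   = 2.99792458e8       # speed of light [m/s]
E   = 1.602176634e-19    # elementary charge [C]

def electron_wavelength(U_volts):
    """Relativistic de Broglie wavelength: pc = sqrt(T^2 + 2 T m_e c^2), lambda = h/p (equations 1.5-1.8)."""
    T = E * U_volts                              # kinetic energy [J]
    pc = np.sqrt(T**2 + 2.0 * T * M_E * C**2)    # [J]
    return H * C / pc                            # lambda = h/p = hc/(pc), in metres

if __name__ == "__main__":
    lam = electron_wavelength(20e3)              # 20 kV accelerating voltage
    k = 2.0 * np.pi / lam
    print(f"lambda = {lam*1e10:.3f} Angstrom, k = {k*1e-10:.1f} 1/Angstrom")
```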
The advantage of grazing incidence is its sensitivity to the surface structure of the sample. Just by glancing at the diffraction pattern it can be determined whether the surface is flat or has a grain structure. In the following, the construction of the diffraction patterns is discussed, starting with the case of island growth.
In figure 1.13(a) diffraction from a surface that is covered with islands is shown schematically in real space. The electron beam travels through these islands in transmission. The diffracting volume is extended in all three directions, which in reciprocal space corresponds to reciprocal lattice points. This reciprocal lattice is shown in figure 1.13(b) with the Ewald sphere. Constructive interference occurs in directions where the Ewald sphere intersects the reciprocal lattice points. In figure 1.13(c) a cross section of the Ewald sphere and the reciprocal lattice is shown. The points of intersection are clearly visible; they define all the possible directions of the outgoing wave vector ($\mathbf{k}_F$). The intersection of a reciprocal lattice point and the electron beam is projected onto the phosphor screen. Due to the finite crystallite size, the reciprocal lattice points have a finite width. This results in spherical diffraction spots with a finite diameter. In figure 1.14(a), diffraction from a smooth surface is shown schematically, in real space. Diffraction happens in reflection, and the penetration depth of the electron beam is restricted to a few atomic layers. The third dimension of the diffracting volume is reduced. The reciprocal lattice of a two-dimensional periodic structure consists of rods that are perpendicular to the surface. The distance between the reciprocal lattice rods corresponds to the inverse of the in-plane lattice constant.
In figure 1.14(b) the reciprocal lattice rods and the Ewald sphere are shown. Diffraction is observed in directions where the Ewald sphere intersects the reciprocal lattice rods. In figure 1.14(c) a cross section of the reciprocal lattice and the Ewald sphere is shown. Dimensions in figures 1.14(b) and 1.14(c) are not accurate; the radius of the Ewald sphere is much larger than the spacing between the reciprocal lattice rods. Therefore, the intersections between them are extended along the direction of the surface normal. This is illustrated in figure 1.14(d). This is the reason why sharp streaks are observed in the RHEED pattern of a film with a smooth surface and good crystalline quality.
Ex situ characterisation techniques
Surface topography and the crystallographic properties of the substrates and the thin films were investigated using several techniques outside of the vacuum chamber. Atomic force microscopy was used to measure the topographic features; X-ray diffraction was used to check the orientations and verify the thicknesses of the films. These two techniques are described below.
Atomic force microscopy
Topography of the films and the substrates was measured using a Veeco Dimension 3100 atomic force microscope. Atomic force microscopy (AFM) belongs to the family of scanning probe microscopes. The AFM probe is an atomically sharp silicon tip attached to a cantilever. The cantilevers have a resonance frequency of around 300 kHz.
The AFM cantilever is very flexible, and the small forces that act between the sample and the tip can bend it according to Hooke's law:
$$F = kz, \qquad (1.10)$$
where k is the spring constant of the cantilever, and z is the displacement of the tip.
The forces can have different sources depending on the sample; mostly they are due to electrostatic interactions. What is important is that the magnitude of the force decreases with distance. This allows imaging of the topography by keeping the interaction between the tip and the surface constant [START_REF]Scanning probe microscopy training notebook. Digital Instruments[END_REF].
In this work, the AFM was used in tapping mode. In tapping mode the cantilever is oscillated so that it lightly taps on the surface of the sample at the lowest point of its swing. The frequency of the oscillation is near the resonance frequency of the cantilever, where the amplitude is most sensitive to changes. The sample surface is scanned with the oscillating tip, while a feedback loop maintains a constant amplitude, i.e. a constant surface-tip distance, by lowering or lifting the probe. The feedback signal on the vertical module is calibrated so that it gives the vertical movement of the AFM tip. Plotting this over the scanned area gives the topographic image of the surface [START_REF]Scanning probe microscopy training notebook. Digital Instruments[END_REF]. The schematic of the AFM is shown in figure 1.15(a). The cantilever is connected to a tube made out of a piezoelectric ceramic. This tube is composed of two parts corresponding to the lateral (x, y) and the vertical (z) directions. The vertical module, shown in figure 1.15(b), consists of two cylindrical electrodes separated by the piezoelectric ceramic. The voltage applied to the piezoelectric ceramic is adjusted by the feedback loop, and causes the part to contract or to extend, lifting or lowering the tip, respectively [START_REF]Scanning probe microscopy training notebook. Digital Instruments[END_REF].
The lateral module, also shown in figure 1.15(b), has four pairs of electrodes arranged around the piezoelectric ceramic tube. The ones opposite to each other receive the same signal but with opposite sign, so while one side extends the other contracts, thus causing the tube to tilt. The shape of the signal applied to these electrodes to generate the scanning raster motion is shown in figure 1.15(b) [START_REF]Scanning probe microscopy training notebook. Digital Instruments[END_REF].
The movement of the tip can be monitored with the use of a laser beam that is directed onto the backside of the cantilever, as shown in figure 1.15(a). It is reflected towards a split photodiode detector that has two separate parts: A and B. The output of the detector is $\frac{I_A - I_B}{I_A + I_B}$, where $I_A$ and $I_B$ are the signals on each diode. From this value the vertical position of the tip can be reconstructed [START_REF]Scanning probe microscopy training notebook. Digital Instruments[END_REF].
Three types of image can be obtained from an AFM scan: height, amplitude, and phase images. The height image is the one mentioned above, where the vertical position of the oscillating tip is adjusted to keep a constant amplitude. The vertical movement of the tip is plotted as a function of the coordinates of the scanned area, which directly corresponds to the topography of the surface [START_REF]Scanning probe microscopy training notebook. Digital Instruments[END_REF]. The change in the signal on the photodiode detector can also be plotted; this corresponds to the changes in the amplitude of the oscillation, so it is referred to as the amplitude image. The feedback loop should keep this value constant, but rapid changes in the topography will show in the amplitude image.
The third value that can be used to create an image is the phase difference between the driving AC signal, and the oscillation of the cantilever. This can show changes in the interaction between the tip and the sample. This is called phase image. Determining what causes the changes in the phase is a science in itself [START_REF]Scanning probe microscopy training notebook. Digital Instruments[END_REF].
In figures 1.16 and 1.17 the differences between the height and the amplitude images are illustrated. Images were analysed using the software Gwyddion [42]. Noise was reduced on all of them with in-built algorithms. Also, an algorithm called planefit was used on all of them, unless otherwise stated. Planefit is used to remove the slope across an image that could be caused by uneven mounting of the sample. In the case of stepped surfaces, which have a slope by nature, planefit has an effect that is illustrated in figure 1.18.
In figure 1.18(a), a simulated stepped surface is shown. The intensity of the colour is proportional to the surface height. A cross section of the surface is shown in figure 1.18(c) with the blue line. The plane fit algorithm determines the average slope of the surface, shown in green in figure 1.18(c), and subtracts it from the raw height data. The result is shown in figure 1.18(b), and its cross section in figure 1.18(c) in red.
After the slope is subtracted, the surface appears to be jagged. Nonetheless, it is preferred to use the planefit algorithm even on stepped surfaces, because it reduces the range of the vertical scale, making the features easier to observe.
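A one-dimensional sketch of the planefit correction is given below (the real algorithm fits a plane to the full image; here a straight line is fitted to a simulated cross section). The staircase parameters are arbitrary, illustrative values.

```python
import numpy as np

def simulate_staircase(n=500, terrace=50, step_height=1.0):
    """Simulated stepped-surface cross section, in arbitrary units."""
    x = np.arange(n)
    return x, step_height * (x // terrace)

def planefit_1d(y):
    """Subtract the average slope (a straight-line fit), analogous to the planefit algorithm."""
    x = np.arange(len(y))
    slope, offset = np.polyfit(x, y, 1)
    return y - (slope * x + offset)

if __name__ == "__main__":
    x, y = simulate_staircase()
    y_flat = planefit_1d(y)
    # The corrected profile looks jagged, but its vertical range is much smaller than the raw one,
    # which is why the step features become easier to observe.
    print(f"raw range: {y.max() - y.min():.2f}, after planefit: {y_flat.max() - y_flat.min():.2f}")
```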
The lateral resolution of the AFM is ideally around 1 nm, but it strongly depends on the quality of the tip. The tip degrades over time, because it keeps touching the surface again and again. The vertical resolution is approximately 0.1 nm [START_REF]Scanning probe microscopy training notebook. Digital Instruments[END_REF].
X-ray diffraction
X-ray diffraction (XRD) was used to determine the crystal structures of the deposited thin films. Diffraction of high energy electrons was discussed briefly already in section 1.4.1 using Laue's condition of diffraction, and the Ewald sphere. In this section a short summary is given on the hexagonal crystal structure, then the diffraction phenomena and XRD is described in more detail.
Notes on the hexagonal crystal system. In a crystalline material the atoms are arranged periodically in all three directions of space, forming a crystal lattice. The smallest volume that has the overall symmetry of the crystal is called the unit cell. The lengths of the vectors that define it (the lattice vectors: a, b, c) are the lattice parameters. Rhenium crystallises in the hexagonal system. The a and b hexagonal lattice vectors make an angle of 120°, and the c vector is perpendicular to the ab plane. a and b are equal in length (a = 0.2761 nm), but c is longer (c = 0.4456 nm) [START_REF] Liu | Effect of pressure and temperature on the lattice parameters of rhenium[END_REF].
Atoms on a crystal lattice form series of crystal planes; an infinite number of such planes can be defined. A crystal plane intercepts the lattice vectors at the points a/h, b/k, c/l. (hkl) are the Miller indices and they define the orientation of a plane with respect to the coordinate system of the unit cell. Parallel planes are noted using the same Miller indices, and are spaced at equal distances ($d_{hkl}$). $d_{hkl}$ for hexagonal crystals can be obtained using the following equation:
$$\frac{1}{d_{hkl}^2} = \frac{4}{3}\,\frac{h^2 + hk + k^2}{a^2} + \frac{l^2}{c^2}. \qquad (1.11)$$
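Equation 1.11 is straightforward to evaluate; the sketch below uses the rhenium lattice parameters quoted above as default values.

```python
import numpy as np

def d_hkl_hexagonal(h, k, l, a=0.2761, c=0.4456):
    """Interplanar spacing from equation 1.11 for a hexagonal lattice (defaults: rhenium, in nm)."""
    inv_d2 = 4.0 / 3.0 * (h**2 + h * k + k**2) / a**2 + l**2 / c**2
    return 1.0 / np.sqrt(inv_d2)

if __name__ == "__main__":
    for hkl in [(0, 0, 2), (1, 0, 0), (1, 0, 3)]:
        print(hkl, f"d = {d_hkl_hexagonal(*hkl):.4f} nm")
```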
It is sufficient to use the three Miller indices to identify a plane or a direction in the hexagonal system; however, they do not offer the same convenience as in an orthogonal system. In an orthogonal system, the indices of equivalent planes and directions can be generated by the permutation of the three Miller indices. This does not work with the Miller indices of a hexagonal crystal. However, permutation does work with the Bravais-Miller indices.
In the Bravais-Miller coordinate system a fourth, redundant axis is introduced in the ab plane, at 120° from a and b. Crystal planes and directions are noted with the four Bravais-Miller indices, (hkil). Equivalent directions and planes can, in this notation, be obtained by the permutation of the first three indices. For example, a hexagonal prism is shown in figure 1.19(a). A plane parallel to the c axis is highlighted. All six of such planes around the prism are equivalent. Figure 1.19(b) shows the in-plane axes of the Miller coordinate system in blue (a and b axes), and the Bravais-Miller coordinate system in red (a$_1$, a$_2$, and a$_3$ axes). The intersections of the planes with the axes give the indices in the two systems. The Miller indices are: (110), (100), and (010). The Bravais-Miller indices are: (1100), (1010), and (0110). This demonstrates that indices in the four-axis notation can be obtained by the permutation of the first three indices. In this work, mostly the Miller indices are used.
Indices can be transformed from one notation to the other. In case of a plane, the fourth index (i) is obtained as follows:
$$i = -(h + k). \qquad (1.12)$$
A direction [U V T W ] can be converted to the three indices [uvw] as follows:
$$u = 2U + V, \quad v = 2V + U, \quad w = W. \qquad (1.13)$$
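Equations 1.12 and 1.13 translate directly into two small helper functions; the example indices are only illustrative.

```python
def plane_to_bravais_miller(h, k, l):
    """Equation 1.12: add the redundant index i = -(h + k), turning a plane (hkl) into (hkil)."""
    return (h, k, -(h + k), l)

def direction_to_miller(U, V, T, W):
    """Equation 1.13: convert a direction [UVTW] to the three-index notation [uvw]."""
    return (2 * U + V, 2 * V + U, W)

if __name__ == "__main__":
    print(plane_to_bravais_miller(1, 0, 0))    # -> (1, 0, -1, 0)
    print(direction_to_miller(1, 0, -1, 0))    # -> (2, 1, 0)
```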
X-ray diffraction. In an XRD experiment, the sample is subjected to an X-ray plane wave ($e^{-i\mathbf{k}_i\mathbf{r}}$) with a known wave vector ($\mathbf{k}_i$), therefore known energy and propagation direction. We used two laboratory XRD setups: the Huber 4-cycle diffractometer and the Rigaku SmartLab high-resolution diffractometer. Both instruments use the Kα line of copper. The intensity of this emission is split in two: 2/3 Kα$_1$ with wavelength 1.540562 Å, and 1/3 Kα$_2$ with wavelength 1.544398 Å. In the Huber 4-cycle diffractometer both wavelengths were used; in the Rigaku SmartLab high-resolution diffractometer the Kα$_2$ line is removed. The X-rays are scattered by the electrons in the sample. The scattering is assumed to be elastic, so only momentum transfer occurs. This means the outgoing wave vector has the same length as the incoming wave vector ($|\mathbf{k}_i| = |\mathbf{k}_f|$), and their vectorial difference is called the scattering vector:
$$\mathbf{q} = \mathbf{k}_f - \mathbf{k}_i. \qquad (1.14)$$
The scattered amplitude is the sum of the scattering from each atom in the illuminated volume, which, because X-rays are scattered by electrons, can be expressed as the Fourier transform of the electron density:
$$A(\mathbf{q}) = \int_V f(\mathbf{r})\,e^{i\mathbf{q}\mathbf{r}}\,d\mathbf{r}, \qquad (1.15)$$
where the integral is taken across the illuminated volume [START_REF] Als-Nielsen | Elements of Modern X-ray Physics[END_REF].
The periodic arrangement of the atoms in a crystal lattice results in constructive (in specific cases destructive) interference whenever the scattering vector coincides with a reciprocal lattice vector ($\mathbf{g}_{hkl}$) [START_REF] Als-Nielsen | Elements of Modern X-ray Physics[END_REF]. This gives Laue's condition of diffraction:
$$\mathbf{g}_{hkl} = \mathbf{q}. \qquad (1.16)$$
Laue's condition of diffraction is illustrated in figure 1.20. $\mathbf{k}_i$, $\mathbf{k}_f$ and $\mathbf{g}_{hkl}$ form an isosceles triangle, where the angle enclosed by the two equal sides is 2θ, thus the following relation holds:
$$|\mathbf{g}_{hkl}| = 2|\mathbf{k}_i| \sin\theta. \qquad (1.17)$$
Using the properties of the reciprocal lattice, it can be shown that $\mathbf{g}_{hkl}$ is perpendicular to the plane series with indices (hkl), and its length is related to the spacing $d_{hkl}$:
$$|\mathbf{g}_{hkl}| = m\,\frac{2\pi}{d_{hkl}}, \qquad (1.18)$$
where $m$ is an integer, which refers to the order of the reflection [START_REF] Sólyom | Fundamentals of the Physics of Solids, volume I. Structure and Dynamics[END_REF]. Substituting 1.18 in equation 1.17, and using the relation $|\mathbf{k}| = 2\pi/\lambda$, we obtain Bragg's condition for diffraction:
$$m\lambda = 2 d_{hkl}\sin\theta. \qquad (1.19)$$
Bragg's condition of diffraction is also illustrated in figure 1.20.
Bragg's law shows a simple relationship between wavelength, angle of reflection, and lattice spacing. During an elastic diffraction experiment the angular distribution of the scattered intensity is measured. From the angles, the lattice spacings can be determined. Different lattice spacings correspond to different orientations, thus the texture of the film can be determined from a few measurements.
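As a worked example, Bragg's law (equation 1.19) can be inverted for the scattering angle. The sketch below uses the Cu Kα$_1$ wavelength given above and, as an illustration, the rhenium (002) spacing c/2; both the order m = 1 and this choice of reflection are assumptions for the example.

```python
import numpy as np

LAMBDA_CU_KA1 = 1.540562   # Cu K-alpha1 wavelength [Angstrom]

def two_theta(d_hkl, lam=LAMBDA_CU_KA1, m=1):
    """Scattering angle 2*theta in degrees from Bragg's law, m*lambda = 2*d*sin(theta)."""
    s = m * lam / (2.0 * d_hkl)
    if s > 1.0:
        return None            # reflection not reachable with this wavelength
    return 2.0 * np.degrees(np.arcsin(s))

if __name__ == "__main__":
    d_002 = 4.456 / 2.0        # rhenium c/2 in Angstrom
    print(f"2theta for Re(002) ~ {two_theta(d_002):.2f} deg")
```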
The schematic of a 4-cycle diffractometer is shown in figure 1.21. All circles are aligned so that their centres coincide with the centre of the sample. The detector and the source can move along the red circle. The angle between the incident beam and the surface of the sample is θ. In the symmetric θ-2θ measurement, the angle between the incident beam and the detector is 2θ, and the source and the detector are moved symmetrically, as shown in figure 1.22(a). During this measurement the direction of the scattering vector remains perpendicular to the surface, and its length changes. The sample is scanned for all $d_{hkl}$ values of planes that are parallel to the surface. Grains with different orientations are detected this way. Symmetric reflections are often called specular reflections. In the rocking curve measurement the detector and the source are fixed at a θ and a 2θ value where a specular peak was found. The sample is 'rocked' along the red circle in small steps. This is shown in figure 1.22(b). In this case, the length of the scattering vector is fixed, and its direction changes. Small rotations of grains with the same orientation can be detected this way. This is called mosaicity.
It was mentioned above that an infinite number of planes can be defined in a crystal lattice. This means that, besides the specular reflections, several asymmetric reflections can be found. This concept is shown in figure 1.23(a). The scattering vector shown in pink was found by a θ-2θ scan. α is the angle between the specular (pink) and the asymmetric (blue) reflections. For a hexagonal structure α can be computed using the following expression:
$$\cos\alpha = \frac{4}{3a^2}\left[h_1h_2 + k_1k_2 + \frac{1}{2}(h_1k_2 + h_2k_1) + \frac{3a^2}{4c^2}\,l_1l_2\right] d_{h_1k_1l_1}\,d_{h_2k_2l_2}. \qquad (1.20)$$
Using the angle α, the source and the detector can be moved on the asymmetric reflection.
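A minimal sketch evaluating equation 1.20 is given below; as an illustration it computes the offset between the (002) specular and the (103) asymmetric reflections discussed in the next paragraph, with the rhenium lattice parameters as assumed inputs.

```python
import numpy as np

def d_hkl(h, k, l, a=2.761, c=4.456):
    """Equation 1.11, here with the rhenium lattice parameters in Angstrom."""
    return 1.0 / np.sqrt(4.0 / 3.0 * (h**2 + h * k + k**2) / a**2 + l**2 / c**2)

def alpha(hkl1, hkl2, a=2.761, c=4.456):
    """Angle in degrees between two hexagonal plane normals, following equation 1.20."""
    h1, k1, l1 = hkl1
    h2, k2, l2 = hkl2
    num = h1 * h2 + k1 * k2 + 0.5 * (h1 * k2 + h2 * k1) + 3.0 * a**2 / (4.0 * c**2) * l1 * l2
    cos_a = 4.0 / (3.0 * a**2) * num * d_hkl(h1, k1, l1, a, c) * d_hkl(h2, k2, l2, a, c)
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

if __name__ == "__main__":
    # Offset between the specular (002) and the asymmetric (103) reflections
    print(f"alpha((002),(103)) ~ {alpha((0, 0, 2), (1, 0, 3)):.1f} deg")
```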
To verify the crystallinity of the sample in-plane, a Φ scan is conducted, which is shown in figure 1.23(b). The sample is rotated around the scattering vector of a specular reflection (parallel to the surface normal), along the blue (Φ) circle in figure 1.21, while the detector is set on an asymmetric reflection. If the sample is crystalline, the number of reflections seen in a full rotation reflects the symmetry of the rotation axis. For example, rhenium grows epitaxially on Al$_2$O$_3$ with orientation (002). The (002) axis has hexagonal symmetry, so when we set the source and the detector on the (103) asymmetric reflection and rotate the sample around the (002) direction, we expect to observe 6 bright signals coming from the planes equivalent to (103). As an illustration of the technique, the above example is shown in figure 1.24. Here, Φ was scanned in a 180° interval, and in addition a χ scan was performed. A 2D projection of the diffraction peaks can be observed. Indeed, within half a circle, 3 diffraction peaks appear. This shows that the rhenium film has a single in-plane orientation. This technique was used to determine the in-plane relationship between the substrate and the film, which is presented in section 2.1.3.

For a film of finite thickness, consisting of $N$ parallel atomic planes separated by a distance $d$, the scattered amplitude ($A_N$) is the sum of the amplitudes from each plane:
$$A_N(q) \propto \sum_{n=0}^{N-1} e^{-iqnd} \;\overset{k\,=\,e^{-iqd}}{=}\; 1 + k + k^2 + \dots + k^{N-1} = \frac{1 - k^N}{1 - k}. \qquad (1.21)$$
Equation 1.21 can be arranged in the following form:
$$A_N(q) \propto \frac{\sin\frac{qNd}{2}}{\sin\frac{qd}{2}}\; e^{-iq(N-1)d/2}, \qquad (1.22)$$
From this, the equation that describes the intensity is:
$$I_N(q) \propto \frac{\sin^2\frac{qNd}{2}}{\sin^2\frac{qd}{2}}. \qquad (1.23)$$
Equation 1.23 is called the interference function, and was used to fit the high-resolution X-ray data presented in chapter 2.
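The interference function is simple to evaluate numerically. The sketch below is illustrative: the plane spacing and the number of planes are assumed values, and it is the spacing of the side fringes, roughly 2π/(Nd), that encodes the film thickness in a fit.

```python
import numpy as np

def interference(q, N, d):
    """Equation 1.23: I_N(q) proportional to sin^2(qNd/2)/sin^2(qd/2), evaluated safely near a Bragg point."""
    num = np.sin(q * N * d / 2.0) ** 2
    den = np.sin(q * d / 2.0) ** 2
    # At a Bragg point both numerator and denominator vanish; the limiting value is N^2.
    return np.where(den > 1e-12, num / np.maximum(den, 1e-12), float(N) ** 2)

if __name__ == "__main__":
    d = 2.228                           # assumed plane spacing [Angstrom], e.g. a (002)-type spacing
    N = 50                              # assumed number of planes, i.e. a film about N*d thick
    q = np.linspace(2.6, 3.05, 2000)    # range around the Bragg peak at q = 2*pi/d
    I = interference(q, N, d)
    print(f"main peak at q ~ {q[np.argmax(I)]:.3f} 1/Angstrom (2*pi/d = {2*np.pi/d:.3f})")
```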
MBE growth
Adsorption and growth modes
During molecular beam epitaxy growth, a charge is heated to temperatures where it slowly evaporates. The deposition chamber thus contains the vapour phase of the material to be deposited and also a heated substrate in the solid phase. Crystal growth happens at the interface of the two phases [START_REF] Herman | Molecular beam epitaxy: Fundamentals and current status[END_REF]. Atoms of the vapour phase arrive on the surface of the solid phase. Growth will take place when the arriving atoms of the vapour phase attach to the solid phase at a higher rate than they reevaporate, which implies a departure from equilibrium conditions [START_REF] Herman | Molecular beam epitaxy: Fundamentals and current status[END_REF].
Atoms arriving at the substrate have a temperature distribution corresponding to the source ($T_{\mathrm{source}}$). Upon arrival they either reach thermal equilibrium with the substrate at the substrate temperature ($T_{\mathrm{substrate}}$), or reevaporate at a temperature $T_{\mathrm{reevap}}$. This process is quantitatively described by the accommodation coefficient [START_REF] Herman | Molecular beam epitaxy: Fundamentals and current status[END_REF]:
$$\alpha = \frac{T_{\mathrm{source}} - T_{\mathrm{reevap}}}{T_{\mathrm{source}} - T_{\mathrm{substrate}}}. \qquad (1.24)$$
Thus equation 1.24 expresses the extent to which the arriving atoms reach thermal equilibrium with the surface. α equals zero when $T_{\mathrm{reevap}} = T_{\mathrm{source}}$, which means that the atoms reevaporate immediately, before they have had time to lose energy and lower their temperature. The other limit is when $T_{\mathrm{reevap}} = T_{\mathrm{substrate}}$ and α = 1. In this case thermalisation is perfect: the arriving atoms cool to the temperature of the substrate. Atoms that have reached equilibrium do not necessarily remain on the surface permanently. It is still possible for them to reevaporate at the temperature of the substrate. The sticking or condensation coefficient gives the probability that an atom will adhere to the surface [START_REF] Herman | Molecular beam epitaxy: Fundamentals and current status[END_REF]. It is defined as the number of adhered atoms ($N_{\mathrm{adh}}$) over the total number of arriving atoms ($N_{\mathrm{tot}}$):
$$s = \frac{N_{\mathrm{adh}}}{N_{\mathrm{tot}}}. \qquad (1.25)$$
Whereas for the accommodation coefficient only the temperatures are considered, the sticking coefficient also includes the nature of the physical or chemical bond.
Adsorption of an atom can be chemical, when ionic or covalent bonds are formed between the adsorbate and the adsorbent, i.e. electrons are transferred. It can be physical, when there is no electron transfer and a van der Waals bond connects the two parts. Usually in MBE growth both of them are present, one after the other [START_REF] Herman | Molecular beam epitaxy: Fundamentals and current status[END_REF]. Once atoms are adsorbed on the surface three things can happen: they can be incorporated into the crystal where they are, they can diffuse to find an energetically more favourable location, or they can reevaporate. This is shown in figure 1.26(a). What an energetically favourable location for an adsorbate is depends on the surface tensions between the interfaces and on the amount of material that has already been deposited. The relation developed by Young, which explains the shapes of liquid droplets on solid surfaces, is valid for solid adsorbates too. It demands that the forces acting on the surfaces are in balance:
$$\gamma_{SV} - \gamma_{SA} - \gamma_{AV}\cos\theta = 0 \quad\rightarrow\quad \cos\theta = \frac{\gamma_{SV} - \gamma_{SA}}{\gamma_{AV}}, \qquad (1.26)$$
where $\gamma_{SV}$ and $\gamma_{SA}$ are the surface tensions between substrate and vapour, and substrate and adsorbate, respectively. $\gamma_{AV}\cos\theta$ is the projection of the surface tension between the adsorbate and vapour onto the plane of the substrate surface. θ is the angle between the surface of the substrate and the adsorbate. The geometry is shown in figure 1.26(b).
When $\gamma_{SV} < \gamma_{AV} + \gamma_{SA}$, θ has a finite value, and it is energetically favourable to keep the area of the substrate-vapour interface at a maximum, which forces the adsorbate to form islands. This is called the Volmer-Weber island growth mode, and is depicted in figure 1.27(b). This growth mode is often observed when a metal is grown on an insulator [START_REF] Herman | Molecular beam epitaxy: Fundamentals and current status[END_REF]. When the relation is reversed, $\gamma_{SV} > \gamma_{AV} + \gamma_{SA}$, the angle θ cannot be defined. It is now favourable to reduce the substrate-vapour interface by the formation of an adsorbate layer. This is called the Frank-van der Merwe layer-by-layer growth mode, shown in figure 1.27(a). This growth mode is observed in the case of adsorbed gases on metals, semiconductors grown on semiconductors, or in metal-metal systems [START_REF] Herman | Molecular beam epitaxy: Fundamentals and current status[END_REF].
There is a third growth mode, which is called the Stranski-Krastanov layer-plus-island growth mode, and is shown in figure 1.27(c). In this case the growth starts layer by layer. After a few monolayers have been deposited, the growth mode changes into island growth. The change in the growth mode is triggered by the change of the surface tension with increasing thickness. Surface tension is affected by many factors, including strain or surface reconstruction [START_REF] Herman | Molecular beam epitaxy: Fundamentals and current status[END_REF].
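Young's relation can be wrapped into a small helper that classifies the expected growth mode from the three surface tensions. This is only a schematic check of equation 1.26: it ignores the thickness-dependent effects responsible for the Stranski-Krastanov mode, and the numbers used are purely illustrative.

```python
def growth_mode(gamma_sv, gamma_av, gamma_sa):
    """Classify the expected growth mode from Young's relation (equation 1.26).
    Arguments: substrate-vapour, adsorbate-vapour and substrate-adsorbate surface tensions
    (same, arbitrary units)."""
    if gamma_sv > gamma_av + gamma_sa:
        return "Frank-van der Merwe (layer by layer): wetting, theta undefined"
    cos_theta = (gamma_sv - gamma_sa) / gamma_av
    return f"Volmer-Weber (islands): finite contact angle, cos(theta) = {cos_theta:.2f}"

if __name__ == "__main__":
    # Illustrative numbers only, e.g. a metal on an insulating substrate
    print(growth_mode(gamma_sv=1.0, gamma_av=2.0, gamma_sa=0.5))
```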
Dislocations and misfit
Dislocations
Dislocations are one-dimensional (line) defects in crystals. Depending on the orientation of the Burgers vector with respect to the dislocation line, edge and screw dislocations can be defined.

Figure 1.28: (a) A closed loop (MNOPQ) is drawn in the crystal that encloses the dislocation. (b) The Burgers circuit is copied into a perfect crystal, where it is not closed. The Burgers vector connects the starting point (M) and the final point (Q) of the Burgers circuit [START_REF] Hull | Introduction to dislocations[END_REF].
In the case of an edge dislocation, shown in figure 1.28(a), an extra half plane is present in the crystal. A screw dislocation, shown in figure 1.29(a), can be imagined by cutting the crystal in half, but not all the way through, and displacing one half of the crystal by one lattice spacing relative to the other half. If a screw dislocation reaches the surface of the crystal, a step appears [START_REF] Hull | Introduction to dislocations[END_REF].
Dislocations can be characterised by their Burgers vectors. A Burgers circuit is any atom-to-atom path which forms a closed loop. Burgers circuits are shown in figures 1.28(a) and 1.29(a): in figure 1.28(a) the circuit MNOPQ encloses an edge dislocation, in figure 1.29(a) a screw dislocation. If the same path is taken in a dislocation-free crystal, as shown by the arrows in figures 1.28(b) and 1.29(b), and the path does not close, the original circuit must contain at least one dislocation. The vector required to close the loop is called the Burgers vector. In figures 1.28(b) and 1.29(b), the vectors pointing from point Q to M are the Burgers vectors. It can be observed that the Burgers vector of a pure edge dislocation is perpendicular to the line of the dislocation. In the case of a pure screw dislocation, it is parallel [START_REF] Hull | Introduction to dislocations[END_REF].
Dislocations in real materials are neither pure edge nor pure screw type; they are a mixture of both.

Figure 1.29: (a) A closed loop (MNOPQ) is drawn in the crystal that encloses the dislocation. (b) The Burgers circuit is copied into a perfect crystal, where it is not closed. The Burgers vector connects the starting point (M) and the final point (Q) of the Burgers circuit [START_REF] Hull | Introduction to dislocations[END_REF].
Dislocations distort the crystal lattice; they induce elastic stress in the material. The stress around a dislocation scales with $1/r$, where $r$ is the distance from the dislocation. At $r = 0$ the stress would be infinite, which is not physical. This divergence is caused by the breakdown of elastic theory in the vicinity of the dislocation: the elastic theory neglects the atoms and treats the material as a continuum. To avoid the infinite stress, an arbitrary cutoff radius, the core radius ($r_0$), is defined, and calculations are stopped there. Reasonable values for the core radius are in the range of 1 nm [START_REF] Hull | Introduction to dislocations[END_REF].
Misfit
Heteroepitaxy refers to the growth of a layer onto a chemically different material. Due to the chemical difference they favour different interatomic distances, i.e. their bulk lattice parameters are different. This is shown in figure 1.30(a). The difference can be expressed by the misfit:
$$\epsilon_i = \frac{a_{si} - a_{li}}{a_{li}}, \qquad i = x, y, \qquad (1.27)$$
where $a_{si}$ and $a_{li}$ are the lattice parameters of the substrate and the layer, respectively, in the two directions (x, y) perpendicular to the growth direction. If the layer grows epitaxially, its structure matches perfectly with that of the substrate; thus, the layer experiences a homogeneous strain.
Strain energy scales with the volume, so it increases with thickness. Above a critical thickness it becomes energetically favourable to release part of the strain by the spontaneous formation of dislocations, shown in figure 1.30(b).
Critical thickness
The existence of the critical thickness, where misfit dislocations appear, was first predicted by Frank and van der Merwe. It was treated theoretically by several authors and confirmed experimentally.
The formula for the critical thickness can be derived by comparing the work that is required to form a dislocation ($W_d$) with the work that can be gained from the stress field when a dislocation is formed ($W_m$). The thickness where the work gained equals the work required defines the critical thickness.
The geometry of a misfit dislocation is shown in figure 1.31(a). The coordinate system is taken so that the y axis is perpendicular to the surface. The dislocation line lies along the z axis, at the interface between the film and the substrate. Along this axis the strain is uniform. The grey plane is the plane where the dislocation can glide, and it divides the crystal in two, denoted as (+) and (-) [START_REF] Freund | Thin film materials: stress, defect formation and surface evolution[END_REF].
To calculate the work that is required for the formation of a dislocation, the following is considered: a stress-free crystal is cut along the glide plane from the surface to the dislocation, and material within a radius $r_0$, centred on the eventual dislocation line, is removed. The path of the cut is shown in figure 1.31(b). The surfaces created by the cut are displaced by an offset defined by the Burgers vector ($\mathbf{b}$) of the dislocation. The energy per unit length that is stored in the material as the result of these operations is the following:
$$W_d(\eta) = \int_\Gamma \frac{1}{2}\, T_i u_i \, dl, \qquad (1.28)$$
where the index $i$ denotes the x, y, z coordinates of the corresponding vectors, $\Gamma$ is the boundary of the region created by the cuts in the material, $l$ is the arc length along this boundary, $u$ is the imposed displacement, and $T$ is the traction required to maintain it. The traction is related to the stress tensor ($\sigma_{ij}$) and the surface normal ($n_j$): $T_i = \sigma_{ij} n_j$. The evaluation of the integral is lengthy and beyond the scope of this work. More details can be found in reference [START_REF] Freund | Thin film materials: stress, defect formation and surface evolution[END_REF]. The following expression is a good approximation of the final result:
$$W_d(\eta) = \frac{\mu\left[b_x^2 + b_y^2 + (1 - \nu)b_z^2\right]}{4\pi(1 + \nu)} \ln\frac{2\eta}{r_0}, \qquad (1.29)$$
where $b_x$, $b_y$, and $b_z$ are the components of the Burgers vector, and $\eta$ is the distance of the dislocation from the surface of the crystal. $\mu$ is the elastic shear modulus; it is a property of the material defined as the ratio of shear stress to shear strain. Finally, $\nu$ is the Poisson ratio, also a property of the material, defined as the negative of the ratio of the transverse strain to the axial strain. The Poisson ratio is a measure of the Poisson effect: when a material is compressed in one direction, it tends to expand in the two other, perpendicular directions. For most materials the Poisson ratio is positive [START_REF] Freund | Thin film materials: stress, defect formation and surface evolution[END_REF].
Next, the work done by the background stress field in forming a dislocation, where the stress is caused by the misfit, is calculated. The same thought process, shown in figure 1.31(b), is followed, except that this time the crystal is strained by the misfit, and the stress field is considered to be unaffected by the formation of the dislocation. The work done by the field can be calculated with the same formula as before:
$$W_m(\eta) = \int_\Gamma \frac{1}{2}\, T_i u_i \, dl. \qquad (1.30)$$
The difference here is that the stress is the misfit stress, not the stress caused by the dislocation. In this case, the formula for the stress is simply $\sigma_m = \mu\,\epsilon_m$, where $\epsilon_m$ is the misfit strain from equation 1.27. The result of the integration is the following:
$$W_m(\eta) = -b_x \sigma_m \eta. \qquad (1.31)$$
Dislocations spontaneously appear in the film when $W_d(h_{cr}) + W_m(h_{cr}) = 0$. The full expression for the critical thickness can be found in reference [START_REF] Freund | Thin film materials: stress, defect formation and surface evolution[END_REF]. An approximation, which is valid when the critical thickness is larger than the magnitude of the Burgers vector, is the following:
$$\frac{b_x^2 + b_y^2 + (1 - \nu)b_z^2}{8\pi(1 + \nu)\,b_x\,h_{cr}} \ln\frac{2h_{cr}}{r_0} = \epsilon_m, \qquad h_{cr} \gg b. \qquad (1.32)$$
A positive critical thickness can only be defined when the misfit and $b_x$ have the same sign, which means that only dislocations that relieve the strain are allowed [START_REF] Freund | Thin film materials: stress, defect formation and surface evolution[END_REF]. Equation 1.32 was used to derive the critical thickness of rhenium on Al$_2$O$_3$; this is presented in section 2.1.3.
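Equation 1.32 is transcendental in $h_{cr}$, so it has to be solved numerically. The sketch below does this by root finding; the Burgers vector, Poisson ratio, core radius and misfit are illustrative placeholders, not the rhenium/Al$_2$O$_3$ values of section 2.1.3.

```python
import numpy as np
from scipy.optimize import brentq

def critical_thickness(misfit, b, nu=0.29, r0=None, beta_deg=90.0, phi_deg=0.0):
    """Solve equation 1.32 for h_cr (same length unit as b).
    The Burgers vector is parameterised as b_z = b*cos(beta) (screw part, along the line) and
    b_x, b_y = b*sin(beta)*cos(phi), b*sin(beta)*sin(phi) (edge part); r0 ~ b if not given.
    Only strain-relieving dislocations are considered, hence abs(misfit)."""
    if r0 is None:
        r0 = b
    beta, phi = np.radians(beta_deg), np.radians(phi_deg)
    bz = b * np.cos(beta)
    bx = b * np.sin(beta) * np.cos(phi)
    by = b * np.sin(beta) * np.sin(phi)

    def f(h):
        lhs = (bx**2 + by**2 + (1.0 - nu) * bz**2) / (8.0 * np.pi * (1.0 + nu) * bx * h) \
              * np.log(2.0 * h / r0)
        return lhs - abs(misfit)

    return brentq(f, 1.01 * r0, 1e6 * b)

if __name__ == "__main__":
    # Illustrative only: a 1% misfit and a pure edge dislocation with b = 0.27 nm
    print(f"h_cr ~ {critical_thickness(0.01, b=0.27):.1f} nm")
```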
Growth on a stepped surface and spiral growth
The best way to grow a good-quality film is to use a single crystal substrate which was cut along a low-energy crystal plane. It is impossible to cut a substrate from a bulk crystal precisely along a certain direction; there will always be a miscut angle, which is usually in the range of 0.1°. This small deviation from the low-energy configuration drives the atoms in the surface region to rearrange themselves and form steps of one atomic height, as shown in figure 1.32. The widths of the steps are equal, and are defined by the angle of the miscut [START_REF] Pimpinelli | Physics of crystal growth[END_REF].
The step edges provide efficient nucleation sites for the adatoms. They allow so-called step flow growth, shown schematically in figure 1.33. This was first described by Burton, Cabrera and Frank before MBE existed. In this growth mode, growth only happens at the step edges, and the terraces move or flow as more and more atoms are deposited [START_REF] Pimpinelli | Physics of crystal growth[END_REF]. Without the presence of steps, the adatoms diffuse on the surface until enough of them meet and a critical nucleus is formed. The critical nucleus contains the minimum number of adatoms that can be stable on the substrate surface. They can capture further atoms and initiate the growth of islands. The size of the critical nucleus depends on the temperature: at low temperature a single adatom can be stable, at higher temperature two, three or more atoms are needed. These islands grow according to the mode defined by the surface free energies, and when they are big enough they coalesce. Along the lines of coalescence defects can easily occur, such as grain boundaries or holes.
Step flow growth overcomes these issues as arbitrary lines of coalescence have no time to form [START_REF] Pimpinelli | Physics of crystal growth[END_REF].
The presence of steps is not enough for step flow growth to occur. If adatoms have no time to reach a step edge before forming a critical nucleus, islands grow on the terraces. The diffusion length defines the length an adatom can travel before meeting another adatom. Assuming a dimer is stable, the requirement for step flow growth to occur is a diffusion length larger than the step width [START_REF] Pimpinelli | Physics of crystal growth[END_REF].
When discussing growth on a stepped surface, an important effect has to be mentioned: the Schwoebel barrier. The Schwoebel barrier is the energy potential an adatom has to overcome when diffusing over a step.
The Schwoebel barrier is explained in figure 1.34. The adatom is shown in grey, and the potential felt by the adatom is shown schematically in this figure. The potential has a maximum at the step edge. When an adatom reaches the end of a terrace, to step down to the terrace below it has to pass through a position where it does not have many neighbours. This is what creates the potential barrier [START_REF] Pimpinelli | Physics of crystal growth[END_REF].
The Schwoebel barrier is felt by an adatom diffusing from a lower to a higher terrace as well. In this case the adatom is in a potential well when it reaches the step edge, as the coordination number is the highest there.
A special case of step flow growth was described by Burton, Cabrera and Frank in 1951 [START_REF] Burton | The growth of crystals and the equilibrium structure of their surfaces[END_REF]. It occurs when a screw dislocation reaches the surface and creates a step. This step will act as a nucleation site, and the adatoms arrange themselves along it. Because the step created by the screw dislocation slopes and disappears into the crystal, the adatoms perpetually create steps as they nucleate. This is shown in figure 1.35. Growth around such a step creates a spiral structure. The topology of the spirals depends on the sign of the screw dislocation that initiated the growth. A single spiral is created by a single dislocation, shown in figure 1.36(a). If there are two dislocations present with opposite signs, separated by a distance larger than $2\pi\rho_c$, where $\rho_c$ is the critical radius, they start to grow independently, and form a double spiral when they overlap. This is shown in figure 1.36(b). If the distance between them is smaller, no spiral growth occurs [START_REF] Burton | The growth of crystals and the equilibrium structure of their surfaces[END_REF]. Dislocation pairs of like signs separated by more than $2\pi\rho_c$ exhibit similar growth to opposite-sign pairs. They turn separately until they meet; following that, they grow as one spiral, as shown in figure 1.37 [START_REF] Burton | The growth of crystals and the equilibrium structure of their surfaces[END_REF].

Figure 1.35: A screw dislocation produces a slanting step on the surface, which acts as a nucleation site for the arriving atoms. Nucleating adatoms keep creating steps, and as a result the surface grows in a spiral manner [START_REF] Burton | The growth of crystals and the equilibrium structure of their surfaces[END_REF].
When the dislocations of like sign are closer, growth still occurs. These spirals have no intersection point except at the origin, which means that they will grow separately; the turns of both spirals reach the whole area. This statement is true for any number of dislocations within a $2\pi\rho_c$ distance. This case is illustrated in figure 1.38 [START_REF] Burton | The growth of crystals and the equilibrium structure of their surfaces[END_REF].
The shape of the spirals is determined by the dependence of the growth rate on the crystallographic orientation. When the growth rate is independent of orientation, the shape is circular; when it is dependent, the spirals are deformed into polygons [START_REF] Burton | The growth of crystals and the equilibrium structure of their surfaces[END_REF].
Thermal grooving
In polycrystalline thin films, grooves can spontaneously develop along grain boundaries at elevated temperatures. This process is called thermal grooving, or dewetting, and was theoretically investigated by W. W. Mullins [START_REF] Mullins | Theory of thermal grooving[END_REF]. He derived the time-dependent profile of a surface around a grain boundary during thermal grooving. Two cases were considered: one where the transport of matter is driven by evaporation-condensation, and one where it is driven by surface diffusion. For both cases partial differential equations are derived and solved, and the results are compared.
In the case of evaporation-condensation, it is shown that evaporation is proportional to the surface area. Therefore, the flux of atoms emitted by a curved surface is higher than that emitted by a flat surface. On the other hand, the mean free path of the metal atoms in the vapour phase is large, thus the density of metal vapour is equal all across the surface. The result is that fewer atoms condense on the curved surface than evaporate from it, the grain boundary walls shift away from their original position, and the groove deepens [START_REF] Mullins | Theory of thermal grooving[END_REF].
The partial differential equation is derived from the approximation of the Gibbs-Thomson formula, which gives the equilibrium vapour pressure (p = ∆p + p_0) of a surface segment with curvature K:
\[ \frac{\Delta p}{p_0} = \frac{K \gamma \Omega}{k_B T}, \qquad (1.33) \]
where p 0 is the equilibrium vapour pressure of a plane surface, γ is the surface free energy, Ω is the molecular volume, k B is the Boltzmann constant, and T is the temperature. This approximation is valid when p/p 0 is close to 1 [START_REF] Mullins | Theory of thermal grooving[END_REF].
The number of atoms emitted by the surface can be calculated from the equilibrium pressure. It is required that the densities of the vapour over the flat and the curved surfaces are equal. The net loss of atoms of the curved surface equals the difference between the number of atoms leaving the curved surface and the number of atoms leaving the flat surface. From this, the rate of advance of a profile element can be obtained. Using the definition of curvature, the differential equation for the time evolution of the surface profile is the following:
\[ \frac{\partial y}{\partial t} = A\, y'', \quad \text{where} \quad A = \frac{p_0 \gamma \Omega^2}{(2\pi M)^{1/2} (k_B T)^{3/2}}, \quad y'' = \frac{\partial^2 y}{\partial x^2}, \qquad (1.34) \]
and M is the molecular mass. This partial differential equation is solved with the boundary conditions y(x, 0) = 0 and y'(0, t) = tan β = m, where β is the angle of the surface, shown in figure 1.39 [START_REF] Mullins | Theory of thermal grooving[END_REF].
The solution is the following:
\[ y_{ec}(x, t) = -2m (At)^{1/2}\, \mathrm{ierfc}\!\left(\frac{x}{2 (At)^{1/2}}\right), \qquad (1.35) \]
where
\[ \mathrm{ierfc}(t) = \int_t^{\infty} \mathrm{erfc}(u)\, du = \frac{2}{\sqrt{\pi}} \int_t^{\infty}\!\!\int_u^{\infty} e^{-z^2}\, dz\, du. \]
The time evolution of the surface profile is plotted in figure 1.40. Parameter A was arbitrarily chosen as 5000, and β was 5 degrees. The units along the x and y axes are a measure of length; the numbers in the legend are a measure of time.

When surface diffusion drives the transfer of matter, the partial differential equation is derived from the dependence of the chemical potential (µ) on the surface curvature:
\[ \mu(K) = K \gamma \Omega. \qquad (1.36) \]
Nernst-Einstein relation gives the average velocity of the surface atoms in the presence of a chemical potential gradient:
\[ v = -\frac{D_s}{k_B T} \frac{\partial \mu}{\partial s} = -\frac{D_s \gamma \Omega}{k_B T} \frac{\partial K}{\partial s}, \qquad (1.37) \]
where D s is the surface diffusion coefficient, and s is the arc length along the profile [START_REF] Mullins | Theory of thermal grooving[END_REF]. Equation 1.37 multiplied by the number of atoms per unit area (ν) gives the surface current. Divergence of the surface current gives the increase of the number of atoms per surface area. From this, the partial differential equation for the surface profile is the following:
\[ \frac{\partial y}{\partial t} = -B \frac{\partial}{\partial x} \left[ \frac{1}{(1 + y'^2)^{1/2}} \frac{\partial}{\partial x} \left( \frac{y''}{(1 + y'^2)^{3/2}} \right) \right], \quad \text{where} \quad B = \frac{D_s \gamma \Omega^2 \nu}{k_B T}. \qquad (1.38) \]
An approximation of equation 1.38 was solved. The approximation is valid when m is small, and is referred to as the small slope approximation:

\[ \frac{\partial y}{\partial t} = -B\, y''''. \qquad (1.39) \]

The boundary conditions were y(x, 0) = 0, y'(0, t) = tan β = m, and y'''(0, t) = 0. The solution of equation 1.39 is the following function:
\[ y_{sd}(x, t) = m (Bt)^{1/4}\, Z\!\left(\frac{x}{(Bt)^{1/4}}\right), \quad \text{where} \quad Z(u) = \sum_{n=0}^{\infty} a_n u^n. \qquad (1.40) \]
The a_n coefficients are:

\[ a_0 = -\frac{1}{2^{1/2}\, \Gamma(5/4)} = -0.7801, \quad a_1 = 1, \quad a_2 = -\frac{1}{2^{3/2}\, \Gamma(3/4)} = -0.2885, \]
\[ a_3 = 0, \quad a_{n+4} = a_n \cdot \frac{n - 1}{4 (n+1)(n+2)(n+3)(n+4)}. \qquad (1.41) \]
The time evolution of the surface profile is plotted in figure 1.41. Parameter B was arbitrarily chosen as 10^10, and β was 5 degrees. The units along the x and y axes are a measure of length; the numbers in the legend are a measure of time.
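For reference, a minimal numerical sketch of the two profiles is given below (Python with numpy/scipy assumed). It evaluates equation 1.35 through the identity ierfc(u) = e^(-u^2)/√π − u·erfc(u) and sums the series of equations 1.40-1.41; the parameter values A = 5000, B = 10^10 and β = 5° simply follow the arbitrary choices quoted above, so the numbers are only illustrative.

```python
import numpy as np
from math import gamma
from scipy.special import erfc

def y_evap_cond(x, t, m, A):
    """Evaporation-condensation profile, equation 1.35."""
    u = x / (2.0 * np.sqrt(A * t))
    ierfc = np.exp(-u**2) / np.sqrt(np.pi) - u * erfc(u)  # integral of erfc from u to infinity
    return -2.0 * m * np.sqrt(A * t) * ierfc

def Z(u, n_terms=80):
    """Series Z(u) of equation 1.40 with the coefficients of equation 1.41."""
    a = np.zeros(n_terms)
    a[0] = -1.0 / (2.0**0.5 * gamma(1.25))   # -0.7801
    a[1] = 1.0
    a[2] = -1.0 / (2.0**1.5 * gamma(0.75))   # -0.2885
    a[3] = 0.0
    for n in range(n_terms - 4):
        a[n + 4] = a[n] * (n - 1) / (4.0 * (n + 1) * (n + 2) * (n + 3) * (n + 4))
    return sum(a[n] * np.asarray(u, dtype=float)**n for n in range(n_terms))

def y_surf_diff(x, t, m, B):
    """Surface-diffusion profile, equations 1.40-1.41 (small slope approximation)."""
    s = (B * t)**0.25
    return m * s * Z(x / s)

m = np.tan(np.radians(5.0))          # slope at the groove root
x = np.linspace(0.0, 400.0, 401)
print(y_evap_cond(x[:3], t=100.0, m=m, A=5000.0))
print(y_surf_diff(x[:3], t=100.0, m=m, B=1e10))
```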
For both cases the groove (x = 0) deepens as time passes, while the overall shape of the surface profile does not change. The most important difference is that the evaporation-condensation profile increases monotonically along the profile line, whereas the profile shaped by surface diffusion shows a local maximum close to the groove, after which it flattens [START_REF] Mullins | Theory of thermal grooving[END_REF]. These results are used to describe our films in section 2.3.
Growth and characterisation of rhenium thin films
In this chapter the growth of rhenium onto single crystal Al 2 O 3 is presented. First the preparation of the substrate, then the evaporation of rhenium is described. From the frequently observed deposition rates, the temperature of the evaporating rhenium is estimated. Next, the epitaxial relationship between the rhenium and the substrate is presented, and the critical thickness of the rhenium is calculated from the misfit strain. In the following section, the effect of temperature on the properties of the film is studied on samples with 3 different thicknesses, then it is shown that rhenium undergoes dewetting, when its thickness reaches approximately 10 nm. Lastly, a model to calculate the temperature of the growing film is presented.
Growth procedure
Preparation of the substrate
Single crystal α-Al 2 O 3 substrates were purchased from Neyco. They were all 0.5 mm thick, and measured either 15 mm x 15 mm or 13 mm x 13 mm in the plane.
Al 2 O 3 is a frequently used substrate material, as many preparation procedures as users can be found in literature [START_REF] Yoshimoto | Atomic scale formation of ultrasmooth surfaces on sapphire substrates for high quality thin film fabrication[END_REF][START_REF] Heffelfinger | Steps and the structure of the (0001 ) α-alumina surface[END_REF][START_REF] Heffelfinger | Mechanisms of surface faceting and coarsening[END_REF][START_REF] Van | Evolution of steps on vicinal (0001 ) surfaces of α-alumina[END_REF][START_REF] Ribič | Behavior of the (0001 ) surface of sapphire upon high-temperature annealing[END_REF][START_REF] Cuccureddu | Surface morphology of c-plane sapphire (α-alumina) produced by high temperature anneal[END_REF][START_REF] Wang | Effect of the environment on α -Al 2 O 3 (0001 ) surface structures[END_REF][START_REF] Scheu | Manipulating bonding at a Cu/(0001 )α -Al 2 O 3 interface by different substrate cleaning processes[END_REF][START_REF] Eng | Structure of the hydrated α -Al 2 O 3 (0001 ) surface[END_REF][START_REF] Oh | Control of bonding and epitaxy at copper/sapphire interface[END_REF][START_REF] Neretina | The role of substrate surface termination in the deposition of (111 ) CdTe on (0001 ) sapphire[END_REF][START_REF] Walters | The surface structure of α -Al 2 O 3 determined by low-energy electron diffraction: aluminum termination and evidence for anomolously large thermal vibrations[END_REF][START_REF] Toofan | The termination of the α -Al 2 O 3 (0001) surface: a LEED crystallography determination[END_REF]. Based on these examples, we have also developed our own predeposition treatment. AFM height image taken of a substrate as received is shown in figure 2.1(a). The surface is covered with particles of various sizes. From analysing the profile of the surface, the height of the larger particles was found to be approximately 10 nm. Number density measured on a 6 µm x 6 µm AFM image was 130 per µm 2 . These small islands can influence the growth of rhenium by acting as a nucleation site. They have to be removed.
Substrates were first washed in an RBS detergent solution purchased from Chemical Products then rinsed with deionised water. Afterwards, they were cleaned with acetone in ultrasonic bath. Finally, they were put in ethanol and dried in nitrogen flow. An AFM height image taken after the cleaning procedure is shown in figure 2.1(b). A few larger particles are still visible but their density is reduced ten fold, to only 10 per µm 2 .
After the cleaning, substrates were placed in a clean quartz tube to be annealed in a muffle furnace in air atmosphere. Quartz at this temperature can get soft, and deform due to creeping. For this reason, we designed a special tube: the inner quartz tube is supported by an outer alumina tube. A drawing of our design is shown in figure 2.2.
The temperature was raised linearly to 1100 • C from room temperature in 7 hours. Substrates were annealed at this temperature for an hour, then the furnace was switched off and let to cool. It took 4-5 hours to reach room temperature.
An AFM height image taken after the heat treatment is shown in figure 2.3. As a result of the annealing, steps and flat terraces developed on the surface. For the sample shown in figure 2.5(a), the annealing time was reduced to 30 minutes. Monoatomic steps began to form on the substrate, but small islands can be observed along the edges: the time was not long enough to complete the development of the steps. In the case of the example shown in figure 2.5(b), the temperature of the annealing was reduced to 1000 °C. The step edges appear sharp and well defined, but they are decorated with kinks, and large islands can be observed in between. The temperature does not appear to be high enough to straighten the steps, and the atoms do not have enough energy to reach the edges, so they form islands. If the islands grow large enough they coalesce with the step edge and form a structure similar to a peninsula. On nominally flat (001) Al2O3 surfaces, only steps of single atomic height develop when the substrates are annealed under 1200 °C; the coalescence of steps occurs at higher temperatures [START_REF] Yoshimoto | Atomic scale formation of ultrasmooth surfaces on sapphire substrates for high quality thin film fabrication[END_REF][START_REF] Heffelfinger | Steps and the structure of the (0001 ) α-alumina surface[END_REF][START_REF] Ribič | Behavior of the (0001 ) surface of sapphire upon high-temperature annealing[END_REF][START_REF] Cuccureddu | Surface morphology of c-plane sapphire (α-alumina) produced by high temperature anneal[END_REF]. Our setup was limited to 1100 °C.

As was described before, crystalline Al2O3 can have different surface terminations. A theoretical study [START_REF] Wang | Effect of the environment on α -Al 2 O 3 (0001 ) surface structures[END_REF] states that Al is the most stable termination even in an O2 atmosphere at high pressures, and that oxygen can only be stable if hydrogen is present at the surface, in which case the termination is hydroxide. We followed the same substrate preparation procedure for all our substrates, and the resulting properties of the substrates and of the deposited thin films were reproducible. We do not know what the terminations of our substrates are, but we expect them to be consistent.
Evaporation of rhenium
Rhenium has a low vapour pressure, and is therefore difficult to evaporate. As such, it needs to be heated to high temperatures (∼3000 • C) to achieve a reasonable deposition rate.
The first investigation of liquid evaporation of mercury into vacuum was conducted by Hertz in 1882 [63]. He concluded that the evaporation rate of a liquid cannot exceed a maximum value at a certain temperature, and this theoretical maximum is obtained only when as many atoms or molecules leave as would be required to exert the equilibrium vapour pressure (P v ) on the surface, and none of them return. This means that the number of atoms/molecules (dN source ) evaporating from a surface area A source during time dt has to be equal to the impingement rate on the surface corresponding to the pressure inside the chamber (P ):
\[ \frac{dN_{source}}{dt} = A_{source} (P_v - P) \sqrt{\frac{N_A}{2 \pi M k_B T}}, \qquad (2.1) \]
where N A is the Avogadro number, M is the molar weight, k B is the Boltzmann constant and T is the temperature measured in K [START_REF] Herman | Molecular beam epitaxy: Fundamentals and current status[END_REF].
The observed evaporation rates are generally below the theoretical maximum. Based on this, Knudsen argued that a certain fraction of the molecules contribute to the vapour pressure but not to the evaporation rate, and that the theoretical evaporation rate should be multiplied by the thus defined evaporation coefficient, α. This form of equation 2.1 is known as the Hertz-Knudsen equation. α is measured experimentally, and is here considered to be 1. Later, Langmuir showed that the Hertz-Knudsen equation applies to evaporation from the surface of a solid as well [START_REF] Herman | Molecular beam epitaxy: Fundamentals and current status[END_REF].
The vapour pressure can be calculated using the empirical equations 2.2 and 2.3. The parameters and their ranges of validity for rhenium are listed below.
Between 298 K -2500 K [START_REF] Lide | CRC Handbook of Chemistry and Physics[END_REF]:
\[ \log P_v(\mathrm{Pa}) = 5.006 + A + B T^{-1} + C \log T, \quad \text{where} \quad A = 11.543,\; B = -40726,\; C = -1.1629. \qquad (2.2) \]
Between 2480 K -5915 K [START_REF] Carl | Handbook of Vapor Pressure, volume 4: Inorganic compounds and elements[END_REF]:
\[ P_v(\mathrm{Pa}) = 10^{\,A + B T^{-1} + C \log T + D T + E T^2} \cdot 133.322, \quad \text{where} \quad A = -31.5392,\; B = -3.2254\mathrm{e}4,\; C = 12.215,\; D = -1.2695\mathrm{e}{-3},\; E = 3.7363\mathrm{e}{-8}. \qquad (2.3) \]
The vapour pressure of rhenium was measured by Plante et al. in a narrow temperature range that overlaps with the range of validity of equations 2.2 and 2.3 [START_REF] Plante | Vapor pressure and heat of sublimation of rhenium[END_REF]. This experimental data was used to check the values given by the equations. The results of equations 2.2 and 2.3, and the values found in reference [START_REF] Plante | Vapor pressure and heat of sublimation of rhenium[END_REF] are shown in figure 2.6. The calculated data from both equations and the experimentally measured data match well. The deposition rate is given by the evaporated particles that reach and stick to the substrate. For the sake of simplicity, the assumption is that the sticking coefficient is 1. Every particle reaching the substrate sticks to it.
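A minimal sketch of how equations 2.2 and 2.3 can be evaluated numerically is given below (Python assumed); equation 2.3 is implemented with the pressure in Pa obtained as 133.322 times the value computed in torr, consistent with the reconstruction above.

```python
import numpy as np

def p_v_low(T):
    """Equation 2.2, valid between 298 K and 2500 K; returns P_v in Pa."""
    A, B, C = 11.543, -40726.0, -1.1629
    return 10.0**(5.006 + A + B / T + C * np.log10(T))

def p_v_high(T):
    """Equation 2.3, valid between 2480 K and 5915 K; returns P_v in Pa."""
    A, B, C, D, E = -31.5392, -3.2254e4, 12.215, -1.2695e-3, 3.7363e-8
    return 133.322 * 10.0**(A + B / T + C * np.log10(T) + D * T + E * T**2)

for T in (2000.0, 2490.0, 3000.0, 3200.0):
    p = p_v_low(T) if T <= 2500.0 else p_v_high(T)
    print(f"T = {T:.0f} K,  P_v = {p:.2e} Pa")
```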
The number of particles that reach the surface of the substrate depends on the geometry of the setup. To calculate the arrival rate, the following equation can be used for the Knudsen cell, which is an evaporation cell, where the source material is enclosed, and its vapour can escape through a small hole:
\[ \frac{1}{A_{sub}} \frac{dN_{sub}}{dt} = \frac{dN_{source}}{dt} \frac{1}{\pi r^2} \cos\theta\, \cos(\theta + \phi), \qquad (2.4) \]
where r is the distance between the source and the substrate, and the angles are shown in figure 2.7 [START_REF] Henini | Molecular beam epitaxy: from research to mass production[END_REF].
In the MBE setup used, an open crucible was employed rather than a Knudsen cell, but equation 2.4 can be used to provide an estimate. The distance between the substrate and the source is about 40 cm. The angles are small, and are therefore considered to be 0 for this calculation.
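As an illustration of how equations 2.1 and 2.4 combine into a deposition-rate estimate, a small sketch is given below. The source temperature, the emitting area of the molten spot and the background pressure are assumptions made purely for illustration, not measured values; the vapour pressure comes from equation 2.3.

```python
import numpy as np

K_B = 1.380649e-23     # J/K
N_A = 6.02214076e23    # 1/mol
M_RE = 0.18621         # kg/mol, molar mass of rhenium
OMEGA_RE = M_RE / (21020.0 * N_A)   # m^3, atomic volume from an assumed bulk density of 21020 kg/m^3

def p_v_high(T):
    """Equation 2.3 (2480 K - 5915 K), P_v in Pa."""
    A, B, C, D, E = -31.5392, -3.2254e4, 12.215, -1.2695e-3, 3.7363e-8
    return 133.322 * 10.0**(A + B / T + C * np.log10(T) + D * T + E * T**2)

T_source = 3400.0      # K, assumed temperature of the molten rhenium spot
A_source = 5e-5        # m^2, assumed emitting area
P_chamber = 1e-7       # Pa, assumed background pressure (negligible here)
r = 0.40               # m, source-substrate distance

# Equation 2.1: atoms leaving the source per second (cos factors of eq. 2.4 taken as 1)
flux_source = A_source * (p_v_high(T_source) - P_chamber) \
              * np.sqrt(N_A / (2.0 * np.pi * M_RE * K_B * T_source))
arrival_per_area = flux_source / (np.pi * r**2)    # equation 2.4, atoms / (m^2 s)
rate = arrival_per_area * OMEGA_RE                 # m/s of deposited film
print(f"deposition rate ~ {rate * 1e10:.2f} Angstrom/s")
```

With these assumed numbers the estimate falls in the 0.1 Å/s - 0.2 Å/s range mentioned later in this chapter, which is the kind of consistency check used to estimate the source temperature.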
The deposition rate was calculated using equation 2.4. During deposition, several streaks are visible around the source, caused by white-hot rhenium particles leaving the charge at high speed.
Several of these droplets were found when the chamber was opened. They are all perfectly round and have a smooth surface. A scanning electron microscope (SEM) image taken of one of them is shown in figure 2.9b. The analysis of the characteristic X-rays induced by the electron beam in the SEM confirmed that the metal ball is indeed rhenium. The shape and the surface of the ball suggest that part of the rhenium is molten during deposition, and that the ejected droplets most likely come from the liquid phase.
Rhenium on Al 2 O 3
The aim of this section is to study the epitaxial relationship between rhenium and Al 2 O 3 , and to determine the thickness above which dislocations are expected to appear in the film.
The atomic arrangement in the first layers of the growing film mimics the lattice of the substrate, and thus the film is under a strain, induced by the substrate. This strain is called misfit, and its consequences in the relationship between Re and Al 2 O 3 are discussed below.
In figures 2.10(a) and 2.10(b) the lattices of rhenium and Al2O3 are depicted with the epitaxial orientations. The rhenium atoms are positioned on top of the Al atoms, and are thus neighboured by 3 oxygen atoms at each surface site. The number density of rhenium atoms in its lattice is higher than the aluminium density in Al2O3, where only 2/3 of the octahedral positions are filled. This is why rhenium can also be observed to sit in the empty hexagonal spaces; these sites have the same oxygen coordination as the Re atoms on top of the Al2O3 octahedra. We note that 3 out of the 6 oxygens that are visible around the Re atoms belong to the lower plane of oxygens, and do not coordinate the rhenium atoms.
In figure 2.10(b) the two lattices are shown viewed along the a axis of the substrate. The spacing between two rhenium planes is 0.22 nm.
The rhenium and the Al2O3 lattices match very well. There is an epitaxial relationship between the lattices, which in Bravais-Miller indices is (0001)Al2O3//(0001)Re and <2110>Al2O3//<0110>Re. The two lattices are rotated by 30° in-plane with respect to each other, which can be observed in figure 2.10(a). The angle of rotation was confirmed using XRD. The Φ scans measured on a film and its substrate are shown in figure 2.11. The (102) reflection of rhenium and the (104) reflection of the substrate were located, and the sample was rotated around the specular (001) direction. The (001) axis of the substrate has trigonal symmetry, and indeed the equivalent reflections appear 120° apart. For rhenium this axis has sixfold symmetry, and its equivalent reflections are 60° apart. The angular separation between the reflections of the two materials is consistent with the 30° rotation.
The 30° rotation is taken into account when calculating the misfit. Furthermore, the a lattice parameter of the Al2O3 lattice is divided by 2. The misfit strain at room temperature is the following:
\[ \epsilon_a(\mathrm{RT}) = \frac{a_s/2 - a_{Re} \cos 30°}{a_{Re} \cos 30°} = \frac{a_s - \sqrt{3}\, a_{Re}}{\sqrt{3}\, a_{Re}} = -0.0043. \qquad (2.5) \]
This value corresponds to a very small misfit strain. As a comparison, ZnSe grown on GaAs has a lattice mismatch of 0.27%, BeTe on GaAs has -0.48%, and both are said to be nearly lattice-matched. The lattice mismatch and the elastic properties of the rhenium defines how thick it can grow in registry with the substrate, without defects [START_REF] Henini | Molecular beam epitaxy: from research to mass production[END_REF].
The negative sign means that the rhenium lattice in bulk has a larger a lattice parameter than the substrate, and therefore when grown pseudomorphically on Al 2 O 3 , is compressed in-plane. As a result of the compression along both in-plane directions, the lattice extends out-of-plane. The strains are connected via the Poisson's ratio ν, which is 0.2894 for rhenium [START_REF] Murli | Ultrasonic equation of state of rhenium[END_REF]:
\[ \epsilon_c(\mathrm{RT}) = -\frac{2\nu}{1 - \nu}\, \epsilon_a = 0.0035. \qquad (2.6) \]
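A short sketch of the calculation in equations 2.5 and 2.6 follows; the lattice parameters used are nominal literature values chosen here for illustration, so the printed numbers differ slightly from the -0.43 % and 0.35 % quoted above.

```python
import numpy as np

a_sapphire = 4.759    # Angstrom, a lattice parameter of alpha-Al2O3 (assumed nominal value)
a_re = 2.761          # Angstrom, a lattice parameter of Re (assumed nominal value)
nu_re = 0.2894        # Poisson ratio of rhenium, value used in the text

eps_a = (a_sapphire - np.sqrt(3.0) * a_re) / (np.sqrt(3.0) * a_re)   # equation 2.5
eps_c = -2.0 * nu_re / (1.0 - nu_re) * eps_a                          # equation 2.6
print(f"in-plane misfit   eps_a = {eps_a:+.4f}")
print(f"out-of-plane      eps_c = {eps_c:+.4f}")
```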
The sample is heated during deposition, which means both lattices are expanded. The value of misfit is therefore different at the deposition temperature than it is at room temperature. The high temperature lattice parameters can be obtained from the thermal expansions of the two materials along the a axis, which are the following:
Rhenium from 293 K to 1900 K [START_REF] Touloukian | Thermal Expansion: Metallic Elements and Alloys[END_REF]:

\[ \frac{\Delta L}{L}(\%) = -0.195 + 6.513\mathrm{e}{-4} \cdot T + 5.412\mathrm{e}{-8} \cdot T^2 - 1.652\mathrm{e}{-11} \cdot T^3. \qquad (2.7) \]

The misfit was calculated for a wide temperature range, and it is shown in figure 2.12 by the green curve. Figure 2.12 also shows the thermal expansion coefficients of the two materials along the crystal axis a. The sample temperature is between 700 °C and 1000 °C during deposition. The misfit changes by one tenth of a percent between room temperature and 1000 °C.
The strain caused by the misfit can only be accommodated by the growing film up to the critical thickness. When the critical thickness is reached, dislocations spontaneously appear to relieve the strain. The derivation of the critical thickness was given in section 1.5.2. The obtained formula is an approximation, valid as long as the critical thickness is larger than the magnitude of the Burgers vector. It is the following:
\[ \frac{b_x^2 + b_y^2 + (1 - \nu)\, b_z^2}{8 \pi (1 + \nu)\, b_x\, h_{cr}} \ln\frac{2 h_{cr}}{r_0} = \epsilon_m, \qquad h_{cr} > b, \qquad (2.9) \]
where b_x, b_y, and b_z are the components of the Burgers vector, ν is the Poisson ratio, r_0 is the dislocation core radius, h_cr is the critical thickness, and ε_m is the misfit strain.
In equation 2.9 the Burgers vectors of the dislocations are given by their Cartesian coordinates. How to obtain the Cartesian coordinates from Miller or Bravais-Miller indices of directions is explained in appendix C.
Six different Burgers vectors can exist in a hexagonal close-packed system. Four of them have the correct direction to relieve in-plane strain: 1/3 < 1120 >, 1/3 < 1123 >, 1/3 < 1100 >, and 1/6 < 2203 > [START_REF] Hull | Introduction to dislocations[END_REF]. Each of these four Burgers vectors include six equivalent directions, which can be obtained by the permutation of the first three indices. Of these, the ones where the second index is negative, can relieve compressive strain.
The misfit strain as a function of the critical thickness calculated using equation 2.9 is shown in figure 2.13 for the four Burgers vectors. The core radius of the dislocation was chosen to be half the magnitude of each Burgers vector [START_REF] Freund | Thin film materials: stress, defect formation and surface evolution[END_REF]. The horizontal line shows the misfit strain calculated for the temperature during deposition. According to the calculation, rhenium grows pseudomorphically up to a thickness of approximately 10 nm on the Al2O3 substrate. The critical thicknesses obtained with Burgers vectors 1/3[1100] and 1/3[1210] are very close to each other, 10 nm and 13 nm, respectively. These dislocations are expected to be present above the critical thickness.
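Equation 2.9 is transcendental in h_cr, so a numerical root finder is the natural tool; the sketch below uses scipy for this. The Burgers-vector components entered here are placeholders (a vector of length a_Re/√3 taken fully in-plane along x), not the properly converted Cartesian components of appendix C, so the printed thickness is only illustrative and does not reproduce the 10 nm - 13 nm values of figure 2.13.

```python
import numpy as np
from scipy.optimize import brentq

def residual(h, b_vec, nu, r0, eps_m):
    """Left-hand side of equation 2.9 minus the misfit strain; the root gives h_cr."""
    bx, by, bz = b_vec
    lhs = (bx**2 + by**2 + (1.0 - nu) * bz**2) / (8.0 * np.pi * (1.0 + nu) * bx * h) \
          * np.log(2.0 * h / r0)
    return lhs - eps_m

b_vec = (0.159, 0.0, 0.0)            # nm, placeholder Burgers vector (|b| = a_Re/sqrt(3), in-plane)
nu = 0.2894                           # Poisson ratio of rhenium
r0 = 0.5 * np.linalg.norm(b_vec)      # core radius taken as b/2, as in the text
eps_m = 0.0045                        # assumed magnitude of the misfit strain during growth

h_cr = brentq(residual, 0.5, 1000.0, args=(b_vec, nu, r0, eps_m))
print(f"critical thickness ~ {h_cr:.1f} nm for this placeholder Burgers vector")
```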
We observe spirals on films that are thicker than 20 nm. Screw dislocations can cause spirals to grow by creating a step on the surface. To create a step, the Burgers vector needs to have a nonzero final (Bravais-)Miller index. The Burgers vector 1/6[0223] gives a critical thickness slightly below 25 nm. These dislocations can be present in the relaxed film and can be responsible for spiral growth.
The fourth Burgers vector, 1/3[1213], has too high an energy cost to be expected in the films.
Thin film growth
The instruments mentioned below are described in detail in section 1.3.1.
During the deposition of rhenium the substrate has to be heated to provide kinetic energy for the adatoms. There are two ways to heat the substrate, either by infrared radiation or by electron bombardment. Al2O3 is an insulator: if we applied electron bombardment, the sample would become charged because the excess electrons could not be removed. The substrate cannot be heated with infrared radiation either, because it is transparent in that wavelength range. To overcome these issues, 300 nm of tungsten is deposited onto the back side of the substrates.
Afterwards, the substrate is mounted on a sample holder with a hole in the middle. This way the thermocouple is in contact with the back side, and the sample is heated directly, not through the sample holder. After the substrate is mounted, it is transferred to the deposition chamber, where it is degassed for a few hours at approximately 350 °C. Figure 2.14 was taken through a view port of the MBE setup: the sample holder with the sample is mounted on the manipulator inside the chamber, in position for deposition.
Before starting the deposition, the temperature of the substrate is set using the furnace in the manipulator head.
A 10 kV electron gun is used for the evaporation of rhenium. To achieve a deposition rate between 0.1 Å/s and 0.2 Å/s, the electron emission current of the gun is slowly increased to about 150 mA - 200 mA. When the deposition rate is stable, the shutter covering the substrate is opened, and the deposition begins.
The deposition rate is monitored, and kept constant by manually adjusting the emission current of the electron gun. The time required to deposit the desired thickness is calculated and measured with a stopwatch. When the thickness is reached, the shutter is closed, and the electron gun is turned off. The temperature of the sample is slowly decreased, the heating electronics are turned off, and the sample is left to cool to room temperature before removing it from the vacuum.
Influence of the growth temperature
The influence of the substrate temperature on the surface topography and crystallographic properties of the thin film was investigated. 7 samples were deposited: two with 25 nm, three with 50 nm, and two with 100 nm thickness. Temperatures during the deposition of all the samples from each thickness group were different. Thicknesses and temperatures are summarized in table 2.1.
At the lowest temperature the current running through the heating filament behind the sample was set to 7.5 A. The thermocouple that touched the back side of the sample measured 800 °C. Based on the model described in [START_REF] Delsol | Elaboration et caractérisation de couches minces supraconductrices épitaxiées de rhénium sur saphir[END_REF], the actual surface temperature is estimated to be 700 °C. At the second temperature the current was set to 8.5 A, the thermocouple measured 900 °C, and according to the model the surface temperature was 770 °C. For the highest deposition temperature, which was only used for sample E, the sample was heated by electron bombardment: 600 V was applied between the heating filament and the sample, and the emission current was set to 50 mA. The temperature of the sample was approximately 1000 °C.

Table 2.1: Thicknesses and deposition temperatures of the samples discussed in this section.

Sample            A     B     C     D     E      F     G
Thickness (nm)    25    25    50    50    50     100   100
Temperature (°C)  800   900   800   900   1000   800   900
Surfaces of these samples were investigated with AFM, and their crystallography was studied with X-ray diffraction. In this section, the results of these measurements are organized and discussed according to the thickness of the films.
25 nm thick films

Sample A and sample G were used to fabricate microwave resonators by Dumur et al. From the resonance frequency at low temperature, the London penetration depth was determined [START_REF] Dumur | Epitaxial rhenium microwave resonators[END_REF]. The two surfaces are similar: both are covered with grains that have two distinct geometries, small ones with an approximately spherical shape, and larger ones with an elongated, polygonal shape. The diameters of these grains are also very similar. Measuring 10 of both types and averaging, it was found that on sample A the diameter of the larger grains is (96 ± 28) nm and that of the smaller ones is (45 ± 12) nm. On sample B, the larger ones have a diameter of (73 ± 13) nm, and the smaller ones (26 ± 4) nm. Grain sizes are more uniform, and their standard deviations smaller, on the sample which was deposited at the higher temperature. This is visible in figure 2.15(b) in the case of the small grains, which are almost identical and appear to form a continuous, smooth layer.
Both samples are relatively flat, and aside from a few holes there are no large deviations in height. However, the surface of sample A is jagged and rougher than that of sample B. The average roughness (R_a) can be measured by the arithmetical mean deviation. The higher deposition temperature results in a smoother surface with more uniform grain size. However, on the sample deposited at 900 °C there are still two distinct types of grains, which most likely have different orientations.
XRD θ-2θ measurements

θ-2θ scans of both samples are shown in figure 2.16. Both curves were normalised with respect to the (002) peak of rhenium, so that the differences can be read more easily. Both graphs are dominated by the (001) epitaxial orientation, which is signalled by the large, higher order (002) and (004) peaks. There are 5 much lower intensity rhenium peaks, corresponding to 3 different orientations. The (110) peak is only present in the lower temperature sample. The (101) orientation is featured in both samples; its peaks, (101) and (202), have a slightly lower intensity on sample B. Finally, the intensities of (100) and (200) are higher on sample B. It is confirmed that rhenium grown on the (001) plane of Al2O3 prefers to grow along the (001) direction. Two orientations remain present in the sample grown at 900 °C: (100) and (101). This could either mean that both these orientations are stable as well, or it could be an anomaly: it could have been caused by contamination on the substrate, which decreased the mean free path of rhenium adatoms and caused nucleation and growth along these directions. Conclusions can only be drawn after looking at more samples.
The shape of the (002) diffraction peaks appear similar on both samples. To verify this, they were fitted, and the fitting procedure is described below.
A close up of the (002) peak of rhenium measured on sample A is shown in figure 2.17.
The sample was probed with copper Kα radiation. The incoming X-ray beam is composed of two parts Kα1 radiation, with wavelength 1.540562 Å, and one part Kα2, with wavelength 1.544398 Å. The substrate peaks are therefore doubled, which can be observed on the (00 12) reflection in figure 2.16. A slight asymmetry can be observed in the case of the rhenium peak as well. For this reason the sum of 2 functions was used to fit the data shown in figure 2.17: one corresponding to the copper Kα1 radiation, the other to the Kα2. As mentioned, the intensity ratio of the two components of the incoming beam is Kα1:Kα2 = 2:1. Thus, the integrated intensity ratio of the respective diffraction peaks (I_Kα1/I_Kα2) has to be 2:1. The integrated intensity of a peak depends on its amplitude and its full width at half maximum (FWHM), both of which depend on the resolution of the diffractometer. The resolution of the diffractometer depends on the 2θ angle, but the two peaks, corresponding to Kα1 and Kα2, are so close to each other that the resolution can be considered constant in that range. Their full widths at half maximum are therefore expected to be equal, and the ratio of their amplitudes is expected to be 2:1. In the fitting procedure the ratio of the amplitudes was fixed at the expected value, the full widths at half maximum were set to be equal, and the position of the Kα2 peak was calculated from the Kα1 peak using Bragg's law, equation 1.19.
The angular separation between the two peaks increases with the Bragg angle. In the interval shown in figure 2.17 its value is around 0.1°. A few degrees of misalignment in the experimental setup can cause variations in the second digit after the decimal point of this separation, so a small correction parameter was allowed to adjust the position of the second peak in the fitting procedure.
Two model functions are used for the fit of diffraction peaks: the Gauss function (G(x)) and the Cauchy or, as it is also known, Lorentz function (L(x)). The copper Kα emission lines of the X-ray tube have a Lorentzian shape, and broadening due to small crystal size is also associated with a Lorentzian shape. Broadening due to microstrains, however, is described by a Gauss function, because microstrain fields often exhibit a normal distribution of lattice spacing values around an average d_0 value. Therefore, diffraction peaks are usually well described by a mixture of these two functions [START_REF] Birkholz | Thin film analysis by X-ray Scattering[END_REF].
The (002) rhenium peaks were fitted with the sum of two Voigt functions. The Voigt function is the convolution of a Gauss (G) and a Lorentz (L) function:
\[ V(x) = \int_{-\infty}^{\infty} G(x')\, L(x - x')\, dx', \qquad (2.10) \]
where
\[ G(x) = \frac{1}{\sigma \sqrt{2\pi}}\, e^{-\frac{(x - x_0)^2}{2\sigma^2}}, \quad \mathrm{FWHM}_G = 2\sqrt{2 \ln 2}\, \sigma, \]
\[ L(x) = \frac{1}{\pi \gamma}\, \frac{1}{1 + \frac{(x - x_0)^2}{\gamma^2}}, \quad \mathrm{FWHM}_L = 2\gamma. \]
The fit of the (002) rhenium peak of sample A is shown in figure 2.17. The individual Voigt curves corresponding to the two wavelengths are also shown. The results of the fits with their standard deviations are summarised in table 2.3.
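A sketch of the constrained two-wavelength Voigt fit described above is given below, using scipy's voigt_profile and curve_fit. The Kα2 component is tied to half the Kα1 amplitude and to the Bragg-law position, as in the procedure above (the small extra position-correction parameter is omitted); the data generated here are synthetic placeholders, only to make the sketch runnable.

```python
import numpy as np
from scipy.special import voigt_profile
from scipy.optimize import curve_fit

LAMBDA_1, LAMBDA_2 = 1.540562, 1.544398   # Angstrom, Cu K-alpha1 and K-alpha2

def two_theta_kalpha2(two_theta_1):
    """K-alpha2 peak position derived from the K-alpha1 position via Bragg's law."""
    theta_1 = np.radians(two_theta_1 / 2.0)
    return 2.0 * np.degrees(np.arcsin(np.sin(theta_1) * LAMBDA_2 / LAMBDA_1))

def model(two_theta, amp, pos1, sigma, gamma, offset):
    """Sum of two Voigt profiles with a fixed 2:1 amplitude ratio and shared widths."""
    pos2 = two_theta_kalpha2(pos1)
    v1 = voigt_profile(two_theta - pos1, sigma, gamma)
    v2 = voigt_profile(two_theta - pos2, sigma, gamma)
    return amp * (v1 + 0.5 * v2) + offset

# Synthetic placeholder data around the Re (002) position, only to make the sketch runnable.
two_theta = np.linspace(39.5, 41.5, 400)
rng = np.random.default_rng(0)
intensity = model(two_theta, 100.0, 40.46, 0.15, 0.06, 5.0) + rng.normal(0.0, 0.5, two_theta.size)

popt, _ = curve_fit(model, two_theta, intensity, p0=[80.0, 40.5, 0.1, 0.1, 0.0],
                    bounds=([0.0, 40.0, 1e-3, 1e-3, -10.0], [1e4, 41.0, 1.0, 1.0, 100.0]))
fwhm_g = 2.0 * np.sqrt(2.0 * np.log(2.0)) * popt[2]
fwhm_l = 2.0 * popt[3]
print(f"K-alpha1 at {popt[1]:.3f} deg, FWHM_G = {fwhm_g:.3f} deg, FWHM_L = {fwhm_l:.3f} deg")
```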
XRD line profile analysis of epitaxial thin films is not straightforward. The film has a preferred orientation, determined by the substrate, that reduces the number of diffraction peaks. A Williamson-Hall plot would allow us to separate the size and strain contributions to the peak broadening [START_REF] Williamson | X-ray line broadening from filed aluminium and wolfram[END_REF]. In this approach the parameters of each diffraction peak are plotted in a coordinate system with axes FWHM·cos θ and 2 sin θ/λ, and a straight line is fitted across the points. In our case, the film has a single dominant orientation, which results in only two peaks that can be reliably fitted. This is not sufficient for a Williamson-Hall plot, so this method is not suitable for our epitaxial films.
The Warren-Averbach method makes use of the Fourier coefficients of at least two harmonic reflections [START_REF] Warren | The separation of cold-work distortion and particle size broadening in x-ray patterns[END_REF]. For this, first, the 2θ scattering angle has to be transformed to the magnitude of the scattering vector (q) using the following expression: q = 4π sin θ/λ. The presence of a secondary wavelength makes this transformation uncertain. The analysis that relies on a single line takes advantage of the observation that was mentioned above: size broadening has a Lorentzian shape, strain broadening has a Gaussian shape. Based on this, the Lorentz and Gauss fractions in the fitted Voigt functions are interpreted to signal the effects of size and strain, respectively [START_REF] Th | Use of the voigt function in a single-line method for the analysis of x-ray diffraction line broadening[END_REF].

Table 2.3: Parameters of the Voigt fits of the (002) rhenium peaks of the 25 nm thick films.

25 nm (002) Voigt fit   800 °C (A)            900 °C (B)
Peaks @                 40.462° ± 0.001°       40.474° ± 0.004°
                        40.654° ± 0.007°       40.64° ± 0.02°
FWHM_G                  0.372° ± 0.006°        0.31° ± 0.01°
FWHM_L                  0.122° ± 0.004°        0.127° ± 0.009°
Particle size can be determined from the FWHM using the Scherrer equation [START_REF] Scherrer | Bestimmung der größe und der inneren struktur von kolloidteilchen mittels röntgenstrahlen[END_REF]. This equation can be derived from the interference function, given in equation 1.23, and gives a lower limit to the size of cubic shaped crystallites in the direction perpendicular to the reflecting planes. In our case this size is the thickness (t). The Scherrer formula is the following:
\[ t = \frac{K \lambda}{\mathrm{FWHM} \cdot \cos\theta}, \qquad (2.11) \]
where K is a geometrical factor approximately unity, and FWHM is taken in radians.
The thicknesses calculated from the Lorentzian widths of the (002) peaks are 72 nm for sample A and 70 nm for sample B. This cannot be correct, because the thickness was measured during deposition and is known to be approximately 25 nm.
The Gaussian contribution is thought to carry the strain broadening. Strain is a dimensionless quantity that describes the variations of the interplanar spacings in the crystal relative to the undistorted lattice parameter d_0: ε = ∆d/d_0. The relationship between line broadening and strain can be obtained by differentiating Bragg's law:

\[ \frac{\Delta d}{\Delta(2\theta)} = -\frac{d_0}{2} \cot\theta \quad \rightarrow \quad \langle \epsilon^2 \rangle^{1/2} = \frac{\Delta(2\theta)}{2} \cot\theta. \qquad (2.12) \]

∆(2θ) is identified as the integral breadth of the Gaussian part of the Voigt function, obtained here from the full width at half maximum as ∆(2θ) = FWHM_G/√(2π) [START_REF] Birkholz | Thin film analysis by X-ray Scattering[END_REF]. Thus the root mean squares of the strains calculated from the (002) reflections are:
\[ \langle \epsilon^2 \rangle_A^{1/2} = 0.004, \qquad \langle \epsilon^2 \rangle_B^{1/2} = 0.003. \qquad (2.13) \]
The obtained strains are in the range where misfit strain is expected to be.
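The two estimates above can be reproduced with a few lines; the sketch below assumes K = 1 and the Cu Kα1 wavelength, and takes ∆(2θ) = FWHM_G/√(2π) as above. The Scherrer thicknesses come out close to, but not exactly at, the 72 nm and 70 nm quoted, so a slightly different K or breadth convention was presumably used there.

```python
import numpy as np

LAM_NM = 0.1540562   # Cu K-alpha1 wavelength in nm

def scherrer_thickness_nm(fwhm_l_deg, two_theta_deg, K=1.0):
    """Equation 2.11 applied to the Lorentzian width."""
    theta = np.radians(two_theta_deg / 2.0)
    return K * LAM_NM / (np.radians(fwhm_l_deg) * np.cos(theta))

def rms_strain(fwhm_g_deg, two_theta_deg):
    """Equation 2.12 applied to the Gaussian width, with Delta(2theta) = FWHM_G / sqrt(2 pi)."""
    theta = np.radians(two_theta_deg / 2.0)
    delta_2theta = np.radians(fwhm_g_deg) / np.sqrt(2.0 * np.pi)
    return 0.5 * delta_2theta / np.tan(theta)

# (002) Voigt-fit parameters of samples A and B from table 2.3
for name, pos, fwhm_g, fwhm_l in (("A", 40.462, 0.372, 0.122), ("B", 40.474, 0.31, 0.127)):
    print(f"sample {name}: t ~ {scherrer_thickness_nm(fwhm_l, pos):.0f} nm, "
          f"strain ~ {rms_strain(fwhm_g, pos):.4f}")
```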
What we can safely conclude based on the θ-2θ scans presented above is that the shape of the main (002) diffraction peak, i.e. the arrangement of the lattice planes parallel to the surface, is not greatly affected by the deposition temperature in this temperature range. We can see a slight decrease in the Gaussian width with increasing temperature (table 2.3), which can mean that there are fewer defects present in sample B. The nature of these defects cannot be established based on these measurements.
The Lorentzian widths are equal within the error bars, which is a good sign considering that this width is expected to carry the size component of the broadening, and the two films have the same thickness. However, the calculated thickness does not match the known value. The Scherrer equation was derived for cubic materials and cubic shaped grains; it is possible that rhenium thin films fall outside of its limits.
Discrepancies could arise when one tries to deconvolve the effect of size and strain on X-ray diffraction peaks. Soleimanian et al. used several methods to extract the crystallite size and strain from the same set of lines. The values they obtained from different methods differed by a factor of 2 or 3, but were of the same order. They did not obtain the same strain value for harmonic reflections either. Voigt profile fitting, the one used above, gave them the largest values [START_REF] Soleimanian | Comparison methods of variance and line profile analysis for the evaluation of microstructures of materials[END_REF].
XRD rocking curve measurements
The rocking curves of the (002) rhenium peaks of samples A and B were measured. A rocking curve measurement probes the angular distribution of the reflecting lattice planes around the lattice normal. The lateral coherence length, i.e. the lateral grain size, also contributes to the broadening.
To compare the widths of the rocking curves of the two samples, the data was fitted using a function which is also a mixture of a Gauss and a Lorentz function, but easier to compute than the Voigt function. This is the Pearson VII function (P (x)), or as also known, the modified Lorentz function, and given by the following equation:
\[ P(x) = \frac{1}{\left[ 1 + (2^{1/m} - 1)\, \frac{(x - x_0)^2}{w^2} \right]^{m}}, \qquad \mathrm{FWHM}_P = 2w. \qquad (2.14) \]
This function is a Lorentz function in the m = 1 limit, and a Gauss function in the m → ∞ limit [START_REF] Birkholz | Thin film analysis by X-ray Scattering[END_REF]. The lower part of the rocking curve of the (002) rhenium reflection measured on sample B is shown in figure 2.18(a). The single Pearson VII function that was first fitted to the data is shown in blue, and it does not describe the data well. The tails of the experimental data exceed the tails of the model function. To account for the tail, a Gaussian contribution was added to the Pearson VII. This is shown in red in figure 2. 18(a). This curve describes the data well, however, it is still not perfect. The measured peak is slightly asymmetric due to the two wavelengths. To perfectly describe the rocking curve, the number of functions would need to be doubled. Since the parameters are only used to compare between samples, to describe growth qualitatively, and not to extract quantitative parameters, the blue and the red curves in figure 2.18(a) are both deemed satisfactory.
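A sketch of this peak-plus-tail model is shown below: a Pearson VII (equation 2.14) plus a Gaussian centred at the same angle, fitted with scipy. The data here are synthetic placeholders standing in for a measured rocking curve.

```python
import numpy as np
from scipy.optimize import curve_fit

def pearson_vii(omega, amp, omega0, w, m):
    """Equation 2.14; FWHM = 2w, Lorentzian for m = 1, Gaussian for m -> infinity."""
    return amp / (1.0 + (2.0**(1.0 / m) - 1.0) * (omega - omega0)**2 / w**2)**m

def gauss(omega, amp, omega0, sigma):
    return amp * np.exp(-(omega - omega0)**2 / (2.0 * sigma**2))

def peak_plus_tail(omega, a_p, w, m, a_g, sigma, omega0):
    """Pearson VII 'peak' contribution plus Gaussian 'tail' contribution, common centre."""
    return pearson_vii(omega, a_p, omega0, w, m) + gauss(omega, a_g, omega0, sigma)

# Synthetic placeholder rocking curve, only to make the sketch runnable.
omega = np.linspace(-2.0, 2.0, 400)
rng = np.random.default_rng(1)
data = peak_plus_tail(omega, 1.0, 0.09, 1.5, 0.1, 0.4, 0.0) + rng.normal(0.0, 0.005, omega.size)

popt, _ = curve_fit(peak_plus_tail, omega, data, p0=[0.8, 0.1, 1.2, 0.05, 0.3, 0.0],
                    bounds=([0.0, 1e-3, 1.0, 0.0, 1e-3, -1.0], [10.0, 2.0, 20.0, 10.0, 2.0, 1.0]))
print(f"peak FWHM = {2.0 * popt[1]:.3f}, tail FWHM = {2.0 * np.sqrt(2.0 * np.log(2.0)) * popt[4]:.3f}")
```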
The rocking curves of both samples A and B are shown in figure 2.18(b), together with their G + P fits. For both samples one of the functions has a smaller width and a larger amplitude, contributing mainly to the peak (peak contribution), while the other function has a larger width and a smaller amplitude, contributing to the tail (tail contribution). These functions are not the same for the two samples: where one is a Pearson VII, the other is a Gaussian. This was taken into account when comparing the parameters. The width of the tail contribution decreased from 0.731 to 0.52, and the width of the peak contribution decreased more than twofold, from 0.384 to 0.178, when the deposition temperature was increased.
The integrals of the two components were calculated for both samples, and the peak-to-tail contribution ratios are listed in table 2.4. The ratios have a similar value for both samples. As will be shown in the following sections, fitting the rocking curves measured on thicker samples does not require the addition of a second function. This means that the pronounced tails we see in the case of these samples are due to the small thickness of the films. It is possible that they are caused by an intermediate layer of rhenium on the Al2O3. The volume fraction of this layer is reduced as the thickness grows, and thus its effect cannot be observed on the rocking curves of thicker samples.
Conclusion on the 25 nm thick films
AFM study of the surfaces of the two samples revealed that a higher deposition temperature results in a smoother surface with more uniform grains. XRD data shows that the dominant orientation is the epitaxial (001). There are three other orientations present in the films; the intensities of two of them decrease with higher deposition temperature, and rhenium favours the epitaxial orientation. The width of the rocking curve decreased by half on the film which was deposited at higher temperature. This means that the out-of-plane orientation of the grains is more uniform on sample B, i.e. it has a lower mosaicity.

50 nm thick films

The surface of sample D is covered with large, even spirals that are connected to each other by ridges. There are deep holes in between them. The profile of the holes cannot be determined using AFM, as they are too steep; only the shape of the probe is measured.
A double spiral is shown in figure 2.22(b). The profile was measured along its slope, shown with the white line, and is plotted in figure 2. 22(a). It shows several regular steps, and flat terraces. The average step height between consecutive turns extracted from this profile is 0.24 nm, which corresponds to the atomic spacing in the rhenium lattice along the c axis. The spirals on this sample are larger than on sample C, they measure up to 500 nm -600 nm across.
The surface of the third sample, sample E, is shown in figure 2.23, and is very different than the others discussed before. The sample has partially dewetted during the deposition. The film is not continuous, but composed of large islands. The discontinuous nature of the film has been confirmed by transport measurement [START_REF] Delsol | Elaboration et caractérisation de couches minces supraconductrices épitaxiées de rhénium sur saphir[END_REF].
The surface of sample E appears to be very smooth, but in fact it is decorated by steps and terraces. An AFM image is shown in figure 2.23(b), where the colour scale was set to highlight the topography of the topmost surface. It shows large flat terraces. The gaps between the islands are too steep to be followed by the probe, so only the shape of the AFM tip is measured there; the depth, however, can be measured, and it is 40 nm. This suggests that the whole thickness of the film took part in the dewetting process and formed large islands. On all three samples there are steps with heights corresponding to the spacing of the planes in the rhenium lattice along the c axis, which suggests that these objects have the (001) orientation. Higher deposition temperature resulted in larger spirals on sample D, and in the reduction of the uneven grains that were present on sample C. Also, deep holes appeared in between the spirals when the deposition temperature was increased. Sample E, where the temperature was further increased, partially dewetted: islands formed that are not connected to each other. There are no signs of spirals, which suggests that dewetting eliminated the dislocations which, we assume, are responsible for spiral growth.
XRD θ -2θ measurements
Standard resolution data. The crystallographic properties of the films were studied with X-ray diffraction. θ-2θ scans of all three samples are shown in figure 2.25. The graphs were normalized to the (002) peak of rhenium to help the comparison. The (001) orientation is dominant in this case as well. Besides, there is a small peak corresponding to the (101) orientation on sample C which is not present in the spectra of samples D and E. The (100) reflection and its higher order (200) reflection are only featured on sample D. Finally, the (110) peak gradually decreases as the deposition temperature is increased. Sample E does not show any orientation other than (001).
The (002) peak of rhenium was fitted with the sum of two Voigt functions. The Lorentzian widths increase with the deposition temperature; the reason behind this increase is unknown. The Scherrer equation (eq. 2.11) used with the Lorentzian width does not give correct results for the film thickness.
The Gaussian widths decrease significantly with the increasing deposition temperature. This is expected, especially in case of the dewetted sample, sample E.
The root mean square of the strains can be calculated from the Gaussian widths using equation 2.12. The following strain values were obtained:
\[ \langle \epsilon^2 \rangle_C^{1/2} = 0.004, \qquad \langle \epsilon^2 \rangle_D^{1/2} = 0.003, \qquad \langle \epsilon^2 \rangle_E^{1/2} = 0.002. \qquad (2.15) \]
All three of them are in the range where misfit strain is expected.
High-resolution data. Samples C and D were also measured using the Rigaku Smart-Lab high-resolution diffractometer.
In figure 2.26 the θ-2θ scan on the (002) rhenium peak of sample D is shown. The sharp, lower intensity peak at the higher angle side in figure 2.26 is the (006) reflection of the substrate.
On both sides of the largest, central peak oscillations can be observed. The frequency of these oscillations is inversely proportional to the number of lattice planes scattering in phase. The presence of these clear fringes indicates that the layer is highly crystalline, with a well-defined lattice spacing throughout the thickness.
The experimental data was first fitted with the interference function (I(q)), given in equation 1.23. This function describes the scattering by N number of parallel lattice planes with d spacing. To use equation 1.23, 2θ angles had to be converted to scattering vector q using the following formula:
\[ q = \frac{4\pi \sin\theta}{\lambda}. \qquad (2.16) \]
The (006) peak of the substrate was included in the fit; a Lorentz function was used to describe it.
The fit of the interference function is shown with the blue line in figure 2.26. It looks almost perfect at the scale of the plot: the periodicity matches the data, the intensity of each peak looks correct, and the shoulder that appears on the side of the substrate peak is also well described. However, upon magnification, the shortcomings of this model appear, as shown in figure 2.27. A definite broadening can be observed on the main peak, and on the fringes too. This is the reason why, while the positions of the minima align well between the measurement and the blue curve, the maxima are slightly shifted, and the intensities at the minima of the measured data stay at larger values while the minima of the model function go to zero.
To account for the broadening, disorder was introduced in the model lattice in the form of a Gaussian distribution of crystal plane spacings. This concept is shown in figure 2.28. The lattice is composed of N lattice planes, all with slightly different spacings. To achieve the Gaussian distribution of lattice parameters, a constant ∆d multiplied by a random number was added to the average d 0 . This random number was chosen from a Gaussian distribution centred on 0, with standard deviation 1.
The structure factor was then calculated by adding up the scattered plane wave from each lattice planes with a phase factor, which was calculated from the distance the radiation travels in the crystal. The scattered intensity was obtained by taking the absolute square of the structure factor. The formula describing this is the following:
\[ I_{mod}(q) = \sum_{i=1}^{2000} \left| 1 + e^{-iqd_1} + e^{-iq(d_1 + d_2)} + \cdots + e^{-iq(d_1 + d_2 + d_3 + \cdots + d_{N-1})} \right|^2, \qquad (2.17) \]
where d_i = d_0 + ∆d · rand(0, 1). Due to the random number generator, the resulting curves are not consistent from one evaluation to the next. To improve this, a summation running from 1 to 2000 was introduced. This can have a physical interpretation as well: it can account for inhomogeneities that are inevitable in the sample. This aspect is not investigated further here; the number of samplings was increased until the resulting curves were consistent. Function 2.17 was fitted by hand, because it is a demanding calculation. Parameters were adjusted to the last digit until no further improvement could be observed in the fit or in the value of χ². The errors listed in table 2.6 were taken as 1 on the last digit. The results and standard deviations of the parameters obtained from both fits are listed in table 2.6.

Table 2.6: Parameters of the fit of the high-resolution (002) rhenium peaks with the modified interference function, equation 2.17.

50 nm (002) fit by equation 2.17   800 °C (C)               900 °C (D)
N                                  215 ± 0                  204 ± 0
d_0                                (0.22302 ± 1e-6) nm      (0.22303 ± 1e-6) nm
FWHM_d                             (0.0093 ± 1e-4) nm       (0.0065 ± 1e-4) nm
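A sketch of how equation 2.17 can be evaluated numerically is given below, using the parameters fitted for sample D in table 2.6; the number of random realisations is reduced here to keep the run short (the fit above used 2000).

```python
import numpy as np

def modified_interference(q, n_planes, d0, fwhm_d, n_samples=500, seed=0):
    """Equation 2.17: intensity from N planes with Gaussian-distributed spacings around d0."""
    rng = np.random.default_rng(seed)
    sigma_d = fwhm_d / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    intensity = np.zeros_like(q)
    for _ in range(n_samples):
        d = d0 + sigma_d * rng.standard_normal(n_planes - 1)        # spacings d_1 ... d_{N-1}
        positions = np.concatenate(([0.0], np.cumsum(d)))            # positions of the N planes
        amplitude = np.exp(-1j * np.outer(q, positions)).sum(axis=1) # structure factor
        intensity += np.abs(amplitude)**2
    return intensity

q = np.linspace(27.6, 28.8, 600)   # 1/nm, window around the Re (002) reflection (q = 2*pi/d0)
i_sim = modified_interference(q, n_planes=204, d0=0.22303, fwhm_d=0.0065)
print(f"maximum at q = {q[np.argmax(i_sim)]:.3f} 1/nm")
```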
The fit is shown by the red curve in figure 2.26 and 2.27. It perfectly describes the intensity variations of the fringes, and the broadening as well.
In figure 2.29 the (002) peaks of both samples are shown. The fringes are denser on sample C, more lattice planes take part in the scattering process. Also, the intensity difference between the minima and the maxima is smaller on this sample, which is a sign of a more disordered film.
It is visible on figure 2.29 that the disorder introduced in the system does not destroy the fringes on the diffraction pattern. The agreement between the thus modified interference function and the experimental data is improved with the disorder. The widths and positions of the peaks and fringes, and the vertical positions of the minima and maxima are all well matched. The root mean square strains can be calculated from the full width half maxima using the following formula:
\[ \langle \epsilon^2 \rangle^{1/2} = \frac{\mathrm{FWHM}_d}{d_0} \cdot \frac{1}{2\sqrt{2 \ln 2}}. \]
The obtained strain values (equation 2.18) are roughly an order of magnitude larger than the ones obtained from the low-resolution measurement, shown in equation 2.15. The thickness values obtained from N·d_0 confirm the thickness expected from the quartz balance measurement during deposition.
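For orientation, a short evaluation of the strain formula above and of the thickness t = N·d_0 from the table 2.6 parameters is sketched below; the printed values are computed here and may differ in rounding from the figures of equation 2.18.

```python
import numpy as np

# N, d0 (nm) and FWHM_d (nm) from table 2.6
for name, n_planes, d0, fwhm_d in (("C", 215, 0.22302, 0.0093), ("D", 204, 0.22303, 0.0065)):
    strain = fwhm_d / d0 / (2.0 * np.sqrt(2.0 * np.log(2.0)))   # strain formula above
    print(f"sample {name}: strain ~ {strain:.3f}, thickness ~ {n_planes * d0:.0f} nm")
```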
Simulation of the standard resolution data. To verify the validity of the modified interference function, the standard resolution data was simulated from the fitted modified interference function curves.
The substrate is a high quality single crystal, and its (006) peak is very close to the (002) peak of the rhenium. The (006) reflection of the Al2O3 should only show the instrumental broadening. As an approximation, its Lorentzian fit, shifted to the rhenium (002) peak position (θ_1), was used as the resolution function of the instrument.
The modified interference function fitted to the high-resolution (002) peak of sample D (I film(002) mod (θ, θ 1 )), and the Lorentz function fitted to the Cu Kα 1 (006) peak on the standard resolution data (L sub(006) (θ, θ 1 )) were convolved. The result of the convolution is what would have been measured with the standard resolution instrument, using only the Cu Kα 1 radiation (S λ 1 (θ, θ 1 )). The Cu Kα 2 contribution (S λ 2 (θ, θ 2 )) needs to be added. The result of the convolution was divided by two, as dictated by the intensity ratio of the two wavelengths, and then it was shifted by the angular difference corresponding to the secondary wavelength (∆θ). This is the Kα 2 contribution. The two parts were then added.
The following formulation attempts to summarise the procedure described above:
\[ S(\theta) = S_{\lambda_1}(\theta, \theta_1) + S_{\lambda_2}(\theta, \theta_2), \qquad (2.20) \]
where
\[ S_{\lambda_1}(\theta, \theta_1) = \int_{-\infty}^{\infty} I_{mod}^{film(002)}(\theta', \theta_1)\, L^{sub(006)}(\theta - \theta', \theta_1)\, d\theta', \quad \text{and} \quad S_{\lambda_2}(\theta, \theta_2) = 0.5\, S_{\lambda_1}(\theta, \theta_1 + \Delta\theta). \]
All the functions above have two arguments, the first argument (θ) is a running parameter, the second argument refers to the position of the maximum. The thus computed curve (S(θ)) should look similar to the (002) reflection measured by the low-resolution instrument.
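A numerical sketch of this procedure is given below: the high-resolution film profile is convolved with the substrate-derived resolution function on a common, equally spaced 2θ grid, and a half-intensity, shifted copy is added for Kα2. The film profile and the numerical values used here are placeholders; in practice the fitted modified interference function and the fitted substrate Lorentzian are used.

```python
import numpy as np

def lorentz(x, x0, gamma):
    return (gamma / np.pi) / ((x - x0)**2 + gamma**2)

two_theta = np.arange(39.0, 42.0, 0.002)        # deg, common grid
step = two_theta[1] - two_theta[0]
delta_theta = 0.10                               # deg, assumed K-alpha1/K-alpha2 separation

# Placeholder film profile; in practice the fitted modified interference function goes here.
i_film = np.exp(-(two_theta - 40.46)**2 / (2.0 * 0.03**2))
resolution = lorentz(two_theta, two_theta.mean(), 0.02)   # substrate-derived resolution, centred

s_lambda1 = np.convolve(i_film, resolution, mode="same") * step       # first term of eq. 2.20
s_lambda2 = 0.5 * np.roll(s_lambda1, int(round(delta_theta / step)))  # K-alpha2 contribution
s_total = s_lambda1 + s_lambda2                                        # S(theta) of eq. 2.20
print(f"simulated peak maximum at 2theta = {two_theta[np.argmax(s_total)]:.2f} deg")
```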
The result of this simulation is shown in figure 2.31. The fringes are still visible, but significantly damped. The shape of the simulated curve matches well the shape of the measured data. The width of the simulated (002) is narrower than the measured, but the difference is small. The shape of the low-resolution data is reproducible from the high-resolution data.
We see the signature of lattice distortions in the high-resolution data. These measurements do not tell us what the sources of the lattice distortion are, or where in the lattice they may be. The lattice mismatch can only account for some of the strain, and it only affects the bottom part of the film, in the proximity of the substrate. Spirals cover the surface of sample D, and some can be observed on sample C too. According to the theory of spiral growth [START_REF] Burton | The growth of crystals and the equilibrium structure of their surfaces[END_REF], this suggests the presence of screw dislocations. The strain field of a screw dislocation is proportional to b/(4πr), with a cosine or sine coefficient depending on the component of the strain; here b is the Burgers vector of the dislocation, which is in the range of the lattice parameter, and r is the distance measured from the core of the dislocation. The strain a few nanometers away from a dislocation can be in the range calculated in 2.18, thus dislocations could account for at least some of the strain.
XRD rocking curve measurements
The rocking curves of samples C, D, and E were measured. All three of them are shown in figure 2.32. They were fitted with Pearson VII function, equation 2.14. The fits are also shown in figure 2.32.
A single Pearson VII function describes these rocking curves well, the addition of a second function was not necessary this time. The parameters of the fits are listed in table 2.7.
The rocking curves are becoming significantly narrower with the increasing deposition temperature. This is consistent with the larger objects observed on the AFM images of sample D and E. This result also suggests that the mosaicity of the film is decreased.
Conclusion on the 50 nm thick films
AFM study of the surfaces showed spirals on both the C and D films. On the lowest temperature sample several grains could also be observed; these grains disappeared from sample D. The size of the spirals grew fivefold when a temperature of 900 °C was applied, compared to 800 °C. The third sample, which was deposited at the highest temperature, dewetted and formed atomically flat islands which are not connected to each other. XRD data shows that the dominant orientation is the epitaxial (001). The three other orientations, which were also observed on the 25 nm thick samples, had low intensities, which decreased with higher deposition temperature; the only exception was the (100) peak, which was present only on sample D. The high-resolution (002) curves measured on samples C and D evidenced less disorder in the higher temperature sample. The width of the rocking curves decreased with increasing deposition temperature.
100 nm thick films
AFM study of the surfaces

The structure of the two surfaces looks very similar: both are covered with spirals that have atomic step heights. The spirals are connected to each other by ridges, and there are deep holes around them. On sample F there are a few grains in between the spirals, and the spirals are slightly smaller. However, the difference in size is not as pronounced as it was for the 50 nm samples. While the spirals on sample G measure about 200 nm - 400 nm, on sample F they measure about 100 nm - 300 nm.
XRD θ-2θ measurements
The θ-2θ scans of both samples are shown in figure 2.34. The graphs were normalized to the (002) peak of rhenium. The dominant orientation in both films is (001). Aside from that, there is a low intensity (101) peak on sample F that is not present on sample G. There is also another low intensity peak present on both samples. The broad unindexed peak on sample F between 60° and 70° does not belong to rhenium nor to the substrate; it is possible that it is a reflection from ReO_2.
In figure 2.35(a) a close up of the (002) Re and (006) Al2O3 peaks of both samples is shown. The peaks look almost identical. Neither reflection can be described well by the sum of two Voigt functions (equation 2.10), because of the clear asymmetry of the peaks. Some asymmetry is expected, as they were measured using two wavelengths; however, on these samples the angles of the higher and lower intensity contributions are reversed: they appear to have a lower intensity contribution towards the lower angles. This unusual shape could be due to the relaxation of the film. As the film grows, the lattice spacing gradually becomes what it is in bulk rhenium. Adjusting the function to describe such a situation would require the addition of several extra parameters, which would reduce the reliability of the fit.
Samples F and G were also measured using the Rigaku SmartLab high-resolution diffractometer. The measured curves are shown in figure 2.35(b). The interference function, equation 1.23, was used to fit the data. Parameters are listed in table 2.8. This model has the same shortcomings as seen before: the peaks are broad on both of these samples, and the minima are shallow. However, the number of scattering planes and the average lattice parameter can be accurately determined.
The thicknesses of the films calculated from the fit parameters are:
t F = 99 nm, t G = 86 nm. (2.21)
The thicknesses are in agreement with what was measured during deposition. An attempt was made to fit the high-resolution data with the modified interference function. In this case, the model fails to accurately describe the intensity variations of the experimental data and the broadening of the main peak. The reason why this simple model fails could be that there are defects in the film that cannot be described by a single Gaussian distribution of lattice planes.
Even without the parameters to compare, it is visible in figure 2.35(b) that the fringes are more damped on sample F than on sample G. This suggests that the film which was deposited at lower temperature (sample F) is more disordered.
XRD rocking curve measurements
The rocking curves of the 100 nm samples were fitted with the Pearson VII function, equation 2.14. The parameters of the fit are summarised in table 2.8.
The rocking curve of the sample which was deposited at higher temperature is narrower. This is consistent with our previous observations, and with the presence of larger objects on the surface.
Conclusion on the 100 nm thick films
The trend observed in the case of the 25 nm and 50 nm thick films continues with the 100 nm thick films. AFM revealed that spirals decorate the surface of both films. The spirals grew in size with higher deposition temperature; however, the difference is not as significant as for the 50 nm thick films. XRD data shows that the dominant orientation is the epitaxial (001). Two additional orientations appear with low intensities. Only one of them persists on the higher temperature sample. On the lower temperature sample there is a broad, unidentified peak. It is possible that it comes from an oxide of rhenium. No difference can be observed between the standard resolution (002) peaks; however, they have a distinct asymmetric shape, which might be due to the relaxation of the films. Fringes of the high-resolution data are more pronounced on the higher temperature sample, which indicates less disorder. We could not confirm this with the modified interference function. The width of the rocking curves decreased with increasing deposition temperature for these films as well.
Conclusions on the effects of the temperature
Higher deposition temperature resulted in more uniform grain sizes and a smoother surface on the 25 nm thick films, and in larger spirals on the 50 nm and 100 nm thick films. Overall, surfaces deposited at 900 °C appear more homogeneous. However, holes appeared along with the spirals. The presence of holes is explained in the following section.
The 50 nm sample that was deposited at 1000 °C dewetted, which resulted in a surface covered with large, atomically flat islands, comparable to a mesa landscape. Every sample had a single dominant orientation, (001), in accordance with the substrate lattice. Several other orientations appeared on the graphs, but for almost every thickness their intensity decreased or vanished with increasing deposition temperature.
The detailed study of the (002) reflections revealed that for every thickness the higher temperature deposition resulted in less disorder in the lattices. Less disorder is indicated by the decreasing Gaussian FWHMs of the Voigt functions. These values are shown as the function of temperature in figure 2.36(a). Disorder was quantified for two of 50 nm thick films (samples C and D), by the introduction of a distribution of lattice parameters. As was shown in figure 2.30, the distribution was narrower for the higher temperature film.
For every thickness the rocking curves of the higher temperature samples were significantly narrower. The Pearson VII FWHM values obtained from the fits are shown as the function of temperature in figure 2.36(b). The improvement of rocking curves is consistent with the larger objects observed on the AFM images, and they also indicate that the mosaicity of the epitaxial grains is reduced by higher temperature deposition.
Thermal grooving of the surface
During the deposition of the rhenium thin films, the crystallography of the surface is monitored by reflection high energy electron diffraction. The technique is described in chapter 1.4.1. Prior to evaporation, Kikuchi lines corresponding to the lattice of the Al2O3 can be observed on the screen. As the rhenium deposition starts, the Kikuchi pattern gradually changes to broad rings on a diffuse background, indicating the growth of crystalline islands with different orientations. Then spots appear, indicating 3D growth. When the thickness reaches approximately 10 nm, the RHEED pattern changes again: rods suddenly appear. This is a sign that electrons are diffracted by a volume with a single orientation and a flat surface. The depth and the true shape of the holes that interrupt the ridges cannot be determined. Due to the restrictions of AFM imaging, these holes reflect the shape of the probe. They may reach all the way to the substrate.
Many of the ridges show a slight increase in height around the holes, as shown by two examples in figure 2.39(b). Height profiles were extracted along the paths that are drawn on the insets of figure 2.39(a) for this plot.
The patterning that can be observed in figure 2.37, and is highlighted in figure 2.39, resembles curves of thermal grooving, where the transport of matter was driven by surface diffusion. Mullins' theory of thermal grooving was described in chapter 1.5.4. The theoretical profile that develops along a grain boundary in the case of surface diffusion also shows a maximum. The equation of the surface profile (y_sd(x, t)) was given in equation 1.40, and was used to fit the height graphs extracted from the AFM data.
An example of a fit is shown in figure 2.40 together with the parameters. The theoretical curve describes the shape of the measured profile well; our observation is consistent with the theory of thermal grooving. However, the atomic steps that can be observed on the tail of the measured data cannot be reproduced. Mullins' model is a continuous model, and cannot account for discontinuities, such as steps, that develop on a low energy crystal surface at high temperatures.
One of the parameters of the fit was $(Bt)^{1/4}$, which equals the following:
$$(Bt)^{1/4} = \left( \frac{D_s \gamma \Omega^2 \nu t}{k_B T} \right)^{1/4}, \qquad (2.22)$$
where D s is the surface diffusion coefficient, γ is the surface free energy, Ω is the molecular volume, ν is the number of atoms per unit area, t is the time, k B is the Boltzmann constant, and T is the temperature.
From the $(Bt)^{1/4}$ parameter, the surface diffusion coefficient of rhenium can be determined, and compared to the value found in reference [START_REF] Goldstein | Atom and cluster diffusion on re[END_REF]. t was taken as the time it took to deposit the sample, approximately 1000 seconds. The growth temperature was 1150 K. The molecular volume is the volume of a rhenium atom, which can be calculated from the atomic radius: $\Omega = \frac{3}{4}\pi r^3 = \frac{3}{4}\pi (0.137\ \mathrm{nm})^3 = 6.059 \cdot 10^{-3}\ \mathrm{nm}^3$. The number of atoms per unit area was calculated from the rhenium hexagonal close-packed unit cell. Considering the (001) orientation, the surface is covered with hexagons, and rhenium atoms are placed in the corners and in the middle of each hexagon. The area of one such unit can be obtained using the lattice parameter a:
$$A_{\mathrm{hexa}} = \frac{3\sqrt{3}}{2}\, a^2 = \frac{3\sqrt{3}}{2}\, (0.276\ \mathrm{nm})^2 = 0.198\ \mathrm{nm}^2.$$
All rhenium atoms on the corners are shared by three hexagons, so there are 6 • 1/3 + 1 = 3 atoms on the area calculated above. The number of atoms per unit area can be calculated by a division:
$$\nu = 3/A_{\mathrm{hexa}} = 15.147\ \mathrm{nm}^{-2}.$$
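The numerical values above can be reproduced directly; the short sketch below simply re-evaluates the molecular volume, the hexagonal unit area and the areal atomic density with the formulas quoted in the text.

```python
import math

r_Re = 0.137   # atomic radius of rhenium, nm
a_Re = 0.276   # hexagonal lattice parameter a, nm

omega = 3 / 4 * math.pi * r_Re**3         # molecular volume as used in the text, ~6.06e-3 nm^3
a_hexa = 3 * math.sqrt(3) / 2 * a_Re**2   # area of one hexagonal unit, ~0.198 nm^2
nu = 3 / a_hexa                           # atoms per unit area, ~15.1 nm^-2

print(omega, a_hexa, nu)
```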
The value of the surface free energy can be found in references [START_REF] Tyson | Surface free energies of solid metals: Estimation from liquid surface tension measurements[END_REF], [START_REF] De Boer | Cohesion in Metals[END_REF], and [START_REF] Vitos | The surface energy of metals[END_REF]. Tyson and Miller calculated the surface free energy from liquid surface tension measurement data. They obtained a value of 3.626 Jm -2 [START_REF] Tyson | Surface free energies of solid metals: Estimation from liquid surface tension measurements[END_REF]. Surface free energy found in reference [START_REF] De Boer | Cohesion in Metals[END_REF] is in good agreement with Tyson and Miller, 3.600 Jm -2 . In a more recent article, Vitos et al. determined the surface free energy of low-index surfaces of 60 metals, including rhenium, using density functional theory [START_REF] Vitos | The surface energy of metals[END_REF]. For the (001) surface of rhenium they found a value of 4.214 Jm -2 .
The value reported by Vitos et al. was used to determine the surface diffusion coefficient from the averaged $(Bt)^{1/4}$ parameters. We obtained the following value:
$$D_s = 4.06 \cdot 10^{-12}\ \mathrm{cm^2/s}. \qquad (2.23)$$
It can be determined from the surface diffusion coefficient how far an atom can travel in one second ($\lambda = \sqrt{D_s t}$). λ is approximately 20 nm/s. This is in good agreement with the width of the atomically flat terraces, which measure a few tens of nanometers across.
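A minimal sketch of this conversion is given below; the function inverts equation 2.22 for $D_s$ (the fitted $(Bt)^{1/4}$ value itself is not quoted in the text, so the function is only defined, not evaluated), and the diffusion length of roughly 20 nm per second is checked from the reported coefficient.

```python
import numpy as np

k_B = 1.380649e-23          # J/K
T = 1150.0                  # growth temperature, K
t_dep = 1000.0              # deposition time, s
gamma = 4.214               # surface free energy (Vitos et al.), J/m^2
omega = 6.059e-3 * 1e-27    # molecular volume, m^3 (6.059e-3 nm^3)
nu = 15.147 * 1e18          # atoms per unit area, m^-2 (15.147 nm^-2)

def surface_diffusion_coefficient(bt_quarter):
    """Invert (Bt)^(1/4) = (D_s*gamma*Omega^2*nu*t / (k_B*T))^(1/4) for D_s.
    bt_quarter is the fitted parameter in metres; the result is in m^2/s."""
    B = bt_quarter**4 / t_dep
    return B * k_B * T / (gamma * omega**2 * nu)

# Diffusion length per second from the reported coefficient:
D_s = 4.06e-12 * 1e-4       # 4.06e-12 cm^2/s converted to m^2/s
lam = np.sqrt(D_s * 1.0)    # ~2e-8 m, i.e. about 20 nm
print(lam)
```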
Temperature dependence of the surface diffusion coefficient of the Re(001) surface was measured by Goldstein and Ehrich in the 210 K -235 K temperature range [START_REF] Goldstein | Atom and cluster diffusion on re[END_REF]. Temperature dependence of the diffusion coefficient follows the Arrhenius law:
$$D_s(T) = D_0\, e^{-\frac{E_a}{k_B T}}, \qquad (2.24)$$
where $E_a$ is the activation energy. The parameters reported in reference [START_REF] Goldstein | Atom and cluster diffusion on re[END_REF] are the following:
$$E_a = 11.11 \pm 0.43\ \mathrm{kcal/mol} \quad \text{and} \quad D_0 = 6.13\,(\times 2.6^{\pm 1}) \cdot 10^{-6}\ \mathrm{cm^2/s}. \qquad (2.25)$$
Using these values, the surface diffusion coefficient was calculated at the temperature of the deposition. It is plotted in the relevant temperature range in figure 2.41. According to this, the surface diffusion coefficient should be in the order of 10⁻⁸ cm²/s at 1000 K. This value corresponds to a λ of 1 µm/s. 1 µm is much larger than the size of the terraces on the film, and the reported surface diffusion coefficient is four orders of magnitude higher than what we obtained.
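The extrapolation can be reproduced with a few lines; since the activation energy is quoted in kcal/mol, the gas constant in kcal/(mol K) takes the place of $k_B$ in equation 2.24.

```python
import numpy as np

R_kcal = 1.987204e-3   # gas constant, kcal/(mol*K)
E_a = 11.11            # activation energy, kcal/mol (reference above)
D_0 = 6.13e-6          # prefactor, cm^2/s

def D_s_arrhenius(T):
    """Arrhenius law D_s(T) = D_0 * exp(-E_a / (R*T))."""
    return D_0 * np.exp(-E_a / (R_kcal * T))

print(D_s_arrhenius(1000.0))   # ~2e-8 cm^2/s, four orders of magnitude above our value
```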
The authors of reference [START_REF] Goldstein | Atom and cluster diffusion on re[END_REF] conducted their experiments at much lower temperatures, at a relatively small temperature range: 210 K -235 K. Extrapolated values in the region of 1000 K should be taken with caution.
In our experiment, the sample was an extremely thin film, 15 nm, and the full thickness dewetted. Holes that developed as a result probably reach the substrate. Their profile cannot be determined, but it can be assumed that they have a similar stepped structure as observed on the ridges. Steps provide an energy barrier, known as the Schwoebel barrier, against the atoms diffusing across them. The effect was described in section 1.5.3. This can lower the diffusion coefficient we obtain. Lastly, the theoretical curve cannot account for the steps observed on the surface, but it describes the overall shape well. The results should be viewed as qualitative due to the restrictions of Mullins' theory. He based his paper on observations of copper samples that were polycrystalline and bulk. Our rhenium samples, on the other hand, had a thickness of about 15 nm. One of his assumptions was that the properties of the interface are independent of the orientations of the crystals. The orientations of the rhenium grains are very close to a low energy surface, (001), thus this assumption cannot be valid in our case. Recrystallisation of the whole sample competes with thermal grooving. Recrystallisation to a low-index orientation flattens the surface. On the low-index surface steps develop, which provide a diffusion barrier and stop the process of thermal grooving. The step structure can be observed on the ridges, highlighted in figure 2.39.

That the dewetting process is accompanied by recrystallisation is confirmed by the transformation of the RHEED pattern during growth. The initial concentric rings correspond to grains with random orientation, and the regularly spaced rods that appear afterwards to a single orientation and a flat surface. X-ray diffraction data acquired on the sample further confirms that the layer has a single orientation, which is the epitaxial (001) orientation. The high-resolution θ-2θ scan around the Re(002) peak is shown in figure 2.42. The sharp, unresolved peak between 41 and 42 degrees corresponds to the Al2O3 (006) orientation, the broader peak between 40 and 41 degrees is the rhenium (002) peak. A few fringes can be observed on both sides of the rhenium peak, which is a sign that the X-ray beam was diffracted by well-arranged, parallel lattice planes. The interference function, equation 1.23, was fitted to determine the number of lattice planes (N) and their spacing (d). N is related to the periodicity of the fringes, d to the angular position of the central peak. The fit is not shown in figure 2.42, because it fails to describe the intensity ratio of the fringes and the main peak. The film is very thin, so the fringes are damped. However, this does not affect the position of the peak, and the periodicity. Parameters obtained from the fit are also shown in figure 2.42. By multiplying N and d, the thickness of the layer can be determined. We obtain (15.5 ± 0.5) nm.
Conclusion thermal grooving
The surface which develops as the result of the dewetting can be described by Mullins' theory of thermal grooving, where the transport of matter was driven by surface diffusion. Dimensions observed on the topography are consistent with the surface diffusion coefficient we obtain from Mullins' model. Changes in the RHEED pattern indicate the coalescence of the initial islands and the full recrystallisation of the film. This was confirmed by high-resolution X-ray diffraction.
We believe that dewetting and recrystallisation happen on most rhenium samples when they reach approximately 10 nm - 15 nm thickness. This leaves behind a flat surface with a single orientation, and deep holes and ridges. Initially, the holes observed on the samples with spirals were thought to be the result of impurities on the substrate [START_REF] Delsol | Elaboration et caractérisation de couches minces supraconductrices épitaxiées de rhénium sur saphir[END_REF]. We now believe that the holes are the result of dewetting during the early stages of growth. Spirals then grow onto the terraced ridges that can be observed in figure 2.37.
Thermal transfer during crystal growth
Measurement of temperatures in a vacuum chamber is not a trivial task. Its difficulties were discussed in chapter 1.3.1.
To estimate the temperature of the surface of the growing rhenium, a model was developed by Delsol [START_REF] Delsol | Elaboration et caractérisation de couches minces supraconductrices épitaxiées de rhénium sur saphir[END_REF]. His model is outlined here, and modifications are introduced that can explain, by an increase in temperature, the thermal grooving that occurs when a thickness of 10 nm - 15 nm is reached.
The theory outlined here is described in more detail in reference [START_REF] John | A Heat Transfer Textbook[END_REF].
Elements of the model
Definitions
Thermal radiation is modelled by the ideal radiator, the black body, which absorbs all and reflects none of the radiation arriving at its surface. Planck predicted the emitted power flux (monochromatic emissive power) of a black body at temperature T and wavelength λ:
$$M_{\lambda,bb}(\lambda, T) = \frac{2\pi h c^2}{\lambda^5}\, \frac{1}{\exp\left(\frac{hc}{k_B T \lambda}\right) - 1}, \qquad (2.26)$$
where h is the Planck constant, c is the speed of light in vacuum, and k B is the Boltzmann constant.
The total emissive power is obtained by an integration over the wavelengths, and its temperature dependence is given by the Stefan-Boltzmann law:
$$M_{bb}(T) = \int_0^{\infty} M_{\lambda,bb}(\lambda, T)\, d\lambda = \sigma T^4, \qquad (2.27)$$
where σ is the Stefan-Boltzmann constant. Thermal radiation of a real body is given by comparison to the black body. Emittance (ε) is the ratio of the radiation emitted by the real and the black body:
$$\epsilon_{\lambda}(T) = \frac{M_{\lambda}(\lambda, T)}{M_{\lambda,bb}(\lambda, T)} \quad \text{and} \quad \epsilon(T) = \frac{M(T)}{M_{bb}(T)}. \qquad (2.28)$$
Thus the Stefan-Boltzmann law for a real body is modified as follows:
$$M(T) = \epsilon\, \sigma T^4. \qquad (2.29)$$
In general, emittance is a function of the wavelength. A body whose emittance is independent of the wavelength is a grey body ($\epsilon = \epsilon_{\lambda}$).
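As a numerical check of equations 2.26 and 2.27, the sketch below integrates the Planck monochromatic emissive power over wavelength and compares it with $\sigma T^4$; the temperature and integration limits are illustrative.

```python
import numpy as np
from scipy.integrate import quad

h = 6.62607015e-34      # Planck constant, J*s
c = 2.99792458e8        # speed of light, m/s
k_B = 1.380649e-23      # Boltzmann constant, J/K
sigma = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def M_lambda_bb(lam, T):
    """Monochromatic emissive power of a black body (equation 2.26), W/m^3."""
    return 2 * np.pi * h * c**2 / lam**5 / (np.exp(h * c / (k_B * T * lam)) - 1)

T = 1073.0   # roughly 800 degC
total, _ = quad(M_lambda_bb, 2e-7, 1e-3, args=(T,), limit=200)
print(total, sigma * T**4)   # the integral approaches sigma*T^4 (equation 2.27)
```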
Besides the radiation that is emitted by a body, it is also important to discuss how it interacts with radiation that arrives at its surface. This is shown in figure 2.43. To simplify the problem, let us consider the incoming radiation to be 1. The fraction of the radiation that is absorbed is called the absorbance (α), ρ is the reflectance, which measures the fraction that is reflected, and τ is the transmittance, which gives the portion that is transmitted. In equilibrium, the following condition is fulfilled:
$$1 = \alpha + \rho + \tau \;\rightarrow\; \alpha = 1 - \tau - \rho. \qquad (2.30)$$
Kirchhoff's law connects the absorbance and the emittance. It states that a body in equilibrium absorbs as much energy as it emits in every direction and at each wavelength:
$$\epsilon_{\lambda}(T, \theta, \phi) = \alpha_{\lambda}(T, \theta, \phi), \qquad (2.31)$$
where θ and φ are angular coordinates. When the surface is diffuse, emittance and absorbance do not depend on the direction. Furthermore, if the body is grey, the wavelength dependence can be neglected as well. Kirchhoff's law then simplifies to the following:
$$\epsilon(T) = \alpha(T). \qquad (2.32)$$
Oppenheim's electrical analogy
An analogy to electric circuits was developed to study heat exchange between grey diffuse bodies by Oppenheim. Two new quantities need to be defined. Irradiance (H) is the flux of energy that irradiates the surface, and radiosity (B) is the flux leaving a surface. The flux of energy leaving a surface is the sum of reflected irradiance and the emitted flux:
$$B = \rho H + \epsilon\, \sigma T^4. \qquad (2.33)$$
The net flux leaving a surface can be expressed as
$$Q = B - H = \frac{\epsilon}{\rho}\, \sigma T^4 - \frac{1 - \rho}{\rho}\, B. \qquad (2.34)$$
If the body is opaque (τ = 0) and grey, using equations 2.30 and 2.32, equation 2.34 takes the shape of Ohm's law:
$$Q = \frac{\sigma T^4 - B}{\frac{1-\epsilon}{\epsilon}}, \qquad (2.35)$$
where Q takes the place of the current, $(\sigma T^4 - B)$ acts as the potential difference, and $\frac{1-\epsilon}{\epsilon}$ is the resistance. This analogy makes heat transfer problems easier to handle. For example, heat transfer between two planes can be described as two resistors connected in series.
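As an illustration of the analogy, the net flux between two infinite, opaque, grey, diffuse planes can be written with the two surface resistances in series; the temperatures and emittances in the example call are placeholders, not values from the model.

```python
sigma = 5.670374419e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)

def net_flux_two_planes(T1, T2, eps1, eps2):
    """Net radiative flux between two infinite, opaque, grey, diffuse planes.
    Each surface contributes a resistance (1-eps)/eps and the gap contributes 1,
    the same structure as in equations 2.48-2.50."""
    R = (1 - eps1) / eps1 + 1 + (1 - eps2) / eps2
    return sigma * (T1**4 - T2**4) / R

# e.g. a hot filament plane facing a tungsten backing (illustrative values)
print(net_flux_two_planes(2000.0, 1100.0, 0.35, 0.15))
```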
Heat conduction
The above analogy is restricted to planes with a diffuse, opaque, grey surface. Our system consists of 4 parts: the furnace, the tungsten, the substrate-rhenium, and the chamber wall. The approximation can be valid for all but one part: the substrate. Al2O3 is transparent, not opaque, and thick relative to the tungsten and rhenium. Heat conduction through the substrate has to be considered.
Heat transfer between two surfaces with temperatures T 1 and T 2 can be expressed as follows:
$$Q_C = \frac{T_1 - T_2}{t/k}, \qquad (2.36)$$
where t is the distance between the two surfaces (thickness), and k is the thermal conductivity of the material.
Heating of the sample in UHV
The sample is heated with a tungsten filament that is located behind it. The setup is shown in figure 1.9. It can be operated in two modes: either emits infrared radiation as a result of Joule heating, or a voltage is applied between the filament and the sample, and electrons are emitted and bombard the backside of the substrate.
When the filament is heated by a current, the dissipated power is the product of the resistance of the wire and the square of the current:
$$P = R(T)\, I^2. \qquad (2.37)$$
Some of this heat is lost through the hooks that keep the wire in place. It is estimated that 30% of the power heats the sample.
When electron bombardment is applied, it is assumed that all the power carried by the electrons heats the sample, thus the power is the product of the voltage applied between the filament and the sample and the electron current that is extracted from the filament ($I_e$):
$$P = U I_e. \qquad (2.38)$$
Heat transfer during growth
Delsol used the elements discussed above to build a model to calculate the temperature of the rhenium surface during growth. The model is shown in figure 2.44. All parts of the system were assumed to be infinite planes. The plane noted with F is the furnace. Besides the radiosity, which is the result of the hot filament, $Q_E$ has to be included in the equations for experiments in which electron bombardment is applied. W refers to the tungsten backing on the substrate. SRe is the substrate and rhenium, which is considered as one unit, and conduction of heat through the substrate ($Q_C$) is included in the model. Finally, B denotes the wall of the vacuum chamber ('bâtiment'), which is at room temperature.
The problem has three unknowns: the temperature of the tungsten ($T_W$), the temperature of the substrate-rhenium ($T_{SRe}$), and the heat flux between the surfaces, which in equilibrium has to be equal (Q). Three equations can be written down to define Q using the radiosities of the surfaces:
$$Q = B_f - B_W^f + Q_E, \qquad (2.39)$$
$$Q = B_W^{SRe} - B_{SRe}^W + Q_C, \qquad (2.40)$$
$$Q = B_{SRe}^b - B_b. \qquad (2.41)$$
The radiosities have to be expressed as functions of only the three unknowns. To do this, further equations have to be defined. The heat exchange at each surface also has to be equal to Q. Writing these down as the difference between radiosity and irradiance gives six further equations.
The radiosity of a surface is the sum of the thermal radiation due to its temperature ($\epsilon \sigma T^4$), the reflected irradiance (ρH), and the transmitted irradiance (τH). All surfaces except the substrate-rhenium are considered opaque, which means τ = 0. The radiosities can then be expressed as follows:
$$B_{SRe}^W = \rho_{SRe} H_{SRe}^W + \epsilon_{SRe}\, \sigma T_{SRe}^4 + \tau_{SRe} H_{SRe}^b, \qquad (2.42)$$
$$B_{SRe}^b = \rho_{SRe} H_{SRe}^b + \epsilon_{SRe}\, \sigma T_{SRe}^4 + \tau_{SRe} H_{SRe}^W, \qquad (2.43)$$
$$B_f = \rho_f H_f + \epsilon_f\, \sigma T_f^4, \qquad (2.44)$$
$$B_b = \rho_b H_b + \epsilon_b\, \sigma T_b^4, \qquad (2.45)$$
$$B_W^f = \rho_W H_W^f + \epsilon_W\, \sigma T_W^4, \qquad (2.46)$$
$$B_W^{SRe} = \rho_W H_W^{SRe} + \epsilon_W\, \sigma T_W^4. \qquad (2.47)$$
Using the six heat exchange equations and equations (2.42) - (2.47), the equation system (2.39), (2.40), (2.41) can be expressed as a function of only the three unknowns. The derivation is long but not complicated. It is shown in detail in appendix D. The final form of the equation system is the following:
$$Q = \frac{\sigma T_f^4 - \sigma T_W^4}{1 + R_f + R_W} + Q_E, \qquad (2.48)$$
$$Q = \frac{\sigma T_W^4 - \sigma T_{SRe}^4}{1 + R_W + R_{SRe}} + \left( \frac{r_{SRe}}{1 + R_W + R_{SRe}} + 1 \right) Q_C, \qquad (2.49)$$
$$Q = \frac{\sigma T_{SRe}^4 - \sigma T_b^4}{1 + R_b + R_{SRe}} - \frac{r_{SRe}}{1 + R_b + R_{SRe}}\, Q_C, \qquad (2.50)$$
where the following notations were used:
$$R_f = \frac{\rho_f}{\epsilon_f}, \quad R_b = \frac{\rho_b}{\epsilon_b}, \quad R_W = \frac{\rho_W}{\epsilon_W}, \quad R_{SRe} = \frac{\rho_{SRe} - \tau_{SRe}}{\epsilon_{SRe} + 2\tau_{SRe}}, \quad r_{SRe} = \frac{\tau_{SRe}\, \epsilon_{SRe}}{\epsilon_{SRe} + 2\tau_{SRe}}.$$
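A minimal numerical sketch of how this system can be solved is given below. The optical properties are constant placeholders (in the real model they depend on temperature and film thickness through equations 2.51, 2.54 and 2.55), conduction and electron bombardment are switched off, and scipy's fsolve stands in for whatever solver was actually used.

```python
import numpy as np
from scipy.optimize import fsolve

sigma = 5.670374419e-8   # W/(m^2 K^4)

# Placeholder optical properties
eps_f, eps_b, eps_W = 0.35, 0.30, 0.15
eps_SRe, tau_SRe = 0.12, 0.60
rho_SRe = 1.0 - eps_SRe - tau_SRe

R_f = (1 - eps_f) / eps_f
R_b = (1 - eps_b) / eps_b
R_W = (1 - eps_W) / eps_W
R_SRe = (rho_SRe - tau_SRe) / (eps_SRe + 2 * tau_SRe)
r_SRe = tau_SRe * eps_SRe / (eps_SRe + 2 * tau_SRe)

T_f, T_b = 2000.0, 300.0   # furnace and chamber-wall temperatures, K (illustrative)
Q_E, Q_C = 0.0, 0.0        # no electron bombardment, conduction neglected here

def residuals(x):
    T_W, T_SRe, Q = x
    eq1 = (sigma * T_f**4 - sigma * T_W**4) / (1 + R_f + R_W) + Q_E - Q
    eq2 = (sigma * T_W**4 - sigma * T_SRe**4) / (1 + R_W + R_SRe) \
          + (r_SRe / (1 + R_W + R_SRe) + 1) * Q_C - Q
    eq3 = (sigma * T_SRe**4 - sigma * T_b**4) / (1 + R_b + R_SRe) \
          - r_SRe / (1 + R_b + R_SRe) * Q_C - Q
    return [eq1, eq2, eq3]

T_W, T_SRe, Q = fsolve(residuals, x0=[1500.0, 1000.0, 1e4])
print(T_W, T_SRe, Q)
```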
The substrate and rhenium are treated as a single object. The common transmittance and emittance were calculated as follows:
$$\tau_{SRe} = \tau_S\, \tau_{Re}, \qquad \epsilon_{SRe} = (1 - \tau_{Re})\, \epsilon_{Re} + \tau_{Re}\, \epsilon_S. \qquad (2.51)$$
Thermal and optical properties of the materials
Thermal conductivity of the substrate
The thermal conductivity of Al2O3 is listed in reference [START_REF] Touloukian | Thermal Conductivity, Nonmetallic Solids[END_REF]. It was fitted with a 5th degree polynomial. The fit is given by the equation below:
$$k_{\mathrm{Al_2O_3}}(T) = 97.155 - 0.32723 \cdot T + 5.3582 \cdot 10^{-4} \cdot T^2 - 4.8283 \cdot 10^{-7} \cdot T^3 + 2.2971 \cdot 10^{-10} \cdot T^4 - 4.4808 \cdot 10^{-14} \cdot T^5,$$
where the temperature is measured in Kelvin.
Optical properties of the substrate-rhenium plane and tungsten
The transmittance can be calculated from the complex refraction index (κ), which can be found tabulated for rhenium in reference [START_REF] Edward | Handbook of Optical Constants of Solids[END_REF]. In the calculation the 0.1 eV - 2 eV energy range was used, which corresponds to a wavelength range of 0.62 µm - 12.4 µm. This is the lower (in wavelength) end of the infrared range.
If we assume that the total emissive power (equation 2.27) falls on the surface of a material with complex refraction index κ, the intensity that persists down to thickness z is obtained as follows:
$$M_{\lambda,t}(z) = M_{\lambda,bb}\, e^{-\frac{4\pi\kappa z}{\lambda}}. \qquad (2.52)$$
Transmittance of material with thickness d can then be expressed as the ratio of the total transmitted intensity through depth d and the total emissive power (equation 2.27):
$$\tau = \frac{\int M_{\lambda,t}(z = d)\, d\lambda}{M_{bb}}. \qquad (2.53)$$
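A sketch of equations 2.52 and 2.53 is given below. The wavelength-dependent extinction coefficient is passed in as a function; in the example call a constant value is used instead of the tabulated data, so the number printed is purely illustrative.

```python
import numpy as np
from scipy.integrate import quad

h = 6.62607015e-34; c = 2.99792458e8; k_B = 1.380649e-23
sigma = 5.670374419e-8

def M_lambda_bb(lam, T):
    return 2 * np.pi * h * c**2 / lam**5 / (np.exp(h * c / (k_B * T * lam)) - 1)

def transmittance(d, kappa_of_lambda, T, lam_min=0.62e-6, lam_max=12.4e-6):
    """Equation 2.53: black-body intensity transmitted through thickness d
    (attenuation exp(-4*pi*kappa*z/lambda), equation 2.52) divided by sigma*T^4."""
    num, _ = quad(lambda lam: M_lambda_bb(lam, T)
                  * np.exp(-4 * np.pi * kappa_of_lambda(lam) * d / lam),
                  lam_min, lam_max, limit=200)
    return num / (sigma * T**4)

print(transmittance(d=20e-9, kappa_of_lambda=lambda lam: 30.0, T=1073.0))
```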
The emittance of rhenium was measured by Marple [START_REF] Marple | Spectral emissivity of rhenium[END_REF], but the temperatures they worked at (>1500 °C) are much higher than what we can achieve in the vacuum chamber. Their range of wavelengths was 0.4 µm - 3 µm. In the model by Delsol the following expression was used to calculate the emittance of rhenium [START_REF] Delsol | Elaboration et caractérisation de couches minces supraconductrices épitaxiées de rhénium sur saphir[END_REF]:
$$\epsilon_{Re}(T) = -0.18906 + 4.9151 \cdot 10^{-4} \cdot T - 1.5979 \cdot 10^{-7} \cdot T^2 + 1.8357 \cdot 10^{-11} \cdot T^3, \qquad (2.54)$$
where the temperature is measured in Kelvin. The emittance curve is shown in figure 2.45 with the solid line. The three red dots are the emittance values calculated from the curves found in reference [START_REF] Marple | Spectral emissivity of rhenium[END_REF]. The two datasets are in agreement.
Emittance is related to the full surface area of a body. Our thin films are smooth on a nanometer scale, thus their total surface area is smaller than that of the sand blasted rhenium sheet which was used for the emissivity measurements by Marple [START_REF] Marple | Spectral emissivity of rhenium[END_REF]. Thus, we expect that emission from our rhenium films is smaller than what was presented in reference [START_REF] Marple | Spectral emissivity of rhenium[END_REF]. For this reason, the $\epsilon_{Re}(T)$ curve was reduced by 25%.
Emittance and transmittance of sapphire were taken to be 0.077 and 0.93, based on references [START_REF] Kisler | Direct emissivity measurements of ir materials[END_REF][START_REF] Wittenberg | Total hemispherical emissivity of sapphire[END_REF][START_REF] Dobrovinskaya | Sapphire Material, Manufacturing, Applications[END_REF].
The common transmittance, emittance and reflectance can now be computed using equations 2.51. The results are shown in figure 2.46 as a function of the thickness of the rhenium film. The temperature was fixed at 800 °C.
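Equation 2.51 translates directly into a small helper; the sapphire values are those quoted above, while the rhenium transmittance and emittance in the example call are placeholders standing in for the thickness- and temperature-dependent values.

```python
def combined_optical_properties(tau_S, eps_S, tau_Re, eps_Re):
    """Effective transmittance and emittance of the sapphire + rhenium stack (equation 2.51)."""
    tau_SRe = tau_S * tau_Re
    eps_SRe = (1 - tau_Re) * eps_Re + tau_Re * eps_S
    return tau_SRe, eps_SRe

# Sapphire values from the text; rhenium values are hypothetical placeholders
print(combined_optical_properties(tau_S=0.93, eps_S=0.077, tau_Re=0.5, eps_Re=0.25))
```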
Emittance of tungsten is given by the following expression:
$$\epsilon_W = -2.6875 \cdot 10^{-2} + 1.819696 \cdot 10^{-4} \cdot T - 2.1946163 \cdot 10^{-8} \cdot T^2, \qquad (2.55)$$
where the temperature is measured in Kelvin [START_REF] Lassner | Tungsten -Properties[END_REF]. This equation is valid between 400 K (127 °C) and 3600 K (3327 °C).
Results and discussion
The temperature of the rhenium was calculated for two cases: with or without tungsten backing on substrate. The current through the furnace filament was set to 8.5 A, and no electron bombardment was applied. This is a setting we used most frequently.
The temperature of the rhenium is shown in figure 2.47 as a function of its thickness. The effect of the tungsten is clear. The substrate is almost completely transparent to infrared radiation, thus it cannot be heated effectively. The rhenium heats up as the thickness builds up and its transmittance decreases. Compared to that, when a tungsten backing is applied to the substrate, the temperature is relatively stable.
An increase in temperature, starting around 10 nm, can be observed on the red curve. This is consistent with the dewetting described in section 2.3. The temperature of the rhenium surface is not constant during growth, but increases significantly, which induces changes in the growth process. To eliminate the holes, the dewetting process needs to be avoided. To achieve this, the tendency for the temperature to increase has to be compensated by manually lowering the heating power. This calculation allows us to deduce how much we have to reduce the thermal power to achieve a constant temperature during the growth.
Superconductivity
This chapter starts with a short history of the discovery of superconductivity, following which the basic theories that describe this phenomenon are introduced. In the third section superconducting devices, namely the Josephson junction and the superconducting quantum interference device, are explained. Finally, a description is given of the two refrigerators that were used to measure our superconducting circuits.
History of superconductivity
In 1908 Heike Kamerlingh Onnes succeeded in liquefying a cup of helium for the first time, which opened the door for low temperature physics [START_REF] Van Delft | Little cup of helium, big science[END_REF]. It was also he, who, in order to study the conduction of metals at low temperatures, measured the abruptly vanishing resistivity of mercury at 4.2 K in 1911 [89].
The concept of phonons did not exist at the time, but it was accepted that electrons are responsible for electrical conductance and that scattering by ions causes resistivity. What was not known is how the electron-ion scattering amplitude and the mobility of electrons change upon approaching absolute zero. (The lowest temperature achieved before 1908 was 14 K, using liquid hydrogen.) It was also observed that impurities have an effect on resistivity. The Leiden laboratory, where Onnes worked, had a lot of experience in the purification of mercury by distillation, therefore it was a convenient choice of a pure sample to measure. Onnes' cryostat was made out of glass, and was able to cool below 2 K. Interestingly, on the day when the first superconducting transition was measured, the 8th of April 1911, Onnes described in his notebook the superfluid transition of helium as well, without realising that what he saw was also something brand new and equally as baffling as superconductivity [START_REF] Van Delft | The discovery of superconductivity[END_REF].
After the experiments on mercury, Onnes' team discovered that tin and lead are also superconductors [91], and that magnetic field destroys superconductivity [92].
Zero resistivity is the first, most obvious hallmark of superconductivity. The second, which is more important for today's applications, was discovered over 20 years later [START_REF] Meissner | Ein neuer effekt bei eintritt der supraleitfähigkeit[END_REF]. The magnetic induction is zero inside a superconductor as long as the magnetic field is below a certain critical field, regardless of the order of the following procedures: cooling below T_c, turning on the magnetic field. This is called the Meissner-Ochsenfeld effect. The expulsion of an applied field is what distinguishes a superconductor from a perfect conductor.
A few years after the discovery of the Meissner-Ochsenfeld effect, the London brothers described the electrodynamic properties of superconductors by introducing modifications in the Maxwell equations [START_REF] London | The electromagnetic equations of the supraconductor[END_REF]. They assumed the existence of superconducting electrons, that can move in the lattice without resistance and their density increases from zero as temperature decreases below T c . A "simple, but unsound derivation" [START_REF] Tinkham | Introduction to superconductivity[END_REF] of the London equations is presented in section 3.2.1.
Another 15 years later, a phenomenological description of superconductors was developed based on Landau's theory of second order phase transitions [START_REF] Ginzburg | [END_REF]. This allowed the investigation of spatial variations in the superconducting electron density, among other things, which led to the discovery of flux quantisation [97].
The understanding of the underlying microscopic physics came in the '50s in the form of the Bardeen-Cooper-Schrieffer (BCS) theory [START_REF] Bardeen | Microscopic theory of superconductivity[END_REF][START_REF] Bardeen | Theory of superconductivity[END_REF]. Cooper showed that even a small attractive interaction between the electrons causes the Fermi sea to become unstable against the formation of two-electron bound states, Cooper pairs, which are responsible for superconductivity [START_REF] Cooper | Bound electron pairs in a degenerate fermi gas[END_REF].
Theories of superconductivity
London equations
The London brothers assumed that in superconductors, besides the normal electrons, there are also superconducting electrons with charge e * , mass m * , and density n * s [START_REF] London | The electromagnetic equations of the supraconductor[END_REF].
These superconducting electrons, unlike the normal electrons, are not scattered by the ions of the metal, and accelerate freely in the electric field. Their equation of motion is the following:
$$m^* \frac{d\mathbf{v}}{dt} = -e^* \mathbf{E}. \qquad (3.1)$$
Using the expression for the current density, $\mathbf{j}_s = -e^* n_s^* \mathbf{v}_s$, the first London equation is:
$$\frac{d\mathbf{j}_s}{dt} = \frac{n_s^* e^{*2}}{m^*} \mathbf{E}, \qquad (3.2)$$
which shows infinite conductance.
The second London equation can be obtained by combining the first, 3.2, with the Maxwell equation ∇ × E = -∂B/∂t:
$$\frac{d}{dt}\left( \nabla \times \mathbf{j}_s + \frac{n_s^* e^{*2}}{m^*} \mathbf{B} \right) = 0, \qquad (3.3)$$
and assuming the expression in the bracket is not only independent of time but zero, we obtain
$$\nabla \times \mathbf{j}_s = -\frac{n_s^* e^{*2}}{m^*} \mathbf{B}. \qquad (3.4)$$
This is the second London equation. When the second London equation is combined with another one of the Maxwell equations,
$$\frac{1}{\mu_0} \nabla \times \mathbf{B} = \mathbf{j}_s, \qquad (3.5)$$
we get the following differential equation:
$$\nabla \times \nabla \times \mathbf{B} = -\frac{1}{\lambda_L^2} \mathbf{B}, \qquad (3.6)$$
where $\lambda_L$ is called the London penetration depth. When solving equation 3.6 in one dimension, where x = 0 is the boundary of a superconductor and x > 0 is the inside of the superconductor, we get an exponentially decreasing function: $B(x) = B_0 e^{-x/\lambda_L}$. This equation describes the Meissner-Ochsenfeld effect: the magnetic field inside a superconductor decreases exponentially. The characteristic length of this screening is the London penetration depth. Knowing that the superconducting electrons are electron pairs with charge 2e, mass 2m and density $n_s/2$, the London penetration depth is:
$$\lambda_L^2 = \frac{m^*}{n_s^* e^{*2} \mu_0} = \frac{m}{n_s e^2 \mu_0}. \qquad (3.7)$$
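The right-hand side of equation 3.7 can be evaluated with a few lines; the electron density used in the example is an arbitrary placeholder of order $10^{28}\ \mathrm{m^{-3}}$, not a measured value for rhenium.

```python
import numpy as np

m_e = 9.1093837015e-31   # electron mass, kg
e = 1.602176634e-19      # elementary charge, C
mu_0 = 4e-7 * np.pi      # vacuum permeability, H/m

def london_penetration_depth(n_s):
    """lambda_L = sqrt(m / (n_s * e^2 * mu_0)), equation 3.7; n_s in m^-3."""
    return np.sqrt(m_e / (n_s * e**2 * mu_0))

print(london_penetration_depth(1e28))   # ~5e-8 m, i.e. a few tens of nanometres
```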
Ginzburg-Landau theory
In 1950 Ginzburg and Landau used Landau's previously developed theory for second order phase transitions [101] to describe the superconducting phase transition [START_REF] Ginzburg | [END_REF]. The theory assumes the existence of an order parameter, Ψ(r) = ψ 0 (r)e -iθ , that is zero in the normal state and increases to a finite value in the superconducting state. This order parameter describes the superconducting electrons, and their local density, n s (r) = |Ψ(r)| 2 . A further assumption is that, in the vicinity of the transition, free energy can be defined, and it can be expressed as the series expansion of the order parameter, as follows:
$$f_s = f_n + \alpha(T)\,|\Psi(\mathbf{r})|^2 + \frac{1}{2}\beta(T)\,|\Psi(\mathbf{r})|^4 + \frac{1}{2m^*}\left| \left( \frac{\hbar}{i}\nabla + e^* \mathbf{A}(\mathbf{r}) \right) \Psi(\mathbf{r}) \right|^2 + \frac{1}{2\mu_0} \mathbf{B}(\mathbf{r})^2, \qquad (3.8)$$
where $f_n$ and $f_s$ are the free energy density in the normal and the superconducting state, respectively, and α(T) and β(T) are coefficients of the expansion. The second to last term is the kinetic energy of the superconducting electrons (with mass $m^*$ and charge $e^*$) in the magnetic field (A is the vector potential), and the last term is the energy of the magnetic field (B). It is a requirement that the energy minimum in the normal state be at Ψ(r) = 0, and below the transition temperature the energy has to decrease. From this the coefficients are chosen as follows: $\alpha(T) \sim (T - T_c)$ and $\beta(T) = \beta > 0$. The energy minimum is found by taking the variational derivatives with respect to $\Psi^*(\mathbf{r})$ and $\mathbf{A}(\mathbf{r})$, which results in two differential equations, respectively:
$$\frac{1}{2m^*}\left( \frac{\hbar}{i}\nabla + e^* \mathbf{A}(\mathbf{r}) \right)^2 \Psi(\mathbf{r}) + \alpha\Psi(\mathbf{r}) + \beta|\Psi(\mathbf{r})|^2\Psi(\mathbf{r}) = 0, \qquad (3.9)$$
$$\mathbf{j}(\mathbf{r}) = \frac{1}{\mu_0}\nabla \times \mathbf{B}(\mathbf{r}) = -\frac{e^*}{2m^*}\left[ \Psi^*(\mathbf{r})\left( \frac{\hbar}{i}\nabla + e^* \mathbf{A}(\mathbf{r}) \right)\Psi(\mathbf{r}) + \mathrm{c.c.} \right]. \qquad (3.10)$$
Equation 3.9 is the non-linear Schrödinger equation of the superconducting electrons, where the non-linear term can be interpreted as a repulsive potential. Equation 3.10 is the quantum mechanical current carried by the superconducting electrons.
The Ginzburg-Landau theory was derived phenomenologically, 7 years before the microscopic origins of superconductivity were understood. In 1959 Gorkov derived the Ginzburg-Landau equations from the microscopic theory [102], and showed, what is well known today, that $e^*$ and $m^*$ are the charge and mass of two electrons.
Bardeen-Cooper-Schrieffer theory
In 1950 it was experimentally shown by two groups that the critical temperature and field of mercury are sensitive to the isotope mass, which is today known as the isotope effect [START_REF] Maxwell | Isotope effect in the superconductivity of mercury[END_REF][START_REF] Reynolds | Superconductivity of isotopes of mercury[END_REF]. Shortly after, Fröhlich submitted his paper, where he proposed that superconductivity is the result of the interaction of ions and electrons [START_REF] Fröhlich | Theory of the superconducting state. i. the ground state at the absolute zero of temperature[END_REF], predicting the isotope effect, and claiming he came to this conclusion independently, without knowledge of the experimental confirmation [START_REF] Fröhlich | Isotope effect in superconductivity[END_REF], a stance that is today generally accepted [START_REF] Hirsch | Did herbert fröhlich predict or postdict the isotope effect in superconductors?[END_REF]. The idea that ions could be responsible for superconductivity, and the discovery of the isotope effect, were the foundation stones of the first successful theory, the BCS theory.
The first step towards the BCS theory was the realisation that the normal ground state of the electron gas, the Fermi sea, becomes unstable when an attractive interaction, however small it may be, acts between the electrons, and two-electron bound states appear [START_REF] Cooper | Bound electron pairs in a degenerate fermi gas[END_REF]. In the derivation of the Cooper instability, the Schrödinger equation with an attractive potential is solved for two electrons that are added to the Fermi sea. When evaluating the two-electron wave function, an important observation can be made. The lowest energy state is expected to have zero momentum, which means the electrons have equal and opposite momenta. To ensure that the wave function is antisymmetric, the electron spins must be opposite. This is an s-wave state, and indeed all conventional superconductors were found to have s-wave Cooper pairs. It might sound surprising to assume an attractive interaction between electrons at first. The Coulomb interaction is repulsive, and even when considering the screening that occurs in metals, the potential remains repulsive. Motions of the ions have to be considered to get an effective attractive interaction. An intuitive image could be the following: the first electron that passes through the lattice polarises it by attracting the positive ions, then these positive ions attract the second electron. If this attraction can override the repulsion between the two electrons, an effective attractive potential occurs [START_REF] Tinkham | Introduction to superconductivity[END_REF].
The characteristic vibrational frequency of the ions, the phonon frequency, is the Debye frequency, $\omega_D$. It is assumed that only the electrons whose energies are in the $2\hbar\omega_D$ interval around the Fermi level ($E_F$) experience the attractive potential.
The energy eigenvalue of the two-electron Schrödinger equation is the following:
$$E \approx 2E_F - 2\hbar\omega_D\, e^{-\frac{2}{N(0)V}}, \qquad (3.11)$$
where N (0) is the density of states at the Fermi level, and -V is the attractive potential. The energy is negative with respect to the Fermi level, no matter how small V is. This means that there is a bound state of two electrons with lower energy than the ground state in the normal phase. This is called Cooper instability. The only conclusion that can be drawn from the calculation outlined above is that the Fermi sea is not a stable state anymore. To find what the new ground state of electrons is, the Schrödinger equation of all electrons in the material must be solved, and that is what the BCS theory does.
The most important results are the prediction of the energy gap (∆) that opens in the electron spectrum around the Fermi energy, and linking that to the transition temperature (T c ). Both depend on Debye frequency, which signals the isotope effect:
$$k_B T_c = 1.14\, \hbar\omega_D\, e^{-\frac{1}{N(0)V}}, \qquad (3.12)$$
$$\Delta(0\,\mathrm{K}) \approx 2\hbar\omega_D\, e^{-\frac{1}{N(0)V}}. \qquad (3.13)$$
From equations 3.12 and 3.13, the relation between T c and ∆(0):
$$\Delta(0) = 1.754\, k_B T_c. \qquad (3.14)$$
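As a numerical illustration of equation 3.14: taking a critical temperature of about 1.7 K, typical of bulk rhenium, the zero-temperature gap comes out around a quarter of a millielectronvolt.

```python
k_B = 1.380649e-23    # J/K
e = 1.602176634e-19   # J per eV

def bcs_gap_microeV(T_c):
    """Equation 3.14, Delta(0) = 1.754 * k_B * T_c, returned in micro-electronvolts."""
    return 1.754 * k_B * T_c / e * 1e6

print(bcs_gap_microeV(1.7))   # ~260 ueV
```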
BCS theory was a ground breaking theory, since it was the first microscopic theory that described superconductivity. Nonetheless, there are several experimental situations that cannot be explained by it, and thus it needs to be generalised. Processes other than electron-phonon scattering need to be considered. The reason for this are the several assumptions that are made to simplify the already complicated derivation. A few of these are: Fermi surface is assumed to be spherical, electron-phonon interaction is constant for all energies around the Fermi energy, and only singlet Cooper pairs are considered. Superconductors that can be described by the BCS theory are called conventional superconductors. Rhenium, alongside other pure metal superconductors, belongs to this group.
Characteristic lengths
One of the important characteristic lengths of a superconductor has already been defined in equation 3.7, it is the London penetration depth, which gives how deep the magnetic field can penetrate into a superconductor. Due to the limitations of the theory behind the London equations, λ L is a theoretical limit of the effective penetration depth at T → 0. The effective penetration depth is always larger, and diverges close to T c .
A coherence length can be defined based on the uncertainty principle, arguing that only electrons within a $\sim k_B T_c$ interval of the Fermi energy can play a role. The momentum of these electrons is $p \approx k_B T_c / v_F$, where $v_F$ is the Fermi velocity. From this the uncertainty of the location can be expressed as
$$\Delta x \approx \frac{\hbar}{\Delta p} \approx \frac{\hbar v_F}{k_B T_c}, \qquad (3.15)$$
which can be identified as the coherence length, and the numerical factor (α) can be obtained from the BCS theory:
$$\xi_0 = \frac{\hbar v_F}{\pi \Delta(0)} = \alpha\, \frac{\hbar v_F}{k_B T_c}, \qquad (3.16)$$
where α is about 0.18. Another characteristic length can be defined based on the Ginzburg-Landau theory [START_REF] Tinkham | Introduction to superconductivity[END_REF]. From equation 3.9 the equilibrium value of the order parameter (Ψ 0 ) can be determined:
$$0 + \alpha\Psi_0 + \beta|\Psi_0|^2\Psi_0 = 0 \;\rightarrow\; |\Psi_0|^2 = -\frac{\alpha}{\beta}. \qquad (3.17)$$
Using the normalised wave function f = Ψ/Ψ 0 , equation 3.9 (A = 0) can be rewritten in the following form:
$$-\frac{\hbar^2}{2m^*\alpha}\nabla^2 f + f - |f|^2 f = 0. \qquad (3.18)$$
The coefficient of the gradient term is defined as the characteristic length. This is referred to as Ginzburg-Landau coherence length:
$$\xi_{GL}^2(T) = -\frac{\hbar^2}{2m^*\alpha(T)} \propto \frac{1}{T_c - T}. \qquad (3.19)$$
ξ GL is different from ξ 0 , however for pure materials, far below T c they are equal.
The Ginzburg-Landau theory introduces the order parameter (Ψ), that was said to be related to the density of superconducting electrons (n s = |Ψ| 2 ). It was shown in equation 3.17 that the equilibrium value of the order parameter is -α/β. Writing this in the expression of the London penetration depth (equation 3.7), we can see that the penetration depth has the same temperature dependence as the Ginzburg-Landau coherence length, and they both diverge upon approaching T c [START_REF] Tinkham | Introduction to superconductivity[END_REF].
$$\lambda_L^2(T) = \frac{m^*}{4\mu_0 e^{*2} |\Psi_0|^2} = -\frac{\beta m^*}{4\mu_0 e^{*2} \alpha(T)} \propto \frac{1}{T_c - T}. \qquad (3.20)$$
The dimensionless and temperature independent Ginzburg-Landau number is introduced; it shows the relation between the penetration depth and the coherence length [START_REF] Tinkham | Introduction to superconductivity[END_REF].
$$\kappa = \frac{\lambda_L(T)}{\xi_{GL}(T)}. \qquad (3.21)$$
Its significance will be discussed in chapter 3.2.7.
Dirty and clean superconductors
The term 'dirty superconductor' was coined by P. W. Anderson in his 1959 paper, where he "sketched" a BCS type theory for "very dirty superconductors" [START_REF] Anderson | Theory of dirty superconductors[END_REF]. It has been observed that superconductivity is often insensitive to the amount of impurities present in the material. These impurities include crystal defects and non-magnetic chemical impurities: beryllium was shown to display superconductivity in the amorphous state with a T_c twenty times higher than in the crystalline state [109,[START_REF] Matthias | Superconductivity and electron concentration[END_REF]. He divided superconductors into two groups: clean superconductors, which are sensitive to the introduction of additional impurities; and dirty superconductors, which are insensitive.
Impurities cause conduction electrons to scatter. This scattering is described quantitatively by the mean free path (l). In the clean limit the electrons are rarely scattered, and the mean free path is longer than the superconducting coherence length ($l \gg \xi$). In the dirty limit, however, electron scattering is strong, and the electron mean free path is shorter than the coherence length ($l \ll \xi$). The effective coherence length in dirty superconductors is reduced. Its value near the transition temperature can be obtained from the BCS theory [START_REF] Tinkham | Introduction to superconductivity[END_REF]:
$$\xi_{\mathrm{clean}} = 0.74\, \frac{\xi_0}{\left(1 - \frac{T}{T_c}\right)^{1/2}}, \qquad \xi_{\mathrm{dirty}} = 0.855\, \frac{\sqrt{\xi_0\, l}}{\left(1 - \frac{T}{T_c}\right)^{1/2}}. \qquad (3.22)$$
Flux quantisation
It has been observed that inside a superconducting ring the magnetic flux cannot take an arbitrary value, but must be an integer multiple of the so called flux quantum, $\Phi_0 = h/2e$ [START_REF] Bascom | Experimental evidence for quantized flux in superconducting cylinders[END_REF].
Flux passing through a surface area (S with normal n) can be calculated by integrating the magnetic induction vector (B) over that surface [START_REF] Sólyom | Fundamentals of the Physics of Solids[END_REF]. Using Stokes theorem the surface integral becomes a line integral running around the boundary of the area (l), and the induction vector is replaced by the vector potential (A):
$$\Phi = \int_S \mathbf{B} \cdot \mathbf{n}\, dS = \oint_l \mathbf{A} \cdot d\mathbf{l}. \qquad (3.23)$$
The vector potential in the superconducting regime can be expressed from one of the Ginzburg-Landau differential equations, equation 3.10. Current is 0 deep inside the superconductor, where the integration is considered. The order parameter is assumed to have the form Ψ(r) = ψ 0 (r)e -iθ , where ψ 0 is thought to have the equilibrium value, and does not change along the integral. Only the phase can change. Using these, the vector potential, and the flux inside the superconducting ring, respectively, is:
$$\mathbf{A} = -\frac{\hbar}{e^*}\nabla\theta, \qquad \Phi = -\frac{\hbar}{e^*}\oint \nabla\theta \cdot d\mathbf{l}. \qquad (3.24)$$
Phase must be a single valued function of space, θ(r), so, along a closed loop it can only change by integer multiples of 2π. The solution of the integral is then
$$\Phi = -\frac{\hbar}{e^*}\, 2\pi n = n\, \frac{h}{2e} = n\Phi_0, \quad \text{where} \quad \Phi_0 = \frac{h}{2e} = 2.067 \cdot 10^{-7}\ \mathrm{Gauss \cdot cm^2}. \qquad (3.25)$$
$\Phi_0$ is called the flux quantum.
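The numerical value of the flux quantum is easy to verify; the second print converts from SI units to the Gauss cm² quoted in equation 3.25.

```python
h = 6.62607015e-34    # Planck constant, J*s
e = 1.602176634e-19   # elementary charge, C

phi_0 = h / (2 * e)         # Wb (T*m^2)
print(phi_0, phi_0 * 1e8)   # 2.07e-15 Wb and 2.07e-7 Gauss*cm^2 (1 Wb = 1e8 G*cm^2)
```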
Two types of superconductors
In chapter 3.2.4 the dimensionless Ginzburg-Landau number was introduced, and its importance was promised to be explained. To recap, κ is defined as the ratio of the penetration depth (λ) and the coherence length (ξ). The penetration depth describes the disappearance of the magnetic field at the boundary of the superconductor, and the coherence length is related to the decay of the order parameter. Therefore, their relative value describes the properties at the superconducting-normal interface. In figure 3.1 the magnetic field curve (h) and the order parameter (Ψ) are shown on the boundary of a superconductor for the two extreme cases of κ. When κ is smaller than 1, the coherence length is longer than the penetration depth, and there is a region where the magnetic field and the order parameter are both small. Expelling the magnetic field costs energy, and this energy is not compensated by the condensation to the superconducting phase, thus this interface has a positive energy. However, in the case when κ is larger than 1, following the same argument, interfacial energy can become negative, meaning that walls of this type can spontaneously appear. The crossover between positive and negative interfacial energy is at κ = 1/ √ 2. This value for κ was already mentioned in Ginzburg and Landau's original paper [START_REF] Ginzburg | [END_REF], but it was Abrikosov who predicted how the magnetic field behaves in such superconductors [97], which will be briefly discussed below [START_REF] Tinkham | Introduction to superconductivity[END_REF].
Since the penetration depth and the coherence length are material properties, we can talk about two types of superconductors. Type I are the ones with $\kappa < 1/\sqrt{2}$. Superconductivity is completely destroyed, and the magnetic field enters the material, at a certain critical field $H_c(T)$.
Type II superconductors have $\kappa > 1/\sqrt{2}$. These materials have two critical fields, $H_{c1}$ and $H_{c2}$. Their phase diagram is shown in figure 3.2. The Meissner phase is the homogeneous superconducting phase, where the magnetic field is fully expelled. When the magnetic field is increased beyond $H_{c1}$, type II superconductors transit to a so called vortex phase, where magnetic field and superconductivity mix. Above $H_{c2}$, the superconductor transits to the normal phase [START_REF] Sólyom | Fundamentals of the Physics of Solids[END_REF]. In the vortex phase, which is unique to type II superconductors, magnetic flux is present in the sample in the form of tubes, vortices, around which the superconducting phase persists. The profile of the magnetic induction (B) and the order parameter around the core of a vortex (r=0) is shown in figure 3.3. The order parameter drops in the core of the vortex, while the magnetic induction reaches its maximum. The supercurrent circulates around the core of the vortex. One vortex can only contain integer multiples of the flux quantum ($\phi_0 = h/2e$), where the integer is energetically favoured to be 1. When the magnetic field is increased, the density of the vortices increases. Due to the repulsion that acts between the vortices, they arrange themselves in a regular array, which has trigonal symmetry in most cases.
The vortex phase was predicted by Abrikosov in 1957 [97], and it was first directly observed in 1967 by electron microscopy, using magnetic particles for contrast [START_REF] Essmann | The direct observation of individual flux lines in type ii superconductors[END_REF].
Superconducting devices
Josephson junction
In 1962 Josephson pointed out that Cooper pairs can tunnel between two superconductors separated by a thin (L < ξ) barrier [START_REF] Josephson | Possible new effects in superconductive tunnelling[END_REF]. This construction is called a Josephson junction. The thin barrier can be an insulating layer, as it was originally imagined by Josephson, or a thin metallic constriction with dimensions below the coherence length. The tunnelling current can be derived from the Ginzburg-Landau equations; the derivation is outlined below.
Recalling equation 3.18 in one dimension:
$$\xi^2 \frac{d^2 f}{dx^2} + f - f^3 = 0, \qquad (3.26)$$
where f = Ψ/Ψ₀. The superconductors on the two sides of the bridge are assumed to be in equilibrium, giving |f| = 1, but the phase of the superconducting wave function can still differ. An absolute phase cannot be defined; without loss of generality, it can be chosen to be 0 at one end of the bridge and Δθ at the other, which then gives the boundary conditions: f(x = 0) = 1; $f(x = L) = e^{i\Delta\theta}$. When $L \ll \xi$, equation 3.26 is dominated by the first term, in which case the problem is reduced to Laplace's equation:
$$\xi^2 \frac{d^2 f}{dx^2} = 0. \qquad (3.27)$$
The most general solution in one dimension is f = a + bx, which after satisfying the boundary conditions gives:
f = 1 - x L + x L e i∆θ . (3.28)
The current running through the bridge can be obtained by inserting 3.28 into the Ginzburg-Landau expression of current, equation 3.10:
$$I_j = I_c \sin\Delta\theta, \quad \text{where} \quad I_c = \frac{\hbar e^* \Psi_0^2}{m^*}\, \frac{A_{cs}}{L}. \qquad (3.29)$$
A cs is the cross-sectional area of the bridge. This means that the tunnelling superconducting current is the function of the relative phase of the wave function in the two superconducting regions, and is limited by the critical current, which is dependent on the bridge dimensions [START_REF] Tinkham | Introduction to superconductivity[END_REF].
Superconducting quantum interference device
A superconducting quantum interference device (SQUID) is a superconducting ring that is interrupted by at least one Josephson junction, as shown in figure 3.4. Their current-magnetic flux characteristics are very sensitive to small changes of the magnetic field, therefore they are used as magnetometers. In scanning SQUID microscopy, SQUIDs are the scanning probes, and they map the magnetic field across a magnetic or superconducting sample. In most SQUIDs the superconducting ring is interrupted by two junctions, as shown in figure 3.4. The ring is put in a magnetic field such that the flux through it is $\Phi_e$. As a result of the magnetic field, a current will circulate in the ring. From equation 3.29, the current through the junctions is defined by the phase drops across them, $\theta_1$ and $\theta_2$:
$$I_1 = I_{c1} \sin\theta_1, \qquad I_2 = -I_{c2} \sin\theta_2, \qquad (3.30)$$
where I c1 and I c2 are the critical currents.
The total current through the ring is the sum of $I_1$ and $I_2$:
$$I_{\mathrm{total}} = I_1 + I_2 = I_{c1} \sin\theta_1 - I_{c2} \sin\theta_2. \qquad (3.31)$$
The sum of the phase differences across the two junctions is the integral of the vector potential along the ring:
$$\theta_1 + \theta_2 = \frac{2e}{\hbar} \oint \mathbf{A} \cdot d\mathbf{l} = 2\pi\, \frac{\Phi_e}{\Phi_0}. \qquad (3.32)$$
Using this in equation 3.31, the total current in the ring is:
$$I_{\mathrm{total}} = I_{c1} \sin\theta_1 + I_{c2} \sin\left( \theta_1 - 2\pi\, \frac{\Phi_e}{\Phi_0} \right). \qquad (3.33)$$
To find the maximum current through the ring, equation 3.33 has to be maximised with respect to $\theta_1$:
$$\frac{dI_{\mathrm{total}}}{d\theta_1} = I_{c1} \cos\theta_1 + I_{c2} \cos\left( \theta_1 - 2\pi\, \frac{\Phi_e}{\Phi_0} \right) = 0. \qquad (3.34)$$
Summing the squares of equations 3.33 and 3.34:
$$I_{\mathrm{total}}^2 = I_{c1}^2 + I_{c2}^2 + 2 I_{c1} I_{c2} \cos\left( 2\pi\, \frac{\Phi_e}{\Phi_0} \right). \qquad (3.35)$$
Using the identity $\cos\delta = 2\cos^2\frac{\delta}{2} - 1$, equation 3.35 can be rearranged as follows:
$$I_{\mathrm{total}} = \sqrt{ (I_{c1} - I_{c2})^2 + 4 I_{c1} I_{c2} \cos^2\left( \pi\, \frac{\Phi_e}{\Phi_0} \right) }. \qquad (3.36)$$
If the junctions are assumed to be identical with equal critical currents (I c1 = I c2 = I c ), the critical current takes the following form:
$$I_{\mathrm{total}} = 2 I_c \left| \cos\left( \pi\, \frac{\Phi_e}{\Phi_0} \right) \right|. \qquad (3.37)$$
The flux dependence of the current is shown in figure 3.5. The current changes periodically as the flux increases or decreases. These devices can be used to detect small magnetic fields [START_REF] Hasselbach | Microsquid magnetometry and magnetic imaging[END_REF].
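Equations 3.36 and 3.37 are straightforward to evaluate; the sketch below computes the flux modulation of the critical current for the zero-inductance case, with arbitrary junction critical currents.

```python
import numpy as np

def squid_critical_current(phi_e, I_c1, I_c2, phi_0=1.0):
    """Maximum supercurrent of a two-junction SQUID versus external flux
    (equation 3.36); reduces to 2*Ic*|cos(pi*phi_e/phi_0)| for identical junctions.
    The loop inductance is neglected (L_SQUID = 0)."""
    return np.sqrt((I_c1 - I_c2)**2
                   + 4 * I_c1 * I_c2 * np.cos(np.pi * phi_e / phi_0)**2)

flux = np.linspace(-2, 2, 401)                   # external flux in units of phi_0
I_max = squid_critical_current(flux, 1.0, 1.0)   # symmetric junctions
print(I_max.min(), I_max.max())                  # modulation between 0 and 2*Ic
```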
The above calculation is only valid if the inductance of the SQUID (L SQUID ) is zero, which is not true in general. When L SQUID is not zero, the flux inside the ring is modified by the current running through it: Φ L = Φ e + IL SQUID . The critical current can be determined numerically. The inductance of the SQUID depends on its size.
To measure the critical current of a SQUID, one end is connected to a current source, the other to the ground, and the current is increased until it reaches the critical current.
In our setup, the SQUID is biased by a current which is ramped up from 0 with a given slope. At the start of the current ramp, a 40 MHz quartz clock is started simultaneously. When the critical current is reached, a voltage appears across the SQUID; when the ∂V/∂t pulse is detected, the clock is stopped and the current is set back to zero. The time lapse measured by the clock is registered, and the ramp starts again. The biasing current versus time is shown in figure 3.6. One critical current data point is plotted after averaging 30 critical current measurements.
SQUIDs have a hysteretic V(I) characteristic due to Joule heating in the normal state. When the critical current is reached, the voltage pulse heats the SQUID, and after the bias current is switched off, it takes time to cool down and return to the superconducting state. This limits the frequency at which the current ramp can be repeated [START_REF] Hasselbach | Microsquid magnetometry and magnetic imaging[END_REF].
Refrigerators
Temperature and magnetic field dependence of transport properties of complete films and nanostructures were measured. For the low temperature measurements two different refrigerators were used: a table top Helium-3 cryostat where only the temperature can be adjusted, and a dilution refrigerator where magnetic field can be applied as well.
Table-top helium-3 cryostat
Low temperature resistivity measurements on the rhenium wires were performed in a table-top helium-3 cryostat, which was designed and built at Institut Néel [117]. This cryostat, as its name suggests, is a compact, easy-to-use refrigerator, and is able to cool down to 300 mK in about 3 hours. A simplified schematic and a photograph of the inside are shown in figure 3.7 and figure 3.8, respectively. It consists of a vacuum chamber that houses two interlocked helium circuits: one open circuit for helium-4 (blue line in figure 3.7), that runs through the cryostat from the reservoir to the recovery exhaust; and a closed circuit for helium-3 (orange line in figure 3.7).

The fridge can be cooled down to approximately 1.2 K by circulating only helium-4, and pumping on it in the 1 K pot, shown in figures 3.7 and 3.8. At this temperature the vapour pressure of 4He is too small, and the rate of evaporation is too low to reduce the temperature further. Below 3.19 K helium-3 starts to condense in its reservoir. The condensation process can be monitored by measuring the pressure in the helium-3 circuit. When the condensation is complete, the helium-3 liquid is pumped by an internal sorption system, which is activated charcoal placed in the neighbouring container, labelled as charcoal in figures 3.7 and 3.8. This way the temperature of the reservoir can be further reduced to 300 mK, where we meet the same limitation: the vapour pressure is too low for evaporation to cool down further. This is the minimum temperature that can be achieved using pure helium-3. Once all the helium-3 is adsorbed, the charcoal can be heated to 40 K to release the gas, and the condensation-adsorption cycle can be repeated.
The temperature of the sample can be adjusted by ohmic heating, running current through a resistor placed on the sample stage.
Inverted dilution refrigerator
A dilution refrigerator contains several circuits, one of which is a closed circuit containing a mixture of helium-4 and helium-3. Using the other, helium-4-only circuits, the fridge is able to reach the 1.2 K mentioned above by pumping strongly on the liquid ⁴He. At this temperature the mixture has condensed, and by pumping on it, the temperature is decreased further. Upon reaching 800 mK, the mixture spontaneously undergoes a phase separation, as shown in the phase diagram in figure 3.9: the pink region is forbidden, and the liquid separates into two phases. The phase marked with yellow is still a mixture of ⁴He and ³He, but contains only a small amount of ³He; this is the diluted phase. The other phase, marked with green, is also a mixture of the two isotopes but is rich in ³He; this is the concentrated phase.
The volume of the mixture and the concentration of 3 He is set so that the phase boundary between the diluted phase and the concentrated phase occurs in the mixing chamber, and the liquid surface of the concentrated phase is in the still. A simplified mixture circuit is shown in figure 3.10, noting the mixing chamber and the still. Diluted phase is marked with blue, concentrated phase with orange. The lighter shade of orange corresponds to the gaseous phase, the darker to the liquid. After the mixture separates to a diluted (blue) and a concentrated (orange) phase, 3 He is extracted from the still and resupplied to the mixing chamber. Cooling power is provided by the diffusion of 3 He from the concentrated to the diluted phase in the mixing chamber.
In the mixing chamber the concentrated phase floats on top of the denser diluted phase. The vapour in the still contains a higher concentration of the component that has the lower boiling point. This is helium-3, and it is continuously pumped. The concentration of ³He in the diluted phase drops as a result. To compensate for the decreasing concentration, ³He diffuses from the concentrated phase into the diluted phase in the mixing chamber. Dilution of helium-3 in the superfluid helium-4 requires heat absorption, since the superfluid helium-4 acts like a vacuum for the ³He atoms. Dilution can be compared to evaporation, since the ³He atoms in the diluted phase are further apart, as in a gas. As a result, the mixing chamber, and everything thermally connected to it (the sample), cools down.
The diagram in figure 3.10 is grossly simplified. The mixture circuit is interlocked with the helium-4 circuit. On the left, the vapour leaving the still is warmed up by running through the incoming liquid helium-4 and reaches room temperature by the time it gets to the pump. After the compressor, the mixture runs through a liquid nitrogen bath that removes any contamination that might be present; then it enters the fridge through the outgoing helium-4 pipes, which pre-cool it. On its way to the mixing chamber it runs in close contact with the vapour that is leaving the still, which aids further cooling. To liquefy it, its pressure is increased by driving it through narrow pipe sections called impedances.
Dilution fridges are limited by the increasing viscosity and thermal conductivity of the circulating fluids as the temperature is lowered. For this reason the lower the temperature the larger the diameter of the tubes must be which increases the cost. The lowest temperature where they are still practical to use is 2 mK. In conventional dilution fridges, the coldest part, the mixing chamber is at the bottom. We used an inverted dilution fridge, SIONLUDI, where the mixing chamber is at the top. Its schematic is shown in figure 3.11, it was designed and built in Institut Néel. The sample is easy to load, it is placed under concentric bells which are, from the outside: vacuum seal, and temperature shields for 300 K, 80K, 20 K, and 4 K. Helium-4 is fed from the bottom, and the fridge is placed on a vibration free table. The magnetic field can be adjusted by changing the current in the copper coil, which is outside of the vacuum bell.
"The world's wealth would be won by the man who, out of the Rhinegold, fashioned the ring which measureless might would bestow." Richard Wagner, Das Rheingold, Der Ring des Nibelungen 4
Transport properties of rhenium wires and SQUIDs
Rhenium thin films have shown long superconducting coherence lengths (up to 170 nm) and electron mean free paths (up to 200 nm) [START_REF] Delsol | Elaboration et caractérisation de couches minces supraconductrices épitaxiées de rhénium sur saphir[END_REF]. This makes them suitable to study the effect of the decreasing size on the superconducting properties by reducing the dimensions of the samples to the vicinity of those two length scales. To explore this, and to study the effect of lithography on the superconducting properties of rhenium thin films, wires with width from 100 nm to 3 µm were fabricated. We have also successfully fabricated SQUIDs.
In this chapter it is shown that superconductivity is unaffected by the lithography process. The sensitivity of T_c to lattice imperfections and orientation is studied. Finally, low noise SQUIDs are presented.

The third design is shown in figure 4.2. This pattern was also fabricated using electron beam lithography. The widths of the wires were reduced below 1 µm. The pattern included 2 SQUIDs as well, with different bridge widths: 50 nm and 20 nm. In this design each wire and SQUID is connected to two pads on the bottom, and there is a common electrode on the top with four pads. Two versions of the design were fabricated, alternating on the film. The first version had wire widths of 100 nm, 200 nm, and 400 nm; the second had 500 nm, 700 nm, and 900 nm.
Fabrication
Drawings shown in figures 4.1(a), 4.1(b), and 4.2 are not to scale.
Lithography
Steps of the lithography process are shown in figure 4.3.
First, the sample is spincoated with the resist. For laser beam lithography, S1818 resist was used, which forms a layer with thickness in the range of 1 µm on the surface of the sample. For electron beam lithography, PMMA (polymethyl methacrylate) resist was used, with a thickness in the range of 100 nm. After spincoating, the sample is baked to evaporate the solvent from the resist.
In the next step, the surface was exposed to a laser or electron beam along the pattern lines. Both of the resists are so called positive resists. They consist of long polymer chains, which break up to smaller fragments when exposed to light/electron beam. The smaller fragments can be dissolved easily in the developer, leaving behind a positive imprint of the pattern in the resist [START_REF] Rai-Choudhury | Handbook of Microlithography, Micromachining, and Microfabrication[END_REF].
Laser beam
Electron beam

In the case of electron beam lithography, the designs had to be patterned without moving the sample stage. This was done in two steps. In the first step, a 1 mm² writing field was chosen, and the pads and the electrodes were written using a beam current (I) of a few mA. In the second step, the writing field was reduced to 50 µm², and the wires and the SQUIDs were patterned using a beam current of 12-14 pA. The dose (D), the charge received per unit area, can be obtained from the beam current using the formula D = It/A, where t is the exposure time and A is the area. For the bridges in the SQUIDs, the dose was increased, and they were scanned only once.
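As a rough illustration of this bookkeeping, the helper below simply evaluates D = It/A; the dwell time and pixel size in the example are made-up values, not the writing parameters actually used for these samples.

```python
def dose_uC_per_cm2(beam_current_A, dwell_time_s, area_cm2):
    """Areal dose D = I*t/A, returned in microcoulombs per cm^2."""
    return beam_current_A * dwell_time_s / area_cm2 * 1e6

# e.g. a 13 pA beam dwelling 10 us on a (10 nm)^2 pixel (illustrative values):
pixel_area_cm2 = (10e-7) ** 2              # 10 nm expressed in cm
print(dose_uC_per_cm2(13e-12, 10e-6, pixel_area_cm2))   # ~130 uC/cm^2
```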
After the development, the resist was removed from the surface of the rhenium where it had been exposed. In the next step, 20 nm of aluminium was deposited using a Plassys MEB 550S evaporator. During lift-off, the sample was placed in acetone, which dissolved the remaining resist. The aluminium on top of the resist was also removed. To protect the thin bridges of the SQUIDs, we did not use an ultrasonic bath at this step, only a pipette to stir the liquid a few times. Lift-off took approximately 2 hours. After the lift-off, aluminium only covered the rhenium film along the pattern lines. In the penultimate step the rhenium was removed everywhere by reactive ion etching (SF₆) using an RIE Plassys, except where it was protected by the aluminium. Finally, the aluminium covering the structures was removed by first immersing the sample in the developer MF-319 (an approximately 3% solution of N(CH₃)₄⁺OH⁻ in water), then rinsing it in distilled water and drying it under nitrogen flow.
Description of the samples
Three rhenium films with different thicknesses, surfaces and crystallographic structures were patterned as described above, and transport measurements were conducted at low temperature. Before patterning, the samples were studied with AFM and XRD techniques. The details of the measurements and analysis are described in chapter 2.2. A summary of the results and the patterns fabricated is listed in table 4.2.
Table 4.2: Crystallography and surface information on the samples that were used for the fabrication.
50 nm thick sample
Ten copies of design II, shown in figure 4.1(b), were patterned onto the 50 nm thick sample using electron beam lithography, as described in section 4.1. An image of a completed pattern unit, taken with an optical microscope, is shown in figure 4.4. The pattern is not perfect, as some rhenium remained on the sides of some of the electrodes and in some corners. This is due to the wrong electron dose being used during patterning. Electrons are scattered in the resist, are backscattered, and excite secondary electrons from the film. All these processes affect the size and shape of the patterned volume; this is called the proximity effect. The third step shown in figure 4.3 depicts the ideal shape of the imprint in the resist: a trapezoid. If the shape of the imprint is correct, the deposited aluminium is not continuous and is removed with the resist during lift-off. However, if the dose is wrong and the imprint does not have the correct shape, the aluminium layer can be continuous, and thus harder or impossible to remove with the resist during lift-off. Where aluminium is left, rhenium cannot be etched. The remaining rhenium created short circuits and thicker wires on some of the patterns. These were excluded from the measurement. The correct dose can only be found by experimentation.
To verify the crystallography of the patterned area, EBSD measurements were carried out in SiMaP on the circuits after the lithography process using a FEG Zeiss scanning electron microscope. Here the colour is uniformly blue.
The intensity of the colour red and blue in these figures scales with the IQ, which measures the quality of the Kikuchi pattern recorded at a position [START_REF]Electron Backscatter Diffraction in Materials Science[END_REF]. In the areas where the intensity of red/blue is lower, points with the non-uniform colour can also be found. The low IQ of this area suggests that the orientations of these points were wrongly identified by the software due to weak contrast or blur in the Kikuchi patterns.
The surface was also investigated with AFM after the patterning. A SQUID is shown in figure 4.6. On the surface of the SQUID, a few nanometers of height variation can be observed. It corresponds to the spirals. They were left intact during the lithography process.
Unfortunately, the thin bridges that are the weak links between the two forks of the SQUID are not present on this sample. The dose during electron beam exposure was too low. The resist at the nanowire level was removed during development. For later samples the dose was adjusted.
25 nm thick sample
The two versions of design III were fabricated onto the sample using electron beam lithography. Version one had wire widths of 100 nm, 200 nm and 400 nm; version two had 500 nm, 700 nm and 900 nm. The two versions alternated and were repeated 18 times all over the sample. As with the previous sample, the electron dose was not perfectly set, which caused short circuits in some places. Because the pattern was repeated many times all over the sample, there were a sufficient number of patterns to choose from in which the left-over rhenium did not cause a short circuit.
100 nm thick sample
Design I was patterned onto the 100 nm thick film, using the laser lithography technique. An optical image of the fabricated pattern is shown in figure 4.9.
Transport measurements of the wires
The experimental setup
The resistances of the wires were measured using the four-terminal method. The principle of the technique is shown in figure 4.10.
Current is supplied to terminals 1 and 4, shown in blue, that are connected to the ends of the wire. The voltage is measured between terminals 2 and 3, shown in green, placed between the current electrodes. The separation of the voltage and current terminals means that only the resistance of the part of the wire that falls between the voltage electrodes is measured. The resistances of the electrodes, and contacts are excluded. This allows accurate measurement of low resistance values. The sample is glued on a sample holder with copper leads deposited onto it, as shown in figure 4.11. Connections were made between the copper electrodes of the sample holder and the large pads of the sample using a West Bond ultrasonic wire bonder with 25 µm diameter aluminium wire. There are 4 sets of 4 connectors inside the fridge, one of which is visible in figure 3.8, labeled as 'electrical connections'. They lead to the current source and the voltage probe outside the fridge, which was a TRMC2 controller.
The resistances of all three films before patterning were measured by B. Delsol and presented in detail in reference [START_REF] Delsol | Elaboration et caractérisation de couches minces supraconductrices épitaxiées de rhénium sur saphir[END_REF]. Some of his results are referred to below, along with the results of the present measurements. The resistance of the thinnest (3 µm) wires was measured on the 50 nm and the 100 nm thick films. An optical microscope image of one of the connections made on the 50 nm thick sample is shown in figure 4.12. The voltage probe was connected to the two ends of the 3 µm part of the pattern, while the current source and the ground were connected to points further to the sides. The resistivity was only characterised along the green line in figure 4.12.
On the 25 nm sample the 400 nm, 200 nm and the 100 nm wires were measured. The connections made for one of these experiments are shown in figure 4.13. The current source and ground were connected to the pads marked by the blue circles, and the voltage was measured between the two green circles. This allows the measurement of the 200 nm wide wire.
Calculation of the resistivity
The setup detailed above measures the resistance (R). Resistance depends on the geometry: the thickness of the film (d), the width of the wire (w), and the length (l) over which the path of the current overlaps with the path of the voltage probe. If the geometry is known, the resistivity (ρ), which depends only on the material, can be calculated as follows:
ρ = R·wd/l. (4.1)
Calculating the resistivity of the 3 µm wire is easy. It is shown in figure 4.12 that only the resistance of the wire is probed, and all its dimensions are known.
Resistivities in case of design II (figure 4.13) are more difficult to obtain. Magnified images of the pattern in figure 4.8 show that the thin wires are connected to thicker wires. Resistances of these thick parts are included in the measurement:
R_output = R_wire1 + R_wire2 = (ρ/d)·(l_wire1/w_wire1 + l_wire2/w_wire2). (4.2)
Thus the resistivity can computed as follows:
ρ = d·R_output/(l_wire1/w_wire1 + l_wire2/w_wire2). (4.3)
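A minimal sketch of how equations 4.1 and 4.3 are applied is given below. Only the 3 µm × 120 µm wire geometry is taken from the text; the resistance values fed in are illustrative placeholders.

```python
def rho_single_wire(R, w, d, l):
    """Eq. 4.1: resistivity of a single wire of width w, thickness d, length l (SI units)."""
    return R * w * d / l

def rho_two_segments(R_total, d, l1, w1, l2, w2):
    """Eq. 4.3: two wire segments in series that share the same resistivity."""
    return d * R_total / (l1 / w1 + l2 / w2)

# 3 um wide, 120 um long wire on the 50 nm film; R = 5 Ohm is only an example value:
print(rho_single_wire(R=5.0, w=3e-6, d=50e-9, l=120e-6))          # resistivity in Ohm*m
print(rho_two_segments(R_total=8.0, d=25e-9, l1=5e-6, w1=200e-9,  # made-up geometry for
                       l2=20e-6, w2=1e-6))                        # the two-segment case
```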
There is a significant uncertainty (>10%) in the obtained resistivity values of the 100 nm, 200 nm and 400 nm wires, because their precise widths were not verified with SEM. Due to the proximity effect, they can be wider than designed.
Transport characteristics of rhenium wires
Normal state properties
Resistivity is the result of the scattering of the conduction electrons. According to Matthiessen's empirical rule, resistivities corresponding to independent sources of scattering add up. Dominant sources in metals are scattering on impurities and electron-phonon scattering, thus at room temperature:
ρ_RT = ρ_el-ph + ρ_imp. (4.4)
Resistivity due to electron-phonon scattering is ∝ T at temperatures above the Debye temperature, and is ∝ T⁵ below [START_REF] Sólyom | Fundamentals of the Physics of Solids[END_REF]. At low temperatures, electron-phonon scattering becomes negligible, and impurity scattering dominates, which is independent of the temperature [START_REF] Sólyom | Fundamentals of the Physics of Solids[END_REF]:
ρ_res = ρ_imp. (4.5)
Resistivity settles at a constant value, the residual resistivity.
The residual resistivity ratio (RRR) is defined as the ratio of the room temperature and the residual resistivity (RRR = ρ_RT/ρ_res). As equations 4.4 and 4.5 show, it strongly depends on the magnitude of impurity scattering. It is used as a measure of sample quality: a higher RRR indicates less impurity scattering. In the case of rhenium, when lowering the temperature further, below the critical temperature (T_c) the resistivity vanishes. This corresponds to the superconducting transition. The resistivity of the wires measured at temperatures ranging from 250 K down to 300 mK is plotted in figure 4.14.
As the Debye temperature of rhenium is relatively high, 413 K [START_REF] Purushotham | Mean square amplitudes of vibration and associated debye temperatures of rhenium, osmium and thallium[END_REF], the graphs shown in figure 4.14 should display a T⁵ temperature dependence down to about 30 K. This could not be verified, because the thermocouples in the fridge are not calibrated for this temperature range.
The curves of the 200 nm and the 100 nm wires are not smooth around 50 K. Those are artefacts of the measurement, not real effects.
Below approximately 30 K, all the curves settle on a constant resistivity value, but these values are different for each wire. Between 1 K and 2 K the wires become superconducting.
The critical temperature, the width of the transition, the resistivity, and other calculated values are summarised in tables 4.3 and 4.4 for all the samples. Room temperature was taken as 300 K, and the residual resistivity was measured at 2.4 K. The critical temperature is taken where the resistivity decreases to half of the residual resistivity. The width of the transition was defined as the temperature interval between 90% and 10% of the residual resistivity.
Resistivity values measured at room temperature and at low temperature are in a similar range for the wires and the films they were fabricated on. Values of RRR are also very close for film and wire in the case of the 50 nm and 25 nm thick samples. There is, however, a roughly 40% drop between the RRR value measured on the 100 nm film and that of the wire. The reason for this drop is unclear. One possible explanation is that the wire was fabricated on a spot where the impurity concentration was high. Another possibility is that the wire was damaged by the fabrication process. Determining the cause of the drop in the RRR value requires more experiments.
For the majority of samples, however, the fabrication did not alter their transport properties.
Properties of the superconducting transition
In this section the critical temperatures and the widths of the superconducting transitions of the films and the wires are compared to each other. The films were measured in a different cryostat by Delsol [START_REF] Delsol | Elaboration et caractérisation de couches minces supraconductrices épitaxiées de rhénium sur saphir[END_REF]. Figure 4.15(a) shows the superconducting transition of the 3 µm wide wire on the 50 nm film in green. In red, the transition of the full film is shown. The superconducting-to-normal transitions start at about the same temperature in both cases; however, the transition of the film is over ten times broader than that of the wire. In the case of the wire, the transport measurement probes only an area 3 µm wide and 120 µm long. The sharp transition of the wire indicates that the film is homogeneous in that small region. In comparison, in the case of the film, the area probed is several millimetres long and wide. The transition reflects the lowest resistivity along a filament. The observed broadening thus reflects the inhomogeneity of the film over this much larger area.

The 3 µm wide wire that was fabricated on the 100 nm thick sample was measured and its transition is shown in figure 4.15(b) in red, along with the previously discussed 3 µm wide wire on the 50 nm thick film in green. To be able to compare the two curves, the resistivities were normalised with respect to their residual resistivities. The temperature scales are different for the two curves. The transition of the 100 nm thick film is 10 times broader than that of the 3 µm wide wire fabricated on it (from table 4.3). The transition of the wire made of the 100 nm thick film is also significantly narrower and smoother than that of the wire made of the 50 nm thick film. This can be observed in figure 4.15(b).
The critical temperature of the wire fabricated onto the 100 nm film is the closest to what is expected for a bulk sample.
On the third, 25 nm thick sample, the 400 nm, 200 nm, and 100 nm wide wires were measured; their transitions, and the transition of the film, are shown in figure 4.16. The transition curves were normalised with respect to their residual resistivity. The temperature axis was left unchanged. About halfway through the transition curve of the film a step can be observed. This suggests that the volume measured has two distinct transition temperatures. Similar steps can be observed in the same temperature region on the 100 nm wide wire. This curve also has a third step at a lower temperature. The 200 nm wide wire appears smoother compared to the film and to the 100 nm wide wire; however, it is broad and perfectly envelops both curves. The transition of the 400 nm wide wire is the sharpest of the four and relatively featureless.
Conclusions on the ρ(T ) curves
In the literature, previously reported critical temperatures of rhenium vary over a wide range: they can be found anywhere between 0.9 K and 5.5 K. After characterising several samples prepared with different methods, Hulm and Goodman concluded that the correct critical temperature of a strain-free rhenium single crystal is 1.699 K. They reported an increase of 1 K in the critical temperature after surface grinding [START_REF] Hulm | Superconducting properties of rhenium, ruthenium, and osmium[END_REF].
Alekseevskii et al. reported an increase in the critical temperature, and the κ parameter (defined in equation 3.21) of deformed, bulk rhenium samples. Increase of the κ parameter signals that bulk rhenium changes from a type-I to a type-II superconductor upon deformation [START_REF] Alekseevskii | The superconducting properties of rhenium[END_REF]. Kopetskii et al. suggested that dislocations increase the critical temperature, but do not affect the residual resistivity after studying deformed, annealed and quenched rhenium single crystals. Point defects, however, do affect both T c and the residual resistivity. They stated that planar defects, such as twin boundaries have an insignificant effect on both parameters [START_REF] Kopetskii | Critical temperature of superconducting transition in plastically deformed rhenium single crystals[END_REF]. Their last statement is in contrast with observations of localised superconductivity along twin boundaries, with critical temperatures higher than that of bulk [START_REF] Khlyustikov | Localized superconductivity of twin metal crystals[END_REF].
Haq et al. conducted annealing experiments on rhenium thin films, and, consistently with the previous authors, concluded that vacancies and dislocations are responsible for the increased residual resistivity and transition temperature, and that grain boundaries do not contribute to this effect [START_REF] Haq | Electrical and superconducting properties of rhenium thin films[END_REF]. Chu et al. studied the effects of pressure on the critical temperature of rhenium. They observed a non-linear dependence: the critical temperature decreases steeply initially, passes through a minimum, then levels off, as shown in figure 4.17 [START_REF] Chu | Study of fermi-surface topology changes in rhenium and dilute re solid solutions from T c measurements at high pressure[END_REF]. Here, the pressure directly translates to strain, and thus the expected transition temperature can be calculated from the strain measured in our films.
In figure 4.17, instead of strain the relative volume change is given. Relative change of the hexagonal unit cell volume can be calculated from the strain as follows:
∆V/V₀ = [(3√3/2)(a₀ + ∆a)²(c₀ + ∆c) − (3√3/2)a₀²c₀] / [(3√3/2)a₀²c₀] = 2ε_a + ε_a² + ε_c + 2ε_a ε_c + ε_a² ε_c, where ε_a = ∆a/a₀ and ε_c = ∆c/c₀ = −(2ν/(1 − ν)) ε_a.
The misfit strain results in a relative change in volume of 0.5%. The strain obtained from the high resolution X-ray scans give a much larger value, ∼ 1%. As can be read from figure 4.17, the critical temperature only changes 0.1 K before it levels off on a value close to the bulk. If only strain were present in our samples, we would measure critical temperatures closer to the bulk value. Thus, we can conclude, in agreement with previous authors and our previous measurements, that dislocations (and possibly vacancies, though this cannot be confirmed at this point) are present in our films, which cause an increase in the residual resistivity and the critical temperature.
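The volume bookkeeping can be reproduced in a few lines. The Poisson ratio used below (ν = 0.29) is an assumed, typical value for rhenium rather than a number quoted here.

```python
def volume_change(eps_a, nu=0.29):
    """Relative change of the hexagonal cell volume for in-plane strain eps_a,
    with the out-of-plane strain fixed by eps_c = -2*nu/(1-nu) * eps_a."""
    eps_c = -2 * nu / (1 - nu) * eps_a
    return (1 + eps_a) ** 2 * (1 + eps_c) - 1

print(volume_change(-0.0043))   # misfit strain of -0.43 %  -> about -0.5 % volume change
print(volume_change(-0.01))     # ~1 % strain from the XRD scans -> about -1.2 %
```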
In figure 4.18 the residual resistivity ratios of the wires are shown as a function of their critical temperatures. The two are inversely proportional, which further confirms that the increase of the critical temperature is caused by crystallographic defects in the sample. For the 100 nm and 50 nm thick samples, the transition is broader for the film than for the wire. This is a sign that the films are composed of domains that transit to the superconducting phase at slightly different temperatures. The samples are not homogeneous on the mm² scale, which is the size of the region probed by the transport measurement on the full film. The areas of the wires are 360 µm², however. The narrower transition suggests that the domains are larger than the size of the wires. Inhomogeneity was reduced with the reduction of the size.
The transition curves of the 25 nm thick sample (film and wires) display a very clear structure. It was shown by the AFM images and by the XRD measurements that this film has a grainy structure, with two distinct grain types that likely have two different orientations. Figures 4.17(a) and 4.17(b) show that while the critical temperatures of two polycrystalline samples behave slightly differently under applied pressure, the T_c of two single crystal samples can be described by a single curve. This could indicate that crystal properties, such as orientation, affect the critical temperature. The possible cause of the steps in figure 4.16 is thus that the two types of grains have slightly different transition temperatures. When fabricating the wires, their volume ratio changes, which affects the 'height' of the step in the transition curve. The sizes of the grains are much smaller than the width of the narrowest wire (see table 4.2); therefore the measured volume remains inhomogeneous. The widths of all the transitions measured on the 25 nm thick sample, listed in table 4.4, are in close agreement.
The critical temperature of a superconductor calculated from the BCS theory is given in equation 3.12, the important parameters are the density of states at the Fermi level and the pairing potential. The band structure of rhenium was calculated for the first time by Mattheiss [START_REF] Mattheiss | Band structure and fermi surface for rhenium[END_REF]. The introduction of crystal defects can lead to an increased unit cell volume and thus to increased density of states at the Fermi level in Re [START_REF] Mito | Large enhancement of superconducting transition temperature in single-element superconducting rhenium by shear strain[END_REF], as well as a change in the pairing potential which affects the critical temperature. Mito and collaborators observed T c as high as 3.2 K for Re polycrystals submitted to shear strain leading to a volume expansion of 0.7%.
The critical temperature of the 3 µm wire on the 50 nm film stands out. It is over 0.2 K lower than the bulk critical temperature, 1.7 K, which is tantalising. This film is decorated by large spirals and deep holes. We can only speculate how these affect T c : is there a Re wetting layer underneath, is the stress the same over the thickness of the film and in the spirals? These questions need further investigation.
Mean free path and coherence length
The electron mean free path and the superconducting coherence length can be obtained from the measured residual resistivity and critical temperature values.
According to the Drude model, the expression to calculate resistivity is the following:
ρ = m_e/(n_e e² τ) = m_e v_F/(n_e e² l), (4.6)
where n_e is the density, e is the charge, and m_e is the mass of the electrons. τ is the time between collisions, which is equal to l/v_F, where v_F is the Fermi velocity and l is the mean free path of the electrons [START_REF] Sólyom | Fundamentals of the Physics of Solids, volume I. Structure and Dynamics[END_REF]. Multiplying both sides by l shows that the product of the resistivity and the mean free path is nominally a constant:
ρl = m_e v_F/(n_e e²). (4.7)
This product was measured by several authors for rhenium, yielding the following values: 4.5e-5 µΩ·cm² [START_REF] Haq | Electrical and superconducting properties of rhenium thin films[END_REF], 2.16e-5 µΩ·cm² [START_REF] Tulina | Point-contact spectroscopy of renium single crystal[END_REF], and an average of 2.01e-5 µΩ·cm² [START_REF] Tulina | Influence of purity on the magnetic properties of rhenium in the superconducting state[END_REF].
The last two values reported by Tulina et al. are in good agreement, but the first is very different. In the expression 4.7 the density of electrons is the only parameter that can change between samples. In reference [START_REF] Tulina | Influence of purity on the magnetic properties of rhenium in the superconducting state[END_REF], the resistivities and mean free paths of several rhenium samples were measured separately. Their RRRs are also listed. From reference [START_REF] Tulina | Influence of purity on the magnetic properties of rhenium in the superconducting state[END_REF], samples with the closest RRR values to our samples were chosen, and their ρl product was used to calculate the mean free paths of our samples. These values are listed in tables 4.3 and 4.4.
To calculate coherence length, the superconducting energy gap (∆(0)) needs to be obtained first using equation 3.14. Values for all the samples are listed in tables 4.3 and 4.4.
From the energy gap, the coherence length of the superconducting electrons (ξ 0 ) can be calculated using equation 3.16. A Fermi velocity was obtained by averaging the values published in reference [START_REF] Tulina | Influence of purity on the magnetic properties of rhenium in the superconducting state[END_REF], which gave 2. • 10 5 m/s. The coherence lengths are listed in tables 4.3 and 4.4.
The coherence length can be compared to the mean free path of the electrons to determine whether the sample is in the clean or in the dirty limit. In the clean limit, the electrons can travel the characteristic distance of superconductivity without scattering.
The 50 nm and the 100 nm thick film were in the clean limit, but after fabrication, the 3 µm wires are in the dirty limit. Whether this is due to the fabrication process or to the ageing of the sample is not known. However, the three characteristic lengths of both wires and both films are all larger or equal to 100 nm, except one.
The 25 nm thick film and its wires are in the dirty limit.
The effective coherence length is obtained from equations 3.22, which takes into account the effect of the electron mean free path. It is shorter than the coherence length.
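The chain of estimates can be sketched as follows. The residual resistivity and T_c fed in are illustrative, the ρl product is in the range of the literature values discussed above, the relations Δ(0) = 1.76 k_B T_c and ξ₀ = ħv_F/(πΔ(0)) are assumed to be the standard weak-coupling forms of equations 3.14 and 3.16, and the dirty-limit combination used for ξ_eff is the common 1/ξ_eff = 1/ξ₀ + 1/l, which may differ slightly from the form of equation 3.22.

```python
import numpy as np

HBAR = 1.054571817e-34   # J*s
KB   = 1.380649e-23      # J/K

def mean_free_path(rho_res_ohm_m, rho_l_ohm_m2):
    """l = (rho*l)_literature / rho_res."""
    return rho_l_ohm_m2 / rho_res_ohm_m

def bcs_gap(Tc):
    """Assumed weak-coupling BCS gap, Delta(0) = 1.76 k_B T_c."""
    return 1.76 * KB * Tc

def coherence_length(Tc, v_F=2e5):
    """Assumed BCS coherence length, xi_0 = hbar v_F / (pi Delta(0))."""
    return HBAR * v_F / (np.pi * bcs_gap(Tc))

def effective_coherence_length(xi0, l):
    """Common dirty-limit combination 1/xi_eff = 1/xi_0 + 1/l."""
    return 1.0 / (1.0 / xi0 + 1.0 / l)

# illustrative inputs: rho_res = 2e-8 Ohm*m (2 uOhm*cm) and rho*l = 2e-15 Ohm*m^2 (2e-5 uOhm*cm^2)
l   = mean_free_path(rho_res_ohm_m=2e-8, rho_l_ohm_m2=2e-15)   # -> 100 nm
xi0 = coherence_length(Tc=1.9)                                  # -> ~145 nm
print(l, xi0, effective_coherence_length(xi0, l))
```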
In our study, we reached the clean limit with the 100 nm and the 50 nm thick films. In addition, the mean free path is larger than the thickness, which is the condition for entering the ballistic regime along the thickness direction. Unfortunately, we did not pattern the narrowest wires on these films, so we could not yet reach the ballistic regime along the width direction of the wire.
Critical current fluctuations in SQUIDs
Nanobridge SQUIDs were successfully fabricated on the 25 nm thick rhenium film using electron beam lithography. An example of a SQUID is shown in figure 4.19. The two forks of the SQUID are connected by two narrow bridges, with widths smaller than the superconducting coherence length (ξ 0 ). The fabricated pattern corresponds to figure 4.2 and included 2 SQUIDs. The bridges were designed to be 20 and 50 nm wide. Their dimensions were measured by SEM after fabrication: the narrower bridges were about 40 nm, the wider ones were about 70 nm. The SQUID shown in figure 4.19 has the wider bridges.
Considering that in the superconducting state the current flows in the middle of the SQUID arms, which measure 200 nm across, the effective area of the SQUID loop is 1.2 µm². A single electrode is used to bias the SQUID and to detect the ∂V/∂t pulse. One of the connections is schematically indicated in figure 4.20. The measurements were performed using a two-terminal method. In the case of this sample, the ground was connected to the pad marked on the top, and the current bias was connected to the pad on the left. In this figure, an SEM image of the SQUID is also shown. The period of the oscillation is defined by the area (S) enclosed by the SQUID:
I_c = 2i_c cos(πΦ/Φ₀) = 2i_c cos(πBS/Φ₀), (4.8)
where i c is the critical current through one junction of the SQUID, and Φ = BS is the applied flux, given by the dot product of the magnetic induction vector and the vector loop area of the surface. Since these two vectors are parallel, the dot product is simply the product of the magnitudes in equation 4.8.
From the period, the area enclosed by the SQUID is obtained as follows:
S = Φ₀/∆B. (4.9)
The periodicity of the fast component corresponds to an area of 40.5 µm 2 , which is much larger than the 1.2 µm 2 size of the SQUID. However, the slow oscillation gives an area of 1.4 µm 2 , which is in good agreement with the SQUID loop. We are measuring two SQUIDs with different areas. This is shown in figure 4.22. The small SQUID (S 1 , Φ 1 ) is the one that was intentionally fabricated and wired. It is connected to either a Josephson junction or a second SQUID, and together they form a larger SQUID loop (S 2 , Φ 2 ).
On design III, there are two SQUIDs next to each other. They share the ground node, shown in figure 4.20. Their current pads and wires are very close together. It is possible that there is an electrical connection between either the pads or the wires that the optical microscope did not reveal. This way, the supplied current biases both SQUIDs, leading to this double SQUID. A second SQUID (SQUID2) was also measured. The critical current is shown in figure 4.23 as a function of the applied magnetic field. The maximum critical current modulation is approximately 20 µA, which is in agreement with SQUID1. The graph shows no obvious periodicity; we suspect a beating of several frequencies. There is a local maximum centred at zero applied field, as expected, and there are maxima on both sides of this peak. Some of these maxima have a clear period, and some do not. To check whether the data shown in this graph have a periodicity corresponding to the size of the SQUID, the Fourier transform of the critical current was computed, and is shown in figure 4.24.
The most dominant peak is at 0.385 mT⁻¹, which corresponds to a period of 2.6 mT. The area enclosed by the loop calculated from the period is 0.8 µm². This value is consistent with the 1.2 µm² SQUID loop area.
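Equation 4.9 is simple enough to check directly; the 2.6 mT period and the 40.5 µm² area are the values quoted in the surrounding text.

```python
PHI0 = 2.067833848e-15   # flux quantum (Wb)

def loop_area_um2(period_T):
    """Eq. 4.9: effective loop area from the field period of the I_c oscillation."""
    return PHI0 / period_T * 1e12           # m^2 -> um^2

def period_mT(area_um2):
    """Inverse relation: field period implied by a given loop area."""
    return PHI0 / (area_um2 * 1e-12) * 1e3  # T -> mT

print(loop_area_um2(2.6e-3))   # 2.6 mT period of SQUID2 -> ~0.8 um^2
print(period_mT(40.5))         # 40.5 um^2 fast component -> ~0.05 mT period
```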
To characterise the fluctuations, switching current histograms were recorded. An increasing DC current is injected into the SQUID. As the value of the current approaches the critical current, the probability that the SQUID switches to the normal state increases. The number of switching events increases exponentially with the current until I_c is reached, where the probability of switching is almost 1. As a result, for currents close to the critical current the number of events rapidly decreases [START_REF] Rabaud | Courants permanents dans des anneaux mésoscopiques connectés[END_REF]. Switching current histograms are asymmetric and exhibit a tail towards lower currents. In our case the number of switching events measured was low, so the tails are not well defined.
The mean critical current ( I c ) and its standard deviation (σ Ic ) were extracted from the histograms, and they are shown in figures 4.25(a) and 4.25(b).
From σ Ic the flux noise (∆Φ) of the SQUID can be determined using the following expression:
∆Φ = ∆I_c (∆t + I_c/(dI/dt))^(1/2) / (dI_c/dΦ), (4.10)
where ∆t is the measurement time interval and dI/dt is the slope of the current ramp used in the measurement [START_REF] Rabaud | Courants permanents dans des anneaux mésoscopiques connectés[END_REF][START_REF] Hazra | Nano-superconducting quantum interference devices with suspended junctions[END_REF]. dI_c/dΦ was determined from the slopes of the I_c(B) curves. For SQUID1 the high frequency oscillations were used. The slope is 1.03 ± 5 mA/mT, which corresponds (using the area obtained from equation 4.9) to 51 µA/Φ₀. For SQUID2, on the positive side of the 0 T peak the slope is 36 µA/mT, and on the negative side it is 50 µA/mT. Using the obtained loop area, these correspond to 90 µA/Φ₀ and 124 µA/Φ₀, respectively.
The obtained flux noise values are 2.6 × 10⁻⁵ Φ₀/Hz^(1/2) for SQUID1 and 2.0 × 10⁻⁴ Φ₀/Hz^(1/2) for SQUID2.
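A hedged sketch of equation 4.10 is given below, reading ∆t as the dead time between ramps so that one switching event takes ∆t + I_c/(dI/dt). Only the 51 µA/Φ₀ transfer function of SQUID1 is taken from the text; the switching-current spread, ramp rate and dead time are illustrative placeholders.

```python
import numpy as np

def flux_noise(sigma_Ic, Ic_mean, dIdt, dead_time, dIc_dPhi):
    """DeltaPhi = sigma_Ic * sqrt(dead_time + Ic/(dI/dt)) / (dIc/dPhi);
    the result is in Phi_0/Hz^(1/2) if dIc_dPhi is given in amperes per Phi_0."""
    t_per_event = dead_time + Ic_mean / dIdt    # time needed for one switching event
    return sigma_Ic * np.sqrt(t_per_event) / dIc_dPhi

# e.g. sigma_Ic = 50 nA, <Ic> = 60 uA, a 1 mA/s ramp, 10 ms dead time, 51 uA/Phi_0:
print(flux_noise(50e-9, 60e-6, 1e-3, 10e-3, 51e-6))   # ~2.6e-4 Phi_0/Hz^(1/2)
```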
Theoretically, the highest current a superconductor can carry without dissipation is defined by the depairing mechanism. Superconductivity vanishes when the kinetic energy associated with the supercurrent exceeds the condensation energy (binding energy of the Cooper pairs) [START_REF] Tinkham | Introduction to superconductivity[END_REF].
The depairing current density is given by the following expression:
j_dp = (2/3)^(3/2) e* Ψ₀² √(|α(T)|/m*),
where e * = 2e and m * = 2m are the charge and mass of the superconducting electron pairs, Ψ 0 is the equilibrium value of the superconducting order parameter, and α(T ) is a coefficient from the Ginzburg-Landau theory (see equation 3.8) [START_REF] Tinkham | Introduction to superconductivity[END_REF].
Using equations 3.19 and 3.20, the depairing current can be expressed as follows:
j_dp = (1/(3√3)) (1/π) Φ₀/(µ₀ λ_L²(T) ξ_GL(T)), (4.11)
where µ₀ = 4π × 10⁻⁷ Wb/(A·m) is the vacuum permeability.
The sample is in the dirty limit, so in equation 4.11, instead of the London penetration depth the effective penetration depth was used, which is obtained by the following equation:
λ_eff = λ_L √(ξ₀/l). (4.12)
The London penetration depth of rhenium thin film was measured by Hykel [START_REF] Hykel | Microscopie à micro-SQUID: étude de la coexistence de la supraconductivité et du ferromagnétisme dans le composé UCoGe[END_REF] and Wang [START_REF] Wang | Superconducting properties of iron-based Ba-122 by transport measurements and scanning nano-SQUID microscopy[END_REF] by studying vortices, and by Dumur et al. [START_REF] Dumur | Epitaxial rhenium microwave resonators[END_REF] by studying rhenium microwave resonators. They obtained values of 79 nm, 103 nm, and 85 nm for λ L , respectively. Here, λ L was assumed to be 90 nm.
Instead of ξ_GL, ξ_eff was used (dirty limit sample). The coherence lengths, effective coherence lengths and mean free paths of the wires fabricated from this sample were calculated. We took the average values of l, ξ₀ and ξ_eff to estimate the depairing current density.
A theoretical maximum current density of 4 × 10¹⁰ A/m² was obtained.
The maximum critical current we measured was about 72 µA. The current flows through the two arms of the SQUID. The cross sectional area of the SQUID arm can be calculated by multiplying the sample thickness (25 nm) with the width of the bridge (70 nm). The critical current density (j_c) we measured is then
j_c = 72 µA / (2 × 25 nm × 70 nm) = 2 × 10¹⁰ A/m². (4.13)
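The two current densities compared here follow from equations 4.11 and 4.13. The measured j_c uses only numbers quoted above; the λ_eff and ξ_eff fed to the depairing formula are illustrative placeholders standing in for the averaged values from tables 4.3 and 4.4.

```python
import numpy as np

PHI0 = 2.067833848e-15   # Wb
MU0  = 4 * np.pi * 1e-7  # Wb/(A*m)

def measured_jc(I_c, thickness, bridge_width):
    """Eq. 4.13: critical current density, the current being shared by the two SQUID arms."""
    return I_c / (2 * thickness * bridge_width)

def depairing_j(lambda_eff, xi_eff):
    """Eq. 4.11 with the effective lengths: j_dp = Phi_0/(3*sqrt(3)*pi*mu_0*lambda^2*xi)."""
    return PHI0 / (3 * np.sqrt(3) * np.pi * MU0 * lambda_eff**2 * xi_eff)

print(measured_jc(72e-6, 25e-9, 70e-9))               # -> ~2e10 A/m^2, as in eq. 4.13
# lambda_eff and xi_eff below are example values only, not the tabulated ones:
print(depairing_j(lambda_eff=290e-9, xi_eff=30e-9))   # -> ~4e10 A/m^2
```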
The measured current density is half of what an ideal rhenium wire could theoretically carry. This is not surprising, as achieving the theoretical critical current in superconductors is subject of active research. Attempts have been made to reach the depairing current either by reducing the dimensions of the superconductor below the characteristic lengths or by introducing artificial pinning sites to stop the motion of vortices [START_REF] Dinner | Depairing critical current achieved in superconducting thin films with through-thickness arrays of artificial pinning centers[END_REF], [START_REF] Nawaz | Approaching the theoretical depairing current in yba2cu3o7âĹŠx nanowires[END_REF], [START_REF] Xu | Achieving the theoretical depairing current limit in superconducting nanomesh films[END_REF].
Conclusion on the SQUID measurements
We have successfully fabricated SQUIDs from rhenium thin films. The critical current oscillations measured on two SQUIDs were imperfect, probably due to contaminations on the surface (left-over rhenium from fabrication or other superconductive particle). The width of the critical current histogram could reach values which make rhenium a promising candidate for low noise µSQUIDs. Rhenium has been shown to have long coherence length and electron mean free path, therefore it would be promising to continue the study of rhenium SQUIDs on samples that have better crystallographic properties, as the presented sample was a dose test to develop Re SQUID fabrication. Future patterns should include only SQUIDs, with all other superconducting structures as far away as possible. Before these preliminary experiments, the measurement of the wires reduced the number of SQUIDs available, and the sample was handled many times before we could undertake the SQUID experiments.
Conclusion and outlook
Conclusion
In this work, the epitaxial growth of rhenium thin films onto single crystal Al 2 O 3 using molecular beam epitaxy was realized and is discussed. An epitaxial relationship was found with orientations (0001)Al 2 O 3 //(0001)Re and <2110>Al 2 O 3 //<0110>Re. This was confirmed using X-ray diffraction. The misfit strain between the lattices is -0.43% at room temperature, which gives a critical thickness of about 15 nm. The substrates were heated during growth using either a Joule-heated tungsten filament located behind the sample or electron bombardment. An AFM study comparing films grown at temperatures of either 800 • C or 900 • C revealed that the higher deposition temperature results in a more homogeneous surface. On samples with thicknesses 50 nm and 100 nm, spirals are frequently observed. The diameter of these spirals grew over two fold when the higher deposition temperature was used. An XRD study of the sample films showed that they are all dominated by the epitaxial (0001) orientation. The few secondary orientations have low intensities which in almost all cases decrease with increasing deposition temperature. Deposition at a temperature of 1000 • C leads to dewetting of the 50 nm thick sample, and islands with atomically flat surfaces are formed.
The spirals that are often observed on thicker films are most likely the result of steps on the surface caused by screw dislocations. Among the spirals there are deep holes, whose origin is suspected to be partial dewetting and recrystallisation of the film. It was shown by a theoretical model that the temperature of the film starts to increase when the thickness of approximately 10 nm is reached, as the film becomes more opaque. Around this thickness a transformation of the RHEED pattern indicates a crystallographic change, and the observed surface shows signs of dewetting. The surface profile was modeled using Mullins' theory of dewetting, which allowed the determination of the surface diffusion coefficient, 4 × 10⁻¹² cm²/s. Wires with widths between 100 nm and 3 µm, and SQUIDs, were fabricated on the films using the lithography process. Low temperature transport measurements showed that the fabrication did not affect the superconducting properties. The critical temperature of the wires was found to vary in a wide range, between 1.43 K and 1.96 K. We found that this correlates with the crystallography and topography of the films. The mean free paths and the superconducting coherence lengths were determined. Two films were in the clean limit, but the wires fabricated on them were in the dirty limit. The mean free paths and the coherence lengths were larger than the thickness of the films for almost all the films and wires, which is the condition for the ballistic regime in the thickness direction. The ballistic regime was not yet obtained in the width direction.
Critical current oscillations of two SQUIDs were measured using a dilution refrigerator. The lowest flux noise value obtained was 2.6 × 10 -5 Φ 0 /Hz 1/2 .
Outlook
The initial intention of this project was to grow epitaxial Re-Al 2 O 3 -Re junctions. Rhenium is a promising candidate for such junctions as it is known to resist oxidation. To manufacture a junction, a rhenium film with flat surface needs to be deposited, followed by the deposition of an aluminium layer which is subsequently oxidised. However, the epitaxial rhenium films are found experimentally not to have a flat surface. They are covered with spirals and deep holes. This topography is not adequate for the deposition of a second layer.
Welander grew rhenium films onto thick epitaxial niobium layers [START_REF] Paul | Structural evolution of re (0001) thin films grown on nb (110) surfaces by molecular beam epitaxy[END_REF]. His films were relatively flat and smooth; however, they displayed several in-plane orientations, and rhenium mixed with niobium at the interface. Our films were flat when they had a grainy structure. The growth of grainy rhenium films is one possible way to avoid holes and achieve a flat surface. Of course, in this case the aim of a fully epitaxial, single crystal junction has to be sacrificed. A second line of investigation could be the use of a seed layer that prevents the dewetting of rhenium and thus the formation of holes.
We obtained larger mean free path and coherence length values than the thickness of the films, and the width of the thinnest wire, 100 nm. Unfortunately, the 100 nm wide wires were fabricated on a sample which was in the dirty limit, and displayed small λ and ξ values. Wires that are in the ballistic regime in both the thickness and width directions are feasible on a clean limit film, which we can routinely deposit.
The SQUIDs fabricated on a thin film of rhenium showed a flux noise in the range of 10 -5 Φ 0 /Hz 1/2 and 10 -4 Φ 0 /Hz 1/2 . Flux noise values reported in literature for low noise µ-SQUIDs and nano-SQUIDs range between 10 -4 Φ 0 /Hz 1/2 and 10 -6 Φ 0 /Hz 1/2 [START_REF] Wang | Superconducting properties of iron-based Ba-122 by transport measurements and scanning nano-SQUID microscopy[END_REF][START_REF] Hasselbach | Microsquid magnetometry and magnetic imaging[END_REF][START_REF] Hazra | Nano-superconducting quantum interference devices with suspended junctions[END_REF][START_REF] Ketchen | Low noise dc squids fabricated in nb-al/sub 2/o/sub 3/-nb trilayer technology[END_REF][START_REF] Finkler | Self-aligned nanoscale squid on a tip[END_REF][START_REF] Hao | Measurement and noise performance of nano-superconducting-quantum-interference devices fabricated by focused ion beam[END_REF][START_REF] Hazra | Nano-superconducting quantum interference devices with continuous read out at millikelvin temperatures[END_REF][START_REF] Hykel | Microsquid force microscopy in a dilution refrigerator[END_REF][START_REF] Hazra | Retrapping current in bridge-type nano-squids[END_REF]. It is very encouraging that our preliminary results fall in this range. By using clean limit films, and refining the lithography parameters, state of art rhenium SQUIDs should be achievable.
A Ptychography
A.1 Phase problem in crystallography
In a scattering experiment, a sample is subjected to a parallel monochromatic beam with a known wave vector (k i ), therefore known energy and propagation direction. The angle (elastic scattering) and/or the energy distribution (inelastic scattering) of the scattered wave is then studied to draw conclusions regarding the crystallographic (or magnetic or dynamic) properties of the sample.
When the scattering is dominantly elastic, the energy of the incoming wave does not change during the interaction with the sample. This means, the outgoing wave vector has the same length as the incoming wave vector (|k i | = |k f |). Momentum transfer does occur, however, resulting in a directional change. The difference is called the scattering vector:
q = k_f − k_i. (A.1)
The amplitude of scattered X-ray waves is given by the Fourier transform of the electron density (f (r)), where the integral is taken across the illuminated volume (V ):
ξ_L = (λ/∆λ)(λ/2) ≈ λ²/(2∆λ) (A.4)
Equation A.4 tells us that a higher degree of monochromaticity results in a longer coherence length. Similarly, a longer wavelength, and therefore a lower energy, gives a longer coherence length. The I13-1 beamline in Diamond Light Source uses X-rays in the energy range of 6-20 keV, which corresponds to a wavelength in the range of 2-0.6 Å, i.e. λ ≈ 1 Å mid-range. A monochromatic beam is achieved by diffraction through a series of perfect single crystals. A double-pass Si monochromator can achieve a bandwidth of ∆λ/λ = 10⁻⁴, resulting in a longitudinal coherence length of 0.5 µm [START_REF] Als-Nielsen | Elements of Modern X-ray Physics[END_REF][START_REF] Dupraz | Coherent X-ray diffraction applied to metal physics[END_REF]. Consider two wavefronts A and B propagating at a small angle ∆θ to each other: moving across the beam, the phase of B increasingly lags behind that of A, until A and B have opposite phases. Within a distance 2ξ_T they have the same phase again. Considering the above, the transverse coherence length is ξ_T = λ/(2 tan ∆θ).
Beam divergence can be caused by the finite size of the source. In figure A.2, D denotes the size of the source, wavefronts A and B are emitted at either ends. R denotes the distance from the source. Using this, tan ∆θ can be expressed as D/2R, and substituting to the previously obtained expression, the transverse coherence length is the following:
ξ_T = λR/D (A.5)
The transverse coherence length increases with wavelength, lower energy X-rays are favoured. It also increases with distance from the source. This is why the I13-1 experimental hall in Diamond Light Source is located in a separate building about 130 m away from the main building. Lastly, ξ T is inversely proportional to the size of the source. This motivates reducing the spread of electron bunches in the storage ring, that produce the probing X-ray beam. Alternatively, a slit can be placed close to the source to create a virtual source reducing the size [START_REF] Als-Nielsen | Elements of Modern X-ray Physics[END_REF][START_REF] Dupraz | Coherent X-ray diffraction applied to metal physics[END_REF][START_REF] Robinson | Optimisation of coherent x-ray diffraction imaging at ultrabright synchrotron sources[END_REF]. This is a major motivation for synchrotron facilities to upgrade their ring lattice, such as MAX-IV in Lund and ESRF in Grenoble.
A synchrotron X-ray source measures about 100 µm horizontally and 10 µm vertically. If the wavelength is approximately 1 Å, and the experiment is carried out 100 m away from the source, the transverse coherence length is 100 µm horizontally and 1 mm vertically. Specifically for the I13-1 beamline, coherence lengths of 200 µm (horizontal) and 350 µm (vertical) were demonstrated [START_REF] Wagner | Characterising the large coherence length at diamond's beamline i13l[END_REF].
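The coherence lengths quoted in this section follow directly from equations A.4 and A.5, as the short sketch below shows using the numbers given above.

```python
def longitudinal_coherence(wavelength, bandwidth):
    """Eq. A.4: xi_L = lambda^2 / (2*Delta_lambda), with bandwidth = Delta_lambda/lambda."""
    return wavelength / (2 * bandwidth)

def transverse_coherence(wavelength, distance, source_size):
    """Eq. A.5: xi_T = lambda * R / D."""
    return wavelength * distance / source_size

lam = 1e-10                                        # ~1 A, mid-range of the beamline
print(longitudinal_coherence(lam, 1e-4))           # dlambda/lambda = 1e-4 -> 0.5 um
print(transverse_coherence(lam, 100.0, 10e-6))     # 10 um source, 100 m away -> 1 mm
print(transverse_coherence(lam, 100.0, 100e-6))    # 100 um source -> 100 um
```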
Synchrotron radiation is only partially coherent. Individual electrons emit coherently, but the bunch as a whole does not. A coherent beam is produced by inserting a slit of the size of the transverse coherence length in the beam. This only allows the coherent portion through, so part of the flux has to be sacrificed [START_REF] Als-Nielsen | Elements of Modern X-ray Physics[END_REF][START_REF] Dupraz | Coherent X-ray diffraction applied to metal physics[END_REF]. In the case of the I13-1 beamline, the initial flux of 7 × 10¹⁴ photons per second per 0.1% bandwidth (Ph/s/0.1%BW) is reduced to a coherent flux of about 10¹⁰ Ph/s/0.1%BW [START_REF] Rau | The imaging and coherence beamline i13 at diamond[END_REF].
The diffraction pattern produced by a coherent beam differs from one created by an incoherent beam. A diffraction peak obtained using an incoherent beam is the incoherent sum of the scattering by different domains (n) in the sample (I(q) = Σ_n |F_n(q)|²). This results in a diffuse pattern. When using a coherent beam, the peak is the coherent sum of the scattering by different domains (I(q) = |Σ_n F_n(q)|²). The pattern then shows sharp intensity fluctuations, known as speckles [START_REF] Dupraz | Coherent X-ray diffraction applied to metal physics[END_REF][START_REF] Sutton | Observation of speckle by diffraction with coherent x-rays[END_REF]. If the scattering object is smaller than the footprint of the beam, fringes related to the shape of the object also appear.
A.3 Coherent diffraction imaging and ptychography
Coherent diffraction imaging (CDI) and ptychography are lensless imaging techniques that allow the reconstruction of the phase information. They are used with electrons and with X-rays as well. In both techniques a coherent X-ray beam is scattered by an object, and the scattered intensity is collected by a 2D detector in the far-field. In case of ptychography the complex amplitude of both the probing and the scattered wavefront can be reconstructed with iterative algorithms. In case of CDI one is assumed to be known (usually the probing wavefront) and the other (usually the scattered wavefront) is recovered.
CDI is used on samples smaller than the footprint of the coherent beam. This way the whole volume of the sample takes part in the scattering process.
Ptychography is the scanning version of CDI. The concept of ptychography was first put forward by Hoppe [START_REF] Hoppe | Beugung im inhomogenen Primärstrahlwellenfeld. I. Prinzip einer Phasenmessung von Elektronenbeungungsinterferenzen[END_REF][START_REF] Hoppe | Beugung in inhomogenen Primärstrahlenwellenfeld. II. Lichtoptische Analogieversuche zur Phasenmessung von Gitterinterferenzen[END_REF] to be used with scanning transmission microscopy (STEM), and the proof of the concept was demonstrated by Hoppe and Strube [START_REF] Hoppe | Beugung im inhomogenen Primärstrahlwellenfeld. III. Amplituden-und Phasenbestimmung bei unperiodischen Objekten[END_REF] using visible light. It was not developed for STEM at the time, because the instrumentation was not sufficiently developed [START_REF] Rodenburg | Ptychography and related diffractive imaging methods[END_REF]. Thanks to the advances that were made since in X-ray optics and computation, the advantages of ptychography are being discovered.
During the course of a ptychography measurement a large sample is scanned along a predefined path by a coherent beam. The diffraction patterns are collected at each point of the path. The complex amplitude of the scattered and the probing beam is reconstructed.
Redundancy is introduced in the data by partially overlapping the footprint of the beam between the steps in the scan. This overlap is then used as a constraint in the reconstruction algorithm. The ideal degree of overlap was determined to be 60% by Bunk et al. [START_REF] Bunk | Influence of the overlap parameter on the convergence of the ptychographical iterative engine[END_REF], with overlap (o) defined as o = 2r -a, where r is the radius of the footprint and a is the centre-to-centre distance.
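With this definition, the centre-to-centre step of a scan follows immediately; the sketch below assumes that the 60% figure refers to the fraction o/(2r), and the beam radius is an illustrative value.

```python
def scan_step(beam_radius, overlap_fraction=0.6):
    """Centre-to-centre step a = 2r - o giving the requested overlap o/(2r)."""
    return 2 * beam_radius * (1 - overlap_fraction)

print(scan_step(beam_radius=0.5e-6))   # a 1 um beam footprint -> 0.4 um step
```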
A.3.1 Oversampling criterion
Retrieval of the phase relies on the concept of oversampling the diffraction pattern.
Nyquist-Shannon theorem states that a continuous function can be completely determined with a sampling frequency twice the highest frequency component of the signal [START_REF] Shannon | Communication in the presence of noise[END_REF]. This frequency is called the Nyquist sampling frequency. It is important to note that we are talking about the probing periodicity of the diffraction plane, which, considering a 2D detector, translates to spatial frequency. This minimum spatial sampling frequency determines the number of detector pixels required per fringe or speckle. When sampling at the Nyquist frequency, the amplitude can be recovered, but half the information, the phase, cannot. Sayre showed that if the diffraction pattern is sampled at at least twice the Nyquist sampling frequency, the phase of the scattered wave can also be recovered [START_REF] Sayre | Some implications of a theorem due to Shannon[END_REF].
A.3.2 Phase retrieval methods
The phase problem is solved by iterating between the sample and the detector planes with Fourier and inverse Fourier transforms: the measured diffraction patterns constrain the modulus of the scattered wavefront, and the iterations recover its phase together with the complex object function ('object').
The first algorithm for ptychography was put forward by Rodenburg and was termed the ptychographic iterative engine (PIE) [START_REF] Rodenburg | A phase retrieval algorithm for shifting illumination[END_REF]. It was successful in solving the phase problem; however, it required accurate knowledge of the incident probing wavefront ('probe') and of the stage positions during the scan. The need for accurate knowledge of the probe was removed by the development of the extended ptychographic iterative engine (ePIE), which can recover the probe from a rough estimate as well as the object [START_REF] Maiden | An improved ptychographical phase retrieval algorithm for diffractive imaging[END_REF]. Independently of Rodenburg and his coworkers, around the same time Thibault et al. developed their own algorithm based on the difference map (DM), which is also capable of reconstructing both the probe and the object [START_REF] Thibault | Probe retrieval in ptychographic coherent diffractive imaging[END_REF]. Later, it was demonstrated (using ePIE) that errors in the sample positions can be corrected for in the algorithm [START_REF] Zhang | Translation position determination in ptychographic coherent diffraction imaging[END_REF]. Furthermore, the DM method was extended by Thibault et al. to take into account partial coherence in both the longitudinal and the transverse directions [START_REF] Thibault | Reconstructing state mixtures from diffraction measurements[END_REF]. This is useful, as it relaxes the strict requirement on coherence, which otherwise limits the usable flux [START_REF] Parsons | Coherent Diffraction Imaging using a High Harmonic Source at 40 eV[END_REF].
Below the two most commonly used iterative reconstruction algorithms, Rodenburg's ePIE and Thibault's DM method, are described.
Both algorithms rely on two assumptions: the interaction between the object function (O(r)) and the probe function (P(r)) can be modelled by a complex multiplication, and the propagation of the exit wave to the detector can be modelled by the Fourier transform (F).
The extended ptychographic iterative engine
Below the ePIE method is introduced following references [START_REF] Maiden | An improved ptychographical phase retrieval algorithm for diffractive imaging[END_REF]164,[START_REF] Stephan | High-Resolution 3D Ptychography[END_REF].
Based on the assumptions above the exit wave is given by the following:
ψ(r) = O(r)P (r -R), (A.6)
where R refers to the position of the beam on the sample along the path. The exit wave observed in the far field is its Fourier transform, Ψ_j(q) = F[ψ_j(r)]. At each scan point the modulus of Ψ_j(q) is replaced by the square root of the measured intensity, √(I_j(q)), while its phase is kept, and the corrected exit wave ψ'_j(r) is obtained by an inverse Fourier transform. The difference between the corrected and the original exit waves is then used to update the object and the probe functions:

O_{j+1}(r) = O_j(r) + α [P*_j(r - R_j) / |P_j(r - R_j)|²_max] (ψ'_j(r) - ψ_j(r)),

P_{j+1}(r) = P_j(r) + β [O*_j(r) / |O_j(r)|²_max] (ψ'_j(r) - ψ_j(r)),
where |P_j(r - R_j)|²_max refers to the maximum value of |P_j(r - R_j)|², P*_j(r - R_j) is the complex conjugate of the probe, and the same notation holds for O_j(r). α and β are constants that adjust the step size of the update.
One iteration is complete when the algorithm has run through all J diffraction patterns. The updated object and probe functions are the new guesses for the next iteration.
The convergence is monitored by the following metric:
E = Σ_j Σ_q ( √(I_j(q)) - |ψ_j(q)| )² / Σ_j Σ_q I_j(q),   (A.14)
The aim is to minimise E.
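A minimal NumPy sketch of one such ePIE sweep is given below. It assumes pixel-registered scan positions, a probe array smaller than the object array and a simple modulus projection; the function and variable names are illustrative and do not correspond to any particular package.

import numpy as np

def epie_sweep(obj, probe, patterns, positions, alpha=1.0, beta=1.0):
    """One ePIE iteration over all J diffraction patterns.

    obj:       complex 2D object estimate
    probe:     complex 2D probe estimate (smaller array than obj)
    patterns:  measured intensities I_j(q), one probe-sized array per scan point
    positions: (row, col) pixel offsets R_j of the probe on the object
    """
    ny, nx = probe.shape
    err_num, err_den = 0.0, 0.0
    for I, (r0, c0) in zip(patterns, positions):
        O_patch = obj[r0:r0 + ny, c0:c0 + nx].copy()
        psi = O_patch * probe                                # exit wave, eq. (A.6)
        Psi = np.fft.fft2(psi)                               # far-field wave
        Psi_corr = np.sqrt(I) * np.exp(1j * np.angle(Psi))   # impose the measured modulus
        diff = np.fft.ifft2(Psi_corr) - psi
        # object and probe updates with step sizes alpha and beta
        obj[r0:r0 + ny, c0:c0 + nx] = O_patch + alpha * np.conj(probe) / (np.abs(probe)**2).max() * diff
        probe = probe + beta * np.conj(O_patch) / (np.abs(O_patch)**2).max() * diff
        err_num += np.sum((np.sqrt(I) - np.abs(Psi))**2)
        err_den += np.sum(I)
    return obj, probe, err_num / err_den                     # last value is E of eq. (A.14)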
The difference map algorithm
The DM algorithm is detailed below. The discussion here adheres to references [START_REF] Thibault | Probe retrieval in ptychographic coherent diffractive imaging[END_REF]164,[START_REF] Johannes | Ptychographic X-ray Microscopy and Tomography[END_REF][START_REF] Thibault | Algorimethric methods in diffraction microscopy[END_REF].
The DM method also iterates between real and Fourier space using the object and probe functions, but it addresses all J diffraction patterns at the same time: it is parallel rather than sequential. The DM algorithm solves the phase problem by searching for the intersection of two constraint sets, one defined in real space and the other in Fourier space. Each constraint set is associated with a projection operator that maps the current iterate onto that set.
The first constraint set is the Fourier constraint, which relates the observed intensities to the scattered waves via the Fourier transform:
I j (q) = |F[ψ j (r)]| 2 , ∀j. (A.15)
The second is the overlap constraint, which states that each scattered wave in the ptychographic scan can be factorised into the product of a probe and an object function:
ψ j (r) = P (r -R j )O(r), ∀j. (A.16)
The task of the algorithm is to find the series of O and P that satisfy these two constraints.
A state vector is defined as Ψ(r) = {ψ_1(r), ψ_2(r), ψ_3(r), ..., ψ_J(r)} using the initial guesses for the probe and the object functions.
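The rest of the algorithm is the difference-map update of this state vector, alternating the two projections. A schematic NumPy sketch is given below, following the commonly quoted form Ψ_{n+1} = Ψ_n + P_F(2 P_O(Ψ_n) - Ψ_n) - P_O(Ψ_n); the array handling, the small regularisation constants and the number of inner iterations are illustrative choices, not those of the original implementation.

import numpy as np

def p_fourier(views, patterns):
    """Fourier projection: impose the measured moduli, eq. (A.15)."""
    out = []
    for psi, I in zip(views, patterns):
        F = np.fft.fft2(psi)
        out.append(np.fft.ifft2(np.sqrt(I) * np.exp(1j * np.angle(F))))
    return out

def p_overlap(views, obj_shape, probe, positions, n_inner=2):
    """Overlap projection: least-squares O and P such that psi_j = P(r - R_j) O(r), eq. (A.16)."""
    ny, nx = probe.shape
    for _ in range(n_inner):
        num = np.zeros(obj_shape, complex); den = np.zeros(obj_shape)
        for psi, (r0, c0) in zip(views, positions):
            num[r0:r0+ny, c0:c0+nx] += np.conj(probe) * psi
            den[r0:r0+ny, c0:c0+nx] += np.abs(probe)**2
        obj = num / (den + 1e-8)
        pnum = np.zeros_like(probe); pden = np.zeros(probe.shape)
        for psi, (r0, c0) in zip(views, positions):
            O = obj[r0:r0+ny, c0:c0+nx]
            pnum += np.conj(O) * psi
            pden += np.abs(O)**2
        probe = pnum / (pden + 1e-8)
    new_views = [probe * obj[r0:r0+ny, c0:c0+nx] for r0, c0 in positions]
    return new_views, obj, probe

def dm_iteration(views, patterns, obj_shape, probe, positions):
    """One difference-map update of the state vector Psi = {psi_1, ..., psi_J}."""
    po, obj, probe = p_overlap(views, obj_shape, probe, positions)
    pf = p_fourier([2*o - v for o, v in zip(po, views)], patterns)
    return [v + f - o for v, f, o in zip(views, pf, po)], obj, probe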
For coherent X-ray diffraction in the Bragg geometry, the modified atomic form factor is the complex electron density [START_REF] Dupraz | Coherent X-ray diffraction applied to metal physics[END_REF][START_REF] Takagi | A dynamical theory of diffraction for a distorted crystal[END_REF].
Non-symmetric Bragg peaks can be decomposed into a symmetric and an antisymmetric contribution. The symmetric part can be considered to come from the average electron density, while the antisymmetric part is associated with a phase equal to the projection of the local displacement onto the scattering vector g (φ = g · u). This displacement can be imaged as a real-space map of phase values across the illuminated or scanned area [START_REF] Robinson | Coherent x-ray diffraction imaging of strain at the nanoscale[END_REF].
Over the course of the Bragg ptychography measurement we only recorded the symmetric (002) reflection of rhenium. This reflection only carries information on the displacements along the direction of the surface normal.
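The phase-to-displacement conversion for this reflection is then a one-line calculation; the sketch below uses the (002) plane spacing obtained from the θ-2θ fits in chapter 2, and the phase values are arbitrary examples.

import numpy as np

d_002 = 0.22302e-9                 # Re (002) lattice plane spacing (m)
g = 2 * np.pi / d_002              # scattering vector magnitude for the (002) reflection

for phi in (np.pi / 10, np.pi, 2 * np.pi):     # example phase values (rad)
    print(f"phi = {phi:.2f} rad -> u_z = {phi / g * 1e12:.0f} pm")
# a full 2*pi phase corresponds to a displacement of one (002) plane spacing (~223 pm)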
It is possible to recover the complex amplitude along the depth of the scattering volume of the scanned area. This technique is known as 3D ptychography, and it gives three-dimensional maps of the modulus and the phase. To achieve this, the ptychographic scans are repeated at and around the Bragg angle, along the rocking curve. The technique was successfully demonstrated by Godard et al. [START_REF] Godard | Three-dimensional high-resolution quantitative microscopy of extended crystals[END_REF]. This is a demanding measurement, as it requires precise alignment and stability of the setup over several hours.
Hruszkewycz et al. recently demonstrated that a 3D reconstruction is also possible from a collection of lateral scans at a single angle [START_REF] Hruszkewycz | High resolution three dimensional structural microscopy by single angle bragg ptychography[END_REF], so that rocking the sample around the Bragg angle is not strictly necessary.
A.4 I13-1 beamline in Diamond Light Source
Ptychography experiments were carried out on the I13-1 beamline in Diamond Light Source on rhenium thin films.
An aerial photograph of the Diamond Light Source is shown in figure A.5(a). The schematics of a beamline are shown in figure A.5(b); the dimensions are specific to the Diamond Light Source and the I13 beamline. The electrons travel in the storage ring at an energy of 3 GeV. The ring is not a true circle but a 48-sided polygon. Undulators, shown in figure A.5(c), are composed of a series of dipole magnets with alternating polarity. These are placed in the straight sections of the ring to make the electrons oscillate and emit X-ray radiation. A specific energy of this radiation is selected by monochromators and focused onto the sample under investigation. Most beamlines are located inside the main building, around the storage ring. However, the I13-1 beamline uses coherent X-ray radiation, which is achieved by placing the experiment far away from the X-ray source: the experimental hutch is located in a building separate from the synchrotron storage ring. The total length of the beamline measured from the source is 250 m.
The experimental setup is shown in figure A.6. The sample stage has a 30° pitch by default with respect to the beam, and can be tilted further by ±15° around the two in-plane axes. Lateral and vertical movements are achieved by two sets of XYZ motors: one allows rough alignment of the sample, the other is used for fine movements (5 nm resolution) [START_REF] Z D Pešić | Experimental stations at i13 beamline at diamond light source[END_REF][START_REF] Rau | Imaging in real and reciprocal space at the diamond beamline i13[END_REF].
The detector is placed on an industrial robot arm, part of which is visible in figure A.6. Including this arm, the setup is a 3+2 circle diffractometer and is able to cover a wide range of hkl positions. To achieve sufficient sampling of the diffraction peak, the distance between the sample and the detector can be adjusted. Depending on the Bragg angle it can be increased up to 5 m; the only limit is the ceiling. Using 9.4 keV X-rays, the Bragg angle for the (002) reflection of rhenium is 17.2°, which allowed a distance of 2 m. The penetration depth of the X-rays can be obtained as the reciprocal of the attenuation coefficient, which can be found in tables [175].

Spirals decorate the surface of all the rhenium samples that feature a single orientation. An AFM image of a spiral is shown in figure 2.22. Their sizes vary, and are thought to be related to the deposition temperature, as was shown in section 2.2. Burton et al. explained the growth of spirals by the presence of dislocations with an edge component: these create a step on the surface, which provides nucleation sites and allows the spiral to grow [START_REF] Burton | The growth of crystals and the equilibrium structure of their surfaces[END_REF]. Whether this theory is valid for our rhenium films could be verified by the technique of ptychography.
Ptychography was performed on features that were patterned onto the rhenium films using lithography. These clear-cut features would allow us to check the resolution and the validity of the measurement.
The features included a square with size 2 µm x 2 µm on the 50 nm sample, which was described in section 2.2.2 (sample D, deposited at 900 °C). An AFM image taken on one of the squares is shown in figure A.7. A square was scanned with the X-ray beam in a spiral fashion; a spiral path is often used, as it provides good overlap between spots and eliminates the artefacts associated with a raster scan. The maximum intensity was recorded within a 2 µm region around the starting point of the spiral, where the complete footprint of the beam is on the rhenium. As the beam gets further away from the centre, the volume of rhenium that takes part in the diffraction process is reduced, and so is the intensity recorded on the detector.
The distance between consecutive points in the spiral was 0.4 µm, which (with a spot size of 1 µm) gives the ideal overlap of 60%. Two examples of the diffraction patterns recorded at two different points of the spiral path are shown in figure A.9. The speckle pattern changes throughout the scan. The centre peak also shows some structure: it appears to be a double peak with a small angular separation (approximately 0.002°). The direction and the magnitude of the separation were observed to change along the scan.
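The scan positions themselves can be generated with a few lines of code. The sketch below produces an Archimedean spiral with an approximately constant 0.4 µm arc length between points; it is only meant to illustrate the geometry, and the actual beamline macro may differ.

import numpy as np

def spiral_path(step=0.4, n_points=200):
    """Spiral scan positions (in um) with roughly constant distance `step` between points."""
    pts = [(0.0, 0.0)]
    r, phi = step, 0.0
    for _ in range(n_points - 1):
        phi += step / r                       # advance so that r * dphi ~ step
        r = step * (1.0 + phi / (2 * np.pi))  # radius grows by one step per turn
        pts.append((r * np.cos(phi), r * np.sin(phi)))
    return np.array(pts)

positions = spiral_path()
print(positions[:5])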
We recorded scans for 3D ptychography on a 3 µm wide line by repeating the spiral scan at angles around the Bragg angle, along the rocking curve. This is a demanding measurement, as it can take several hours, and the sample is required to be stationary, bar the rocking and the scanning motions. To ensure that nothing moves, the experimental hutch is kept at a constant temperature by air conditioning, and the robot arm keeps the detector stable. Furthermore, the beam, the centre of rotation of the stage and the sample have to be aligned. A camera placed over the sample was used for the alignment. We did not manage to correct all the movements. Figure A.10 shows the position of the sample for each angle. Two of the intensity maps are also shown as a demonstration.
The centre of the line was obtained from the centre-of-mass of the modified intensity map. To detect the middle of the line correctly, its weight in the centre-of-mass calculation had to be increased, so intensities below a fixed threshold were set to zero. That this was necessary might indicate that there is some noise in the data, which could hinder the ptychographic reconstruction.
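The thresholded centre-of-mass used here amounts to the following sketch; the actual threshold value is not reproduced.

import numpy as np

def thresholded_com(intensity, threshold):
    """Centre of mass of a 2D intensity map after zeroing values below `threshold`."""
    masked = np.where(intensity >= threshold, intensity, 0.0)
    rows, cols = np.indices(masked.shape)
    total = masked.sum()
    return (rows * masked).sum() / total, (cols * masked).sum() / total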
The position of the sample changes consistently with increasing angle in both the negative and the positive directions. The measurement was repeated later for the same line, and the points are in close agreement. This suggests that the alignment was the best we could achieve with the setup available.
Reconstruction of the datasets is not a trivial task. To our knowledge, two reconstruction packages are available, both developed for the Python environment. Reconstruction was attempted using the ptypy package [184], so far without success. The pynx package currently only works for the small-angle geometry and is being developed for the Bragg geometry [185,[START_REF] Vincent Favre-Nicolin | Fast computation of scattering maps of nanostructures using graphical processing units[END_REF]. Analysis of this data is a work in progress.
B Determination of surface coverage from XPS data
The technique of XPS was introduced in detail in section 1.4.1. The principle of the technique is the following: the sample is irradiated with a known energy X-ray beam, and the electrons (mostly photoelectrons) that escape the material are sorted by their kinetic energies, and counted. From their spectrum the chemical composition of the surface can be determined.
The surface monolayer coverage σ of an element (contamination or deposit), i.e. the fraction of the surface that is covered, can be calculated from the intensity of the corresponding XPS peak. The intensity of a peak arising from the substrate can be expressed as follows:
I_substrate ∝ (1 - σ) I_substrate^∞ + σ I_substrate^∞ e^(-1/λ),   (B.1)
where I_substrate^∞ is the intensity that would be detected from the pure material. The first term on the right-hand side of equation B.1 is reduced by a factor of (1 - σ), which corresponds to the fraction of the surface that is not covered by contamination or deposit. The second term is the intensity transmitted through the monolayer coverage; it is therefore attenuated by e^(-1/λ), where λ is the inelastic mean free path of the photoelectrons, expressed in units of the monolayer thickness, which can be found in tables.
Intensity that would be detected from a full monolayer can be calculated by integrating the exponential shown below:
I_1ML ∝ I_bulk^∞ ∫_0^1 e^(-z/λ) dz ∝ I_bulk^∞ (1 - e^(-1/λ)),
where I ∞ bulk is the signal that would arise from a pure bulk of the same material. If only a fraction of the surface is covered by the contamination or deposit, the expression above is simply multiplied by σ:
I_coverage ∝ σ I_bulk^∞ (1 - e^(-1/λ)).
From the ratio of the two intensities (R) the surface coverage can be calculated:
R = I_substrate / I_coverage = (I_substrate^∞ / I_bulk^∞) · [1 - σ(1 - e^(-1/λ))] / [σ(1 - e^(-1/λ))].
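Inverting this expression for σ is straightforward; a small sketch is shown below. The numerical values of R, of the intensity ratio of the pure materials and of λ are placeholders, not measured values.

import numpy as np

def coverage_from_ratio(R, I_inf_ratio, lam):
    """Surface coverage sigma from the measured peak ratio R = I_substrate / I_coverage.

    I_inf_ratio = I_substrate^inf / I_bulk^inf; lam is the inelastic mean free path in
    units of the monolayer thickness. All values used below are purely illustrative.
    """
    k = 1.0 - np.exp(-1.0 / lam)
    return I_inf_ratio / (k * (R + I_inf_ratio))

print(coverage_from_ratio(R=20.0, I_inf_ratio=1.5, lam=5.0))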
C Transformation of the Bravais-Miller indices to Cartesian coordinates
Rhenium is a hexagonal close-packed material, and the coordinate system of the hexagonal crystal system, shown in figure C.1(a), is not orthogonal. The transformation of the Bravais-Miller indices of a direction to Cartesian coordinates is therefore not trivial and involves some trigonometry. Directions in a hexagonal system are most often given by their four Bravais-Miller indices. In the first step of this transformation the four Bravais-Miller indices, [U V T W], have to be converted to the three Miller indices, [uvw], using the following formula:
u = 2U + V, v = 2V + U, w = W. (C.1)
Now we can develop the formula to convert between the two coordinate systems. The orientation of the Cartesian coordinate system shown in figure C.1(b) was chosen according to the critical thickness calculation detailed in section 1.5.2. Axes x and z are the in-plane orthogonal directions, and axis y is perpendicular to the surface. In the hexagonal crystal system the c axis, of length equal to the lattice parameter c, has six-fold symmetry. The rhenium films grow along this direction, and therefore the crystal axis c is parallel to axis y; converting from the c component to y involves a multiplication by the lattice parameter c.
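A short sketch of the two steps of this conversion is given below; the lattice parameters are approximate literature values and the example direction is illustrative only.

import numpy as np

a, c = 0.2761, 0.4458   # approximate rhenium lattice parameters (nm)

def bravais_miller_to_cartesian(U, V, T, W):
    """[U V T W] direction -> Cartesian (x, y, z) vector with y parallel to the c axis."""
    u, v, w = 2*U + V, 2*V + U, W                    # four-index to three-index, eq. (C.1)
    a1 = a * np.array([1.0, 0.0, 0.0])               # hexagonal basis vectors in the
    a2 = a * np.array([-0.5, 0.0, np.sqrt(3) / 2])   # (x, y, z) frame of section 1.5.2
    cc = c * np.array([0.0, 1.0, 0.0])
    return u * a1 + v * a2 + w * cc

# example: an a-type 1/3[11-20] direction; its length equals the lattice parameter a
b = bravais_miller_to_cartesian(1, 1, -2, 0) / 3
print(b, np.linalg.norm(b))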
The transformation chain Bravais-Miller → Miller → Cartesian was applied in this way to the indices of the Burgers vectors expected to occur in our rhenium films.

The radiosity of a surface is the sum of the thermal radiation emitted because of its temperature (εσT^4), the reflected irradiance (ρH) and the transmitted irradiance (τH), the optical coefficients being related by

ε = 1 - ρ - τ.

According to this, the radiosities of the two sides of the substrate-rhenium plane, B_SRe^W (facing the tungsten) and B_SRe^b (facing the chamber wall), are written in the same way, and their reduced forms are used in the derivation below.
The chamber wall and the furnace are opaque, their transmittances are 0. The tungsten layer on the backside of the substrate is also thick enough to be considered opaque. Thus, the radiosity (and the irradiance) of the furnace, chamber wall and the two sides of the tungsten layer are given as follows:
B_f = ρ_f H_f + ε_f σT_f^4  →  H_f = (1/ρ_f) B_f - (ε_f/ρ_f) σT_f^4,   (D.12)

B_b = ρ_b H_b + ε_b σT_b^4  →  H_b = (1/ρ_b) B_b - (ε_b/ρ_b) σT_b^4,   (D.13)

B_W^f = ρ_W H_W^f + ε_W σT_W^4  →  H_W^f = (1/ρ_W) B_W^f - (ε_W/ρ_W) σT_W^4,   (D.14)

B_W^SRe = ρ_W H_W^SRe + ε_W σT_W^4  →  H_W^SRe = (1/ρ_W) B_W^SRe - (ε_W/ρ_W) σT_W^4.   (D.15)
When τ = 0, the relation ε = 1 - ρ - τ reduces to

ε = 1 - ρ.
As was shown with equation (2.35), a quantity analogous to an electrical resistance can be defined for opaque objects:

R = (1 - ε)/ε = ρ/ε.
This definition of resistance is used for the furnace, the chamber wall and the tungsten layer in the derivation.
Equations (D.4) -(D.15) have to be manipulated to obtain expressions for the radiosities that depend only on the 3 unknowns (Q, T W , T SRe ). This can be easily done for the surfaces of the opaque planes.
Using equations (D.4) and (D.12):
Q = B_f - H_f + Q_E = (1 - 1/ρ_f) B_f + (ε_f/ρ_f) σT_f^4 + Q_E, and since (ρ_f - 1)/ρ_f = -ε_f/ρ_f = -1/R_f,

B_f = σT_f^4 + R_f (Q_E - Q).   (D.16)
Using equations (D.9) and (D.13):
Q = H_b - B_b = (1/ρ_b - 1) B_b - (ε_b/ρ_b) σT_b^4 = (1/R_b)(B_b - σT_b^4), so that

B_b = σT_b^4 + R_b Q.   (D.17)

Using equations (D.5) and (D.14):
Q = H_W^f - B_W^f + Q_E = (1/ρ_W - 1) B_W^f - (ε_W/ρ_W) σT_W^4 + Q_E = (1/R_W) B_W^f - (1/R_W) σT_W^4 + Q_E, since (1 - ρ_W)/ρ_W = ε_W/ρ_W = 1/R_W.

B_W^f = σT_W^4 + R_W (Q - Q_E).   (D.18)
All the radiosities are expressed as the function of only the unknown parameters in equations (D.16), (D.17), (D.18), (D.19), (D.23) and (D.24). The final form of the equations system (D.1), (D.2), (D.3) can be computed.
Rewriting equation (D.1) using (D.16) and (D.18):
Q = B_f - B_W^f + Q_E = σT_f^4 + R_f (Q_E - Q) - σT_W^4 - R_W (Q - Q_E) + Q_E,

Q = (σT_f^4 - σT_W^4) / (1 + R_f + R_W) + Q_E.   (D.25)
Rewriting equation (D.2) using (D.19) and (D.23):
Q = B_W^SRe - B_SRe^W + Q_C = σT_W^4 + R_W (Q_C - Q) - σT_SRe^4 - R_SRe (Q - Q_C) + r_SRe Q_C + Q_C,

Q = (σT_W^4 - σT_SRe^4) / (1 + R_W + R_SRe) + [ r_SRe / (1 + R_W + R_SRe) + 1 ] Q_C.   (D.26)
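Equations (D.25) and (D.26), together with (D.27) obtained just below, form a closed system for the three unknowns Q, T_W and T_SRe once the furnace and chamber temperatures and the optical coefficients are fixed. A minimal sketch of solving it numerically is shown here; every numerical value in it is a placeholder, and the real coefficients have to be taken from the model of section 2.4.

import numpy as np
from scipy.optimize import fsolve

sigma = 5.670e-8                  # Stefan-Boltzmann constant (W m^-2 K^-4)

# placeholder parameters, chosen only to illustrate the solution procedure
T_f, T_b = 2300.0, 300.0          # furnace and chamber-wall temperatures (K)
R_f, R_W, R_b, R_SRe = 2.0, 1.5, 10.0, 5.0
r_SRe, Q_E, Q_C = 0.3, 0.0, 0.0

def residuals(x):
    Q, T_W, T_SRe = x
    eq25 = (sigma*T_f**4 - sigma*T_W**4) / (1 + R_f + R_W) + Q_E - Q
    eq26 = (sigma*T_W**4 - sigma*T_SRe**4) / (1 + R_W + R_SRe) \
           + (r_SRe / (1 + R_W + R_SRe) + 1) * Q_C - Q
    eq27 = (sigma*T_SRe**4 - sigma*T_b**4) / (1 + R_b + R_SRe) \
           - r_SRe / (1 + R_b + R_SRe) * Q_C - Q
    return [eq25, eq26, eq27]

Q, T_W, T_SRe = fsolve(residuals, x0=[5e4, 2000.0, 1500.0])
print(f"Q = {Q:.0f} W/m^2, T_W = {T_W:.0f} K, T_SRe = {T_SRe:.0f} K")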
Finally, rewriting equation (D.3) using (D.17) and (D.24):

Q = B_SRe^b - B_b = σT_SRe^4 - R_SRe Q - r_SRe Q_C - σT_b^4 - R_b Q,

Q = (σT_SRe^4 - σT_b^4) / (1 + R_b + R_SRe) - [ r_SRe / (1 + R_b + R_SRe) ] Q_C.   (D.27)

# angular positions before the scan
'#L Two Theta Theta H K L Epoch Seconds Detector\n'    # columns of the data
array([[ 3.20000000e+01, 1.60000000e+01, 8.50817000e-06, ..., 3.46700000e+03, 3.00000000e+00, 7.00000000e+00], ...    # the data

E.1.2 Functions used to fit X-ray data
Voigt function
The Voigt function was used to fit the θ-2θ diffraction peaks. It is the convolution of a Gauss and a Lorentz function, and is given in equation 2.10. Both the Gauss and the Lorentz functions are centred on a peak, and the discrete convolution only preserves the position of the peak if it lies in the middle of the two arrays. This was taken into account when defining the Voigt function, which is given below together with the helper that returns the CuKα2 peak position.

import numpy as np
import scipy.signal as signal
import scipy.interpolate as interp

def secondpeak(twotheta1):
    "Returns the position (in degrees) of the CuKalpha2 peak from the CuKalpha1 peak position."
    # nm
    sintheta2 = (CuKalpha2/CuKalpha1) * np.sin(twotheta1/2 * np.pi/180.)
    twotheta2 = 2 * np.arcsin(sintheta2) * 180/np.pi
    return twotheta2

def func(x):
" Returns t h e h i g h r e s o l u t i o n f i t o f t h e Re ( 0 0 2 ) and t h e Al2O3 ( 0 0 6 ) peaks . " f , d_dist = mod_interf ( x , 1 1 3 0 0 . , 2 0 4 , 2 . 2 3 0 2 6 , 2 . 8 e -2) f = f + l o r e n t z ( x , 8 . 9 5 2 7 7 4 9 5 , 4 1 . 6 9 1 0 3 2 9 , 1 . l i n e s = meta . r e a d l i n e s ( ) a l l = l i n e s [ n : n+8] s t e p _ s t a r t = i n t ( r e . s e a r c h ( r ' \ d + ' , l i n e s [ n + 1 ] ) . group ( ) ) ramp_start = i n t ( r e . s e a r c h ( r ' \ d+ ' , l i n e s [ n + 2 ] ) . group ( ) ) s t e p _ h e i g h t = i n t ( r e . s e a r c h ( r ' \ d+ ' , l i n e s [ n + 3 ] ) . group ( ) ) ramp_slope = i n t ( r e . s e a r c h ( r ' \ d + ' , l i n e s [ n + 4 ] ) . group ( ) ) t h r e s h o l d _ v o l t = i n t ( r e . s e a r c h ( r ' \ d + ' , l i n e s [ n + 5 ] ) . group ( ) ) r e s i s t a n c e = i n t ( r e . s e a r c h ( r ' \ d+ ' , l i n e s [ n + 6 ] ) . group ( ) ) g a i n = i n t ( r e . s e a r c h ( r ' \ d+ ' , l i n e s [ n + 7 ] ) . group ( ) ) meta . c l o s e r e t u r n [ a l l , s t e p _ s t a r t , ramp_start , s t e p _ h e i g h t , ramp_slope , r e s i s t a n c e ]
Example:
In
Figure 1 . 1 :
11 Figure 1.1: Hexagonal closed-pack structure.
Figure 1 . 2 :
12 Figure 1.2: Lattice of Al 2 O 3 . (a) and (b) Six oxygen ions form a slightly distorted octahedron around an aluminium ion. (c) Stacking of the octahedra. The size of the atoms on the figure corresponds to their atomic radii [19].
Figure 1 . 3 :
13 Figure 1.3: Lattice of Al 2 O 3 . (a) Alteration of one oxygen and two aluminium layers along the c axis. (b) View of the c plane. The size of the atoms on the figure corresponds to their atomic radii [19].
Figure 1 . 4 :
14 Figure 1.4: Epitaxy refers to the growth of a crystalline layer onto a crystalline substrate following its lattice.
Figure 1 . 5 :
15 Figure 1.5: (a) Mean free path and number of collisions as the function of pressure.(b) Time required for the formation of a monolayer as the functions of pressure. In ultrahigh vacuum, the mean free path is so long that collisions can be neglected, and it takes several hours for a monolayer to form from the residual molecules.
Figure 1 . 6 :
16 Figure 1.6: MBE setup in SIMaP. It consists of four chambers: 1 -introduction chamber, 2 -intermediate chamber, Dep. chamber -deposition chamber, Char. chambercharacterisation chamber.
Figure 1 . 7 :
17 Figure 1.7: Schematics of the quadrupole mass spectrometer: four electrodes placed parallel, with voltage applied between them. Depending on the voltage, only the particles with the set mass-to-charge ratio will reach the detector.
Figure 1.10: (a) Schematics of the XPS measurement: the sample is irradiated with a monochromatic X-ray beam. As a result electrons escape from the surface region, their energy is measured by the detector. (b) Auger effect: a vacancy left by a photoelectron is filled up by an other electron from a higher energy level, the excess energy is carried away by a second emitted electron, called Auger electron.
Figure 1 .
1 Figure 1.11: A typical XPS spectra recorded on a rhenium thin film.
Figure 1 . 13 :
113 Figure 1.13: Electron diffraction from an uneven surface. (a) Diffraction happens in transmission through an island, a three dimensional object. (b) Sections of the Ewald sphere are shown in the reciprocal lattice, which consists of points. Constructive interference occurs in directions where the Ewald sphere intersects a reciprocal lattice point. (c) Cross section of the Ewald sphere is shown with the reciprocal lattice points and the wave vectors of the incoming and the outgoing, forward scattered waves. (d) The intersection of a reciprocal lattice point and the electron beam is projected onto the phosphor screen, which results in a spherical spot.
Figure 1 . 14 :
114 Figure 1.14: Electron diffraction from an even surface. (a) Diffraction happens in reflection, the third dimension is reduced. (b) Sections of the Ewald sphere are shown in the reciprocal lattice, which consists of rods. Constructive interference occurs where the Ewald sphere intersects the reciprocal lattice rods. (c) Cross section of the Ewald sphere is shown with the reciprocal lattice rods and the wave vectors of the incoming and the outgoing beams. (d) The intersection of a reciprocal lattice rod and the electron beam is projected onto the phosphor screen, which results in an elongated rod perpendicular to the surface.
Figure 1 . 15 :
115 Figure 1.15: Schematics of Tapping Mode AFM and the feedback loop. (a) An oscillating cantilever is attached to a piezoelectric ceramic tube. Movement of the tip is detected by a split photodiode. The amplitude of the oscillation is kept constant by a feedback loop [41]. (b) Cross section of the modules of the piezoelectric tube, and the applied voltage. (c) The feedback signal is converted to height, phase or amplitude, and are plotted as the function of the coordinates of the scanned area.
Figure 1 .
1 Figure 1.16: 3 µm x 3 µm height and amplitude image taken of the same area of a rhenium thin film: variation of height shows on the height image, but it is easier to observe the edges on the amplitude image.
Figure 1 .
1 Figure1.17: 6 µm x 6 µm height and phase image taken of the same area of a sapphire substrate: the phase image shows a contrast that is thought to be due to different chemical composition on the surface. Contrast is not visible on the height image.
Figure 1 . 18 :
118 Figure 1.18: Effect of the planefit algorithm illustrated by a simulated stepped surface: the average slope of the image is subtracted, stepped surface appears to be jagged.
Figure 1 .
1 Figure 1.19: Miller indices and Bravais-Miller indices of the hexagonal system.
Figure 1 . 20 :
120 Figure 1.20: Bragg's and Laue's conditions of diffraction.
Figure 1 . 21 :
121 Figure 1.21: Circles and angels of a 4-cycle diffractometer.
Figure 1 .
1 Figure 1.22: (a) Schematics of the θ-2θ scan. (b) Rocking curve measurement.
Figure 1 .
1 Figure 1.23: (a) Asymmetric reflection. (b) Φ scan.
Figure 1 .
1 Figure 1.24: Φ and χ scan on the (103) equivalent reflections of rhenium grains with (002) orientation.
Figure 1 . 25 :
125 Figure 1.25: Scattering from N planes with equal distances.
Figure 1 .
1 Figure 1.26: (a) Processes that can occur when an atom reaches the surface of the growing crystal or the substrate. (b) Surface tensions that act between the adsorbate island (A), substrate (S), and vapour (V).
Figure 1 . 27 :
127 Figure 1.27: Three growth modes: (a) Frank-van der Merve layer-by layer growth mode, (b) Vollmer -Weber island growth mode, (c) Stranski-Krastanov layer-plus-island growth mode.
Figure 1 .
1 Figure 1.28: (a) Schematic representation of an edge dislocation with the Burgers circuit. A closed loop (M N OP Q) is drawn in the crystal that encloses the dislocation. (b) Burgers circuit is copied into a perfect crystal, where it is not closed. Burgers vector connects the starting point (M ) and the final point (Q) of the Burgers circuit [45].
Figure 1 .
1 Figure 1.29: (a) Schematic representation of a screw dislocation with the Burgers circuit. A closed loop (M N OP Q) is drawn in the crystal that encloses the dislocation. (b) Burgers circuit is copied into a perfect crystal, where it is not closed. Burgers vector connects the starting point (M ) and the final point (Q) of the Burgers circuit [45].
Figure 1 .
1 Figure 1.30: (a) Misfit of two hexagonal crystal lattices. (b) Due to the misfit, dislocations spontaneously appear in the film [44].
Figure 1 .
1 Figure 1.31: (a) A dislocation lies η distance away from the surface, along axis z. Its glide plane divides the crystal in two: (+) and (-). (b) Sideview of (a): surfaces Γ + , Γ -are on the two sides of the glide plane. A volume around the dislocation line, with radius r 0 is excluded from the calculation [46].
Figure 1 .
1 Figure 1.32: A low energy crystal surface with miscut of α tends to rearrange itself into a stepped structure.
Figure 1 . 33 :
133 Figure 1.33: Step flow growth mode: steps are providing nucleation sites for the adatom, and the growth happens only at the step edges.
Figure 1 . 34 :
134 Figure 1.34: Schwoebel barrier: theoretical potential felt by an adatom (shown in grey) on a step edge [47].
Figure 1 .
1 Figure 1.35:An edge dislocation produces a slanting step on the surface, which will act as a nucleation site for the arriving atoms. Nucleating adatoms keep creating steps, as a result the surface will grow in a spiral manner[START_REF] Burton | The growth of crystals and the equilibrium structure of their surfaces[END_REF].
Figure 1 .
1 Figure 1.36: (a) Single spiral grown around a single dislocation. (b) Double spiral that grew around a dislocation pair of opposite signs [48].
Figure 1 . 37 :
137 Figure 1.37: Growth of spirals initiated by a pair of screw dislocations of like sign [48].
Figure 1 . 38 :
138 Figure 1.38: Growth of spirals initiated by dislocations within a 2πρ c distance [48].
Figure 1 . 39 :
139 Figure 1.39: Grain boundary groove with angle β [49].
Figure 1 . 40 :
140 Figure 1.40: Surface profile when shaped by evaporation-condensation plotted for different time intervals. The units along the x, and y axis are a measure of length, the numbers in the legend are a measure of time.
Figure 1 . 41 :
141 Figure 1.41: Surface profile when shaped by surface diffusion plotted for different time intervals.. The units along the x, and y axis are a measure of length, the numbers in the legend are a measure of time.
Figure 2 . 1 :
21 Figure 2.1: (a) 3 µm x 3 µm AFM height image taken of the Al 2 O 3 substrate as received. The surface is covered with particles of various sizes. (b) 3 µm x 3 µm AFM height image taken of the Al 2 O 3 substrate after cleaning, before annealing. A few larger particles are still visible, but their density is significantly reduced.
Figure 2 . 2 :Figure 2 . 3 : 3
22233 Figure 2.2: Schematics of the tube used for the heat treatment of the substrates: the quartz tube is supported by an outer alumina tube to prevent deformation.
Figure 2 . 4 :
24 Figure 2.4: Surface profile extracted from the AFM height image showing the step structure.
2 O 3
23 along the c axis two Al layers and an O layer alternate. This means three different surface terminations are possible: single Al, double Al, or O. Which dominates in single crystal substrates, has been the subject of
Figure 2 . 5 :
25 Figure 2.5: (a) Surface after annealing at 1100 • C for 30 minutes. (b) Surface after annealing at 1000 • C for an hour. The planefit procedure was not applied to the data.
Figure 2 . 6 :
26 Figure 2.6: Vapour pressure of rhenium found in references[START_REF] Lide | CRC Handbook of Chemistry and Physics[END_REF],[START_REF] Carl | Handbook of Vapor Pressure, volume 4: Inorganic compounds and elements[END_REF], and[START_REF] Plante | Vapor pressure and heat of sublimation of rhenium[END_REF].
Figure 2 . 7 :
27 Figure 2.7: Geometry of the source and the substrate.
4, and it is shown in figure 2.8. The deposition rate reaches the frequently observed values at around 3000 • C.
Figure 2 . 8 :
28 Figure 2.8: Deposition rate of rhenium. The observed deposition rate is shown with the blue dashed line.
Figure 2 . 9 :
29 Figure 2.9: (a) Rhenium droplets ejected from the charge. (b) SEM image of a droplet found in the chamber.
Figure 2 .
2 10(a) shows a single layer of rhenium on top of a single layer of Al -O octahedra viewed along the c axis. The rhenium atoms can be
Figure 2 .
2 Figure 2.10: (a) A single layer of rhenium on top of a single layer of Al -O octahedra viewed along the c axis. (b) The view of the Re and Al 2 O 3 lattices along the a axis of the substrate [19].
Figure 2 .
2 Figure 2.11: XRD Φ scans on the (102) equivalent reflections of rhenium, and (104) equivalent reflections of Al 2 O 3 show the 30 • rotation between the two lattices.
7 )Figure 2 . 12 :
7212 Figure 2.12: Thermal expansion coefficient of rhenium and Al 2 O 3 and the obtained misfit as the function of temperature.
Figure 2 . 13 :
213 Figure 2.13: Misfit as the function of the critical thickness calculated using equation 2.9.
Figure 2 .
2 Figure 2.14: Sample inside the chamber, shutter and the quartz balance are also shown.
Two samples with 25 nm thickness were prepared: sample A at 800 • C and sample B at 900 • C. AFM study of the surfaces AFM images taken of sample A and sample B are shown in figures 2.15(a) and 2.15(b), respectively.
Figure 2 .
2 Figure 2.15: (a) 3 µm x 3 µm AFM height image shows the surface of the 25 nm thick sample deposited at 800 • C, sample A. (b) 3 µm x 3 µm AFM height image shows the surface of the 25 nm thick sample deposited at 900 • C, sample B.
Figure 2 .
2 Figure 2.16: θ-2θ of the two 25 nm thick samples.
Figure 2 .
2 Figure 2.17: (002) reflection of sample A fitted with the sum of two Voigt functions.
Figure 2 .
2 Figure 2.18: (a) Rocking curve of the (002) peak of rhenium on sample B fitted with a Pearson VII (blue) and Pearson VII plus Gaussian (red). The red curve describes the wide tails of the data better. (b) Rocking curves of the 25 nm samples fitted with the sum of a Pearson VII and a Gaussian.
AFM study of the surfacesThree samples with thickness 50 nm were investigated. Sample C, was deposited at 800 • C, sample D at 900 • C, and sample E at 1000 • C.
Figure 2 .
2 Figure 2.19: 1 µm x 1 µm AFM height image shows the surface of the 50 nm thick sample deposited at 800 • C (sample C).
Figure 2 .
2 Figure 2.20: Sample C: 350 nm x 400 nm AFM height image showing an irregular spiral with the profile measured along the blue line.
Figure 2 .
2 Figure 2.21: 1 µm x 1 µm AFM height image shows the surface of the 50 nm thick sample deposited at 900 • C (sample D).
Figure 2 .
2 Figure 2.22: Sample D: 600 nm x 250 nm AFM height image showing a double spiral with the profile measured along the white line.
Figure 2 .
2 Figure 2.23: (a) 12 µm x 6 µm AFM height image showing the surface of the 50 nm thick sample deposited at 900 • C (sample E). (b) 3 µm x 1.5 µm AFM height image. The colour scale was set to highlight the terrace structure of the topmost surface.
Figure 2 .
2 Figure 2.24: (b) 1 µm x 0.5 µm AFM image, a magnification of the area marked by the blue square in figure 2.23(b). The colour scale was set to highlight the bottom of the channels. (a) Surface profiles were measured along the coloured lines, and are plotted in corresponding colours.
Figure 2 .
2 Figure 2.25: θ-2θ of the three 50 nm thick samples.
Figure 2 .
2 Figure 2.26: High-resolution X-ray scan of the (002) peak of rhenium with the (006) peak of the substrate (sample D). Only every fifth datapoint is shown.
Figure 2 .
2 Figure 2.27: A magnification of figure 2.26, the (002) peak of rhenium on sample D. Only every third datapoint is shown.
Figure 2 . 28 :
228 Figure 2.28: Scattering of X-ray wave from parallel planes that are slightly disordered.
The distributions of d lattice parameters used in the calculation are shown in a histogram in figure2.30. The full width half maximum of the distribution used for sample C is indeed wider.
Figure 2 . 29 :
229 Figure 2.29: The high-resolution (002) peaks of rhenium measured on the 50 nm samples and fitted with the modified interference function, equation 2.17. Only every sixth datapoint of both datasets is shown.
Figure 2 . 30 :
230 Figure 2.30: The distributions of lattice plane spacings used for the fits shown in figure 2.29, to describe the intensity variation of the fringes, and the slight broadening of the main peak.
Figure 2 . 31 :
231 Figure 2.31: Standard resolution data simulated from the high-resolution data of the (002) peak of rhenium (sample D). As an approximation, the substrate peak was used as the resolution function. The red curve is the result of the convolution of the fitted high resolution curve, and the approximate resolution function (equation 2.20).
Figure 2 . 32 :Table 2 . 7 :
23227 Figure 2.32: Rocking curves measured on the (002) peaks of the 50 nm samples.
Figure 2 .
2 Figure 2.33: (a) 3 µm x 3 µm AFM height image shows the surface of the 100 nm thick sample deposited at 800 • C (sample F). (b) 3 µm x 3 µm AFM height image shows the surface of the 50 nm thick sample deposited at 900 • C (sample G).
Figure 2 .
2 Figure 2.34: θ-2θ scan of the 100 nm samples.
Figure 2 .
2 Figure 2.35: (a) The (002) peak of rhenium measured on the 100 nm samples. The curves measured on the two films are asymmetric, and almost identical. (b) High-resolution scan of the same (002) peaks of both 100 nm samples. The fringes on the curve of sample G are more pronounced.
Figure 2 .
2 Figure 2.36: (a) Gaussian widths of the θ-2θ peaks as the function of temperature. (b) Pearson VII widths of the rocking curves as the function of temperature.
Figure 2 .
2 Figure 2.37: 2 µm x 2 µm AFM hight image of the rhenium thin film with approximately 15 nm thickness, showing signs of dewtting.
Figure 2 .
2 Figure 2.38: (a) 2 µm x 2 µm AFM hight image taken on the same sample as shown in figure 2.37 with two insets. The colour scale of the insets was adjusted to show the step structure of the ridges. (b) Profiles extracted from the AFM hight image, showing singe and double steps. Minima of both curves were set to 0.
Figure 2 .
2 Figure 2.39: (a) 2 µm x 2 µm AFM hight image taken on the same sample as shown in figure 2.37 with two insets. The colour scale of the insets was adjusted to show the slight maximum in height around the holes. (b) Profiles extracted from the AFM hight image. The slight bump that appears around the holes resembles curves of thermal grooving driven by surface diffusion.
Figure 2 . 40 :
240 Figure 2.40: Extracted height profile fitted by Mullins' theoretical curve. Parameters of the fit are shown above the plot.
Figure 2 . 41 :
241 Figure 2.41: Temperature dependence of the diffusion coefficient according to the measurements of Goldstein and Ehrich [76].
. 38 .
38 It is also visible on the tail of the measured profile in figure 2.40. Recrystallisation also causes the maxima after the hole to flatten, most visible on the blue profile in figure 2.39(b).
Figure 2 .
2 Figure 2.42: θ-2θ scan of the Re(002) peak measured on the 15 nm thick rhenium film. Several fringes on both sides of peak show that the X-ray beam was diffracted by regularly arranged lattice planes.
Figure 2 . 43 :
243 Figure 2.43: Radiation arriving on a surface (1). Proportions of it are reflected (ρ), transmitted (τ ) or absorbed (α).
Figure 2 . 44 :
244 Figure 2.44: The model consists of a series of planes: furnace (F ), tungsten (W ), substrate-rhenium (SRe), and chamber (B). Irradiance and radiosity of the planes is considered.
Figure 2 . 45 :
245 Figure 2.45: Emittance of rhenium: values calculated from reference [83] (red points) compared to curve used by Delsol [7] (blue line).
Figure 2 . 46 :
246 Figure 2.46: Optical coefficients of the substrate-rhenium plane as function of the rhenium thickness.
Figure 2 . 47 :
247 Figure 2.47: Temperature of the rhenium thin film with and without tungsten backing as the function of its thickness calculated from the model.
Figure 3 . 1 :
31 Figure 3.1: The superconducting order parameter and the magnetic field at a normal(left)superconducting(right) interphase. [95].
Figure 3 . 2 :
32 Figure 3.2: H-T phase diagram of a type II superconductor. [112].
Figure 3 . 3 :
33 Figure 3.3: Order parameter and magnetic induction across a vortex. [112].
Figure 3 . 4 :
34 Figure 3.4: Schematic illustration of a SQUID containing two Josephson junctions. I 1 and I 2 currents running through the two junctions are modulated by the phase drops θ 1 and θ 2 [115].
Figure 3 . 5 :
35 Figure 3.5: I(Φ e ) characteristic of a symmetric SQUID with zero inductance.
Figure 3 . 6 :
36 Figure 3.6: Oscilloscope reading of the current ramp repetitions applied to a Nb SQUID [116].
.7). The volume of the fridge is divided by two radiation shields: an 80 K one (red in figure 3.7), and a 4 K one (green in figure 3.7). The coldest point is at the top, this is where the sample is placed, shown by purple in figure 3.7, and by an arrow in figure 3.8.
Figure 3 . 7 :
37 Figure 3.7: Schematic diagram of the table-top Helium-3 cryostat. It contains two circuits: 4 He shown in blue, 3 He shown in orange. Coldest temperature, 300 mK is achieved by internal pumping using charcoal on the 3 He reservoir.
Figure 3 . 8 :
38 Figure 3.8: Photograph of the inside of the table-top helium-3 cryostat.
Figure 3 . 9 :
39 Figure 3.9: Phase diagram of the 3 He and 4 He mixture. Below 800 mK the mixture spontaneously separates into a diluted (yellow) and a concentrated in 3 He phase (green) Image is from reference [118].
Figure 3 . 10 :
310 Figure 3.10: Simplified diagram of the mixture circuit of a dilution refrigerator. After the mixture separates to a diluted (blue) and a concentrated (orange) phase,3 He is extracted from the still and resupplied to the mixing chamber. Cooling power is provided by the diffusion of 3 He from the concentrated to the diluted phase in the mixing chamber.
Figure 3 . 11 :
311 Figure 3.11: Diagram of the SIONLUDI inverted dilution fridge. The coldest part is on the top, the sample is loaded there, placed under concentric bells. Helium-4 is supplied from the bottom. Modified image from reference [115].
4. 1 . 1
11 Circuit designs 3 different patterns were fabricated using laser and electron beam lithography. The first design is shown in figure 4.1(a). It features a long wire that has 7 parts with different widths, ranging from 50 µm to 3 µm. This is shown in blue. The electrodes 133 that allow 2-point or 4-point resistivity measurements are shown in pink. Electrodes are connected to the ends of all the 7 parts, thus they can be measured independently of each other. External electrodes can be connected to the large pads shown in purple. This design was fabricated with laser beam lithography. The smallest object it included was the 3 µm wire, due to the resolution limitations of laser light (∼ 1µm).
Figure 4 . 1 :
41 Figure 4.1: (a)Lithography design I: long wire with parts that have different widths (blue). Drawing is not to scale. (b) Lithography design II: long wire with parts that have different widths (blue), and 3 SQUIDs (green). Drawing is not to scale.
Figure 4 . 2 :
42 Figure 4.2: Lithography design III: 2 SQUIDs and 3 wires, each have two electrodes at the bottom, and one common electrode on top. Drawing is not to scale.
Figure 4 . 3 :
43 Figure 4.3: Steps of the lithography process used to fabricate wires and SQUIDs on rhenium thin films.
Figure 4 . 4 :
44 Figure 4.4: Optical microscopy image taken on the completed pattern fabricated from the 50 nm thick sample.
Figure 4 . 5 :
45 Figure 4.5: (a), (c) Out-of-plane (OP) orientation maps show that the orientation of rhenium along the 3 µm line and the SQUID is (001). (b), (d) In-plane orientation maps taken on the same areas as (a) and (c) shows that the in-plane orientations are also uniform.
Figure 4 . 6 :
46 Figure 4.6: AFM height image taken on a SQUID. The spirals were left intact, however the bridges connecting the two forks of the SQUID are missing.
Figures 4 .
4 Figures 4.5(a) and 4.5(c) show the out-of-plane (OP) orientation maps of a SQUID and a 3 µm line. The colour red is uniform along the measured area except in a few points. The red colour corresponds to the orientation (001), which is consistent with the results of the XRD.
Figures 4 .
4 Figures 4.5(b) and 4.5(d) show the in-plane (IP) orientation maps of the same areas.Here the colour is uniformly blue.
Figure 4 . 7 :
47 Figure 4.7: Optical and electron microscopy image taken on the completed design patterned onto the 25 nm thick sample.
Figure 4 . 8 :
48 Figure 4.8: Optical microscopy images taken on the central parts of the two versions of the completed design on the 25 nm thick film.
Figure 4 .
4 Figure 4.7 shows an optical microscopy image of one of the full, completed patterns. In figure 4.8, the wires and the SQUIDs are shown.
Figure 4 . 9 :
49 Figure 4.9: Optical microscopy image of the completed design patterned onto the 100 nm thick sample.
Figure 4 .
4 Figure 4.10: Four-terminal resistivity measurement.
Figure 4 . 11 :
411 Figure 4.11: Sample is glued on the sample holder, and is connected by 25 µm diameter aluminium wires.
Figure 4 . 12 :
412 Figure 4.12: Connection for resistance measurement of the 3 µm wide rhenium wire on the 50 nm thick sample.
Figure 4 . 13 :
413 Figure 4.13: Connection for resistivity measurement of the 200 nm wide rhenium wire on the 25 nm thick sample.
Figure 4 . 14 :
414 Figure 4.14: Resistivity the wires measured with decreasing temperature.
Figure 4 .
4 Figure 4.15: (a) Superconducting transitions of the 3 µm wide wire, and the 50 nm thick film it was fabricated on (Sample D). (b) Superconducting transitions of the 3 µm wide wires fabricated on the 50 nm (D) and the 100 nm (F) thick samples with normalised resistivities
Figure 4 . 16 :
416 Figure 4.16: Superconducting transitions of the 100 nm, 200 nm, and the 400 nm wide wires, and the 25 nm thick film they were fabricated on (Sample B).
Figure 4 . 17 :
417 Figure 4.17: Critical temperature as the function of pressure/relative change of volume measured by Chu et al. (a) for two polycrystalline samples and (b) for two single crystal samples [128].
Figure 4 . 18 :
418 Figure 4.18: Residual resistivity ratio as the function of the critical temperature.
Figure 4 .
4 Figure 4.19: SEM image of a SQUID fabricated on the 25 nm thick rhenium thin film.
The critical current as a function of the magnetic field measured on one of the SQUIDs (SQUID1) are shown in figures 4.21. A low frequency oscillation envelopes a fast critical
Figure 4 . 20 :
420 Figure 4.20: The schematics of the electrical connections made to SQUID1 shown on the optical image of the pattern.
Figure 4 . 21 :
421 Figure 4.21: The critical current oscillations as a function of the applied field measured on SQUID1. A low frequency component (a) envelops a fast oscillation (b).
Figure 4 . 22 :
422 Figure 4.22: Schematics consistent with the data shown in figure 4.21. The small SQUID is connected to either a junction or another SQUID, forming a large SQUID loop.
Figure 4 . 23 :
423 Figure 4.23: The critical current oscillations as a function of the applied field measured on SQUID2.
That the I c (B) graph of SQUID2 in figure 4.23 appears irregular could be explained by left-over rhenium on the surface, that provides alternate path for the current. This has been a problem on other samples. The switching histogram of the measured SQUIDs are shown in figures 4.25(a) and
Figure 4 .
4 Figure 4.24: Fast Fourier Transform of the I c (B) curve of SQUID2.
Figure 4 . 25 :
425 Figure 4.25: Critical current histogram of (a) SQUID1 (b) SQUID2
Figure A. 1 :
1 Figure A.1: Longitudinal coherence length.
Figure A. 2 :
2 Figure A.2: Transverse coherence length (modified figure from reference [43]).
Figure A. 3 :
3 Figure A.3: Nyquist sampling frequency demonstrated on a square function (corresponds to a slit) (a). The square of the Fourier transform is seen by the detector. To fully determine it, it has to be sampled at least once per fringe, shown in red (b).
Figure A. 5 :
5 Figure A.5: (a) An areal photograph of Diamond Light Source (source: Science and Technology Facilities Council). (b) Schematics of a synchrotron beamline (dimensions are specific to I13-1 in Diamond Light Source)[START_REF] Als-Nielsen | Elements of Modern X-ray Physics[END_REF] (c) Undulators are composed of a series of dipole magnets, that make electrons oscillate to generate X-ray radiation[START_REF] Als-Nielsen | Elements of Modern X-ray Physics[END_REF].
Figure A. 7 : 4
74 Figure A.7: 4 µm x 4 µm AFM image taken on one of the 2 µm x 2 µm squares fabricated for the ptychography experiments on the a 50 nm sample (sample D).
The spiral scan is shown in figure A.8. At each point of the path a single slice of the (002) reflection of rhenium was recorded on the 2D detector. The colour of the points shown in figure A.8 corresponds to the total, summed intensity measured on the detector. The scan started in the middle of the square, and the maximum intensity is indeed observed in a 2 µm region around the starting point.
Figure A. 8 :
8 Figure A.8: Spiral ptychography scan on a 2 µm x 2 µm rhenium square: the colour of each point corresponds to the total, summed intensity recorded on the detector.
Figure A. 9 :
9 Figure A.9: Two examples of the diffraction pattern recorded on the detector at different points of the scan.
Figure A. 10 :
10 Figure A.10: Movement of the sample during the 3D ptychography scan.
D
Derivation of the equation system for the heat transferTo estimate the temperature of the surface of the growing rhenium, a model was developed by Delsol[START_REF] Delsol | Elaboration et caractérisation de couches minces supraconductrices épitaxiées de rhénium sur saphir[END_REF]. The complete derivation to obtain the equation system given in chapter 2.4 is given below.The model is shown in figure D.1. All the parts of the system was assumed to be an infinite plane. The plane noted with F is the furnace. Besides radiosity which is the
Figure D. 1 :
1 Figure D.1: The model consist of a series of planes: furnace (F ), tungsten (W ), substraterhenium (SRe), and chamber (B). Irradiance and radiosity of the planes is considered.
12 )B b = ρ b H b + b σT 4 b→
124
σT 4 b + R b Q (D.17)Using equations (D.5) and (D.14):
= σT 4 SRe -σT 4 b 1 +.
441 ) and (D.24):Q = B b SRe -B b = = σT 4 SRe -R SRe Q -r SRe Q C -σT 4 b -R b Q. Q R B + R SRe -r SRe 1 + R B + R SRe Q C (D.27)Notations used throughout this derivation are summarised below:Substrate and rhenium was is treated as a single object. The common transmittance and emittance was calculated as follows:τ SRe = τ S τ Re , SRe = (1 -τ Re ) Re + τ Re S .f o r k i n r a n g e ( n u m o f i n t e r v a l s +1): d a t a s t r = l i n e s [ n u m o f l i n e+11+k ] d a t a a r r = np . a r r a y ( d a t a s t r . s p l i t ( ' ' ) ) data [ k , : ] = d a t a a r r . a s t y p e ( np . f l o a t ) break e l s e : c o n t i n u e r e t u r n [ command , n u m o f i n t e r v a l s , [ s t a r t _ s c a n n e d 1 , stop_scanned1 ] , [ s t a r t _ s c a n n e d 2 , stop_scanned2 ] , time , [ anglename , a n g l e s ] , names , data ] Example: In : f i l e = open ( ' EJM216 ' ) l i n e s = f i l e . r e a d l i n e s ( ) f i l e . c l o s e scannum = 10 data = s p e c . get_scan_data ( l i n e s , scannum ) Out : data [ 0 ] '#S 10 a 2 s c a
d e f l o r e n t z ( x , c , x0 , FWHM) : " Returns t h e L o r e n t z f u n c t i o n c e n t r e d on x0 . " gamma = FWHM/2 l o r e n t z f u n c = np . d i v i d e ( 1 , np . p i * gamma * ( 1 + np . d i v i d e ( np . s q u a r e ( x -x0 ) , gamma * * 2 ) ) ) r e t u r n c * l o r e n t z f u n c d e f g a u s s ( x , c , x0 , FWHM) : " Returns t h e Gauss f u n c t i o n c e n t r e d on x0 . " sigma = np . d i v i d e (FWHM, 2 * np . s q r t ( 2 * np . l o g ( 2 ) ) ) g a u s s f u n c = np . d i v i d e ( 1 , sigma * np . s q r t ( 2 * np . p i ) ) * np . exp (-1 * np . d i v i d e ( np . s q u a r e ( x -x0 ) , 2 * sigma * * 2 ) ) r e t u r n c * g a u s s f u n c d e f v o i g t ( x , c , x0 , FWHM_G, FWHM_L) : " Returns t h e Voigt f u n c t i o n c e n t r e d on x0 . " l o w e r = -100 upper = 100 nop = 8 e3+1 x _ t o f i t = np . add ( np . l i n s p a c e ( lower , upper , nop ) , x0 ) y l = l o r e n t z ( x _ t o f i t , 1 , x0 , FWHM_L) yg = g a u s s ( x _ t o f i t , 1 , x0 , FWHM_G) vv = s i g n a l . f f t c o n v o l v e ( yl , yg , ' same ' ) yv = c * vv * ( upperl o w e r ) / ( nop ) f v = i n t e r p . i n t e r p 1 d ( x _ t o f i t , yv , bounds_error = ' F a l s e ' , f i l l _ v a l u e = 1 e7 ) yyv = f v ( x ) # r e t u r n [ x _ t o f i t , yyv , yg , yl , yv ] r e t u r n yyv CuKalpha2 = 0 . 1 5 4 4 3 9 8
1 5 3 1 6 1 1 e 1 #
11 -02) r e t u r n [ f , d_dist ] d e f r e s o l u t i o n ( x , c , x0 , FWHM) : " Returns an a p p r o x i m a t i o n o f t h e r e s o l u t i o n o f t h e s t a n d a r d XRD s e t u p . " f = l o r e n t z ( x , c , x0 , FWHM) r e t u r n f # c a l c u l a t i n g t h e peak p o s i t i o n s from t h e l a t t i c e parameter t wo t h e t a = 2 * np . a r c s i n ( con . CuKalpha1 / ( 2 * 2 . 2 3 0 2 2 ) ) * 180/ np . p i tw o t h e t a 2 = secondpeak ( t w o t h e t a ) d t h e t a = twotheta2-t w o t h e t a # high-r e s o l u t i o n peak n o r m a l i s e d f_hr = f u n c ( xx ) [ 0 ] f_hr = f_hr /np . max( f_hr ) # s t a n d a r d r e s o l u t i o n f u n c t i o n n o r m a l i s e d f _ r e s = r e s o l u t i o n ( xx , 1 , twotheta , 0 . 0 4 ) f _ r e s = f _ r e s /np . max( f _ r e s ) # s t a n d a r d r e s o l u t i o n peak from CuKalpha1 l r _ f u n c 1 = s i g n a l . f f t c o n v o l v e ( f_hr , f_res , ' same ' ) * 0 . uu i s t h e argument o f t h e i n t e g r a l e r r o r f u n c t i o n uu = np . z e r o s ( [ num_of_x , np . shape ( At2 ) [ 0 ] ] ) f o r i i n r a n g e ( np . shape ( At2 ) [ 0 ] ) : uu [ : , i ] = np . d i v i d e ( datax , At2 [ i ] ) # C a l c u l a t i n g t h e i n t e g r a l e r r o r f u n c t i o n i e r r f = np . z e r o s ( [ num_of_x , np . shape ( At2 ) [ 0 ] ] ) f o r i i n r a n g e ( np . shape ( At2 ) [ 0 ] ) : f o r j i n r a n g e ( num_of_x ) : i e r r f [ j , i ] = i e r f c ( uu [ j , i ] ) # C a l c u l a t i n g t h e p r o f i l e f u n c t i o n yy = np . z e r o s ( [ num_of_x , np . shape ( At2 ) [ 0 ] ] ) f o r i i n r a n g e ( np . shape ( At2 ) [ 0 ] ) : yy [ : , i ] = np . m u l t i p l y (-1 * mm * At2 [ i ] , i e r r f [ : , i ] ) E.2.2 Surface diffusion import numpy a s np import math import s c i p y . s p e c i a l # D e f i n i n g t h e a_n c o e f f i c i e n t s o f t h e Z f u n c t i o n nn = 51 # number o f e l e m e n t s i n t h e a_n s e r i e s coef_an = np . z e r o s ( [ nn ] ) # c o n t a i n s t h e a_n c o e f f i c i e n t s coef_an [ 0 ] = -1/(np . s q r t ( 2 ) * math . gamma ( 5 . / 4 ) ) coef_an [ 1 ] = 1 coef_an [ 2 ] = -1/(np . s q r t ( 2 * * 3 ) * math . gamma ( 3 . / 4 ) ) coef_an [ 3 ] = 0 f o r i i n r a n g e ( nn -4): coef_an [ i +4] = coef_an [ i ] * ( i -1)/(4 * ( i +1) * ( i +2) * ( i +3) * ( i +4)) num_of_x = 50000 datax = np . l i n s p a c e ( 0 , 1 8 0 0 0 , num_of_x ) # p r o f i l e l i n e t t = [ 5 . , 3 0 . , 9 0 . , 1 2 0 . , 1 8 0 . ] * 60 # time b e t a = math . r a d i a n s ( 5 ) # b e t a i n r a d i a n s mm = np . tan ( b e t a ) BB = 1 e10 Bt = np . power ( np . m u l t i p l y (BB, t t ) , 0 . 2 5 ) # uu i s t h e argument o f t h e Z f u n c t i o n uu = np . z e r o s ( [ num_of_x , np . shape ( Bt ) [ 0 ] ] ) p r i n t np . shape ( uu ) f o r i i n r a n g e ( np . shape ( Bt ) [ 0 ] ) : uu [ : , i ] = np . d i v i d e ( datax , Bt [ i ] ) # C a l c u l a t i n g t h e Z f u n c t i o n ZZ = np . z e r o s ( [ num_of_x , np . shape ( Bt ) [ 0 ] ] ) f o r j i n r a n g e ( np . shape ( Bt ) [ 0 ] ) : f o r i i n r a n g e ( nn ) : aa_n = coef_an [ i ] uu_n = np . power ( uu [ : , j ] , i ) ZZ [ : , j ] = ZZ [ : , j ] + np . m u l t i p l y ( aa_n , uu_n) # C a l c u l a t i n g t h e p r o f i l e f u n c t i o n yy = np . z e r o s ( [ num_of_x , np . shape ( Bt ) [ 0 ] ] ) f o r i i n r a n g e ( np . shape ( Bt ) [ 0 ] ) : yy [ : , i ] = np . 
m u l t i p l y (mm * Bt [ i ] , ZZ [ : , i ] ) E.3 Preparation of the SQUID data E.3.1 SQUIDbox function The following function retrieves the parameters of the SQUID critical current measurements. These parameters are used to calculate the critical current. import r e d e f SQUIDbox ( meta_name , n ) : " Returns t h e p a r a m e t e r s o f t h e SQUID box . I n p u t s : f i l e name , nnumber o f t h e l i n e ' P e r i o d e du s q u i d (SQUID P e r i o d ) " meta = open ( meta_name )
: name = '2016 -03 -18_15-18-38_250p00mK . IcH_cleaned ' meta_name = name [ 0 : l e n ( name) -11] + ' meta ' p a r a m e t e r s = SQUIDbox ( meta_name , 6 1 ) a l l t h e l i n e s c o r r e s p o n d i n g t o t h e SQUID box Out : p a r a m e t e r s [ 0 ] [ ' P e r i o d e du s q u i d (SQUID P e r i o d ) : 255\ r \n ' , 'T du d e p a r t p a l i e r ( s t e p s t a r t time ) : 35\ r \n ' , 'T du d e p a r t rampe ( ramp s t a r t time ) : 103\ r \n ' , ' h a u t e u r du p a l i e r ( s t e p h e i g h t ) : 8815\ r \n ' , ' p e n t e de l a rampe ( s l o p e o f t h e ramp ) : 2047\ r \n ' , ' t e n s i o n de s e u i l ( t h r e s h o l d v o l t a g e ) : 1538\ r \n ' , ' R e s i s t a n c e [Ohm ] : 50000\ r \n ' , ' g a i n du p r e ampli ( A m p l i f i e r g a i n ) : 1\ r \n ' ] # a l l t h e l i n e s c o r r e s p o n d i n g t o t h e SQUID box p a r a m e t e r s [ 1 ] 35 # T du d e p a r t p a l i e r ( s t e p s t a r t time ) p a r a m e t e r s [ 2 ] 103 # T du d e p a r t rampe ( ramp s t a r t time )
The average roughness (R_a = (1/N) Σ_{j=1}^{N} |r_j|) is calculated. This value for sample A is 1.27 nm; for sample B it is smaller, 0.98 nm. Measurements on figures 2.15(a) and 2.15(b) are summarised in table 2.2.

Table 2.2: Surface features measured in figures 2.15(a) and 2.15(b).

                                       25 nm
                                800 °C (A)        900 °C (B)
Diameter of larger grains (nm)  96 ± 28 (29 %)    73 ± 13 (18 %)
Diameter of smaller grains (nm) 45 ± 12 (27 %)    26 ± 4 (15 %)
Average roughness (nm)          1.27              0.98
Table 2.3: Parameters of the Voigt functions fitted to the (002) rhenium peaks of the 25 nm samples.

The rocking curves are shown in figure 2.18(b), with the sum of a Pearson VII and a Gaussian fitted to each. The data was normalised to help comparison. Parameters of the fits (both single Pearson VII and Pearson VII plus Gaussian) are listed in table 2.4. It is immediately apparent from figure 2.18(b) that the rocking curve of sample B is significantly narrower. The single Pearson VII full width at half maximum of this sample is less than half of that of sample A. This signals that the grains have lower mosaicity.

Table 2.4: Parameters of the rocking curves measured on the 25 nm thick films.

25 nm (002) Rocking curves
               800 °C (A)                        900 °C (B)
               P               P + G             P               P + G
χ²             0.03            0.006             0.06            0.03
I ratio        -               I_P/I_G = 0.95    -               I_G/I_P = 0.75
Peak @ (°)     20.2797         20.2799           20.2340         20.2338
               ± 5e-4          ± 2e-4            ± 4e-4          ± 3e-4
FWHM_G (°)     -               0.731 ± 0.004     -               0.178 ± 0.002
FWHM_P (°)     0.531 ± 0.002   0.384 ± 0.003     0.225 ± 0.002   0.52 ± 0.02
m              1.84 ± 0.03     1.21 ± 0.02       1.20 ± 0.02     5 ± 1
The peaks were fitted with the Voigt function, equation 2.10. The same fitting procedure was applied, as described previously for the 25 nm thick films. Parameters of the fits are summarized in table 2.5.

Table 2.5: Parameters of the Voigt fits of the (002) rhenium peaks measured on the 50 nm thick samples.

50 nm (002) Voigt fit
              800 °C (C)        900 °C (D)        1000 °C (E)
Peaks @ (°)   40.454 ± 0.001    40.478 ± 0.001    40.4185 ± 0.0004
              40.640 ± 0.007    40.625 ± 0.006    40.624 ± 0.002
FWHM_G (°)    0.386 ± 0.006     0.364 ± 0.004     0.246 ± 0.003
FWHM_L (°)    0.028 ± 0.004     0.043 ± 0.003     0.092 ± 0.003
Table 2.8: Parameters of the fit of the high-resolution (002) rhenium peaks with the interference function, equation 1.23, and the parameters of the fit of the rocking curves with the Pearson VII function, equation 2.14.

100 nm (002) High-resolution data
             800 °C (F)                 900 °C (G)
N            448 ± 0                    387 ± 0
d_0          (0.2230084 ± 6e-7) nm      (0.222972 ± 7e-6) nm

100 nm (002) Rocking curves
Peaks @ (°)  20.28330 ± 6e-5            20.2668 ± 3e-5
FWHM (°)     0.3384 ± 0.0002            0.2794 ± 0.0001
m            3.15 ± 0.02                3.83 ± 0.02
Table 4.1: Resists, instruments and developers used for lithography.

             S1818                   PMMA
Thickness    ∼ 1 µm                  ∼ 100 nm
Bake         115 °C                  180 °C
Instrument   Heidelberg DWL66FS      Léo 1530
Developer    MF-319                  1:3 MIBK:IPA
                                     (methyl isobutyl ketone : isopropyl alcohol)
Table 4.2:

                  Sample B          Sample D         Sample F
Film thickness    25 nm             50 nm            100 nm
Deposition temp.  900 °C            900 °C           800 °C
AFM image         figure 2.15(b)    figure 2.21      figure 2.33(a)
Features          grains            spirals          spirals
Diameter          < 100 nm          ∼ 500 nm         ∼ 200 nm
XRD               figure 2.16       figure 2.25      figure 2.34
                  red curve         red curve        blue curve
Orientations
Table 4.3: Values obtained for the 50 nm and the 100 nm films, and the 3 µm wide wires fabricated onto them.

thickness         50 nm (D)                    100 nm (F)
                  film        wire: 3 µm       film         wire: 3 µm
ρ_RT (µΩcm)       15.0 [7]    24.1             19.75 [7]    25.68
ρ_res (µΩcm)      1.1         1.4              0.915 [7]    2.14
RRR               15.0 [7]    17.2             21.6 [7]     12.0
T_c (K)           1.66        1.48             1.85 [7]     1.77
∆T (K)            0.43        0.03             0.10 [7]     0.01
∆(0) (meV)        0.25        0.22             0.28         0.27
l (nm) [121]      172         142              217          88
ξ_0 (nm)          167         187              150          157
                  clean       dirty            clean        dirty
ξ_eff(0) (nm)     124         139              111          100
Table 4.4: Values obtained for the 25 nm film, and the wires fabricated on it.

Film thickness    25 nm (B)
                  film        wire: 400 nm         wire: 200 nm   wire: 100 nm
ρ_RT (µΩcm)       15.0 [7]    27.98 (extrapol.)    30.99          41.76
ρ_res (µΩcm)      3.8         7.16                 5.82           8.57
RRR               4.0 [7]     3.9                  5.3            4.9
T_c (K)           1.89        1.95                 1.89           1.96
∆T (K)            0.17        0.13                 0.15           0.17
∆(0) (meV)        0.29        0.29                 0.29           0.3
l (nm) [121]      47          25                   31             21
ξ_0 (nm)          147         142                  147            141
                  dirty       dirty                dirty          dirty
ξ_eff(0) (nm)     71          51                   58             47
B^W_SRe = ρ_SRe H^W_SRe + ε_SRe σ T⁴_SRe + τ_SRe H^b_SRe ,   (D.10)
B^b_SRe = ρ_SRe H^b_SRe + ε_SRe σ T⁴_SRe + τ_SRe H^W_SRe .   (D.11)

Furthermore, from equations (2.30) and (2.31) the following is true, and is used in the derivation:
He leaves a phase where the atoms are densely packed, like in a liquid, for a phase where
A(q) = ∫_V f(r) e^{iq·r} dr .   (A.2)
The Fourier transform is reversible, the inverse Fourier transform of the amplitude could recover the electron density distribution. However, it is the intensity that is recorded by the detectors, which is the absolute modulus of the structure factor:
I(q) = |A(q)|² .   (A.3)

Thus, the phase information of the structure factor, which is required for the inverse Fourier transform, is lost upon the measurement. Only the amplitude of the structure factor can be recovered. This is known as the phase problem in crystallography [START_REF] Als-Nielsen | Elements of Modern X-ray Physics[END_REF].
The consequence of the phase problem is that the electron density, and thus the atomic positions cannot be directly retrieved from the diffraction data. Of course, this has not prevented scientist from trying and succeeding reconstructing structures of materials. Many of the approaches rely on a priori information regarding the chemistry of the material, rather than directly recovering the phase [START_REF] Rodenburg | Ptychography and related diffractive imaging methods[END_REF]. Recovery of both phase and amplitude information is possible by holography, where a reference wave is used to interfere with the scattered wave [START_REF] Gabor | A new microscopic principle[END_REF][START_REF] Gabor | Microscopy by reconstructed wave-fronts[END_REF].
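As a small illustration of this loss of information (not part of the original analysis, and using arbitrary toy numbers), the following numpy sketch builds a one-dimensional "electron density", translates it, and shows that the recorded intensities are identical even though the objects differ:

import numpy as np

# toy one-dimensional "electron density": two Gaussian peaks
x = np.linspace(-50, 50, 1024)
rho = np.exp(-(x - 5.0)**2) + 0.5*np.exp(-(x + 12.0)**2)
rho_shifted = np.roll(rho, 37)            # the same object, merely translated

# the scattered amplitude is the Fourier transform of the density (cf. equation A.2)
A = np.fft.fft(rho)
A_shifted = np.fft.fft(rho_shifted)

# the detector records only the intensity, equation (A.3); the translation phase is lost
print(np.allclose(np.abs(A)**2, np.abs(A_shifted)**2))   # True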
The recovery of the phase of the scattered wave is also possible with iterative phase-retrieval algorithms. For this, the probing beam needs to be coherent, otherwise the phase is not well-defined. This technique is outlined in this chapter.
A.2 Coherence of the probing beam
Coherence of waves means that there is a known phase relationship. An incoherent beam consists of many coherent waves, between which the phase relationship cannot be defined.
Coherence lengths can be defined in the framework of geometrical optics.
Longitudinal coherence length (ξ_L) is related to the monochromaticity of the beam; this concept is illustrated in figure A.1. Two waves with slightly different wavelengths (λ and (λ - ∆λ)) are emitted from a point source. They have the same phase at the point of emission. The longitudinal coherence length is defined as the distance it takes for the two waves to have opposite phases. At a distance twice the coherence length, they will be in phase again. From this criterion the longitudinal coherence length can be computed:
ξ_L = λ² / (2∆λ) .   (A.7)
The algorithm starts with the initial guesses for the probe (P 0 (r)) and the object (O 0 (r)) functions. Both guesses get updated through subsequent iterations that move between the real and the Fourier space.
The J number of diffraction patterns that were collected during the ptychography scan are addressed in a random sequence in the algorithm.
In the first step the guessed scattered wave is calculated from the (updated or initial) shifted probe and the object functions at iteration j:
Then Fourier transform is applied:
In the next step, the modulus of the scattered wave in the Fourier space (ψ j (q)) is replaced by the modulus obtained from the measured, corresponding diffraction pattern ( I j (q)):
The updated scattered wave is calculated with the inverse Fourier transform:
Finally the object and probe functions are updated by adding the weighted correction of the wavefront to the guess wavefront. This is expressed by the following two equations:
(A.12)
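A compact way to picture these five steps is the following numpy sketch of one ePIE sweep. It is only an illustration: the array names, the probe positions and the update weights alpha and beta are assumptions made here, not the code used at the beamline.

import numpy as np

def epie_sweep(obj, probe, positions, patterns, alpha=1.0, beta=1.0):
    """One sweep of the ePIE update over all measured diffraction patterns (illustrative)."""
    ny, nx = probe.shape
    for j in np.random.permutation(len(patterns)):
        py, px = positions[j]                      # shift R_j of the probe on the object
        view = obj[py:py+ny, px:px+nx]             # illuminated part of the object
        psi = probe*view                           # guessed exit wave psi_j(r)
        Psi = np.fft.fft2(psi)                     # propagation to the detector plane
        Psi = np.sqrt(patterns[j])*np.exp(1j*np.angle(Psi))   # impose the measured modulus sqrt(I_j)
        psi_new = np.fft.ifft2(Psi)                # updated exit wave, back in real space
        diff = psi_new - psi
        obj[py:py+ny, px:px+nx] += alpha*np.conj(probe)/np.max(np.abs(probe))**2*diff
        probe += beta*np.conj(view)/np.max(np.abs(view))**2*diff
    return obj, probe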
The Fourier constraint is applied where (just as in the ePIE), the modulus of the Fourier transformed wave (ψ j (q) = F[ψ j (r)]) is replaced with the modulus obtained from the measured diffraction patterns. The associated projection (Π F ) takes the following form: Π F (Ψ(q)) : ψ j (q) → ψ F j (q) = I j (q) ψ j (q) |ψ j (q)| , ∀j.
(A.17)
The overlap projection is determined from the minimisation of distance |Ψ(r) -Ψ O (r)| 2 , where Ψ O (r) = { P (r -R j ) Ô(r)}. Thus, the following equation needs to be minimised with respect to P and Ô:
which defines the overlap projection (Π O ):
Setting the derivative of equation A.18 to 0, the solution for the minimum is the following equation system:
(A.20)
O and P cannot be uncoupled analytically. When both the object and the probe are unknowns, the above equations (A.20) are applied in turn to update Ψ.
Using the projections defined in equations A.17 and A.19, the reconstruction is implemented using the following update rule [START_REF] Veit Elser | Phase retrieval by iterated projections[END_REF]:
The convergence is monitored by the difference map error:
The aim is to minimise the difference map error.
A.3.3 Sensitivity of the phase to atomic displacement
Ptychography on rhenium was executed in the Bragg geometry, as shown in figure A.4. This technique is known as Bragg Projection Ptychography (BPP). Intensity distribution from a perfect crystal, where all the atoms are in their ideal positions (r 0 ), is a periodic function of the reciprocal space coordinates, with Bragg peaks at positions defined by the crystal. The intensity distributions are also symmetric and identical around each Bragg peak. Most crystals, however, are not perfect, and non-symmetric Bragg peaks are often observed.
In a strained crystal atoms are displaced from their ideal positions. A new position is given as r = r 0 + u(r 0 ), where u(r 0 ) is the displacement. Substituting r in the amplitude in equation A.2, the phase in the vicinity of a scattering vector g becomes the following:
For small displacements (q -g) • u(r 0 ) 2π and the third term can be neglected [START_REF] Dupraz | Coherent X-ray diffraction applied to metal physics[END_REF]. Thus the scattered amplitude is:
where the modified atomic form factor is
depth is approximately 4 µm, which, taking the angle of incidence into account, gives a probing depth of 1 µm. The detector consists of 3 modules arranged horizontally. Each module contains 2x8 chips, and each chip has 256x256 pixels. This results in a 1536x2048 image. The size of one pixel is 55x55 µm 2 [START_REF] Rau | Micro-and nanoimaging at the diamond beamline i13l-imaging and coherence[END_REF].
The X-ray beam was focused onto the sample using a Fresnel zone plate. The spot size was approximately 1 µm horizontally and 0.5 µm vertically. The horizontal size of the footprint of the beam on the sample is unchanged, while the vertical size is increased to 0.5 µm / sin(17.2°) = 1.7 µm.
A.5 Bragg ptychography on rhenium
Ptychography has been successfully used to study displacement fields in a wide range of materials. Dzhigaev et al. combined finite element method simulations with ptychography to obtain the 3D strain distribution in InGaN/GaN core-shell nanowires, and showed asymmetry in the strain relaxation [START_REF] Dzhigaev | X-ray bragg ptychography on a single InGaN/GaN core-shell nanowire[END_REF]. Using 3D ptychography Yau et al. observed grain boundary and dislocation dynamics in individual gold grains of a polycrystalline thin film while the sample was subject to heating [START_REF] Yau | Bragg coherent diffractive imaging of single-grain defect dynamics in polycrystalline films[END_REF]. Hruszkewycz
result of the hot filament, Q E has to be included in the equations for experiments when electron bombardment is applied. W refers to the tungsten backing on the substrate. SRe is the substrate and rhenium, which is considered as one unit, and conduction of heat through the substrate (Q C ) is included in the model. Finally, B denotes the wall of the vacuum chamber ('bâtiment'), which is at room temperature.
In equilibrium the heat exchange between the surfaces (Q) are equal, and can be expressed using the radiosities:
These three equations give the equation system that needs to be solved. The three unknowns are Q, the temperature of the tungsten layer (T_W), and the temperature of the substrate-rhenium (T_SRe). The radiosities need to be expressed only as a function of the unknowns and material parameters.
In equilibrium the heat exchange on the surfaces are also Q, and can be expressed using the irradiance and the radiosity:
Using equations (D.6) and (D.15):
Obtaining the radiosity of the substrate-rhenium is a little more difficult, because due to the non-zero transmittance, the irradiances of the two surfaces mix.
An equation that describes the relationship between the irradiances on the two sides can be obtained by subtracting (D.8) from (D.7) and substituting (D.10) and (D.11):
Now the irradiance of one side of the substrate-rhenium can be expressed with the radiosity of the same side.
Expressing H b SRe from (D.20) and substituting it in equation (D.10):
where, analogous to opaque objects, the notation R SRe = ρ SRe -τ SRe SRe +2τ SRe was used.
Next H W SRe is expressed from (D.20) and substituted in equation (D.11):
Using (D.7) and substituting (D.21) for H W SRe , the final expression for B W SRe is obtained:
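Once all radiosities are written in terms of Q, T_W and T_SRe, the three balance equations form a small nonlinear system that is best solved numerically. The sketch below only shows the structure of such a solution with scipy; the residual functions and the numbers are placeholders standing in for the actual expressions (D.x), not the expressions themselves.

import numpy as np
from scipy.optimize import fsolve

sigma = 5.670e-8                     # Stefan-Boltzmann constant (W m^-2 K^-4)

def residuals(unknowns, power_in, geom, T_B):
    """Placeholder residuals: each entry stands for one balance equation, written as LHS - RHS."""
    Q, T_W, T_SRe = unknowns
    r1 = Q - power_in                              # stand-in for the heater/filament balance
    r2 = Q - geom*sigma*(T_W**4 - T_SRe**4)        # stand-in for the exchange W <-> substrate-Re
    r3 = Q - sigma*(T_SRe**4 - T_B**4)             # stand-in for the exchange substrate-Re <-> walls
    return [r1, r2, r3]

Q, T_W, T_SRe = fsolve(residuals, x0=[50.0, 1500.0, 1200.0],
                       args=(50.0, 0.8, 300.0))   # illustrative numbers only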
E
Python scripts
In this appendix a selection of scripts and functions that were written in the Python environment to calculate, treat, or simulate the data detailed in this work are presented. The modified interference function
E.1 X-ray diffraction
The interference function, given in equation 1.23, was modified to account for the disorder in the film. A Gaussian distribution of lattice parameters was introduced. The mathematical formula of the thus modified interference function is given in equation 2.17, and is defined below.

    d_sum = 0
    for n in range(int(round(N, 0))):
        d_act = d0 + dw*np.random.normal(0.0, 1, 1)
        d_sum = d_sum + d_act
        d_dist.append(d_act)
        f = f + np.exp(-1j*xq*d_sum)
    intensity = f*np.ma.conjugate(f)
    inten = c*intensity/np.max(intensity)
    return [np.array(np.real(inten)), d_dist]
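A possible way to wrap and call the body above is sketched here. The signature interference(xq, N, d0, dw, c) and the numerical values of dw, c and the xq grid are assumptions made for illustration (N and d0 are taken from table 2.8); the original function header is not reproduced.

import numpy as np

def interference(xq, N, d0, dw, c):
    """Modified interference function: N lattice planes with spacings drawn from a
    Gaussian of mean d0 and width dw (sketch of a wrapper around the body above)."""
    f = np.zeros_like(xq, dtype=complex)
    d_dist = []
    d_sum = 0
    for n in range(int(round(N, 0))):
        d_act = d0 + dw*np.random.normal(0.0, 1, 1)
        d_sum = d_sum + d_act
        d_dist.append(d_act)
        f = f + np.exp(-1j*xq*d_sum)
    intensity = f*np.conjugate(f)
    inten = c*intensity/np.max(intensity)
    return [np.array(np.real(inten)), d_dist]

# example call on an illustrative grid of scattering variables around the (002) peak
xq = np.linspace(27.5, 28.9, 2000)
curve, spacings = interference(xq, N=448, d0=0.2230, dw=0.0005, c=1.0)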
E.1.3 Simulation of the high resolution data
In section 2.2.2 the standard resolution data was simulated from the fit of the high resolution data. Equation 2.20 describes this operation mathematically, and the script is given below.

import numpy as np
import scipy.signal as signal
E.2 Functions for Mullins' thermal grooving
Below, the script that was used to describe the surface profile that develops during thermal grooving are presented. The theory was detailed in section 1.5.4, and it was applied to a rhenium film in section 2.3.
E.2.1 Evaporation-condenstation
import numpy as np
import math
import scipy.special

# Defining the integral error function:
def ierfc(x):
    "Returns the integral error function."
    r = np.multiply(x, scipy.special.erf(x)) \
        + np.multiply(1./np.sqrt(np.pi), np.exp(-1*np.square(x))) - x
    return r

num_of_x = 50000
datax = np.linspace(0, 18000, num_of_x)        # profile line
tt = np.array([5., 30., 90., 120., 180.])*60   # times in seconds
beta = math.radians(5)                         # beta in radians
mm = np.tan(beta)
AA = 5e3
At2 = 2*np.sqrt(np.multiply(AA, tt))
We start with a brief introduction to artificial gauge fields in ultracold atomic gases, motivating our theoretical study. Then, we provide a description of the classical field Monte Carlo methods, used to obtain the main, new results of the thesis, presented in chapters three and four. Our conclusions together with perspectives for future work are summarized in chapter five.
Les oiseaux de passage, from La chanson des gueux (1876) by Jean Richepin; poem performed by Georges Brassens.
The very first person I wish to thank is the director of this thesis, Markus Holzmann. I feel very lucky as I had the privilege of working alongside him, grasping every answer he gave to my too often naive questions. Markus truly is a remarkable supervisor, inside and outside the office, and I hope that he will continue to share his knowledge in his pleasant, passionate and easy way.
I would also like to thank all the members of the jury, for their expertise, time and kindness. My very last PhD physics discussion, during the defense, will be a lasting memory that I will enjoy remembering thanks to each member of the jury.
I thank the president of the jury Simone Fratini for his perfect french accent among many others things, as a representative of my university the Communauté Université Grenoble Alpes. I would like also to deeply thank the two rapporteurs Jordi Boronat and Patrizia Vignolo for their time and work. Their comments were especially important to me. Indeed, while writing the manuscript a PhD student needs to focus on all its defects aiming on how to improve the thesis. Their constructive and positive feedbacks were then very precious to me. I would like to thank the two examiners Stefano Giorgini and Tommaso Roscilde with whom I already had very interesting discussions on the topic of this thesis.
My gratitude goes also to my former internship supervisor Anna Minguzzi who is the current director of the laboratory LPMMC as she introduced me both to the laboratory and to Markus. The very first director of the laboratory that I met was Frank Hekking who was also my teacher, I would particularly like to thank him.
Lastly I wish to thank the University UGA for giving me the opportunity of doing my PhD at the LPMMC. I wish here to thank all the members of this University that were very helpful from my very first year in Grenoble, in particular Arnaud Ralko and Thierry Klein.
I would also like to thank the entire computer science teaching team of the UPMF (later UGA), who gave me the opportunity to teach during these three years of my thesis. It was a very enriching experience, full of wonderful encounters such as Nathalie Denos and Dominique Vaufreydaz.

I would then like to warmly thank all of my laboratory colleagues - teachers, administrative staff, researchers, research engineers, IT staff, interns, post-docs and PhD students - in particular Malo, Aleksandr, Guillaume, Van-Duy, Jose Maria, Natalia, Ivan, Katarina, Antton, Steven, Li-Jing, Florian and Davide. Thank you for our great discussions and for accompanying me throughout these years. This thesis also owes a great deal to the "old" members of the laboratory who are now close friends. I wish the very best of luck to Daniel and Maria, whom I thank for their friendship. Thanks also to the legendary trio of Italians, Angelo, Marco and Francesco. Marco the philosopher and the fearless Francesco passed on countless pieces of advice that have already proved useful for what follows this thesis. Thanks also to the new generation of PhD students, Evrard-Ouicem, Etienne and Nicolas, with whom I shared the last part of my thesis. I wish you all the best for the future and that all your projects come true. Finally, I was accompanied during these years by the laboratory administrators Habiba and Claudine, whom I wish to thank. Thank you Habiba for those long discussions and laughs, during which your kindness and spontaneity allowed me to remain myself.

I thank my parents for their encouragement and their faith in me; I was very happy and moved to share this symbolic moment with you, in spite of life's turmoil. Speaking of family, a big thank you to my brother in thought and philosopher friend Benjamin Le Roux, to whom I wish all the best for the future. Océane, you are present and accompany me on every page of this thesis, and it is impossible for me to summarize my gratitude towards you. I will simply thank you for the smile you give me every day.

To conclude, I thank my country and its inhabitants for the university education I was offered, the CROUS which supported me in a thousand ways, and the CAF which made it possible for me to have housing during my studies, to name only a few examples. Finally, I would like to end these acknowledgments by expressing my gratitude to all those without whom I could never have lived this adventure. Thank you for everything, and see you very soon.
Chapter 0
Overview

Spin-orbit coupling links a particle's velocity to its quantum-mechanical spin, and it is essential in numerous condensed matter phenomena. Recently, in ultracold atomic systems [START_REF] Dalibard | Colloquium[END_REF], highly tunable synthetic spin-orbit couplings have been engineered enabling unique features and new physical phenomena. Spin-orbit coupled Bose gases present a notable example raising fundamentally new questions [START_REF] Lin | Spin-orbit-coupled boseeinstein condensates[END_REF][START_REF] Jiménez-García | Tunable spin-orbit coupling via strong driving in ultracold-atom systems[END_REF][START_REF] Galitski | Spin-orbit coupling in quantum gases[END_REF][START_REF] Wu | Realization of two-dimensional spin-orbit coupling for bose-einstein condensates[END_REF]. For instance, a pioneering experiment at NIST achieved a spin-orbit coupled Bose gas and Bose-Einstein condensation [START_REF] Lin | Spin-orbit-coupled boseeinstein condensates[END_REF]. Indeed, the realization of a pseudo-spin one-half Bose gas was achieved by selecting two internal states of the atoms and by coupling them through Raman processes.
At the mean-field level, spin-orbit coupling (SOC) introduces degenerate ground states expected to enhance fluctuation effects and giving rise to new, exotic quantum phases. However, the occurrence and nature of finite temperature transitions in bosonic systems have not yet been fully established [START_REF] Ozawa | Stability of ultracold atomic bose condensates with rashba spin-orbit coupling against quantum and thermal fluctuations[END_REF][START_REF] Ozawa | Condensation transition of ultracold bose gases with rashba spin-orbit coupling[END_REF][START_REF] Zhai | Degenerate quantum gases with spin-orbit coupling: a review[END_REF][START_REF] Chao | Paired superfluidity and fractionalized vortices in systems of spin-orbit coupled bosons[END_REF][START_REF] Liao | Spin-orbitcoupled bose gases at finite temperatures[END_REF].
In this thesis, we determine the finite-temperature phase diagram of a twodimensional interacting Bose gas with two hyperfine (pseudospin) states coupled via a Rashba-Dresselhaus spin-orbit interaction using classical field Monte Carlo calculations.
First, we review the results of mean-field calculations [START_REF] Zhai | Degenerate quantum gases with spin-orbit coupling: a review[END_REF][START_REF] Chao | Paired superfluidity and fractionalized vortices in systems of spin-orbit coupled bosons[END_REF][START_REF] Liao | Spin-orbitcoupled bose gases at finite temperatures[END_REF][START_REF] Ho | Bose-einstein condensates with spin-orbit interaction[END_REF][START_REF] Wang | Spin-orbit coupled spinor bose-einstein condensates[END_REF] that indicate a Bose condensed ground state strongly dependent on the anisotropy of the interparticle interactions. At zero temperature, we expect exotic ground states formed by either a single plane wave with non-vanishing momentum or a linear superposition of two plane waves with opposite momenta, called plane wave state (PW) and stripe phase (SP), respectively. For spin-independent interaction between atoms, PW and SP remain degenerate at the mean-field level.
We then explore the phase diagram using classical field Monte Carlo calculations, and present the main results of the thesis. Classical field Monte Carlo simulations provide a numerical method to accurately describe continuous phase transitions of Bose gases at finite temperatures. We have adapted this method to perform simulations of interacting Bose gases with SOC. In two spatial dimensions, we show that for anisotropic SOC, the systems undergoes a Kosterlitz-Thouless phase transition from a normal to superfluid state. In the superfluid state, the single particle density matrix decays algebraically and directly reflects the PW/SP character of the mean-field ground state. In the limit of isotropic interparticle interaction, the PW/SP degeneracy is unaffected by the transition and fragmentation of the condensate occurs [START_REF] Kawasaki | Finite-temperature phases of twodimensional spin-orbit-coupled bosons[END_REF].
In the case of isotropic SOC, we show that the transition temperature decreases with increasing system size due to the increasing number of degenerate mean-field ground states and eventually vanishes in the thermodynamic limit. Our simulations show that the circular degeneracy of the single-particle ground state destroys the algebraic ordered phase. No superfluid transition is then expected in the thermodynamic limit.
Introduction and Summary

Spin-orbit coupling, essential in numerous condensed-matter phenomena, couples a particle's velocity to its own spin. Recently, in the field of ultracold atoms [START_REF] Dalibard | Colloquium[END_REF], finely tunable synthetic spin-orbit couplings have been engineered. They have since opened the way to unique devices and to the discovery of new physical phenomena. Spin-orbit coupled Bose gases are a major example of these new directions, raising unprecedented questions [START_REF] Lin | Spin-orbit-coupled boseeinstein condensates[END_REF][START_REF] Jiménez-García | Tunable spin-orbit coupling via strong driving in ultracold-atom systems[END_REF][START_REF] Galitski | Spin-orbit coupling in quantum gases[END_REF][START_REF] Wu | Realization of two-dimensional spin-orbit coupling for bose-einstein condensates[END_REF]. For instance, a pioneering experiment at the NIST laboratory achieved a spin-orbit coupled Bose gas as well as Bose-Einstein condensation in this system [START_REF] Lin | Spin-orbit-coupled boseeinstein condensates[END_REF]. The gas of bosons with pseudo-spin 1/2 was realized by selecting two internal states of each atom and coupling them through a Raman process.

Within the mean-field approximation, spin-orbit coupling introduces a strong degeneracy of the ground state, which would give rise to exotic quantum phases and would also allow fluctuation effects to play a prominent role. However, the appearance, existence and nature of finite-temperature transitions in a system of bosons with spin-orbit coupling are not yet firmly established [START_REF] Ozawa | Stability of ultracold atomic bose condensates with rashba spin-orbit coupling against quantum and thermal fluctuations[END_REF][START_REF] Ozawa | Condensation transition of ultracold bose gases with rashba spin-orbit coupling[END_REF][START_REF] Zhai | Degenerate quantum gases with spin-orbit coupling: a review[END_REF][START_REF] Chao | Paired superfluidity and fractionalized vortices in systems of spin-orbit coupled bosons[END_REF][START_REF] Liao | Spin-orbitcoupled bose gases at finite temperatures[END_REF].

The aim of this thesis is to determine the finite-temperature phase diagram of a two-dimensional interacting Bose gas with two hyperfine (pseudospin) states coupled through a Rashba-Dresselhaus spin-orbit interaction, using classical field Monte Carlo calculations.

First, we examine the results of mean-field calculations [START_REF] Zhai | Degenerate quantum gases with spin-orbit coupling: a review[END_REF][START_REF] Chao | Paired superfluidity and fractionalized vortices in systems of spin-orbit coupled bosons[END_REF][START_REF] Liao | Spin-orbitcoupled bose gases at finite temperatures[END_REF][START_REF] Ho | Bose-einstein condensates with spin-orbit interaction[END_REF][START_REF] Wang | Spin-orbit coupled spinor bose-einstein condensates[END_REF], which indicate a condensed ground state strongly determined by the anisotropy of the interparticle interactions. At zero temperature, exotic phases are expected, formed either by a single plane wave with non-zero momentum or by a linear superposition of two plane waves with opposite momenta, called respectively the plane-wave state (PW) and the stripe phase (SP). For interactions independent of the spin of each atom, the PW and SP states remain degenerate at the mean-field level.

We then explore the phase diagram using classical field Monte Carlo calculations and present the main results of this thesis. Classical field Monte Carlo simulations provide a method that accurately describes continuous phase transitions of Bose gases at finite temperature. We have adapted this method in order to perform simulations of an interacting Bose gas with spin-orbit coupling. In two spatial dimensions, we show that, for anisotropic spin-orbit coupling, the system undergoes a Kosterlitz-Thouless phase transition separating a normal phase from a superfluid phase. In the superfluid phase, the one-body density matrix decays algebraically and directly reflects the PW/SP character predicted by mean-field theory for the ground state. In the limit of fully isotropic interparticle interactions, the degeneracy between the PW and SP states is not affected by the transition; a fragmentation of the quasi-condensate then occurs [START_REF] Kawasaki | Finite-temperature phases of twodimensional spin-orbit-coupled bosons[END_REF].

In the case of isotropic spin-orbit coupling, we show that the transition temperature decreases with the system size because of the growing number of ground states describing the degenerate minimum of the mean-field energy. This temperature then tends to zero in the thermodynamic limit. Our simulations show that the circular degeneracy of the single-particle dispersion curve destroys algebraic order and hence the ordered phase. No transition towards a superfluid phase is then expected in this case at finite temperature in the thermodynamic limit.

"… new chapter in atomic and molecular physics, in which particle statistics and their interactions, rather than the study of single atoms or photons, are at center stage."
Chapter 1
Extracted from [START_REF] Bloch | Many-body physics with ultracold gases[END_REF].
Why simulate gauge fields in ultra-cold atoms ?
Trapped cold atoms usually have a neutral charge. However, magnetic phenomena are of a particular interest in quantum mechanics. They appear for example in spin Hall effects [START_REF] Kato | Vortex pairs in a spin-orbitcoupled bose-einstein condensate[END_REF][START_REF] König | Quantum spin hall insulator state in hgte quantum wells[END_REF], spin-orbit coupling [START_REF] Yu | Oscillatory effects and the magnetic susceptibility of carriers in inversion layers[END_REF][START_REF] Dresselhaus | Spin-orbit coupling effects in zinc blende structures[END_REF], Aharonov-Bohm effect, Hofstadter butterfly physics and topological insulators [19,[START_REF] Bernevig | Quantum spin hall effect and topological phase transition in hgte quantum wells[END_REF][START_REF] Hsieh | A topological dirac insulator in a quantum spin hall phase[END_REF] when a charged particle interacts with a magnetic field. In order to better understand these effects in situations where particle statistics and interactions are important, various propositions to enable such physics in ultra-cold gases were explored. Very recently new ways of creating artificially synthetic magnetic couplings have been found [START_REF] Lin | Synthetic magnetic fields for ultracold neutral atoms[END_REF]. This thesis focuses on the interplay between collective behavior and single particle magnetic phenomena in ultra-cold gases. In particular, we study the effect of a spin-orbit coupling in a homogeneous Bose gas with repulsive hardcore two-body interaction.
In the following we will first sketch how magnetic phenomena and artificial gauge fields can be created in single atoms, then we will describe the collective behavior of homogeneous bosonic gases in the presence of this new coupling. In the next chapters we will then focus on the interplay between the two-body interaction and the single particle spin-orbit coupling (SOC) spectrum.
Artificial gauge fields
A free non-relativistic particle of mass m with a charge q coupled to a magnetic field B = ∇ × A with A being the vector potential, is described by the Hamiltonian,
H = 1 2m p - q c A 2 (1.1)
where c is the speed of light and p is the canonical momentum operator. In most experimental systems, trapped ultra-cold atoms are neutral (q = 0) and do not naturally couple to electromagnetic fields. In the next section we will very briefly introduce how to create artificial gauge fields that mimics the effect of electromagnetic fields on neutral atoms.
Rotating gas
Artificial gauge fields were studied very early in trapped gases [START_REF] Ho | Local spin-gauge symmetry of the bose-einstein condensates in atomic gases[END_REF]. A standard way of creating strong artificial magnetic field is by rotating a neutral particle system which is equivalent to placing them in a magnetic field proportional to the rotation vector Ω with the appearance of additional terms.
p 2 2m -Ω(r × p) = p -mΩ × r 2 2m - 1 2 m(Ω × r) 2 (1.2)
The artificial magnetic field produced through rotation is necessarily uniform. This idea has shown to be very effective to study the creation of vortex lattices in BECs [START_REF] Bloch | Many-body physics with ultracold gases[END_REF].
Raman induced gauge field
Here we consider a toy model of a three-level atom coupled to two lasers in order to give a little insight on how induced transitions can simulate artificial gauge fields. This toy model is presented from the reference [START_REF] Dalibard | Colloquium[END_REF], which contains a more accurate and detailed description of artificial gauge field in ultracold atoms. As in the rotational gas case we will aim to cast the Hamiltonian in a form like Eq. (1.1), we will however not give an accurate explanation of the experimental realization itself. section. The transfer of momentum can be realized with no change in energy for ω a = ω b , such that the two ground states of energies E 1 and E 2 can be equally populated and considered degenerate, E 1 = E 2 , as it will be the case in the rest of this thesis.
In the rotating frame we can extract an effective Hamiltonian that does not depend on the excited state but only on |g 1 〉 and |g 2 〉. Raman transitions do not necessary populate the excited state (for large detuning ∆ e ) which is very convenient in cold atoms experiments. The reduced Hamiltonian for this effective two-level system is then a 2 × 2 matrix and writes in this basis,
H = ħ 2 ∆ κ * κ -∆ (1.3)
The effective Rabi frequency and the Raman mismatch write
κ = κ a κ * b 2∆ e ∆ = ħ(ω a -ω b ) -(E 2 -E 1 ) (1.4)
with κ a and κ b the Rabi frequencies corresponding to the two lasers of frequencies ω a and ω b . The light intensity is proportional to |κ| 2 . In order to extract the gauge field term appearing in the Hamiltonian, we write Eq. (1.3) in the a general form
H = ħ 2 cos(θ) e -i φ sin(θ) e i φ sin(θ) -cos(θ) (1.5)
with the generalized Rabi frequency Ω = (∆² + |κ|²)^{1/2}, the mixing angle tan(θ) = |κ|/∆ and the phase φ from κ = |κ|e^{iφ}. In the basis defined by the eigenvectors |ψ_-⟩ and |ψ_+⟩, the two energies write E_± = ±ħΩ/2.
|ψ_-⟩ = ( sin(θ/2), -e^{iφ} cos(θ/2) )ᵀ ,
|ψ_+⟩ = ( cos(θ/2), e^{iφ} sin(θ/2) )ᵀ .   (1.6)
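These expressions are easy to verify numerically; the following short sketch (with arbitrary illustrative values of ∆, |κ| and φ) diagonalizes the matrix of Eq. (1.3) and recovers E_± = ±ħΩ/2:

import numpy as np

hbar = 1.0
Delta, kappa_mod, phi = 0.7, 1.3, 0.4          # arbitrary illustrative values
kappa = kappa_mod*np.exp(1j*phi)

H = 0.5*hbar*np.array([[Delta, np.conj(kappa)],
                       [kappa, -Delta]])       # two-level Hamiltonian of Eq. (1.3)

E, V = np.linalg.eigh(H)                       # eigenvalues in ascending order
Omega = np.sqrt(Delta**2 + kappa_mod**2)       # generalized Rabi frequency
print(E, (-0.5*hbar*Omega, 0.5*hbar*Omega))    # the two pairs agree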
Up to now Eq. (1.3) only described the internal degree of freedom of an atom. In order to describe the spatial degree of freedom of the whole atom we treat the internal (electronic) state in the adiabatic approximation.
In the adiabatic approximation Ψ(r, t ) = φ -(r, t ) |ψ -(r)〉 the equation of evolution for an atom in its internal ground state E -can be written in the form like Eq. (1.1)
i ħ ∂φ - ∂t = (p -A -(r)) 2 2M + E -(r) + ν -(r) φ -(r, t ) (1.7)
Where ν -(r) = ħ 2 8M (∆θ) 2 + sin 2 θ(∆φ) 2 and the vector potential is defined by
A -(r) = i ħ 〈ψ -|∇ψ -〉 (1.8)
Creating an artificial magnetic field, this causes a shearing of the atomic cloud and allows the entry of vortices into the BEC as presented in figure 1.2. In contrast to the case of a rotational gas which was equivalent to a uniform magnetic field, the coupling κ in Eq. (1.3) may depend on the position of the atom and on the shape of the laser beam. The two plane waves created by the two lasers can be each tuned independently described as κ a/b (r) = κ 0 e i 2k a/b •r where 2k a/b is the momentum transfered to the atom. During a Raman process, the single atom then acquires in total the difference of the two momenta carried from the two photons.
Raman induced SOC
"Thus, as first put forward by Higbie and Stamper-Kurn [START_REF] Higbie | Periodically dressed bose-einstein condensate: A superfluid with an anisotropic and variable critical velocity[END_REF], Raman transitions can provide the required velocity-dependent link between the spin [the internal state of the atom] and momentum: because the Raman lasers resonantly couple the spin states together when an atom is moving, its Doppler shift effectively tunes the lasers away from resonance, altering the coupling in a velocity-dependent way."
Extracted from [START_REF] Galitski | Spin-orbit coupling in quantum gases[END_REF] It is then possible to simulate different shapes of the vector potential A and tune each component differently. When the vector potential components A = A x , A y , A z do not commute, the gauge potential is called non-abelian. One particular example of non commuting components are Pauli matrices which give rise to SOC terms.
Generating non-abelian gauge potential is of a higher level of difficulty and it relies on multipod configuration (multiple low energy states).
In summary, it is possible to generate a vector potential A such that,
H = p -A 2 2M (1.9)
where the components of A do not necessary commute. In particular, in this thesis, we will study the case A ∝ σ x , σ y , 0 T where σ x and σ y are the x and y Pauli spin matrices acting on a two level atom (figure 1.1) [START_REF] Jiménez-García | Tunable spin-orbit coupling via strong driving in ultracold-atom systems[END_REF].
Ultra-cold SOCed systems have been of a great interest recently from both experimental and theoretical point of view [START_REF] Jiménez-García | Tunable spin-orbit coupling via strong driving in ultracold-atom systems[END_REF][START_REF] Lin | Synthetic magnetic fields for ultracold neutral atoms[END_REF][START_REF] Lin | Bose-einstein condensate in a uniform light-induced vector potential[END_REF]. Indeed extremely tunable SOC are a good playground for new accessible questions to arise. Some examples of phenomena that do not exist in standard condensed matter systems are listed here below.
• What is the collective behavior of SOCed bosons?
• How does SOC change when the spin is S > 1/2?
• What is the interplay between interactions and SOC for bosons?
After a more precise definition of our SOC Hamiltonian, we will study the single particle spectrum of SOCed atoms and then ask again the questions arising from these new features.
Rashba-Dresselhaus spin-orbit coupling
Spin-orbit coupling in condensed matter
For electrons in atoms or solid, the spin-orbit coupling is naturally present due to relativistic corrections. When a charged particle is moving in an electric field, the particle in its reference frame sees a magnetic field which couples to the internal magnetic moment (spin).
Different physical realizations produce different types of SOC with different symmetries. In two dimensions they are generally regrouped in two classes called Rashba [START_REF] Yu | Oscillatory effects and the magnetic susceptibility of carriers in inversion layers[END_REF] and Dresselhaus [START_REF] Dresselhaus | Spin-orbit coupling effects in zinc blende structures[END_REF] SOC. These couplings were originally studied in the context of two dimensional semiconductors.
In the context of ultra-cold gases, the SOC is not a relativistic correction to the electronic energy levels but arises as an effective description for the hyperfine states of the atom and its coupling to the laser fields via the atomic momentum. The form and the strength of the SOC are experimentally tunable and therefore their study is distinct and often far from the SOC studies based on electrons in standard condensed matter. We quickly define below the terminology used in the context of ultra-cold atoms where different types of SOC are experimentally achievable [START_REF] Jiménez-García | Tunable spin-orbit coupling via strong driving in ultracold-atom systems[END_REF][START_REF] Campbell | Realistic rashba and dresselhaus spin-orbit coupling for neutral atoms[END_REF].
Rashba and Dresselhaus couplings
The general Rashba-Dresselhaus coupling is defined as
H RD = ħ 2 κ m p x σ x + η soc ħ 2 κ m p y σ y
where κ corresponds to the strength of the SOC. The real scalar η soc characterizes the anisotropy/isotropy of the SOC in the x-y plane which can be experimentally controlled by coupling lasers with different amplitudes in the x-y plane. As a definition, the pure Rashba case corresponds to an isotropic SOC with η soc = 1. Otherwise for 0 ≤ η soc < 1 the anisotropic coupling is called Rashba-Dresselhaus. In the next section we will draw the energy spectrum of the free particle Hamiltonian including the general SOC and discuss its dependency on the parameter η soc .
Non interacting SOCed Bose gases
Before discussing the implications of including SOC in the Hamiltonian for interacting many body systems, let us first study the single particle spectrum of the ideal (non interacting) SOCed atoms. We introduce in this section the eigenbasis of the non interacting Hamiltonian given by a linear superposition of spin up and down states.
Diagonal form
In this thesis we will consider bosons with two internal degrees of freedom coupled by a Rashba-Dresselhaus SOC in a homogeneous system. In the second quantization basis the field creator operator writes
Ψ † (r) = 1 V k e i k•r Ψ † k (1.10)
We define û † k and d † k as creation operator of a spin up and down particle of momentum k.
Ψ † k = û † k , d † k (1.11)
In a homogeneous system and in presence of a SOC, the Hamiltonian takes the form
Ĥ0 = k Ψ † k ħ 2 k 2 + ħ 2 κ 2 2m I 2 + ħ 2 κ m k x σ x + η soc ħ 2 κ m k y σ y Ψk (1.12)
with I_2 the 2 × 2 identity matrix. The constant term ħ²κ²/2m is added for convenience in order to set the absolute minimum of the energy to zero. The following results are of course completely independent of it. In matrix form, the Hamiltonian Ĥ_0 writes

Ĥ_0 = (1/V) Σ_k Ψ̂†_k M(k) Ψ̂_k   (1.13)
For simplicity we set ħ = m = 1 in the following. The matrix M (k) writes
M (k) = k 2 /2 + κ 2 /2 κ(k x -i η soc k y ) κ(k x + i η soc k y ) k 2 /2 + κ 2 /2 = k 2 /2 + κ 2 /2 κk ⊥ e -i θ k κk ⊥ e +i θ k k 2 /2 + κ 2 /2 (1.14)
where we have written the off-diagonal terms as
κ(k x ± i η soc k y ) = κk ⊥ e ±i θ k (1.15)
or
e ±i θ k = k x ± i η soc k y k ⊥ k ⊥ = k 2 x + η 2 soc k 2 y (1.16)
Diagonalizing M (k), we obtain
Ĥ0 = k,σ=+,- σ (k) Φσ † k Φσ k (1.17)
where the energies of the two branches write
ε_±(k) = k²/2 + κ²/2 ± |κ k_⊥| = [ (k_⊥ ± κ)² + (1 - η²_soc) k_y² + k_z² ] / 2   (1.18)
The new field operators write
Φ̂_{±,k} = ( û_k ± e^{iθ_k} d̂_k ) / √2   (1.19)
Each energy eigenfunction is composed of the two spin states at the corresponding momentum with equal amplitude but a momentum-dependent relative phase. Figure 1.3 shows the single particle energy spectrum of the ideal gas for two different values of η_soc.
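The structure of the two branches, and in particular the location of the minima of ε_-, can be checked with a few lines of numpy (ħ = m = 1, an arbitrary κ, and the two values of η_soc used in figure 1.3):

import numpy as np

def epsilon_minus(kx, ky, kappa, eta):
    """Lower SOC branch of Eq. (1.18) in two dimensions, with hbar = m = 1."""
    kperp = np.sqrt(kx**2 + (eta*ky)**2)
    return 0.5*(kx**2 + ky**2) + 0.5*kappa**2 - kappa*kperp

kappa = 1.0
kx, ky = np.meshgrid(np.linspace(-2, 2, 401), np.linspace(-2, 2, 401))

for eta in (1.0, 0.7):
    e = epsilon_minus(kx, ky, kappa, eta)
    i = np.unravel_index(np.argmin(e), e.shape)
    print(eta, kx[i], ky[i], e[i])
# for eta = 1 the minimum value 0 is reached on the whole ring |k| = kappa,
# for eta = 0.7 it is reached only at (kx, ky) = (±kappa, 0)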
Energy spectrum
These last years in the context of ultra-cold atoms [START_REF] Galitski | Spin-orbit coupling in quantum gases[END_REF][START_REF] Zhai | Degenerate quantum gases with spin-orbit coupling: a review[END_REF], both fermionic and bosonic systems were studied in presence of these energy spectra.
In the case of bosons, at low temperature a large fraction of particles occupy the lowest energy state and can give rise to the phase transition known as Bose Einstein condensation (BEC). The absolute minimum of the energy is the key feature for the physics of bosons at low temperature which is strongly affected by the presence of a SOC term. For the pure Rashba gauge potential η soc = 1 the dispersion curve minimum is a constant nonzero radius in the momentum space as shown in the left figure 1.3. This corresponds to a massive degeneracy of the single-particle ground level. In this particular case, as we will see, Bose-Einstein condensation does not necessary occur and a highly correlated ground state may be expected in the presence of a contact interaction.
The existence of this degeneracy strongly determines the behavior of such bosonic systems. The simplest way to break this symmetry explicitly is to change the intensity of the two Raman lasers, e.g. changing the amplitude of their plane waves in the x-y plane, leading to η_soc ≠ 1, as shown in the right panel of figure 1.3.
What happens to the BEC scenario in presence of a SOC?
Ideal Bose gas in 3D
In three dimensions the degeneracy of the single particle ground state critically changes the BEC scenario. We study, in this section, the effect of SOC term in a homogeneous three dimensional Bose gas. The impact of a SOC on a BEC was already pointed out in recent theoretical studies [START_REF] Ozawa | Condensation transition of ultracold bose gases with rashba spin-orbit coupling[END_REF][START_REF] Cui | Enhancement of condensate depletion due to spinorbit coupling[END_REF].
Ideal Bosons in absence of SOC : κ = 0 We briefly recall the standard BEC phase transition in a three dimensional homogeneous system without any coupling or interaction. The number of particles at temperature T = (k B β) -1 and chemical potential µ ≤ 0 in the energy eigenstate (k) is simply given by the Bose distribution,
N_k = ( exp(β[ε_k - µ]) - 1 )^{-1}
and the density of the excited states writes [START_REF] Baym | Condensation of bosons with rashbadresselhaus spin-orbit coupling[END_REF]

n_ex ≡ (1/V) Σ_{k≠0} N_k = g_{3/2}(e^{βµ}) / λ_T³   (1.20)

[Figure 1.3 caption: Two dimensional dispersion of a homogeneous SOCed system. The two branches ε_± touch at the origin. The left graph corresponds to the pure Rashba term whereas the right graph is plotted for η_soc = 0.7; in this last case only two minima appear, at p_x = ±κ. Further figure caption: Measured location of the energy minimum or minima, where as a function of laser intensity the characteristic double minima of the SOC dispersion move together and finally merge. c) Dispersion measured in 6Li.]
where

g_α(z) = Σ_{l=1}^{∞} z^l / l^α

is the polylogarithm (equal to the Riemann zeta function ζ(α) at z = 1) and

λ_T = sqrt( 2πħ² / (m k_B T) )   (1.21)
is the thermal de Broglie wavelength. The BEC phase transition occurs at µ = 0 corresponding to a critical temperature T C where the density of particles in the excited state, Eq.(1.20) saturates and the number of particles in the ground state, N 0 , becomes extensive. The condensed fraction N 0 /N remains non-zero even in the thermodynamic limit. In this case the system has formed a Bose-Einstein condensate in the zero momentum mode k = 0.
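As a small worked example of this condition, n λ_Tc³ = g_3/2(1) ≃ 2.612 fixes the ideal-gas critical temperature; the sketch below evaluates it for an illustrative density typical of dilute alkali gases (the density and the choice of 87Rb are assumptions for the example only):

import numpy as np
from scipy.constants import hbar, k as kB, atomic_mass
from scipy.special import zeta

n = 1e19                        # particle density in m^-3 (illustrative)
m = 87*atomic_mass              # mass of a 87Rb atom

# condition n*lambda_T^3 = zeta(3/2) at the transition, with lambda_T from Eq. (1.21)
Tc = (2*np.pi*hbar**2/(m*kB))*(n/zeta(1.5))**(2.0/3.0)
print(Tc)                       # about 1e-7 K for these numbers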
Impact of SOC: κ ≠ 0. We now consider a bosonic gas with two internal degrees of freedom. As we have seen in the previous section, in presence of the SOC the dispersion relation ε_±(k) is not minimal at k = 0 any more. In the basis of energy eigenstates, the total number of particles is given by the sum over both energy branches. The excited state density now writes
n ex = d 3 k (2π) 3 1 e β( + (k)-µ) -1 + 1 e β( -(k)-µ) -1 (1.22)
Depending on the value of η soc , two very different effects occur.
Rashba-Dresselhaus term η soc = 0 In this case, the two energy branches simply correspond to a shift of the energy dispersion in the k x direction
(2π) 3 n ex = ∞ -∞ d k z ∞ -∞ d k y ∞ -∞ d k x e β (k x +κ) 2 +k 2 y +k 2 z 2m -µ -1 -1 + ∞ -∞ d k z ∞ -∞ d k y ∞ -∞ d k x e β (k x -κ) 2 +k 2 y +k 2 z 2m -µ -1 -1 (1.23)
Shifting the integration variable, we recover the ideal gas expression for the density of excited particles in each energy branch. Below the transition at µ = 0, both eigenstates with k x = ±κ become occupied. The condensate is therefore built out of two exactly degenerate modes.
Pure Rashba η soc = 1 Whereas η soc < 1 is qualitatively similar to the scenario with η soc = 0 leading to a two-fold degeneracy of the condensate, fully isotropic SOC, η soc = 1, is essentially different. It was experimentally achieved and explored [START_REF] Lin | Spin-orbit-coupled boseeinstein condensates[END_REF][START_REF] Jiménez-García | Tunable spin-orbit coupling via strong driving in ultracold-atom systems[END_REF]. For η soc = 1 the minimum of the single particle energies is infinitely degenerate.
(2π) 2 n ex = ∞ -∞ d k z ∞ 0 k ⊥ d k ⊥ e β (k ⊥ +κ) 2 +k 2 z 2m -µ -1 -1 + ∞ -∞ d k z ∞ 0 k ⊥ d k ⊥ e β (k ⊥ -κ) 2 +k 2 z 2m -µ -1 -1 (1.24)
This expression does not simplify as in the anisotropic case, but can be written as
n_ex = n_+ + n_- = 2 g_{3/2}( e^{βµ - βκ²/(2m)} ) / λ_T³ + (κ/λ_T²) Σ_{n=1}^{∞} ( e^{nβµ} / n ) erf( sqrt(βn/(2m)) κ )   (1.25)
where erf(x) = (1/√π) ∫_{-x}^{x} e^{-t²} dt is the error function. The last term in this expression prevents the occurrence of BEC since it diverges in the limit of vanishing chemical potential. Any arbitrarily high density is therefore accessible without the need of macroscopically occupying any single particle state. In contrast to η_soc < 1, the ideal Bose gas with isotropic SOC does not have a BEC phase transition at finite temperature.
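The divergence of this last term can be checked directly. The sketch below evaluates Eq. (1.25) (in units ħ = m = k_B = 1, with illustrative values of κ and T) for a sequence of chemical potentials approaching zero from below; the ring contribution grows roughly like -log|µ|:

import numpy as np
from scipy.special import erf

hbar = m = kB = 1.0
kappa, T = 1.0, 1.0                             # illustrative values
beta = 1.0/T
lamT = np.sqrt(2*np.pi*hbar**2/(m*kB*T))        # thermal de Broglie wavelength, Eq. (1.21)
nn = np.arange(1, 200000 + 1)                   # truncation of the series in Eq. (1.25)

def n_ex(mu):
    """Excited-state density of Eq. (1.25) for isotropic (pure Rashba) SOC."""
    smooth = 2*np.sum(np.exp(nn*beta*(mu - kappa**2/(2*m)))/nn**1.5)/lamT**3
    ring = (kappa/lamT**2)*np.sum(np.exp(nn*beta*mu)/nn*erf(np.sqrt(beta*nn/(2*m))*kappa))
    return smooth + ring

for mu in (-0.1, -0.01, -0.001, -0.0001):
    print(mu, n_ex(mu))                          # keeps growing as mu -> 0^-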
As pointed out by reference [START_REF] Baym | Condensation of bosons with rashbadresselhaus spin-orbit coupling[END_REF], the absence of a BEC at finite temperature can be explained by the infinite degeneracy of the single particle ground state in the pure Rashba case (η soc = 1). At very low energies the density of states is constant like for non-interacting particles in two dimensions
n(E ) = d 3 k (2π) 3 δ E --(p) ∼ κ 2π (1.26)
Therefore, at low temperatures, our system behaves similar to a two dimensional Bose gas without SOC where Bose condensation is suppressed by the higher density of states.
Ideal Bose gas in 2D
In two spatial dimensions, a Kosterlitz-Thouless phase transition occurs for an interacting homogeneous Bose gas without SOC, so that we expect also the SOCed Bose gas to be particularly affected by presence of interactions. These systems are also of great interest for experimental groups [START_REF] Wu | Realization of two-dimensional spin-orbit coupling for bose-einstein condensates[END_REF]. Before focusing on the interacting system in the next chapters, we study quantitatively in this section the impact of the SOC on a two dimensional non interacting gas.
Ideal Bose gas without SOC: κ = 0 Similar to Eq.(1.22), we calculate the density of non-condensed particles
n = (π/4π²) (2mT/ħ²) ∫_0^∞ dx e^{-x+βµ} / (1 - e^{-x+βµ})   (1.27)
  = -(mT/2πħ²) log[ 1 - e^{βµ} ]   (1.28)
Using the thermal wavelength defined in Eq. (1.21), the density writes
nλ 2 T = -log[1 -e βµ ] (1.29)
As anticipated, Bose-Einstein condensation is absent, since the density diverges for vanishing chemical potential. Nevertheless, the number of bosons in the ground state, N 0 , becomes large at low temperatures,
N_0 ≃ T/|µ| ≃ e^{nλ²}   (1.30)
though never macroscopic at any finite temperature, N 0 /V = 0 in the thermodynamic limit.
Isotropic (pure Rashba) SOC Since we have seen that no BEC occurs in two dimensions, we limit ourself to study the impact of a pure Rashba SOC, η soc = 1, where the single particle ground state is infinitely degenerate. In the two dimensional system, we get for the density
2πn = ∫_0^∞ k_⊥ dk_⊥ [ e^{β((k_⊥+κ)²/(2m) - µ)} - 1 ]^{-1}   (1.31)
     + ∫_0^∞ k_⊥ dk_⊥ [ e^{β((k_⊥-κ)²/(2m) - µ)} - 1 ]^{-1}   (1.32)

which can be simplified by changing the integration variables

2πn = 2 ∫_0^∞ r dr [ e^{β(r²/(2m) - µ)} - 1 ]^{-1}   (1.33)
     + 2κ ∫_0^κ dr [ e^{β(r²/(2m) - µ)} - 1 ]^{-1}   (1.34)
or
nλ² = -log[ 1 - e^{βµ} ] + (2κ/mT) ∫_0^κ dr [ e^{β(r²/(2m) - µ)} - 1 ]^{-1}   (1.35)
The additional term on the density due to SOC is always positive, so that the density of particles per spin at constant chemical potential is higher than without SOC.
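Both contributions to Eq. (1.35) are straightforward to evaluate; the sketch below (again with ħ = k_B = m = 1 and illustrative κ and T) compares the phase-space density with and without the Rashba ring term as the chemical potential approaches zero:

import numpy as np
from scipy.integrate import quad

hbar = m = kB = 1.0
kappa, T = 1.0, 1.0
beta = 1.0/T

def n_lambda2(mu):
    """Phase-space density n*lambda_T^2 from Eq. (1.35) for the 2D gas with pure Rashba SOC."""
    ideal = -np.log(1.0 - np.exp(beta*mu))
    ring, _ = quad(lambda r: 1.0/(np.exp(beta*(r**2/(2*m) - mu)) - 1.0), 0.0, kappa)
    return ideal + 2*kappa/(m*T)*ring

for mu in (-0.5, -0.1, -0.01):
    print(mu, -np.log(1.0 - np.exp(beta*mu)), n_lambda2(mu))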
Although BEC does not occur for η soc = 1 in two and three dimensions at finite temperatures for ideal Bosons, this does not exclude the occurrence of a phase transition in the interacting case. Therefore we investigate the effect of interparticle interactions within the mean field approximation in the next section.
What is the effect of the interactions ?
Interacting Bose gas: Mean Field Approximation
In a dilute system, interactions between the particles are dominated by two-body collisions. At low energies, the two-body interaction can be effectively described by a single parameter, the s-wave scattering length a, independently of the details of the two body potential. In the following, we will use a contact pseudo-potential g δ(r) for the interaction where the interaction strength g is related to the scattering length a through g = 4πħ 2 a/m [START_REF] Pitaevskii | International Series of Monographs on Physics[END_REF]. In the case of a Bose gas with two internal (spin) states, three coupling parameter are in general needed, g σ σ proportional to the scattering amplitudes between different the various hyperfine states as σ and σ . In second quantization, the interaction part of the Hamiltonian is then given by
Ĥint = d x σ,σ =↑,↓ g σ σ Ψ † σ (x) Ψ † σ (x) Ψσ (x) Ψσ (x) (1.36)
with g ↑↓ = g ↓↑ . Together with the single particle Hamiltonian, Eq.(1.17), the total Hamiltonian then writes
Ĥ = Ĥ0 (µ, κ, η soc ) + Ĥint (g ↑↑ , g ↓↓ , g ↑↓ ) (1.37)
At high density and low temperature the energy is dominated by interactions proportional to the density squared, Eq. (1.36). We may expect fluctuations of the density to be strongly suppressed. Replacing the density operator by its expectation value, we obtain a mean-field description neglecting fluctuation effects.
As we will show, within mean-field, the low temperature phases are selected by the single combination
g = 2g ↑↓ -g ↑↑ -g ↓↓ (1.38)
In the other limit of high temperatures and low densities, the interparticle interaction can be considered as a perturbation to the non-interacting kinetic energy. This regime is again correctly described in leading order by a mean field theory. In the next section we will study the two mean field prediction at high and low temperature.
High temperature Mean Field Hartree approximation
In this section we review the derivation of the high temperature mean field approximation, following closely the reference of J-P Blaizot and G. Ripka, Quantum Theory of Finite Systems [START_REF] Blaizot | Quantum Theory of Finite Systems[END_REF], where the interaction leads to a simple shift of the effective chemical potential. For simplicity, we only consider an averaged interaction strength with g↑↑ = g↓↓ = g↑↓, i.e. g = 0. Corrections due to an anisotropy g ≠ 0 of the couplings g↑↑, g↓↓, g↑↓ can be included without difficulty, but are negligible for all coupling parameters considered in this thesis.
The free energy of the system is defined as,
F = E -T S -µN = -k B T log(Z ) (1.39)
where Z = Tr[exp(-β(H -µN ))] is the grand canonical partition function.
The mean field approximation is based on the following inequality
F ≤ 1 β 〈log ρ 0 〉 0 + 〈 Ĥ 〉 0 (1.40)
where ρ 0 = exp[-H 0 ]/Z 0 is any trial density matrix corresponding to a Hamiltonian H 0 and Z 0 is the corresponding partition function,
〈 • 〉 0 ≡ Tr { • ρ 0 }
Our ansatz for the trial density matrix is based on the non-interacting part of the Hamiltonian with an additional mean-field shift of all energies, such that the Hamiltonian of our system can be separated as
Ĥ = Ĥ0 + Ĥ1 (1.41) Ĥ0 = k,σ=± (k ⊥ + κ) 2 + k 2 z 2m -µ + ξ Φ † k,σ Φ k,σ (1.42) Ĥ1 = g ↑↑ 2V p, k, q Φ † k+ q Φ † p-q Φ k Φp -ξ k,σ=± Φ † k,σ Φ k,σ (1.43)
We obtain
F ≤ 1 β 〈log ρ 0 〉 0 + 〈 Ĥ0 〉 0 + 〈 Ĥ1 〉 0 = F 0 + 〈 Ĥ1 〉 0 (1.44)
where F 0 (T, µ, ξ) = -T log Z 0 is the free energy of our reference system. Using Wick's theorem we get
〈 Ĥ1 〉 0 = g ↑↑ V k 〈 Φ † k Φ k 〉 0 p 〈 Φ † p Φp 〉 0 -ξ k,σ=± 〈 Φ † k,σ Φ k,σ 〉 0 (1.45) CHAPTER 1. INTRODUCTION
Introducing the density of our mean-field approximation
n(ξ) = 1 V k 〈 Φ † k Φ k 〉 0 = - T V d d ξ F 0 (1.46)
the variational expression for the free energy writes
F ≤ F 0 (ξ) + V g ↑↑ [n(ξ)] 2 -ξV n(ξ) (1.47)
Since we have
- ∂F 0 (ξ) ∂ξ = ∂F 0 (ξ) ∂µ = βV n(ξ) (1.48)
the derivative of the variational free energy with respect to ξ leads to the following condition for its minimum
-V n(ξ) + 2V g ↑↑ n(ξ)n (ξ) -V n(ξ) -ξV n (ξ) = 0 (1.49)
which determines the mean-field energy shift self-consistently
ξ = 2g ↑↑ n(ξ) (1.50)
Explicitly, we then obtain a set of equations for the quasi-particle energies and density
M F ± (k) = (k ⊥ ± κ) 2 + k 2 z 2m + 2g ↑↑ n M F (1.51) n M F (µ) = σ=± d 3 k (2π) 3 1 e β( M F σ (k)-µ) -1 (1.52)
which has to be solved self-consistently. For fixed chemical potential, the single particle energies are shifted by a constant 2g ↑↑ n M F . We see that the mean field approach at high temperature leads to qualitatively similar conclusions as for the ideal gas.
We will need this leading order approximation at high temperature to match our classical field calculations in Chapter II.
CHAPTER 1. INTRODUCTION
Mean Field ground states: Plane-Wave & Stripe phase
Our ideal gas calculations have shown that isotropic SOC may suppress Bose condensation at finite temperatures, and our previous mean field calculation only introduces a rigid shift of all energy levels.
Still, approaching zero temperature, the occupation of the lowest energy modes dominate and the formation of a condensate at zero temperature is expected. Here, we study possible phases of the ground state within the mean field approximation.
Let us rewrite the interaction energy, Eq. (1.36),
E int = 1 2 d r σ,σ =↑,↓ g σ σ 〈n σ (r)n σ (r)〉 (1.53)
using the density operator n σ (r). In a homogeneous (translationally invariant) system the tendency of the interaction is to flattened the coupled densities in real space.
Neglecting density fluctuations, we expect two different situations to minimize the interaction energy:
Case g ↑↑ , g ↓↓ > g ↑↓ Each densities n ↑ and n ↓ should be constant in space.
Case g ↑↓ > g ↑↑ , g ↓↓ The sum of the two densities is constant but densities of opposite spin, n ↑ and n ↓ , avoid each other spatially.
Based on this heuristic considerations, we will now write down a mean field ansatz for the different ground states of isotropic and anisotropic SOCed Bosons.
Variational calculation
In Fourier space, using Eq. (1.11), the interaction Hamiltonian, Eq. (1.36), writes
Ĥint = k 1 +k 2 =k 3 +k 4 g ↑↑ 2V û † k 1 û † k 2 ûk 3 ûk 4 + g ↓↓ 2V d † k 1 d † k 2 dk 3 dk 4 + g ↑↓ V û † k 1 d † k 2 ûk 3 dk 4 (1.54)
where we have explicitly written out all three couplings. For our variational calculations of the ground state energy, we start with the simplest, less symmetric case of bosons with anisotropic SOC.
CHAPTER 1. INTRODUCTION
Anisotropic SOC, η soc < 1 In this case the single particle spectrum has two degenerate minima in k = κ = (±κ, 0, 0) corresponding to the single particle states
Φ- † ±κ |0〉 = 1 2 û † ±κ ∓ d † ±κ |0〉 (1.55)
Introducing the angle φ that described the linear superposition of the two minimal states (in a single particle), our mean field ansatz for the ground state then writes
|Φ -(φ)〉 = cos(φ) Φ- † κ + sin(φ) Φ- † -κ N N ! |0〉 (1.56)
with N the number of particles in the system. We then evaluate the interacting energy, Eq.(1.54),
〈Φ -(φ)| Ĥint |Φ -(φ)〉 (1.57)
The non zero components of this expression can be decomposed into three different situations :
(1) all the particles carry a κ momentum, (2) all the particle carry a -κ momentum, (3) the two particles carry different momenta ±κ and exchange a momentum 2κ. In this last case, four different arrangement of momenta ±κ are possible. When the two interacting particle carry a different spin, the negative sign in Eq.
(1.55) has to be taken and it introduces a term proportional to
-g ↑↓ | cos(φ)| 2 | sin(φ)| 2 .
By explicitly calculating each non zero configuration and using the normalization condition we obtain,
〈Φ -| Ĥint |Φ -〉 = N (N -1) 8V g ↑↑ + g ↓↓ + 2g ↑↓ + | sin(2φ)| 2 g ↑↑ + g ↓↓ -2g ↑↓ (1.58)
Minimizing the interaction energy, Eq. (1.58), we distinguish three cases depending on the sign of the anisotropy of the interaction, g .
Case g ↑↑ + g ↓↓ > 2g ↑↓ i.e. g < 0 : The mimimum of the energy corresponds to φ = {0, π/2}. In this case the particle populate a single momentum ±κ and the state is called Plane Wave state.
Case 2g ↑↓ > g ↑↑ + g ↓↓ i.e. g > 0 : The mimimum of the energy corresponds to φ = π/4. The particles are in a superposition of two opposite momenta ±κ e.g. in a superposition of two Plane Wave states. This standing wave forms stripes in the real space and is called Stripe Phase.
CHAPTER 1. INTRODUCTION
Case g ↑↑ + g ↓↓ = 2g ↑↓ i.e. g = 0 : The Plane Wave state and Stripe Phase are degenerate. This degeneracy is present within the mean field approximation and it is not expected to be robust beyond mean field.
Isotropic SOC, η soc = 1 As we have seen in the non-interacting section, when the spin-orbit coupling is isotropic in the xy plane the single-particle ground state is infinitely degenerate along a momentum ring of radius κ. In order to minimize the energy, we can choose a mean-field ansatz where we choose to populate only one direction, so that we essentially recover the scenario above. Allowing for a combination of more momenta in the ansatz, the calculation also involves the angles k 1 , k 2 and k 3 , k 4 in Eq. (1.54). This approach was studied in reference [START_REF] Ozawa | Condensation transition of ultracold bose gases with rashba spin-orbit coupling[END_REF] and numerical calculation done by reference [START_REF] Ho | Bose-einstein condensates with spin-orbit interaction[END_REF] produced consistent with the results based on a single direction shown in figure 1.5.
Mean field phase diagram and ground state wave function
We have seen that in the presence of interactions the mean field ground state breaks the symmetry of the degenerate single particle energy levels selecting one direction |κ| = κ. The wave function of the mean field ground state, Eq. (1.56), writes
ψ M F κ (r) = ψ M F ↑,κ (r) ψ M F ↓,κ (r) = 1 2 cos(φ)e i κ•r 1 -e i θ κ + sin(φ)e -i κ•r 1 e i θ κ
(1.59) up to total phase e i θ . Without loss of generality, we can choose κ = κ(1, 0, 0). For φ = 0 we have a pure plane-wave state :
ψ M F κ (r) = 1 2 e i κx , -e i κx .
The wave-function is described by a single plane-wave and the density of each spin component is therefore flat in space. For φ = π/4 we describe the stripe phase:
ψ M F κ (r) = 1 2
(cos(κx), i sin(κx)). The total density is also flat but the density of each component fluctuates spatially with a defined wavelength κ e.g. appearance of stripes.
However, since mean field usually overestimates ordering, it is questionable if these exotic mean field ground states are really stable at zero temperature, since the large degeneracy of the single particle spectrum may significantly enhance fluctuation effects.
What are the main theories/results beyond the mean approach? CHAPTER 1. INTRODUCTION
Fluctuations & Open questions
Fluctuations beyond mean-field We propose in this section to review different approaches that have been applied to address questions beyond mean field.
• Variational wave function Fragmentation is a main example of a ground state not captured by the mean field approach. Despite being particularly fragile, references [START_REF] Stanescu | Spin-orbit coupled bose-einstein condensates[END_REF][START_REF] Zhou | Fate of a bose-einstein condensate in the presence of spin-orbit coupling[END_REF] propose fragmented states as possible ground states. Based on very dilute limit arguments, reference [START_REF] Tigran | Composite fermion state of spin-orbit-coupled bosons[END_REF] propose a fermonized manybody state by composite fermion construction as a ground state of interacting bosons.
• Effective theory At extremely low temperature, only low energy excitations are populated and the degrees of freedom of the system are therefore expected to be reduced. A more simple, effective theory based on minimal fluctuations around the mean field solution has been used to describe the system. Following this approach we can, for instance, consider only the lower branch of the energy spectrum Φ -(k) in Eq. (1.19). One standard approximation is for instance, by defining the wave-function as a phase and a density component, to integrate out density fluctuations and to consider only phase fluctuations. The Plane Wave state energy is then described by only one phase. On the other hand the Stripe Phase breaks the translational symmetry and is therefore described by two phases [START_REF] Chao | Paired superfluidity and fractionalized vortices in systems of spin-orbit coupled bosons[END_REF]. Within these approaches, references [START_REF] Chao | Paired superfluidity and fractionalized vortices in systems of spin-orbit coupled bosons[END_REF] and [START_REF] Liao | Spin-orbitcoupled bose gases at finite temperatures[END_REF] propose a qualitative phase diagram at finite temperature of a SOCed Bose gas in two dimension. In the third chapter of this thesis, we will draw a significantly different phase diagram based on a classical field theory which is expected to be valid at finite temperature around the phase transition.
• Renormalized T-matrix approach In the pure Rashba case η soc = 1 the interacting term Eq. (1.54) couples any momentum relying on the ring minimum k = κ. As we showed in the previous section, in the case of an fully isotropic interaction g = 0 the ground state is degenerate. In order to lift this degeneracy, reference [START_REF] Ozawa | Ground-state phases of ultracold bosons with rashba-dresselhaus spin-orbit coupling[END_REF] consider renormalized interactions to determine the absolute ground state. Using the T-matrix formalism they consider a renormalized contact interaction that depends on the angle between the two momenta g k,k ,σ,σ . The effective interaction is stronger when k and k are in the same direction, therefore indicating a stripe phase as an absolute ground state.
CHAPTER 1. INTRODUCTION
• Bogoliubov approach The Bogoliubov approach is suited for studying fluctuations on top of the mean field prediction. In our case, this study is particularly interesting in the case of isotropic contact interaction g = 0 when no single ground state is selected within the mean field approximation. By an order by disorder mechanism, reference [START_REF] Barnett | Order by disorder in spin-orbit-coupled bose-einstein condensates[END_REF] finds that the absolute ground state corresponds to a Plane Wave state and results from a competition of the thermal and quantum fluctuations. By studying the depletion of the condensate and the impact of the finite temperature excitations, both references [START_REF] Barnett | Order by disorder in spin-orbit-coupled bose-einstein condensates[END_REF] and [START_REF] Ozawa | Condensation transition of ultracold bose gases with rashba spin-orbit coupling[END_REF] draw a finite temperature phase diagram based on the Bogoliubov approximation, in particular in three dimensions (since the thermal fluctuations diverges in two dimensions).
• Experimental challenge The Stripe Phase is predicted to have supersolid properties by breaking both the gauge and the translational symmetries. Very fragile in experiments, many propositions were made to increase the stability of the stripe phase [START_REF] Martone | Visibility and stability of superstripes in a spin-orbit-coupled bose-einstein condensate[END_REF][START_REF] Martone | Approach for making visible and stable stripes in a spin-orbit-coupled bose-einstein superfluid[END_REF][START_REF] Ozawa | Striped states in weakly trapped ultracold bose gases with rashba spin-orbit coupling[END_REF]. The predicted density modulation of the stripes were observed in 2017 by reference [START_REF] Li | A stripe phase with supersolid properties in spin-orbit-coupled bose-einstein condensates[END_REF]. The role of an harmonic trap was also considered [START_REF] Sinha | Trapped two-dimensional condensates with synthetic spin-orbit coupling[END_REF][START_REF] Ramachandhran | Half-quantum vortex state in a spin-orbit-coupled bose-einstein condensate[END_REF]and the breaking of the stripe phase because of vortices was also carefully studied [START_REF] Kato | Vortex pairs in a spin-orbitcoupled bose-einstein condensate[END_REF]. Because of the SOC term, the wave-function can change sign by either rotating the relative phase or by flipping spin. Halfquatum vortices are therefore naturally present in the system as non standard topological defects [START_REF] Nikolić | Vortices and vortex states in rashba spin-orbit-coupled condensates[END_REF][START_REF] Chao | Paired superfluidity and fractionalized vortices in systems of spin-orbit coupled bosons[END_REF][START_REF] Fetter | Vortex dynamics in a spin-orbit-coupled bose-einstein condensate[END_REF].
• Superfluidity Because of the breaking of Galilean invariance in SOCed system, the superfluidity is expected to be strongly affected by the presence of the SOC. In particular Landau's criterion for the critical velocity cannot be defined independently of the reference frame [START_REF] Zhu | Exotic superfluidity in spin-orbit coupled bose-einstein condensates[END_REF]. Reference [START_REF] Stringari | Diffused vorticity and moment of inertia of a spin-orbit coupled bose-einstein condensate[END_REF] suggests that the normal density does not vanish at zero temperature, a strong reminder of the distinction between superfluidity and BEC.
Despite these efforts, the nature of the quantum ground state of interacting bosons with Rashba SO coupling remains an open issue. It is also a strong motivation for the experimental realization of such a SO coupling in cold atom systems, where a strongly correlated quantum state can be expected.
CHAPTER 1. INTRODUCTION
Outline of the thesis
In the following, we wish to establish the phase diagram of a two-dimensional homogeneous gas of Rashba-Dresselhaus spin-orbit-coupled bosons. This work has being inspired by both fundamental investigation and experimental progress, we will especially insist on a quantitative prediction of the phase diagram and on experimental implications of our results.
Chapter II : The method In order to focus on the finite temperature description of the bosonic gas, we selected a completely different approach from the methods presented in the previous section : the classical field approximation. This approximation is based on the field character description of the bosons that essentially occupy only the very low energy modes which become very highly populated. We then perform classical field Monte Carlo calculations which are expected to correctly describe the finite-temperature behavior close to a possible phase transition.
Chapter III : BKT phase transition After discussing the best numerical tools to correctly evaluate observables of interest like the condensed fraction and the density, we first show that the system undergoes a Kosterlitz-Thouless phase transition from a normal to superfluid state in presence of the SOC term. The thermodynamic limit behavior strongly depends on the anisotropy η soc and in particular, we show that for η soc = 1 a crossover occurs for finite systems at similar phase-space densities, but no superfluid transition is expected for infinite sizes.
Chapter IV : SP/PW orders We then characterize the low temperature phases and we show that the spin order of the quasicondensate is driven by the anisotropy of the interparticle interaction. In particular in the superfluid state, we study the singleparticle density matrix that decays algebraically and directly reflects the PW or SP character of the mean-field ground state. We show that in the case of an anisotropy g = 0 spins exhibit a quasi-long-range order corresponding to the KT transition.
In the case of a fully isotropic interparticle interaction, we show that the PW or SP degeneracy is unaffected by the transition.
Introduction
In this chapter we present the main methods we used to establish the finitetemperature phase diagram of two dimensional Bose gases in presence of SOC and inter particle interactions. We will concentrate on weakly interacting systems at low temperature region. From the absence of BEC in an ideal (or mean-field) gas at any finite temperature, invoking continuity, we can expect the critical temperature of any possible phase transition to approach zero temperature in the limit of vanishing interaction. Approaching zero temperature, bosons essentially occupy only the very low energy modes which become very highly populated. In this regime the field character of the quantum particles dominates and the description in terms of a classical field theory becomes quantitatively accurate [START_REF] Baym | The transition temperature of the dilute interacting bose gas[END_REF][START_REF] Holzmann | Condensate density and superfluid mass density of a dilute bose-einstein condensate near the condensation transition[END_REF][START_REF] Giorgetti | Semiclassical field method for the equilibrium bose gas and application to thermal vortices in two dimensions[END_REF][START_REF] Prokof | Two-dimensional weakly interacting bose gas in the fluctuation region[END_REF][START_REF] Holzmann | Superfluid transition of homogeneous and trapped two-dimensional bose gases[END_REF]. Beyond the weakly interacting region, classical field theory is still capable to describe the universal behavior around a continuous phase transition, a well known result from the theory of critical phenomena [START_REF] Zinn-Justin | Quantum Field Theory and Critical Phenomena[END_REF]. In this thesis, we have established for the first time the phase diagram of interacting SOCed bosons, based on classical field Monte Carlo calculations.
Roughly speaking, the classical field description emerges by replacing the occupation of a "quantum mode" of energy ε, given by the Bose distribution (exp[ε/k B T ] -1), with that of the classical field k B T / , and, further, neglecting the non-vanishing commutators of the quantum fields. Both approximations becomes exact for the low energy modes in the limit of T → 0 and provide the starting point to a quantitative description of weakly interacting Bose gases.
Still, due to interactions, we cannot explicitly diagonalize the Hamiltonian even within the classical field approximation. However, the calculation of static observables directly maps to the calculation of classical probability distributions, well known in classical statistical mechanics. In our work, we have used Monte Carlo methods to numerically sample the classical field distribution.
The weight of each classical field configuration is given by a Boltzmann distribution according to its energy. However, two technical issues arise. First, in order to well define the energy of a classical field theory, we have to regularize its behavior at high energies ("ultraviolett divergencies"). Within numerical Monte Carlo calculations, this is naturally taken into account by discretizing the fields on a lattice. Second, SOC formally introduces imaginary terms in the action of the two complex fields representing the two spin states. So we first have to show that we really obtain a probability distribution, i.e the discretized action stays real for any configuration of the fields. Then, we can correctly sample classical field configurations by proposing efficient Monte Carlo algorithms ensuring the ergodicity of the system.
CHAPTER 2. METHODS
Eventually, we have to correct the raw densities of our classical field calculations which depends on the lattice discretization and to take into account the correct behavior of high energy modes which are Bose distributed. However, since these low occupied modes are only weakly affected by the interaction, the mean field approximation provides an accurate description for them. Using this correction, we can in principle match any observable from classical field calculations to provide quantitative predictions for our systems which can be directly compared to experiment [START_REF] Baym | The transition temperature of the dilute interacting bose gas[END_REF]. CHAPTER 2. METHODS
Classical field approximation
As discussed in Chapter I, in this thesis we are interested into establishing the phase diagram in the limit of small interaction strength, mg ↑↑ 1 and small spin-orbit coupling, κλ T 1 where λ T = 2πħ 2 /mk B T is the thermal wave length at temperature T .
In this limit, the leading order corrections to mean-field are captured within classical field theory where the occupation of low energy modes is high such that commutators like [ Ψ † (r), Ψ(r )] can be neglected. In this approximation, the field operator, Ψ(r), can be replaced by two complex fields, Ψ(r) ≡ (Ψ ↑ (r), Ψ ↓ (r)), one for each spin.
Starting from the ideal gas results of two dimensional systems presented in chapter I, we first introduce the classical field approximation in the absence of a SOC. Calculating the phase-space density for particles in the low-energy modes with k < k B T ,
n < λ 2 ≡ λ 2 β k <1 d 2 k (2π) 2 N k = log 1 -e βµ-1 1 -e βµ = nλ 2 + log[1 -e βµ-1 ] ≈ nλ 2 -log e e -1 (2.1)
we notice that at high degeneracy, nλ 2 , the occupation of higher energy modes becomes negligible. At the same level of accuracy, we may also replace the Bose occupation N k by the occupation of classical field,
N k ≈ N c f k ≡ 1 β( k -µ) (2.2)
which gives,
λ 2 β k <1 d 2 k (2π) 2 N c f k = log 1 + |βµ| |βµ| ≈ nλ 2 + |βµ| (2.
3)
The classical field distribution with a simple cut-off Λ = 2mk B T /ħ 2 therefore quantitatively describes the leading order behavior of the density up to corrections of order (nλ 2 ) -1 . The corresponding energy distribution of the classical fields writes
H 0 c f -µN = k<Λ ( k -µ)α * k α k ≈ r - ħ 2 2m |∇ψ(r)| 2 -µ|ψ(r)| 2 (2.4)
where α k are complex numbers which describe the classical field. Their Fourier transform, ψ(r ), defines their real space distribution on a lattice with minimum distance CHAPTER 2. METHODS ∼ Λ -1 . The theory is therefore naturally regularized by discretizing space on a lattice of linear extension L. The probability distribution for a given field configuration, ψ(r ), is then
p[ψ(r )] = Z -1 c f e -β(H c f -µN ) (2.5)
where Z c f = Dψ(r )e -βH c f is the partition function, where D indicates the summation over all discrete field configurations. For our non-interacting system, H c f = H 0 c f and we can explicitly perform the Gaussian integral to obtain Z 0 c f . The classical field description of the ideal Bose gas with SOC does not pose additional difficulties using the eigenmode basis.
For interacting fields, we simply add their interaction energy to the non-interacting system
H c f = H 0 c f -µN + r σσ g σσ 2 |ψ σ (r)| 2 |ψ σ (r)| 2 (2.6)
where we have explicitly written out possible spin dependence of the interaction.
Validity range
In order to study the validity range of the classical field distribution we Taylor expand the nominator of the Bose distribution for small energies,
N k = 1 exp(β( k -µ)) -1 = ∞ n=1 β n ( k -µ) n n! -1
(2.7)
The leading order term is precisely the classical field distribution,
N k → k B T E k -µ under the condition k -µ ≤ k B T (2.8)
Outside this energy range, classical field theory cannot be taken literally. However, interaction corrections to high energy modes k -µ ≥ k B T can usually be treated perturbatively in the limit of weak interactions. We can then match the classical field results with accurate high energy behavior obtained perturbatively.
At very low temperatures, the classical field distribution approaches the configuration which minimizes the energy. From the variation of the fields, we obtain the time independent Gross-Pitaevskii equation in the limit of vanishing temperature. Therefore, the classical field approximation merges continuously the Gross-Pitaevskii CHAPTER 2. METHODS theory. However, quantum corrections, as contained for example in the Bogoliubov approximation, are not included. At high temperatures and weak interactions, the classical field as well as the full quantum field theory are both accurately described by their corresponding mean-field approximation, so that the difference between quantum and classical theory can be worked out analytically.
Between these two regimes, the classical field approximation captures the leading order thermal corrections to mean-field for the strongly degenerate, low energy states. In particular, it is capable to detect and correctly describe any possible finite temperature continuous phase transitions. For systems without SOC classical field theory correctly describes the BEC transition in three dimensions as well as the Kosterlitz-Thouless phase transition in two dimensions (e.g. the XY-model).
CHAPTER 2. METHODS
Markov Chain Monte Carlo
Even after applying the classical field approximation, we cannot explicitly integrate over the distribution p[Ψ(r)] Eq. (2.5) because of the interaction term from Eq. (1.36) that needs to be included. Indeed, the interaction term and the single-particle terms are both diagonal in different basis : real space and Fourier space, respectively.
We will therefore calculate the observables as numerical integrals. Multi-dimensional integrals like Eq. (2.5) can be evaluated using Monte Carlo methods. For numerical efficiency of the algorithm we aim to use a local form of the Hamiltonian. Since the Laplacian and derivative operators remain local around a point r in the real space, we write down the effective total action for a given field configuration in real space.
Effective action S
We can write the probability p[Ψ(r)] of a given classical field configuration as proportional to exp(-S[Ψ(r)]). Summing the kinetic energy, the SOC term and the contact interaction, the total local action S writes
S[Ψ(r)] = a 2 k B T r σ=↑,↓ -Ψ * σ (r) ħ 2 ∇ 2 D 2m Ψ σ (r) -µ|Ψ σ (r)| 2 + ħ 2 κ m Ψ * ↑ (r) -i ∂ D x -η soc ∂ D y Ψ ↓ (r) + ħ 2 κ m Ψ * ↓ (r) -i ∂ D x + η soc ∂ D y Ψ ↑ (r) + 1 2 σ,σ =↑,↓ g σσ |Ψ σ (r)| 2 |Ψ σ (r)| 2 (2.9)
where a is the lattice spacing, µ is the chemical potential, and ∇ D and ∂ D α are finite difference expressions approximating the derivatives and we sum over all positions r of the lattice. Provided that the expression of S[Ψ(r)] is real for any configuration of the field we can sample the distribution by Monte Carlo methods. Below we show explicitly that this discrete action is real.
CHAPTER 2. METHODS
Kinetic term
r Ψ * σ (r)∇ 2 D Ψ σ (r) = r Ψ R σ (r) -i Ψ I σ (r) ∇ 2 D Ψ R σ (r) + i Ψ I σ (r) (2.10) = r Ψ R σ (r)∇ 2 D Ψ R σ (r) + r Ψ I σ (r)∇ 2 D Ψ I σ (r) (2.11) + i r Ψ R σ (r)∇ 2 D Ψ I σ (r) - r Ψ I σ (r)∇ 2 D Ψ R σ (r) (2.12)
with Ψ R σ and Ψ I σ the real and imaginary part of the two component complex field Ψ having spin σ. Using the following finite difference expression of the Laplacian,
∇ 2 D Ψ ↑ (r) = a -2 d i Ψ ↑ (r + ai) + Ψ ↑ (r -ai) -2Ψ ↑ (r) (2.13)
where i denotes the unit vector pointing to the nearest neighbors on the lattice and a the lattice spacing. For simplicity we consider in this section the case a = 1. Together with periodic boundary conditions, we write
r Ψ R σ (r)∇ 2 D Ψ I σ (r) ≡ r Ψ R σ (r)Ψ I σ (r -1) + Ψ R σ (r)Ψ I σ (r + 1) -2Ψ R σ (r)Ψ I σ (r) (2.14) ≡ r Ψ R σ (r + 1)Ψ I σ (r) + Ψ R σ (r -1)Ψ I σ (r) -2Ψ R σ (r)Ψ I σ (r) (2.15) ≡ r Ψ I σ (r)∇ 2 D Ψ R σ (r) (2.16)
Therefore the imaginary part of the kinetic energy identically vanishes and the corresponding action is real for any configuration of the fields.
SOC term
Writing out the Pauli matrices in the SOC part of the action, we have
S SOC = x Ψ * ↑ (r) i ∂ D x + η∂ D y Ψ ↓ (r) + Ψ * ↓ (r) i ∂ D x -η∂ D y Ψ ↑ (r) (2.17)
Let us concentrate on the terms involving ∂ D x first
Ψ * ↑ (r)∂ D x Ψ ↓ (r) + Ψ * ↓ (r)∂ D x Ψ ↑ (r) = Ψ R ↑ (r)∂ D x Ψ R ↓ (r) + i Ψ R ↑ (r)∂ D x Ψ I ↓ (r) -i Ψ I ↑ (r)∂ D x Ψ R ↓ (r) + Ψ I ↑ (r)∂ D x Ψ I ↓ (r) + Ψ R ↓ (r)∂ D x Ψ R ↑ (r) + i Ψ R ↓ (r)∂ D x Ψ I ↑ (r) -i Ψ I ↓ (r)∂ D x Ψ R ↑ (r) + Ψ I ↓ (r)∂ D x Ψ I ↑ (r) (2.18)
Again, using the finite difference expression together with periodic boundary condi-
CHAPTER 2. METHODS tions r Ψ R ↑ (r)∂ D x Ψ R ↓ (r) + Ψ R ↓ (r)∂ D x Ψ R ↑ (r) = r ∂ D x Ψ R ↑ (r)Ψ R ↓ (r) = 0 (2.19) r Ψ I ↓ (r)∂ D x Ψ I ↑ (r) + Ψ I ↑ (r)∂ D x Ψ I ↓ (r) = r ∂ D x Ψ I ↑ (r)Ψ I ↓ (r) = 0 (2.20)
and we obtain r
Ψ * ↑ (r)∂ D x Ψ ↓ (r) + r Ψ * ↓ (r)∂ D x Ψ ↑ (r) ≡ 2i r Ψ I ↓ ∂ D x Ψ R ↑ + Ψ I ↑ ∂ D x Ψ R ↓ (2.21)
After similar manipulations of the terms involving ∂ D y , we obtain
S SOC = 2 r Ψ I ↓ (r)∂ D x Ψ R ↑ (r) + Ψ I ↑ (r)∂ D x Ψ R ↓ (r) + 2η soc r Ψ R ↓ (r)∂ D y Ψ R ↑ (r) + Ψ I ↓ (r)∂ D y Ψ I ↑ (r) (2.22)
Again, the SOC action is real for any field configurations, as well as the interaction energy.
S(Ψ(r)) ∈ R ∀Ψ(r) in a periodic system (2.23)
Therefore, exp(-S[Ψ(r)]) is non negative and we can interpretate weight as a probability of a given field configuration suitable for Monte Carlo sampling.
Monte Carlo algorithms: Metropolis, Heat bath and Fourier moves
In this section, we briefly present few numerical algorithms that we have found essential to correctly sample the distribution of the fields using Monte Carlo methods based on a Markov process. Standard algorithms are typically based on Metropolis' rule for acceptation or rejection of changes in the field configuration.
The Markov chain is constructed by a random walk in configuration space, where the transition probability from one configuration R (note that R in our context labels the values of all fields at each lattice site) to another one R' satisfies
R' T (R → R') = 1 (2.24) R T (R → R') = 1 (2.25) CHAPTER 2. METHODS
together with so called detailed balance condition
π(R)T (R → R') = π(R')T (R' → R) (2.26)
where π(R) ∝ exp[-S(R)] is the probability distribution which we aim to sample.
Metropolis algorithm is a particularly simple solution for the transition where one proposes an arbitrary change of the configuration R which is accepted with probability
T (R → R') = min 1, e -S(R') e -S(R) (2.27)
Whenever rejected, we remain at the same configuration. One can explicitly verify that Metropolis rule satisfies the detailed balance condition.
Since our action, Eq. (2.9) is local, we can efficiently compute S(R') -S(R) for local changes avoiding the calculating of total action at each step.
However, standard Metropolis algorithm leads to a slow convergence for our purposes. Two main problems appear when decreasing the temperature. First, the acceptance of the moves decreases since Metropolis steps are completely random and most of the moves at low temperature lead to high energy changes which are highly unlikely. Secondly, different degenerate ground states of the ideal (or mean-field) system are separated from each other in the sense that they are not connected by local moves. In practice, local moves in real space do not ensure the ergodicity of the sampling for large systems where the probability to tunnel from one ground state to another gets exponentially small.
During the next sections, we propose few ways to tackle these problems. First, we will study the action itself and propose efficient changes instead of total random ones leading eventually to the Heat Bath, Gaussian, and Fourier space algorithms.
A priori probabilities To increase the acceptance probability of the simple Metropolis algorithm, we decompose the transition probability
T (R → R') = A (R → R') p (R → R') (2.28)
into the a-priori probability A (R → R') and the final acceptance rate p (R → R'). Our strategy will be to choose an a-priori probability which is easy to compute, typically CHAPTER 2. METHODS a Gaussian, and which increases the final acceptance rate given by
p (R → R') = min 1, e -S(R') A (R' → R) e -S(R) A (R → R') (2.29)
Instead of proposing completely random moves, we therefore try to orientate changes.
In order to propose the most efficient local move, let us consider the action keeping all the fields fixed except one component Ψ α (r). We will then propose an optimized change of Ψ α (r), the value of the α field at r. The part of the action without spin-orbit coupling involving Ψ α (r) writes
- 1 2 d i [Ψ α (r + ai) + Ψ α (r -ai))] Ψ α (r) + -µ + κ 2 2 + d Ψ α (r) 2 + g ↑↑ 2 N β=1 Ψ β (r) 2 2 (2.30)
where we have assumed symmetric interactions for simplicity. Note that the change of the action is diagonal in the different field components
Ψ α (r) = {Ψ R ↑ (r), Ψ I ↑ (r), Ψ R ↓ (r), Ψ I ↓ (r)
}, whereas the spin-orbit interaction couples different component of the fields.
∆S SOC Ψ R ↑ (r) ∝ const (2.31) ∆S SOC Ψ I ↑ (r) ∝ κ ∂ D x Ψ R ↓ (r) (2.32) ∆S SOC Ψ R ↓ (r) ∝ κ ∂ D x Ψ R ↑ (r) + ∂ D y Ψ I ↑ (r) (2.33) ∆S SOC Ψ I ↓ (r) ∝ κ ∂ D y Ψ R ↑ (r) (2.34)
Thus, changes in the total action involving Ψ α (r) can be written in the general form
∆S(Ψ α (r)) ∝ bΨ α (r) -aΨ α (r) 2 - g ↑↑ 2 Ψ α (r) 4 (2.35) with b = 1 2 d i [Ψ α (r + i) + Ψ α (r -i))] + ∆S SOC (Ψ α (r)) (2.36) a = -µ + κ 2 2 + d + g ↑↑ β =α Ψ β (r) 2 (2.37)
where ∆S SOC (r)) is linear in the field components Ψ β (r) with β = α containing the contributions from the SOC.
CHAPTER 2. METHODS
Heat bath algorithm
The heat bath algorithm provides an exact sampling at high temperatures where interaction effects are small. Neglecting the anharmonic term, ∝ g 0 Ψ α (r) 4 , the change of the action, Eq. (2.35), becomes a quadratic form. The distribution exp(-∆S[Ψ(r)]) is therefore Gaussian centered around the minimum given by
2aΨ α (r) = b (2.38)
or
Ψ α (r) = b 2a = d i [Ψ α (r)(r + i) + Ψ α (r -i))] + 2∆S SOC (r) 2(-2µ + κ 2 + 2d ) (2.39)
for g ↑↑ = 0. Explicitly, we have
∆S ∝ --µ + κ 2 /2 + d Ψ α (r) -Ψ α (r) 2 (2.40)
We can sample exactly the distribution using
Ψ α (r) → Ψ α (r) = Ψ α (r) + δΨ (2.41)
where δΨ is sampled from a normal distribution of variance
σ 2 = 1 -2µ + κ 2 + 2d (2.42)
The corrections needed for the acceptance rate then writes log
A (R' → R) A (R → R') = -Ψ α (r) -Ψ α (r) 2 -δΨ 2 2σ 2 (2.43)
We can further improve the acceptance using Eq. (2.37) for g 0 = 0 which takes into account the local interactions with the other field components.
Gaussian algorithm
The Heat Bath algorithm is based on an essentially exact sampling of the noninteracting system. However, for interacting systems with g ↑↑ > 0, we have shown in Chapter I section 1.3.1 that the effective chemical potential is shifted by the mean field interactions. The chemical potential µ can become positive, and the variance of the Gaussian sampling is not any more guaranteed to be positive.
CHAPTER 2. METHODS
In order to adapt to this situation, we make a general Gaussian ansatz for the a-priori probability
A (R' → R) = exp - Ψ(R) -Ψ(R') + f (Ψ(R')) 2 2σ 2 (2.44)
We determine the mean f and variance σ 2 of this Gaussians such that the acceptance rate gets close to one. We have
log e -S(R') A (R' → R) e -S(R) A (R → R') = -S Ψ + S (Ψ) - Ψ -Ψ + f 2 2σ 2 + Ψ -Ψ + f 2 2σ 2 = -S Ψ + S (Ψ) - f Ψ -Ψ σ 2 + f Ψ -Ψ σ 2 + f 2 2σ 2 - f 2 2σ 2 (2.45)
Assuming small changes in the field, we can expand the action
S (Ψ) = S Ψ + ∂S ∂Ψ Ψ -Ψ + Θ Ψ -Ψ 2 and approximate the acceptance rate log e -S(R') A (R' → R) e -S(R) A (R → R') = (∂ Ψ S + ∂ Ψ S) Ψ -Ψ 2 - ( f + f ) Ψ -Ψ σ 2 + f 2 -f 2 2σ 2 + Θ Ψ -Ψ 2 (2.46)
Up to first order, the acceptance of the moves p (R → R') → 1 is maximized by
f (Ψ) = σ 2
2 ∂ Ψ S, and we have log e -S(R') A (R' → R)
e -S(R) A (R → R') = σ 2 8 (∂ Ψ S) 2 -(∂ Ψ S) 2 + Θ Ψ -Ψ 2 (2.47)
From the second order terms, we can determine the variance σ 2 . However, a simpler solution is to consider σ 2 as an external parameter of our Monte Carlo algorithm, which we adapt for different temperatures to maximize the efficiency of the moves.
Fourier algorithm
Still, at low energy, ergodic sampling of the configurations is challenging. Local moves in the real space can easily change the high momenta k of the energy spectrum. However, it is very difficult at low temperature to equally sample the degenerate energy minima of SOCed bosons.
In order to sample efficiently these minima, we have implemented Metropolis moves in the Fourier space around the minima of the non-interacting energy spectrum. The calculation of the action for this moves scales worse in the system size, N log(N ) where N the number of discretized points, using fast Fourier transform. Below we present the main steps of our Fourier space Metropolis algorithm.
• Calculate the Fourier transforms Φ + k and Φ - k as defined in Chapter I or later in Eq. 2.62.
• Propose a change of mainly Φ - k around the mimina of the energy i.e the most populated momenta
Φ - k → Φ - k = Φ - k + z z ∈ C (2.48)
where z is a random, Gaussian distributed, complex variable. To gain efficiency, we simultaneously compute
u k = u k + e -i θ k ⊥ z d k = d k -z (2.49)
We then choose a momemtum k around the minimum of the energy selecting it from the distribution,
f (k) ∝ e -k 2 2 +κ k 2 x +η 2 soc k 2 y α α ∈ R (2.50)
where α is arbitrarily chosen to fit the density of state in function of the temperature.
• If the change is accepted, we calculate the inverse Fourier transform and update the fields.
Algorithms interplay
In order to optimize the efficiency and reduce the time consumption of the computation, we switch between different algorithms during a single simulation run. At each Monte Carlo step, we randomly select, according to a externally selected probability, one of the different algorithms. Using Metropolis algorithm we obtain a much larger variance and incertitude on the observable than using the Heat Bath and Gaussian algorithm.
CHAPTER 2. METHODS
Partition function
We have presented different algorithms to correctly sample the distribution p[ψ(r )],
p[ψ(r )] = Z -1 c f e -β(H c f -µN ) (2.51)
where Z c f = Dψ(r )e -βH c f is the partition function. One of our central observables in the following is total density as a function of the chemical potential. Discretizing the system size L on N sites, it is given by
n c f = α N i =1 Dψ α (i ) |ψ α (i )| 2 e -βH c f (ψ α (1),ψ α (2),...,ψ α (N )) Z c f (2.52)
Together with the density of particles in the energy minimum state, we will be able to to draw the phase diagram of the interacting SOCed bosons in Chapter III. However, as we have seen above, predictions of the classical field theory may systematically differ from those of interacting bosons, in particular in the high temperature limit.
In the next section, we will show how to reduce this differences in order to make quantitative predictions for dilute Bose gases.
Density matching
Procedure
Within classical field theory, the occupation of eigenmodes of energy is given by the equipartition theorem instead of the full Bose distribution. For weakly interacting systems at high energy, mean-field theory correctly describes the leading order interaction corrections [START_REF] Pitaevskii | International Series of Monographs on Physics[END_REF] and the occupation of energy eigenstates asymptotically approaches their mean-field values at high energy. Therefore, we can correct the densities of our classical field calculations to account for the correct ultraviolet behavior adding the difference
∆n = 1 L 2 k,α=± n B (ε m f ,B kα -µ) -n c f (ε m f ,c f kα -µ) (2.53)
where the single particle mean-field energies are given by ε
m f ,B /c f kα = ε B,c f kα +2 α g αα n m f ,B /c f α
, where ε B,c f kα are the eigen energies of the ideal SOC gas (see below). The corresponding mean-field densities,
n m f ,B /c f α = L -2 k n c f /B (ε m f kα -µ), have to be determined self-consistently as presented in figure 2.3.
Note that the Bose distribution of the occupation numbers merges the classical field occupation for low energies, as we have shown before. Therefore, the low energy modes do not contribute to the density difference, Eq. (2.53), the difference only arises from the different ultraviolet behavior.
Lattice expressions for ideal and mean-field classical fields
In the notation above we have indicated one further subtlety arising from the regularization of our classical field theory. The eigenmodes of our classical field theory on the lattice in general differ from those of the Bose gas for high momenta already for an ideal gas.
Let us therefore calculate explicitly the eigenmodes, ε c f kα and the occupation numbers, n c f (ε c f kα -µ), of the non interacting classical field system as presented in figure 2.4. The action of the ideal system is diagonal in the Fourier space with k i , j = 2π L i , j . The action can then be written
Ψ ↑ (r) = 1 L 2 L i , j =1 u k i , j e i k i , j •r Ψ ↓ (r) = 1 L 2 L i , j =1 d k i , j e i k i , j •r (2.
S U = 1 L 2 i , j k 2 i j ,U 2 -µ + κ 2 2 u -k i , j u k i , j + d -k i , j d k i , j + κ L 2 i , j k x,U -i η soc k y,U u -k i , j d k i , j + κ L 2 i , j k x,U + i η soc k y,U d -k i , j u k i , j (2.55)
where k x,U and k 2 U are the Fourier transform of the finite difference expressions approximating the derivatives. Using the lowest order finite difference expressions of the Laplacian, we have
∇ 2 U Ψ ↑ (r) = a -2 d i Ψ ↑ (r + ai) + Ψ ↑ (r -ai) -2Ψ ↑ (r) (2.56) k 2 U ≡ a -2 d i [2 -2 cos (k(i )a)] = k 2 + O (a 2 k 4 ) (2.57)
The derivatives and the corresponding wave vectors in Fourier space write
∂ x,U Ψ ↑ (r) = 1 2a Ψ ↑ (r + ai x ) -Ψ ↑ (r -ar x ) (2.58) k x,U ≡ 1 a sin (k x a) = k x + O (a 6 k 3 x ) (2.59)
Diagonalizing the action similar to the continuous system studied in section I.1.2, we obtain the corresponding lattice expressions
S U = 1 L 2 i , j k 2 i j ,U + κ 2 2 -µ + κ k 2 x i ,U + η 2 soc k 2 y j ,U Φ + -k Φ + k + 1 L 2 i , j k 2 i j ,U + κ 2 2 -µ -κ k 2 x i ,U + η 2 soc k 2 y j ,U Φ - -k Φ - k (2.60)
With the new basis vectors
Φ + (r) = e i θ k Ψ ↑ (r) + Ψ ↓ (r) 2 Φ -(r) = e i θ k Ψ ↑ (r) -Ψ ↓ (r) 2
(2.61)
Φ + k = 1 L 2 r Φ + (r)e -i k•r Φ - k = 1 L 2 r Φ -(r)e -i k•r (2.62)
where CHAPTER 2. METHODS Since these fields are non interacting, we can then explicitly calculate the density Eq.
e i θ k = (k x,U + i η soc k y,U )/ k 2 x,U + η 2 soc k 2 y,U CHAPTER 2. METHODS
(2.52) as a Gaussian integral
n c f (ε c f kα -µ) = 1 2L 2 i , j 1 k 2 i j ,U 2 -µ + κ 2 2 + κ k 2 x i ,U + η 2 soc k 2 y j ,U + 1 2L 2 i , j 1 k 2 i j ,U 2 -µ + κ 2 2 -κ k 2 x i ,U + η 2 soc k 2 y j ,U (2.63) with k x = 2πi N and k y = 2π j N .
The analytical results of the non-interacting classical field on the lattice already presents an important benchmark of our numerical Monte Carlo calculation in the non-interacting limit. In order to include the mean field corrections, it is rather straightforward to use these results and solve the self-consistent mean-field equation for the density n c f (ε
m f ,c f kα -µ).
CHAPTER 2. METHODS
Convergence and scale of energy
.5 presents, at each step of an algorithm, the density as a function of the corresponding instantaneous value of the action. We see that at high temperature i.e low density, the action is only determined by the chemical potential and fluctuations of the density are strong. At low temperature, fluctuations around the mean density are highly suppressed. We comment in more detail of this feature in the Appendix. In this strongly degenerate regime the convergence of the algorithm towards the correct distribution can be rather slow and must be checked for each observable separately.
Studying the correlation of the observable with the value of the action can give important insight into its convergence properties.
• I. Density n In all regimes, the density usually converges fast since density changes are strongly correlated with the action. As shown in figure 2.2, the density converges rapidly towards its mean value determined by the chemical potential µ.
• II. Condensed fraction and momentum distribution n(k) As shown in figure 2.2, the condensed fraction converges significantly slower than the density.
• III. SP and PW states n κ 0 The correct balance between the population of degenerate momenta is typically converging slower than the condensate fraction. Since in our later study we focus on the regime of very small anisotropy g , PW and SP states are always very close in energy. At low temperature, the local minima are ubiquitous and the distribution is converging too slowly for purely local moves. Global changes like the Fourier algorithm described in the previous section are needed to reach convergence. CHAPTER 2. METHODS
Résumé
Au cours de ce chapitre nous avons présenté les principales méthodes utilisées au cours de cette thèse pour établir le diagramme de phase à température non nulle d'un gaz de bosons bidimensionnels en présence d'un couplage spin-orbite et d'interactions interparticules. Nous nous sommes intéressés à des systèmes avec de faibles interactions et à des températures basses. En se basant sur l'absence d'un condensat de Bose-Einstein dans un gaz idéal (ou dans le cas des théories champ moyen) des arguments de continuité tenderaient vers une température critique d'une possible transition repoussée à zéro dans la limite d'une interaction également infiniment proche de zéro.
En approchant le zéro absolu, la vaste majorité des bosons occupent les états de très basse énergie qui sont donc fortement peuplés. Dans ce régime, le caractère ondulatoire des particules quantiques domine et la description en tant que champs classiques devient quantitativement correcte [START_REF] Baym | The transition temperature of the dilute interacting bose gas[END_REF][START_REF] Holzmann | Condensate density and superfluid mass density of a dilute bose-einstein condensate near the condensation transition[END_REF][START_REF] Giorgetti | Semiclassical field method for the equilibrium bose gas and application to thermal vortices in two dimensions[END_REF][START_REF] Prokof | Two-dimensional weakly interacting bose gas in the fluctuation region[END_REF][START_REF] Holzmann | Superfluid transition of homogeneous and trapped two-dimensional bose gases[END_REF]. Au-delà du régime de faibles interactions, la théorie de champs classiques reste capable de décrire des comportements universels autour d'une transition continue, un phénomène très connu des théories critiques [START_REF] Zinn-Justin | Quantum Field Theory and Critical Phenomena[END_REF]. Au cours de cette thèse et en se basant sur des calculs de champs classiques, nous avons établi pour la première fois le diagramme de phase de bosons interagissant avec couplage spin-orbite.
De manière schématique, la description type champs classiques apparaît en remplaçant l'occupation d'un état quantique donnée par la distribution de Bose (exp[ε/k B T ] -1) avec une énergie ε, par l'occupation d'un champ classique k B T / tout en négligeant les commutateurs des champs quantiques. Ces deux approximations deviennent exactes pour les états de basse énergie dans la limite T → 0 et elles fournissent le point de départ d'une description quantitativement correcte d'un gaz de Bose faiblement interagissant.
Pourtant, à cause des interactions, il n'est pas possible de diagonaliser explicitement l'Hamiltonien même après avoir appliqué l'approximation type champs classiques. Toutefois, le calcul des observables statiques correspond directement à un calcul de distribution de probablilités, bien connu en physique statistique classique. Au cours de cette étude et en se basant sur ce constat, nous avons utilisé les méthodes Monte Carlo pour échantillonner numériquement la distribution des champs classiques.
Le poids de chaque configuration du champ classique est donné par la distribution de Boltzmann en fonction de son énergie. Cependant, deux problèmes techniques CHAPTER 2. METHODS apparaissent en préambule de notre calcul. Premièrement, afin de correctement définir l'énergie d'une théorie de champs classiques, il faut régulariser son comportement à hautes énergies ("divergences ultraviolettes"). Au travers du calcul numérique Monte Carlo, cet obstacle est naturellement résolu par la discrétisation des champs sur réseau. Deuxièmement, le couplage spin-orbite introduit des termes imaginaires dans l'action des deux champs complexes représentant les deux états de spin. Nous devons donc montrer que l'on obtient réellement une distribution de probabilité, c'est à dire que l'action discrétisée est maintenue réelle quelque soit la configuration des champs. Ensuite seulement, il est possible de correctement échantillonner les configurations des champs classiques en proposant des algorithmes Monte Carlo efficaces qui assurent l'ergodicité du système.
Enfin, nous avons corrigé les densités brutes provenant des calculs de champs classiques qui dépendent de la discrétisation sur réseau en tenant compte des comportements à hautes énergie des modes distribués selon la statistique de Bose. Cependant, ces états étant peu peuplés, ils sont peu affectés par les interactions et l'approximation type champ moyen en fournit donc une description très pertinente. En utilisant cette correction, il est en principe possible de corriger chaque observable à partir des calculs de champs classiques et de fournir des prédictions quantitatives pour nos systèmes alors directement comparables aux expériences [START_REF] Baym | The transition temperature of the dilute interacting bose gas[END_REF].
Introduction
In this chapter, we explore the phase diagram of a two dimensional SOCed Bose gas based on the methods presented in the last chapter. Here, we will first establish the presence or absence of a finite temperature phase transition in the interacting system and provide quantitative predictions for the phase diagram. In the next chapter, we will study and characterize the (quasi-) ordering of the different phases at low temperature.
According the Mermin-Wagner theorem [START_REF] Hohenberg | Existence of long-range order in one and two dimensions[END_REF][START_REF] Mermin | Absence of ferromagnetism or antiferromagnetism in one-or two-dimensional isotropic heisenberg models[END_REF][START_REF] Coleman | There are no goldstone bosons in two dimensions[END_REF], no long range order can occur at finite temperatures. Still, in the absence of SOC, a Berenzinskii-Kosterlitz-Thouless phase (BKT) transition from the normal to a superfluid phase occurs for interacting Bose gases [START_REF] Holzmann | Superfluid transition of homogeneous and trapped two-dimensional bose gases[END_REF] where the low temperature superfluid phase is characterized by algebraic (quasi-long range) order.
Our numerical studies clearly establish that the weakly interacting Bose gas still undergoes a BKT phase transition for anisotropic SOC, η soc < 1. In the low temperature phase, the condensate fraction decays algebraically with system size and the gas becomes superfluid. However, for isotropic SOC, η soc = 1, our calculations show a cross-over behavior at finite systems, with strong evidence for the absence of a finite temperature phase transition in the thermodynamic limit.
Bose gas without SOC: Berezinskii, Kosterlitz and
Thouless phase transition
Let us first briefly review the Berezinskii, Kostelitz and Thouless transition of the interacting two-dimensional Bose gas without SOC described by the Hamiltonian
H = d 2 r Ψ † (r ) - ħ 2 2m ∆ + g 0 2 Ψ † (r )Ψ(r ) Ψ(r ) (3.1)
where g 0 is the interaction strength. At low temperatures, density fluctuations are strongly suppressed. Keeping only phase fluctuations, Ψ(r ) = ρe i θ(r ) , the Hamiltonian can be reduced to the so-called XY model [START_REF] Fröhlich | The kosterlitz-thouless transition in twodimensional abelian spin systems and the coulomb gas[END_REF].
In the XY-model, a Berenzinskii-Kosterlitz-Thouless transition takes place [START_REF] Berezinski | Destruction of Long-range Order in One-dimensional and Twodimensional Systems Possessing a Continuous Symmetry Group. II. Quantum Systems[END_REF][START_REF] Kosterlitz | Ordering, metastability and phase transitions in two-dimensional systems[END_REF][START_REF] Kosterlitz | The critical properties of the two-dimensional xy model[END_REF] where the superfluid density, n S , jumps from n s = 0 at high temperatures to n S = 2mk B T /πħ 2 (or n S λ 2 = 4) at the transition temperature T C . Below T C , the firstorder correlation function, g 1 (r ) = 〈Ψ † (r )Ψ(0)〉, algebraically decays, g 1 (r ) ∼ r -η(T ) characterized by a temperature dependent exponent η(T
) = 1/n S λ 2 . At T C , η(T C ) = 1/4
, and the exponent decreases with decreasing temperature. Thus, the algebraic decay is quite slow, so that for any finite size system we can expect a significant condensate fraction
n 0 = N 0 N = 1 N d 2 r g 1 (r ) ∼ L 2-η L 2 ∼ N -η/2 (3.2)
Although the condensate fraction vanishes in the thermodynamic limit, numerical simulations as well as many experimental systems will be affected by strong finite size effects. Experiments on ultra cold atomic gases typically involve mesoscopic system sizes, e.g. N ∼ 10 4-8 , where the condensate fraction at T C still plays a dominating role, n 0 ∼ 10 -1 .
As presented in Chapter I, in a SOCed two dimensional system the dispersion relation of the single particle is very different from Eq. (3.1), and quasi-long range order may be destroyed in certain parameter regimes [START_REF] He | Instability of a two-dimensional bose-einstein condensate with rashba spin-orbit coupling at finite temperature[END_REF].
What is the effect of the SOC on the BKT scenario ?
In the following, we study the condensate and superfluid fraction for different SOC anisotropy, η soc = 0, 0.5, 0.9, and 1, for signatures of a possible finite temperature phase transition.
Bose gas with SOC: condensate fraction
For a single component Bose gas without SOC, the single particle density matrix depends only on distance and decays algebraically in the low temperature superfluid phase [START_REF] Berezinski | Destruction of Long-range Order in One-dimensional and Twodimensional Systems Possessing a Continuous Symmetry Group. II. Quantum Systems[END_REF][START_REF] Kosterlitz | Ordering, metastability and phase transitions in two-dimensional systems[END_REF][START_REF] Kosterlitz | The critical properties of the two-dimensional xy model[END_REF]. The same algrebraic decay propagates to the condensate fraction.
In the case of a two-component Bose gas with SOC, the single particle density matrix further depends on the spin-projection, G σ,σ (r, r ). Quasi-long range order occurs in the distribution of the modes and the single particle density matrix gets dominated by one or few highly occupied modes. We therefore project G σ,σ (r, r ) over all degenerate PW mean-field ground states, e.g. we sum over all the minima of the single particle spectrum
n κ 0 = k=(±κ,0) σσ d rd r L 2 ψ M F kσ (r)G σ,σ (r, r )ψ M F * kσ (r ) (3.3)
to estimate the condensate fraction n κ 0 /n where n = σ G σσ (r, r) is the total particle density. It is important to notice that n κ 0 is a direct indicator for a phase transition. However, it does not fully describe the character of the low temperature phase, in particular it does not distinguish between PW or SP order.
In figure 3.1 we show n κ 0 as a function of density for a finite system of extension L/a = 80 where a = ħ/ mk B T is the minimal distance on our lattice. The condensate fraction grows rapidly around a cross-over density which decreases from η soc = 0 to η soc = 1. However, no differences are visible changing the sign of our small anisotropic interaction from negative to positive g . We therefore expect that the cross-over/transition temperature is a smooth, continuous function of g around g = 0.
In two dimensional systems of infinite size, the condensate density is expected to vanish at any finite temperature. However, as explained in the previous section, huge finite size effects are expected. In order to determine a possible sharp phase transition in the thermodynamic limit, we have to determine the behavior of the condensate fraction increasing the system size. The occurrence of a BKT phase then shows up in the algebraic scaling of the condensate fraction with system size, n κ 0 /n ∼ L -η(T ) . ] -1 for a finite system of length L/a = 80. The cross-over from normal to condensed phase slightly lowers with increasing SOC anisotropy, η soc . Although the PW/SOC character of the condensate depends essentially on the sign of the anisotropic interaction mg = ±π/100, differences in n κ 0 between g ≥ 0 and g < 0 for equal SOC are beyond our resolution. The colored zones indicate our estimates for the Kosterlitz-Thouless transition in the thermodynamic limit from finite-size-scaling of the condensate fraction described in the text. Condensate fraction, n κ 0 /n, as a function of inverse volume in presence of the SOC but in absence of interactions. As we will precisely detail in the paragraph dedicated to the isotropic SOC, we notice that the absolute value of the condensed density depends on the sum introduced in Eq. 3.4, describing the number of degenerate points corresponding to the fundamental state |k| = κ. The condensate density decreases with the volume for any density, no transition occurs.
Non-interacting system
Let us start discussing finite size effects for non-interacting bosons. In figure 3.2 we show the behavior of the condensate fraction in the non-interacting case. The exact expressions of this limiting case not only provide a benchmark for our simulation, but also serve to illustrate the finite size scaling of the condensate in the normal phase. We then obtain
$$n = \frac{k_B T}{L^2} \sum_{i,j} \frac{1}{\frac{k_{ij,U}^2}{2} - \mu + \frac{\kappa^2}{2} + \kappa\sqrt{k_{x_i,U}^2 + \eta_{soc}^2\, k_{y_j,U}^2}} \;+\; \frac{k_B T}{L^2} \sum_{i,j} \frac{1}{\frac{k_{ij,U}^2}{2} - \mu + \frac{\kappa^2}{2} - \kappa\sqrt{k_{x_i,U}^2 + \eta_{soc}^2\, k_{y_j,U}^2}}$$
$$n_{\kappa_0} = \sum_{|\mathbf{k}|=\kappa} \frac{k_B T}{-\mu\, L^2} \tag{3.4}$$
In the expression of the condensate density, we explicitly sum over all degenerate ground states with |k| = κ. It is important to keep in mind that the number of degenerate minima depends on the isotropy of the SOC. For η_soc < 1, we have two degenerate minima, whereas for isotropic SOC, η_soc = 1, we have an infinite degeneracy, a circle in momentum space in the thermodynamic limit. Finite size effects introduce qualitative changes for η_soc = 1, where the symmetry of the underlying lattice strongly reduces the degeneracy to a finite number. These effects are present in both figures 3.4 and 3.5.
Nevertheless, in both cases, we see that the condensate density decreases with the system size as
$$\frac{n_{\kappa_0}}{n} \propto \frac{1}{L^2}\,.$$
The exponent of this algebraic decay simply reflects that, for non-interacting bosons, the condensate fraction decays as 1/L² at all densities, so that no phase transition occurs.
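For concreteness, the two sums of Eq. (3.4) can be evaluated directly on the momentum grid, which also reproduces the 1/L² decay of the non-interacting condensate fraction. The sketch below uses units with ħ = m = a = 1; the values of μ, κ and η_soc in the usage loop are assumptions chosen only for the example.

```python
import numpy as np

def ideal_gas_densities(L, mu, kappa, eta_soc, kT=1.0):
    """Classical-field sums of Eq. (3.4) for non-interacting SOCed bosons.

    Units hbar = m = a = 1, lattice momenta k = 2*pi*(i, j)/L, mu < 0 measured
    from the bottom of the lower branch.  Returns the total density n and the
    condensate density n_k0 (two minima at (+-kappa, 0) for eta_soc < 1).
    """
    idx = np.fft.fftfreq(L, d=1.0 / L)                      # integer momentum indices
    kx, ky = np.meshgrid(2 * np.pi * idx / L, 2 * np.pi * idx / L, indexing="ij")
    rashba = kappa * np.sqrt(kx**2 + eta_soc**2 * ky**2)
    eps_minus = 0.5 * (kx**2 + ky**2) + 0.5 * kappa**2 - rashba   # lower helicity branch
    eps_plus  = 0.5 * (kx**2 + ky**2) + 0.5 * kappa**2 + rashba   # upper helicity branch

    n = (kT / L**2) * (np.sum(1.0 / (eps_minus - mu)) + np.sum(1.0 / (eps_plus - mu)))
    n_k0 = 2 * kT / (L**2 * abs(mu))                        # occupation of the two minima
    return n, n_k0

# the condensate fraction decays as 1/L^2 at fixed mu, i.e. no transition
for L in (40, 80, 160):
    n, n_k0 = ideal_gas_densities(L, mu=-0.05, kappa=2 * np.pi / 40, eta_soc=0.5)
    print(L, n_k0 / n)
```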
Interacting system with anisotropic SOC
We now turn to the interacting systems, summarizing our results of the classical field Monte Carlo calculations. Whereas in the high temperature, normal phase the condensate density decreases with the volume, η = 2, the exponent changes rapidly around the temperature where condensation occurs in the finite system. At lower temperatures, the exponent almost vanishes. This behavior is consistent with a Berezinskii-Kosterlitz-Thouless transition [START_REF] Berezinski | Destruction of Long-range Order in One-dimensional and Twodimensional Systems Possessing a Continuous Symmetry Group. II. Quantum Systems[END_REF][START_REF] Kosterlitz | Ordering, metastability and phase transitions in two-dimensional systems[END_REF][START_REF] Kosterlitz | The critical properties of the two-dimensional xy model[END_REF]. Assuming the transition to be within the Kosterlitz-Thouless class, the critical temperature can be estimated from the point where the scaling exponent reaches its universal value, η(T_c) = 1/4. Our simulations indicate that for anisotropic SOC, η_soc < 1, the Berezinskii-Kosterlitz-Thouless phase occurs at finite temperature, independent of the sign of the anisotropy g of the interaction (see figure 4.1 for η_soc = 0). Further, the limit of isotropic interaction, g = 0, is approached smoothly from both sides, g > 0 and g < 0, so that the critical temperature is continuous around g = 0. This latter behavior is in contradiction with the discontinuity predicted in reference [START_REF] Chao | Paired superfluidity and fractionalized vortices in systems of spin-orbit coupled bosons[END_REF].
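A minimal sketch of how the scaling exponent η(T) is read off from condensate fractions measured at a few system sizes is given below; the data points in the usage lines are hypothetical and only meant to illustrate the two regimes, η ≈ 2 in the normal phase and η < 1/4 below the transition.

```python
import numpy as np

def scaling_exponent(sizes, fractions):
    """Extract eta from n_k0/n ~ L**(-eta) by a least-squares fit in log-log scale."""
    logL = np.log(np.asarray(sizes, dtype=float))
    logf = np.log(np.asarray(fractions, dtype=float))
    slope, _ = np.polyfit(logL, logf, 1)
    return -slope

# hypothetical data for L/a = 40, 80, 120
print(scaling_exponent([40, 80, 120], [2.0e-3, 5.1e-4, 2.3e-4]))   # ~ 2   : normal phase
print(scaling_exponent([40, 80, 120], [0.42, 0.39, 0.375]))        # < 1/4 : BKT phase
```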
Interacting system with isotropic SOC
For isotropic SOC, η_soc = 1, the system behaves qualitatively differently. As shown in figure 3.3 for g < 0, we do not observe the onset of quasi-long range order over the whole density regime and system sizes we considered.
For g > 0, we observe a cross-over similar to η_soc < 1, but this time the onset of algebraic order strongly depends on the number of degenerate mean-field ground states. For our finite simulation box, only 4 or 8 minima are strictly degenerate for the system sizes we considered. The circular degeneracy only occurs after performing the thermodynamic limit. As shown in figure 3.5, the behavior of the condensate fraction is qualitatively and quantitatively affected by the number of degenerate states. In particular, the onset of algebraic order is shifted towards considerably higher densities, i.e. lower temperatures, when increasing the degeneracy from 4 to 8 degenerate modes. For η_soc = 1 and infinite system size, the transition will therefore be shifted to zero temperature and no finite temperature transition with algebraic order in the single particle channel should occur.
It is important to point out that the BKT transition is absent within the classical field calculation. Therefore, the transition is suppressed by purely classical fluctuations, in strong contrast to the prediction of reference [START_REF] Liao | Spin-orbitcoupled bose gases at finite temperatures[END_REF], i.e. quantum fluctuations do not play an essential role.
Let us illustrate in more detail the degeneracy of the finite size simulation for isotropic SOC. In our simulations, we have chosen the value of the SOC strength κ = 2π/40, commensurate with the lattice momenta k_i = 2πi/L for system sizes L/a = 40, 80, 120, where we have four minima at k = (±κ, 0) and k = (0, ±κ). Instead, for L/a = 81, 144, we have eight minima, as illustrated in the counting sketch below. Notice that for g < 0, where mean field predicts PW, we have four or eight possibilities for the direction. Instead, for g > 0, the mean field ground state is a superposition of two opposite PW phases, so that we only have half of the possibilities for the unsigned momentum direction, two or four in our case. This simple picture may already explain the qualitatively different behavior of the condensate fraction depending on the sign g of the interparticle interaction observed in our simulations.
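The counting of strictly degenerate lattice minima can be made explicit with the short sketch below, which scans the single-particle dispersion on the L × L momentum grid; the tolerance and the example pairs (L, m) defining κ = 2πm/L are illustrative assumptions.

```python
import numpy as np

def degenerate_minima(L, m, eta_soc=1.0, tol=1e-9):
    """Count the lattice momenta k = 2*pi*(i, j)/L minimising the dispersion
    eps(k) = k^2/2 + kappa^2/2 - kappa*sqrt(kx^2 + eta_soc^2*ky^2), kappa = 2*pi*m/L."""
    idx = np.arange(-L // 2, L // 2)
    kx, ky = np.meshgrid(2 * np.pi * idx / L, 2 * np.pi * idx / L, indexing="ij")
    kappa = 2 * np.pi * m / L
    eps = 0.5 * (kx**2 + ky**2) + 0.5 * kappa**2 \
          - kappa * np.sqrt(kx**2 + eta_soc**2 * ky**2)
    return int(np.sum(eps < eps.min() + tol))

# for these commensurate sizes the ring collapses onto the four points (+-kappa, 0), (0, +-kappa)
for L, m in [(40, 1), (80, 2), (120, 3)]:
    print(L, degenerate_minima(L, m))
```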
Intermezzo: XY vs Heisenberg model
In the thermodynamic limit, for η_soc = 0 and isotropic interaction, we can eliminate the SOC via a gauge transformation, and we obtain a Bose gas with two internal spin components and isotropic interaction. This model is equivalent to a field theory with N = 4 internal components. Only the model with N = 2 maps to the XY model giving rise to a BKT transition. For N > 2, a Kosterlitz-Thouless phase transition is absent [START_REF] Polyakov | Interaction of goldstone particles in two dimensions. applications to ferromagnets and massive yang-mills fields[END_REF][START_REF] Brézin | Spontaneous breakdown of continuous symmetries near two dimensions[END_REF], in accordance with the absence of a phase transition in the 2D Heisenberg model.
However, in the case of η_soc = 0, the thermodynamic limit is singular. For any finite system, the above argument only applies to situations where κ is commensurate with the boundary conditions. Here, we address the limit η_soc → 0 continuously connected to non-vanishing η_soc > 0, which corresponds to situations with non-commensurate values of κ for any finite system, so that the BKT transition survives even in the limit of infinite system sizes. The situation η_soc = 0 is therefore singular.
Superfluidity
In systems without SOC, the quasi-long range order in the low temperature phase also implies superfluidity, and one of the most striking predictions of the Kosterlitz-Thouless transition is the occurrence of a universal jump of the superfluid density at the critical temperature [START_REF] Nelson | Universal jump in the superfluid density of two-dimensional superfluids[END_REF][START_REF] Kosterlitz | The critical properties of the two-dimensional xy model[END_REF][START_REF] Agnolet | Kosterlitz-thouless transition in helium films[END_REF]. In SOC systems, Galilean invariance is broken, with important consequences for the superfluid phase. In such systems peculiar features have been predicted, like the appearance of spatially anisotropic superfluidity, a BEC with zero superfluid fraction at zero temperature [START_REF] Stringari | Diffused vorticity and moment of inertia of a spin-orbit coupled bose-einstein condensate[END_REF], and a critical velocity which is not uniquely defined [START_REF] Zhu | Exotic superfluidity in spin-orbit coupled bose-einstein condensates[END_REF].
In thermal equilibrium, the superfluid mass density ρ_S can be directly related to the phase stiffness, ρ_S = ∂²F(θ)/∂θ², where F(θ) is the free-energy density in which the momentum operator p is replaced by p − θ in the Hamiltonian [START_REF] Baym | The microscopic description of superfluidity[END_REF][START_REF] Holzmann | Condensate superfluidity and infrared structure of the single-particle green's function: The josephson relation[END_REF][START_REF] Josephson | Relation between the superfluid density and order parameter for superfluid he near tc[END_REF][START_REF] Pollock | Path-integral computation of superfluid densities[END_REF].
We have therefore further calculated the superfluid and normal mass densities, ρ_n = mn − ρ_s, from the phase stiffness. For η_soc = 0 and isotropic interactions, we have
$$\rho_n = \frac{1}{k_B T\, L^2} \left\langle \left[ P^{tot}_x + \hbar\kappa\, S^{tot}_x \right]^2 \right\rangle \tag{3.5}$$
where P^tot is the total momentum and S^tot = ∫ dr S(r) the total magnetization of the system. Deviations from a Boltzmann distribution of [P^tot_x + ħκ S^tot_x]²/(2mnL²) are directly connected to the quantization of the center of mass motion in the x direction. For general η_soc > 0 or anisotropic interactions, g ≠ 0, quantum effects may modify superfluid properties [START_REF] Zhang | Superfluid density of a spin-orbit-coupled bose gas[END_REF], but we can still use Eq. (3.5) to study the universal behavior of the normal density around a superfluid phase transition.
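The estimator of Eq. (3.5) is accumulated from the sampled configurations; a minimal sketch, with an illustrative function signature rather than the code actually used in the simulations, could look as follows.

```python
import numpy as np

def normal_fraction(P_x, S_x, kappa, n, L, kT=1.0, m=1.0, hbar=1.0):
    """Normal fraction rho_n / (m n) from the estimator of Eq. (3.5).

    P_x, S_x : arrays of the total momentum component P^tot_x and of the total
               magnetisation S^tot_x, one entry per Monte Carlo configuration.
    """
    q = np.asarray(P_x) + hbar * kappa * np.asarray(S_x)
    rho_n = np.mean(q**2) / (kT * L**2)
    return rho_n / (m * n)

# usage on sample arrays accumulated during a run (hypothetical values):
# fn = normal_fraction(P_x_samples, S_x_samples, kappa=2*np.pi/40, n=2.5, L=80)
# the superfluid fraction is then 1 - fn
```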
Our results for the normal/superfluid density (see inset of figure 3.1 for η_soc = 0.5, g < 0) confirm the conclusions drawn above from the finite-size analysis of the condensate fraction. Consistent with the prediction of Berezinskii, Kosterlitz and Thouless, the low temperature, algebraically ordered phase is superfluid for η_soc < 1. The transition temperature is roughly independent of the sign of g and decreases with increasing η_soc. It vanishes for isotropic SOC with increasing degeneracy. This absence of a transition for isotropic SOC is consistent with recent hydrodynamic results predicting the appearance of rigid flow at zero temperature in three spatial dimensions [START_REF] Stringari | Diffused vorticity and moment of inertia of a spin-orbit coupled bose-einstein condensate[END_REF].
Conclusion and phase diagram
In conclusion, we have drawn the finite temperature phase diagram of a two dimensional SOCed Bose gas. We have shown the signature of a KT transition in the case of η_soc < 1, with the presence of superfluidity in the low temperature phase. By scaling the condensate fraction with the system size we have predicted the critical densities, corrected by the matching presented in Chapter II. In the particular case of a pure Rashba SOC, η_soc = 1, we have shown that a crossover occurs for finite systems at similar phase-space densities, but no superfluid transition is expected in the thermodynamic limit. Beyond mean field, the existence and appearance of such exotic phases are still unclear and open questions remain. As an example, reference [START_REF] Gopalakrishnan | Universal phase structure of dilute bose gases with rashba spin-orbit coupling[END_REF] proposes a phase diagram based on qualitative arguments in the very dilute limit, which does not depend on the sign of the anisotropy g, in contrast to the mean field predictions. In the following, we will investigate in detail the character of the low temperature phases observed in our classical field simulations.
In particular, we will study the magnetic ordering of the atoms/spins at low temperature and link it to the BKT phase transition studied in Chapter III. We show that in the case of an anisotropy g ≠ 0 the spin correlations exhibit quasi-long-range order induced by the KT transition, in contrast to the prediction of long range order from reference [START_REF] Su | Hidden long-range order in a spin-orbit-coupled twodimensional bose gas[END_REF]. We therefore find a phase diagram in strong connection to the Kosterlitz-Thouless transition. We also investigated the predictions of a zero momentum transition by [START_REF] Chen | Quantum and thermal fluctuations in a raman spin-orbit-coupled bose gas[END_REF][START_REF] Yu | Ground-state phase diagram and critical temperature of twocomponent bose gases with rashba spin-orbit coupling[END_REF], and of the appearance of boson pairs by references [START_REF] Gopalakrishnan | Universal phase structure of dilute bose gases with rashba spin-orbit coupling[END_REF][START_REF] Chao | Paired superfluidity and fractionalized vortices in systems of spin-orbit coupled bosons[END_REF], but we did not find any indications of such exotic phases and behavior.
In the case of isotropic interactions, g = 0, mean field calculations do not select a unique ground state: SP and PW states remain degenerate. In this case, one may expect that thermal and quantum fluctuations break this degeneracy and select a unique ground state. The classical field approximation allows us to address the question of thermal fluctuations in a direct and explicit way. Here, we show that the system undergoes a KT transition without selecting a unique ground state. Instead, our calculations predict a fractionalization of the condensate where SP and PW remain degenerate.
Parameter values in the simulations
In order to study the competition between SOC and interparticle interaction, we have fixed Σ_{σσ'=↑,↓} m g_{σσ'}/4 = ħκ/√(m k_B T) = π/20, with m g = 0 to address isotropic interactions and m g = ±π/100 to slightly break the spin isotropy of the scattering particles.
Reduced single body density matrix
In the previous chapter, we have observed a Kosterlitz-Thouless transition for η_soc < 1. The low temperature phase was characterized by the occurrence of a quasi-condensate, a large, though non-extensive, occupation of the two degenerate single particle ground states. More precise information on the character of the quasi-condensate can be expected from the full calculation of the reduced single-particle density matrix given by
$$G_{\sigma,\sigma'}(\mathbf{r},\mathbf{r}') = \langle \hat{\Psi}^{\dagger}_{\sigma'}(\mathbf{r}')\, \hat{\Psi}_{\sigma}(\mathbf{r}) \rangle \tag{4.1}$$
In the high temperature regime, the one-body density matrix decays to zero over a distance given by the thermal de Broglie length λ_T. At the transition, quasi-long range order appears, characterized by an algebraic decay. In momentum space, the reduced one-body density matrix reads
$$G_{\sigma,\sigma'}(\mathbf{k},\mathbf{k}') = \langle \hat{\Phi}^{\dagger}_{\sigma'\mathbf{k}'}\, \hat{\Phi}_{\sigma\mathbf{k}} \rangle \tag{4.2}$$
and the distinction between quasi and true long range order is directly connected to the occurrence of a quasi-condensate or of a full BEC in momentum space.
BEC
The momentum distribution of a system exhibiting BEC shows a singular behavior at the minimum of the energy spectrum. In the case of a three dimensional ideal Bose gas without SOC, only the zero momentum state is macroscopically occupied. In real space, the one-body density matrix at large distances saturates to a finite value set by the condensate fraction of the gas. The system is then said to show off-diagonal long-range order, i.e. finite values of G(r, r') for r ≠ r'. The criterion for BEC of a macroscopic occupation given by Penrose and Onsager [START_REF] Penrose | Bose-einstein condensation and liquid helium[END_REF] is thus equivalent to the existence of long range order.
In the case of ideal SOCed Bosons in three dimensions, the ground state is degenerate leading to a macroscopic occupation of all modes with |k| = κ. The Fourier transform of these modes will lead to oscillations of the one-body density matrix in real space.
Although possible, the observation of true-long range order gets more involved in the real space density matrix than in Fourier space. In general, BEC corresponds to a macroscopic occupation of one (or more) eigenmodes of the single-particle density matrix. In the case of SOC, these eigenmodes in general couple spin and momentum degrees of freedom.
BKT phase
In the case of a quasi-condensate below the BKT transition in two dimensions, the single particle density matrix in real space is algebraically decaying, with additional oscillations for systems with SOC corresponding to peaks of the momentum distribution at non-vanishing momenta. The spin-structure of the quasi-condensate can be obtained from the dominating modes after diagonalization of the single particle density matrix.
Since the occupation number is strongly peaked at k = ±κ with |κ| = κ, we have calculated the reduced single particle density matrix only for momenta at the minimum of the single particle energy spectrum, G_{σ,σ'}(κ, κ'). For η_soc < 1, we only need to consider (±κ, 0). The resulting 4 × 4 matrix can be calculated using classical field Monte Carlo.
Explicitly, using the definition (1.11), this reduced density matrix reads 〈M(κ)〉 with
$$M(\kappa) = \begin{pmatrix} u^{*}_{\kappa} \\ d^{*}_{\kappa} \\ u^{*}_{-\kappa} \\ d^{*}_{-\kappa} \end{pmatrix} \otimes \begin{pmatrix} u_{\kappa} & d_{\kappa} & u_{-\kappa} & d_{-\kappa} \end{pmatrix} \tag{4.3}$$
Population of momenta ±κ. In the last chapter we have shown that for η_soc < 1 the system undergoes a BKT transition. There are only two minima in the single particle energy spectrum, ±κ = (±κ, 0), and their population n(±κ) depends on the sign of the interaction g. For the study of the matrix M(κ) the two directions ±κ are of course equivalent.
For isotropic SOC, η_soc = 1, the minimum of the energy spectrum is a full ring of radius |k| = κ in the thermodynamic limit. However, for our numerical calculation on a finite system, we have only a small number of degenerate single particle ground states, typically four or eight. We numerically observe that, for finite simulation time, only one direction κ is selected. The two corresponding momenta ±κ are almost macroscopically populated, whereas the other momenta on the ring have a negligible occupation at low temperature, when the phase space density is large, nλ² ≳ 1.
Averaging over different initial conditions reestablishes the symmetry between all directions κ.
Link between low temperature order and the BKT transition for η_soc < 1 and g ≠ 0. In the last chapter, we have determined the BKT transition from the appearance of a quasi-condensate density, averaged over the two momentum states which minimize the single particle energy. From the diagonalization of the single particle density matrix, we obtain additional information.
Above the critical temperature, we obtain two degenerate modes within our numerical accuracy: both minima in momentum space are equally populated within a single Monte Carlo run. Below the critical temperature, the system spontaneously chooses one mode which dominates. The two minima are only equivalent after ensemble averaging over different Monte Carlo calculations. As shown in figure 4.1, for g ≠ 0, in the BKT phase, the single particle density matrix is dominated by a single, highly occupied mode.
PW and SP phase
Since the low temperature phase of the system is dominated by a single mode of the reduced density matrix, the spin-structure of the corresponding eigenstate characterizes the spin-structure of the BKT phase, e.g. an algebraically decaying PW or SP quasi-condensate.
Diagonalizing the matrix M (k), we numerically obtain the eigenvectors that describe the appearing order. However, in order to interpret better these results, let us first analyze the structure of the matrix assuming PW or SP order.
We base our analysis on the mean-field ground state introduced in Chapter I, Eq. (1.55):
$$|\Phi_{-}(\phi)\rangle = \frac{\left[\cos(\phi)\,\hat{\Phi}^{\dagger}_{-,\kappa} + \sin(\phi)\,\hat{\Phi}^{\dagger}_{-,-\kappa}\right]^{N}}{\sqrt{N!}}\,|0\rangle \tag{4.4}$$
The resulting structure of the matrix M for pure PW (φ = 0) or SP (φ = π/4) states is then
$$\text{Plane Wave}:\ \phi = 0 \qquad M(\kappa) \propto \begin{pmatrix} 1 \\ -1 \\ 0 \\ 0 \end{pmatrix} \otimes \begin{pmatrix} 1 & -1 & 0 & 0 \end{pmatrix} \tag{4.5}$$
$$\text{Stripe phase}:\ \phi = \frac{\pi}{4} \qquad M(\kappa) \propto \begin{pmatrix} 1 \\ -1 \\ 1 \\ 1 \end{pmatrix} \otimes \begin{pmatrix} 1 & -1 & 1 & 1 \end{pmatrix} \tag{4.6}$$
We recover these features in our numerical calculations, from which we extracted the angle φ. At high temperature, where we have two degenerate modes, the angle φ is not well defined. Approaching the transition point, when one mode starts to dominate, the value of the angle becomes well defined. Its mean value depends on the sign of the anisotropy, g > 0 or g < 0 (SP and PW respectively), and fluctuations around it are strongly suppressed.
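A possible numerical implementation of this step is sketched below: the ensemble-averaged matrix of Eq. (4.3) is diagonalized and the angle φ is obtained from the weights of the dominant eigenvector on the two minima, to be compared with the templates of Eqs. (4.5) and (4.6). The helper name and the explicit Hermitization are illustrative choices.

```python
import numpy as np

def dominant_mode_angle(M):
    """Diagonalise the 4 x 4 reduced density matrix <M(kappa)> of Eq. (4.3).

    M is built on the basis (u_k, d_k, u_-k, d_-k).  Returns the eigenvalues in
    descending order and the angle phi of the dominant eigenvector: phi close to 0
    (or pi/2) signals PW order, phi close to pi/4 signals SP order.
    """
    M = np.asarray(M, dtype=complex)
    vals, vecs = np.linalg.eigh(0.5 * (M + M.conj().T))   # enforce Hermiticity
    order = np.argsort(vals)[::-1]
    vals, vecs = vals[order], vecs[:, order]
    v = vecs[:, 0]
    w_plus = np.sum(np.abs(v[:2])**2)      # weight on k = +kappa
    w_minus = np.sum(np.abs(v[2:])**2)     # weight on k = -kappa
    phi = np.arctan2(np.sqrt(w_minus), np.sqrt(w_plus))
    return vals, phi
```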
Our analysis of the eigenmodes and their occupations clearly connects the quasi-condensate structure with the spin ordering in the BKT phase, showing PW or SP order depending on the sign of g. Although there is clearly one dominating mode, its occupation slowly decays with system size. Therefore, also in spin space, there is no true but only quasi-long range order. In the next section, we will study the spin order in a more intuitive and experimentally more accessible way, calculating correlation functions of the local spin density.
Local Spin Density
The PW and SP character of the reduced one-body density matrix naturally propagates to various observables and higher-order correlation functions.
In the following we will analyze the spin-order in terms of the local spin density.
Spin Density. Since we labeled the two hyperfine states of the atoms as spin up and spin down, we can use the formalism of magnetism to study the ordered phases. For instance, counting the difference of the number of atoms in the two hyperfine states is equivalent to computing the magnetic moment along the z direction
$$\sigma_z(\mathbf{r}) \equiv \frac{\Psi^{\dagger}(\mathbf{r})\,\sigma_z\,\Psi(\mathbf{r})}{\Psi^{\dagger}(\mathbf{r})\,\Psi(\mathbf{r})} = \frac{|\psi_{\uparrow}(\mathbf{r})|^{2} - |\psi_{\downarrow}(\mathbf{r})|^{2}}{|\psi_{\uparrow}(\mathbf{r})|^{2} + |\psi_{\downarrow}(\mathbf{r})|^{2}} \tag{4.7}$$
The colors of figures 4.2, 4.3 and 4.4 represent the value of the local spin density σ_z(r) of the field at each point of space r. These quantities are not averages; they are obtained at a single step of the algorithm. For g > 0 we can clearly identify stripes at low temperature, whereas for g < 0 the density is constant in space.
Spin projections
We can then generalize the spin formalism to the other directions of the spin. Note that the gas is confined in two dimensions but the spin degree of freedom is three-dimensional. Then using the Pauli matrices,
$$\sigma_x = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \qquad \sigma_y = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix} \qquad \sigma_z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \tag{4.8}$$
we can then compute the local spin density of the field in the three directions
$$\sigma_x(\mathbf{r}) = \frac{\Psi^{\dagger}(\mathbf{r})\,\sigma_x\,\Psi(\mathbf{r})}{\Psi^{\dagger}(\mathbf{r})\,\Psi(\mathbf{r})} = \frac{\Re[\psi^{*}_{\uparrow}(\mathbf{r})\,\psi_{\downarrow}(\mathbf{r})]}{|\psi_{\uparrow}(\mathbf{r})|^{2} + |\psi_{\downarrow}(\mathbf{r})|^{2}} \tag{4.9}$$
$$\sigma_y(\mathbf{r}) = \frac{\Psi^{\dagger}(\mathbf{r})\,\sigma_y\,\Psi(\mathbf{r})}{\Psi^{\dagger}(\mathbf{r})\,\Psi(\mathbf{r})} = \frac{\Im[\psi^{*}_{\uparrow}(\mathbf{r})\,\psi_{\downarrow}(\mathbf{r})]}{|\psi_{\uparrow}(\mathbf{r})|^{2} + |\psi_{\downarrow}(\mathbf{r})|^{2}} \tag{4.10}$$
$$\sigma_z(\mathbf{r}) = \frac{\Psi^{\dagger}(\mathbf{r})\,\sigma_z\,\Psi(\mathbf{r})}{\Psi^{\dagger}(\mathbf{r})\,\Psi(\mathbf{r})} = \frac{|\psi_{\uparrow}(\mathbf{r})|^{2} - |\psi_{\downarrow}(\mathbf{r})|^{2}}{|\psi_{\uparrow}(\mathbf{r})|^{2} + |\psi_{\downarrow}(\mathbf{r})|^{2}} \tag{4.11}$$
and in this way obtain all the information about the spin structure of the field.
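For a classical field configuration stored as two complex arrays, the three local spin projections of Eqs. (4.9)-(4.11) can be evaluated directly, as in the following sketch (array names are illustrative and the local density is assumed to be non-zero at every site).

```python
import numpy as np

def local_spin_density(psi_up, psi_dn):
    """Local spin maps sigma_x(r), sigma_y(r), sigma_z(r) of Eqs. (4.9)-(4.11)."""
    n_up, n_dn = np.abs(psi_up)**2, np.abs(psi_dn)**2
    dens = n_up + n_dn
    cross = np.conj(psi_up) * psi_dn       # psi_up^*(r) psi_dn(r)
    sigma_x = np.real(cross) / dens
    sigma_y = np.imag(cross) / dens
    sigma_z = (n_up - n_dn) / dens
    return sigma_x, sigma_y, sigma_z
```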
Mean-field predictions. It is instructive to explicitly write down the mean-field spin density for the PW and SP states using the mean-field wave function
$$\psi^{MF}_{\kappa}(\mathbf{r}) = \begin{pmatrix} \psi^{MF}_{\uparrow,\kappa}(x,y) \\ \psi^{MF}_{\downarrow,\kappa}(x,y) \end{pmatrix} = \frac{1}{\sqrt{2}}\left[\cos(\phi)\,e^{i\kappa x}\begin{pmatrix} 1 \\ -1 \end{pmatrix} + \sin(\phi)\,e^{-i\kappa x}\begin{pmatrix} 1 \\ 1 \end{pmatrix}\right]$$
For g < 0 the mean-field ground state corresponds to φ = 0, a Plane Wave state, with
$$\sigma_x(\mathbf{r}) = 1 \qquad \sigma_y(\mathbf{r}) = 0 \qquad \sigma_z(\mathbf{r}) = 0 \tag{4.12}$$
All the spins are aligned and point in the direction κ = (κ, 0) of the minimum.
For g > 0, the minimum of the energy corresponds to φ = π/4, the Stripe Phase state, where we have
$$\sigma_x(\mathbf{r}) = 0 \qquad \sigma_y(\mathbf{r}) = \sin(2\kappa x) \qquad \sigma_z(\mathbf{r}) = \cos(2\kappa x) \tag{4.13}$$
Now the spin direction rotates around the x-axis with periodicity 2κ.
Numerical simulation. From figures 4.2 and 4.3 we observe that the spin density of our classical field simulation reflects the mean-field ground state at very low temperature. However, at slightly higher temperature, thermal excitations mask the state. Further, we also observe vortices due to thermal excitations, typical for BKT physics. Figure 4.4 also shows that few thermal excitations are enough to significantly modify the stripe order of the local spin density. In the following, we will use spin density correlations for a more quantitative study to characterize spin ordering.
Spin density correlation functions
In the last section, we have shown the local spin density of an instantaneous field configuration. For a more quantitative study, we calculated the spin-density correlation function averaged over many field configurations
$$M_{\alpha}(\mathbf{r}) = \langle \hat{S}_{\alpha}(\mathbf{r})\, \hat{S}_{\alpha}(0) \rangle \qquad \text{with} \qquad \hat{S}_{\alpha}(\mathbf{r}) = \hat{\Psi}^{\dagger}(\mathbf{r})\,\sigma_{\alpha}\,\hat{\Psi}(\mathbf{r}) \tag{4.14}$$
As shown in figure 4.5, the spin structure of this condensate mode is directly reflected in the spin correlation function M α (r).
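In practice, the translation-averaged correlation function of Eq. (4.14) can be accumulated efficiently with fast Fourier transforms, as in the sketch below; the helper name and the list-of-maps interface are illustrative assumptions.

```python
import numpy as np

def spin_correlation(spin_maps):
    """Translation-averaged M_alpha(r) = <S_alpha(r) S_alpha(0)> of Eq. (4.14).

    spin_maps : list of real (L, L) arrays S_alpha(r) = Psi^dag sigma_alpha Psi,
                one map per sampled field configuration.
    """
    acc = None
    for s in spin_maps:
        f = np.fft.fft2(s)
        corr = np.fft.ifft2(np.abs(f)**2).real / s.size   # periodic autocorrelation
        acc = corr if acc is None else acc + corr
    return acc / len(spin_maps)

# e.g. the profile M_y(x, 0), where stripe order shows up as oscillations at wavevector 2*kappa:
# My = spin_correlation(list_of_Sy_maps); profile = My[:, 0]
```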
Quasi-long range order. Quasi-long range stripe order is reflected in slowly decaying oscillations of period 2κ in M_y(x, 0). For g < 0, M_x(x, 0) develops quasi-long range order. In both cases, the exponent of the algebraic decay is given by the scaling exponent of n_κ0 and is compatible with the η(T) obtained from the superfluid density. Therefore, the quasi-long range spin order results from the spin structure of the underlying quasi-condensate.
Anisotropy
We also notice that the correlation functions, M x (x, 0) and M y (x, 0), remain short ranged in the SP and PW state, respectively. The system therefore exhibits a strong anisotropy between the direction κ and the one orthogonal to it. Reference [START_REF] Stringari | Diffused vorticity and moment of inertia of a spin-orbit coupled bose-einstein condensate[END_REF] addresses this feature, focusing in particular on the possibility of anisotropic superfluidity.
Isotropic interaction: Fragmented condensate
Let us now study the case of isotropic interparticle interaction, g = 0. As shown in Chapter I, mean field calculations do not select a unique ground state between the SP and PW states.
Since SP and PW degeneracy may only reflect the insensitivity of the mean field ansatz for the ground state, many studies focused on looking for the true absolute ground state [START_REF] Yu | Ground-state phase diagram and critical temperature of twocomponent bose gases with rashba spin-orbit coupling[END_REF][START_REF] Barnett | Order by disorder in spin-orbit-coupled bose-einstein condensates[END_REF][START_REF] Ozawa | Ground-state phases of ultracold bosons with rashba-dresselhaus spin-orbit coupling[END_REF][START_REF] Ozawa | Stability of ultracold atomic bose condensates with rashba spin-orbit coupling against quantum and thermal fluctuations[END_REF]. In these approaches the symmetry between the SP and PW phases is broken by different physical mechanisms, for example by introducing quantum fluctuations [START_REF] Barnett | Order by disorder in spin-orbit-coupled bose-einstein condensates[END_REF] or by renormalization procedures [START_REF] Ozawa | Ground-state phases of ultracold bosons with rashba-dresselhaus spin-orbit coupling[END_REF].
However, as shown in figure 4.6 for two fundamentally distinct cases, η_soc = 0 and η_soc = 1, in the limit of isotropic interaction, g = 0, we always obtain two highly occupied modes of the single particle density matrix, degenerate within our numerical precision. By analyzing the spin correlation functions as in the previous sections, we find that both M_x(x, 0) and M_y(x, 0) become quasi-long ranged and indicate simultaneous PW and SP characters.
Therefore, to our great surprise, PW and SP remain degenerate and robust against thermal, critical fluctuations. In the case of macroscopically occupied modes, this phenomenon corresponds to a fractionalized condensate [START_REF] Mueller | Fragmentation of bose-einstein condensates[END_REF]. Since we do not have true long range order, we observe, for the first time, a fractionalized quasi-condensate.
This unusual behavior indicates that, although two modes are extremely populated, the phase φ between them is not locked and the spin does not prefer any particular direction. However, the classical field description takes into account only thermal fluctuations. From the Bogoliubov approximation around the T = 0 mean-field ground states, we expect that quantum fluctuations lift the degeneracy and favor the PW character with decreasing temperature, without a further phase transition [START_REF] Barnett | Order by disorder in spin-orbit-coupled bose-einstein condensates[END_REF].
Low temperature wave-function
References [START_REF] Gopalakrishnan | Universal phase structure of dilute bose gases with rashba spin-orbit coupling[END_REF][START_REF] Chao | Paired superfluidity and fractionalized vortices in systems of spin-orbit coupled bosons[END_REF] predicted the occurrence of a paired condensate. Our classical field Monte Carlo calculations have not shown any evidence for the occurrence of this phase. However, it is not clear which observable would best reveal such a phase. Here, we present part of our analysis of the full field distribution.
Figure 4.8 shows the distribution of the field along the real (x-axis) and imaginary (y-axis) axes at high and low temperature. At a single Monte Carlo step, we plot the field Ψ_σ(r) at every point in space r for both σ = ↑ and σ = ↓.
At high temperature we recognize the Gaussian regime centered on 〈Ψ_σ(r)〉 = 0. At low temperature, we recover the mean-field predictions, in addition to a broadening due to thermal fluctuations.
Boson pairs
In this distribution, we can notice a clear difference between the SP and the other phases. Indeed, the average 〈Ψ²〉 is non-zero in the case of a SP wave-function and zero otherwise.
High temperature Gaussian regime
The average of the square of a Gaussian distributed field is zero.
Plane Wave
The average of a squared Plane Wave is also zero.
$$\left\langle \left(e^{i k x}\right)^{2} \right\rangle = \langle e^{i 2 k x} \rangle = 0 \tag{4.15}$$
Stripe Phase. The square of a Stripe Phase wave-function is not zero:
$$\left\langle \sin^{2}(2\kappa x) \right\rangle = 0.5 \tag{4.16}$$
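The distinction between the three cases can be checked numerically on mean-field-like configurations, as in the following sketch; the system size, the value of κ and the choice of averaging a single spin component are illustrative assumptions.

```python
import numpy as np

def pairing_amplitude(psi):
    """Modulus of the spatially averaged Psi(r)^2 for one spin component.

    A Gaussian (normal) field and a plane wave exp(i*kappa*x) give a value that
    vanishes with system size, while a stripe-like standing wave stays finite,
    cf. Eqs. (4.15)-(4.16).
    """
    return np.abs(np.mean(psi**2))

L, kappa = 80, 2 * np.pi / 40
x = np.arange(L)[:, None] + np.zeros((1, L))
print(pairing_amplitude(np.exp(1j * kappa * x)))                                  # plane wave   -> ~ 0
print(pairing_amplitude(np.cos(kappa * x) + 0j))                                  # stripe phase -> 0.5
print(pairing_amplitude(np.random.randn(L, L) + 1j * np.random.randn(L, L)))      # Gaussian     -> small
```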
The results discussed in the last chapter indicated that for isotropic SOC, η_soc = 1, no standard BKT transition occurs. In this particular region of the phase diagram and for g > 0, in the Stripe Phase, condensation of pairs of bosons was predicted by references [START_REF] Gopalakrishnan | Universal phase structure of dilute bose gases with rashba spin-orbit coupling[END_REF][START_REF] Chao | Paired superfluidity and fractionalized vortices in systems of spin-orbit coupled bosons[END_REF].
We analyzed how this pairing observable scales with the system size. Figure 4.10 shows that it decreases with the system size L², depending on the discrete number of degenerate minima, like the condensate fraction observable studied before. Therefore we do not find any indications of a phase transition to a pairing phase as proposed in reference [START_REF] Chao | Paired superfluidity and fractionalized vortices in systems of spin-orbit coupled bosons[END_REF]. However, we cannot exclude the possibility of pair superfluidity at considerably lower densities or much larger system sizes.
Conclusion and Perspectives

In this thesis we have determined the finite-temperature phase diagram of a two-dimensional interacting Bose gas with two hyperfine (pseudospin) states coupled via Rashba-Dresselhaus spin-orbit interaction, using classical field Monte Carlo calculations.

Our numerical studies clearly establish that the weakly interacting Bose gas undergoes a BKT phase transition for anisotropic SOC, η_soc < 1. In the low temperature phase, the condensate fraction decays algebraically with system size and the gas becomes superfluid. However, for isotropic SOC, η_soc = 1, our calculations show a cross-over behavior for finite systems, with strong evidence for the absence of a finite temperature phase transition in the thermodynamic limit.
We have further characterized the superfluid many body states for η_soc < 1 as a function of a vanishing or small spin anisotropy of the interparticle interaction, g = 2g↑↓ − g↑↑ − g↓↓, of positive or negative sign. In particular, we have shown that in the case of an anisotropy g ≠ 0 the spin correlations exhibit quasi-long-range order and that the magnetic ordering of the atoms/spins at low temperature is linked to the BKT phase transition. Our calculations confirm mean field predictions for the character of the quasi-condensate in the superfluid state, i.e. PW or SP order depending on the sign of g. For isotropic interactions, g = 0, we obtained a fractionalized quasi-condensate with two degenerate modes at the transition showing both PW and SP character.
Originally motivated by the mean field prediction of the degeneracy between the SP/PW states, the system of isotropically interacting bosons, g = 0, with Rashba spin-orbit coupling, η soc = 1, has attracted considerable attention [START_REF] Ozawa | Condensation transition of ultracold bose gases with rashba spin-orbit coupling[END_REF][START_REF] Barnett | Order by disorder in spin-orbit-coupled bose-einstein condensates[END_REF]. Fluctuations and correlations beyond mean field were expected to break this degeneracy. Using classical field Monte Carlo calculations, we directly addressed the role of thermal fluctuations. The stability of the SP/PW degeneracy leading to a fractionalized quasi-condensate in our calculations came out unexpectedly. However, within classical field theory, quantum effects due to non-vanishing commutators of the quantum fields are neglected.
Close to zero temperature, the Bogoliubov approach is suited for studying quantum fluctuations around the mean field state. Reference [START_REF] Barnett | Order by disorder in spin-orbit-coupled bose-einstein condensates[END_REF] shows that, within the Bogoliubov approximation, the three dimensional isotropic SOCed Bose gas condenses into a single-momentum state of the Rashba spectrum, thus resulting in order by disorder.
In two dimensions, thermal fluctuations destabilize the system at any finite temperature. Nevertheless, decreasing temperature, the analogous calculation of reference [START_REF] Barnett | Order by disorder in spin-orbit-coupled bose-einstein condensates[END_REF] predicts quantum fluctuations to lift the degeneracy and favor the PW character for η soc < 1 and g = 0. However, the exact transition from a fractionalized quasi-condensate at the critical temperature to the broken degeneracy at zero temperature is unclear and still an open question.
Addressing numerically the full quantum system is extremely challenging due to the presence of the SOC, which introduces a sign problem into all known quantum Monte Carlo algorithms. Similar to the fermionic sign problem, the error of such a calculation increases exponentially with system size and inverse temperature. Similarly, already for classical field calculations, SOC prevents the use of the Worm algorithm [START_REF] Prokof | Worm algorithms for classical statistical models[END_REF] to speed up our computation.
As an outlook, we want to include quantum fluctuations, as described in Bogoliubov theory, within our classical field approach, with the hope of quantitatively describing both thermal and quantum fluctuations for a weakly interacting SOCed Bose gas.
Figure 3.1: Condensate fraction n_κ0/n as a function of inverse phase space density [nλ²]⁻¹ for a finite system of length L/a = 80. The cross-over from normal to condensed phase slightly lowers with increasing SOC anisotropy, η_soc. Although the PW/SP character of the condensate depends essentially on the sign of the anisotropic interaction m g = ±π/100, differences in n_κ0 between g ≥ 0 and g < 0 for equal SOC are beyond our resolution. The colored zones indicate our estimates for the Kosterlitz-Thouless transition in the thermodynamic limit from the finite-size scaling of the condensate fraction described in the text.

Figure 3.2: Condensate fraction, n_κ0/n, as a function of inverse volume in the presence of the SOC but in the absence of interactions. As detailed in the paragraph dedicated to isotropic SOC, the absolute value of the condensate density depends on the sum introduced in Eq. (3.4), describing the number of degenerate points corresponding to the fundamental state |k| = κ. The condensate density decreases with the volume for any density; no transition occurs.

Figure 3.3: Condensate fraction, n_κ0/n, as a function of the inverse volume, L⁻², for anisotropic SOC bosons with η_soc = 0 at different phase space densities and anisotropic interaction, g > 0. We observe a transition between the high temperature regime, where the condensate density decreases with the volume, and the low temperature regime where the exponent depends on the density and eventually vanishes. The solid black line indicates the critical exponent η = 1/4.

Figure 3.4: Exponent η(T) as a function of the density. The exponent is obtained by scaling the condensate fraction from figure 3.3 with the system size, n_κ0/n ∼ L^(−η(T)). By scaling the condensate fraction for different SOC anisotropies η_soc we obtain the phase diagram shown in figure 3.7.

Figure 3.5: Condensate fraction, n_κ0/n, as a function of the inverse volume, L⁻², for isotropic SOC bosons with η_soc = 1 at different phase space densities and anisotropic interaction, g < 0. We do not observe any quasi-long range order.

Figure 3.6: Solid lines: condensate fraction, n_κ0/n, as a function of inverse volume for isotropic SOC, η_soc = 1, of finite systems with 4 degenerate minima and g > 0. Dashed lines correspond to finite systems with 8 degenerate minima, where the algebraic behavior, n_κ0 ∼ L^(−η) with η > 1/4, at high phase space density is suppressed.

Figure 3.7: Critical densities as a function of the SOC anisotropy η_soc. The KT phase transition from the normal to the superfluid phase takes place at slightly higher densities with increasing SOC anisotropy, η_soc. No finite temperature phase transition occurs for isotropic SOC, η_soc = 1.
Figure 4.1: Solid lines: condensate fraction, n_κ0/n, as a function of the inverse volume, L⁻², for anisotropic SOC bosons with η_soc = 0 at different phase space densities and anisotropic interaction, g > 0. Dashed lines show the corresponding maximal occupation number after diagonalizing the single body density matrix (not ensemble averaged). In the normal phase at low phase space density, we have n_κ0 ∼ L⁻² and two degenerate modes, whereas in the superfluid phase at high phase space density we have n_κ0 ∼ L^(−η) with η < 1/4, the degeneracy is broken, and only one mode contributes to the quasi-condensate.

Figure 4.2: Local spin projection at extremely low temperature for anisotropic interaction g < 0, i.e. the Plane Wave state. The arrows' directions represent the local spin projections σ_x and σ_y on the x-y plane. Colors represent the value of the local spin density σ_z. As expected for the Plane Wave state, the spin projections point on average in the same direction. We observe vortices typical of the KT physics.

Figure 4.3: Local spin projection at extremely low temperature for anisotropic interaction g > 0, i.e. the Stripe Phase state. The arrows' directions represent the local spin projections σ_x and σ_y on the x-y plane and colors represent the value of the local spin density σ_z. We observe the signature of SP order: spins rotate with periodicity 2κ. The direction of the rotation is determined by the direction of the two populated momenta ±κ.

Figure 4.5: Spin-density correlation functions M_x(x, 0) and M_y(x, 0) at phase space density 1/nλ² ≈ 0.026 for η_soc = 0.9, where n_κ0/n ∼ 40%. For g < 0, M_x(x, 0) shows quasi-long range order indicating PW, whereas M_y(x, 0) is short ranged. For g > 0 we obtain SP, where the amplitude of the oscillations of M_y(x, 0) decays algebraically and no order is present in M_x(x, 0).

Figure 4.6: Second largest eigenvalue λ2 of the reduced density matrix M(κ) as a function of the largest eigenvalue λ1 for isotropic interactions, g = 0. The top and bottom plots correspond to the two opposite SOC anisotropies, η_soc = 0 and η_soc = 1 respectively. Each point represents a single long run of a computation at extremely low temperature, in a regime numerically challenging within our approximation. In the superfluid regime and for any density, we observe a strong signature of fractionalization. Black points show the corresponding eigenvalues in the case of an anisotropic interaction g ≠ 0, when only one eigenvalue λ1 is non-zero (λ2 ∼ 10⁻² λ1).

Figure 4.7: Imaginary part of the wave-function as a function of its real part at every point in space r. The system size is L/a = 80, the SOC anisotropy η_soc = 0, and the inverse phase space density 1/nλ² = 0.240. As expected, at high temperature the field is Gaussian distributed.

Figure 4.8: Imaginary part of the wave-function as a function of its real part at every point in space r. The system size is L/a = 80, the SOC anisotropy η_soc = 0, the inverse phase space density 1/nλ² = 0.0126, and the contact interaction anisotropy is set to g > 0, i.e. the Stripe Phase state. At low temperature, in addition to a broadening due to thermal fluctuations, we recover the mean-field prediction ψ^MF_κ(r) ∝ (cos(κx), −i sin(κx)).

Figure 4.9: Imaginary part of the wave-function as a function of its real part at every point in space r. The system size is L/a = 80, the SOC anisotropy η_soc = 0, the inverse phase space density 1/nλ² = 0.0126, and the contact interaction anisotropy is set to g < 0, i.e. the Plane Wave state. At low temperature, in addition to a broadening due to thermal fluctuations, we recover the mean-field prediction ψ^MF_κ(r) ∝ e^(iκx)(1, −1).

Figure 4.10: Pairs of bosons, 〈Ψ²〉, as a function of the inverse volume, L⁻², for isotropic SOC with η_soc = 1 at different phase space densities and anisotropic interaction, g > 0, i.e. the Stripe Phase. We recognize the dependence of this observable on the discrete number of degenerate minima, as for the condensate fraction studied before. We do not find any indication of a finite temperature phase transition.
degenerate. In this specific case, one would expect thermal and quantum fluctuations to lift this degeneracy and to select a unique ground state. The classical field approximation allows us to address the question of thermal fluctuations in a direct and explicit way. We have shown that the system undergoes a phase transition without selecting a unique ground state. Our calculations predict in this case a fractionalization of the condensate, the PW and SP states remaining degenerate. Within our study, and in order to probe the competition between spin-orbit coupling and interparticle interactions, we fixed Σ_{σσ'=↑↓} m g_{σσ'}/4 = κ/√(m k_B T) = π/20, with m g = 0 to study the case of an isotropic interaction and m g = ±π/100 to study the case of a small symmetry breaking between the scattering of the different spins.

Chapter 5

Conclusion and Perspectives

In this thesis we have determined the finite-temperature phase diagram of a two-dimensional interacting Bose gas with two hyperfine (pseudospin) states coupled via Rashba-Dresselhaus spin-orbit interaction using classical field Monte Carlo calculations.
At very high temperature, the effect of the interactions corresponds to an effective shift of the chemical potential[START_REF] Blaizot | Quantum Theory of Finite Systems[END_REF] equal to 2g_{↑↑} n_{MF}, with g_{↑↑} the mean interaction between two particles and n_{MF} the mean-field density determined self-consistently. This corresponds to the well-known Hartree term. Two distinct phases[START_REF] Ho | Bose-einstein condensates with spin-orbit interaction[END_REF] then appear as a function of the strength of the interactions between particles of the same (g_{↑↑} and g_{↓↓}) and of different spin (g_{↑↓}). For g_{↑↑} + g_{↓↓} > 2g_{↑↓}, the ground state corresponds to a plane wave with non-zero momentum (PW). In the opposite case, 2g_{↑↓} > g_{↑↑} + g_{↓↓}, each particle is described as a superposition of two plane waves with opposite momenta. This last state is called the stripe state (SP) because stripes appear in the real-space density. Finally, we reviewed the current stakes and advances around this type of system, insisting on the dominant role of fluctuations[START_REF] Ozawa | Condensation transition of ultracold bose gases with rashba spin-orbit coupling[END_REF][START_REF] Barnett | Order by disorder in spin-orbit-coupled bose-einstein condensates[END_REF], which go beyond the often too schematic predictions of mean-field-type theories.
1.6 Résumé
In the first chapter of this thesis we introduced the concept of artificial gauge field. In order to simulate magnetic effects in systems of neutral, ultracold atoms, techniques based on Raman transitions between two hyperfine states of the atoms have been developed over recent years [1, 2, 3]. With the help of laser beams, these different fields can be manipulated and controlled to give rise, in particular, to effective vector potentials whose components do not commute.
Our calculations then indicate a
fractionalized quasicondensate where the mean-field degeneracy of the two states
remains robust against critical fluctuations. We conclude on new prospects and
motivations beyond the classical field approximation.
A famous example, belonging to this last case, is the spin-orbit coupling (SOC), which we have chosen to write as a Rashba-Dresselhaus interaction [START_REF] Yu | Oscillatory effects and the magnetic susceptibility of carriers in inversion layers[END_REF][START_REF] Dresselhaus | Spin-orbit coupling effects in zinc blende structures[END_REF]: the coupling amplitudes along the two spatial directions x and y may differ, and we therefore defined the number 0 ≤ η_soc ≤ 1 describing this anisotropy.
We then consider the case of an isolated particle in order to obtain its energy spectrum in the presence of the SOC term. After suitably transforming the basis describing the two coupled states of the atom, we highlighted the new ground-state degeneracy induced by the SOC. Indeed, in the case of an anisotropy η_soc < 1 the energy spectrum presents two minima. In the case of an isotropic SOC, the energy minimum corresponds to a ring in momentum space. In this last case, and for a system of non-interacting bosons, we also showed the absence of a phase transition such as Bose-Einstein condensation (BEC).
We then introduced a first approach to describe the effects of interparticle interactions in Bose gases with SOC: the mean-field approximation. Two regimes can in principle be correctly described by this method, that of infinite temperature and that of zero temperature. At very low temperature we proceeded by a variational method, writing an Ansatz for the wave function minimizing the single-particle energy. We then determined the free parameters by minimizing the interaction energy.

Chapter 2

Methods

Contents
2.1 Introduction
2.2 Classical field approximation
2.3 Markov Chain Monte Carlo
2.3.1 Effective action S
2.3.2 Monte Carlo algorithms: Metropolis, Heat bath and Fourier moves
2.3.3 Partition function
2.4 Density matching
2.4.1 Procedure
2.4.2 Lattice expressions for ideal and mean-field classical fields
2.4.3 Convergence and scale of energy
2.5 Résumé
3.6 Résumé

In this chapter, we explore the phase diagram of a two-dimensional Bose gas with spin-orbit coupling using the different methods defined in the previous chapter. We specifically established the presence or absence of a finite-temperature phase transition in the system with interparticle interactions. We also proposed quantitative predictions for the phase diagram. In the next chapter we propose to study and properly characterize the (quasi-)order of the different phases at low temperature.
According to the Mermin-Wagner theorem[START_REF] Hohenberg | Existence of long-range order in one and two dimensions[END_REF][START_REF] Mermin | Absence of ferromagnetism or antiferromagnetism in one-or two-dimensional isotropic heisenberg models[END_REF][START_REF] Coleman | There are no goldstone bosons in two dimensions[END_REF], no long-range order can establish itself at finite temperature. However, in the absence of spin-orbit coupling, a Berenzinskii-Kosterlitz-Thouless (BKT) phase transition between a normal phase and a superfluid phase is possible in an interacting Bose gas. The latter phase is characterized by an algebraic quasi-long-range order.
Our numerical studies establish that a BKT transition always takes place in the weakly interacting Bose gas in the presence of an anisotropic spin-orbit coupling η_soc < 1. In the low-temperature phase, the condensed fraction decreases algebraically with the system size and the gas then becomes superfluid. On the contrary, for an isotropic spin-orbit coupling, η_soc = 1, our calculations indicate the absence of a finite-temperature phase transition in the thermodynamic limit. A smooth cross-over remains for finite-size systems. Figure 3.7 presents the phase diagram of our system, detailing the critical densities as a function of the spin-orbit coupling anisotropy η_soc.
Solving the complete quantum system numerically is extremely difficult because of the presence of the spin-orbit coupling, which introduces a sign problem well known in all quantum Monte Carlo algorithms. Similarly to the fermionic sign problem, the error coming from this type of calculation grows exponentially with the system size and inversely with the temperature. In an analogous way, within the classical field calculations that we have developed, the term describing the spin-orbit coupling prevents the use of Worm-type algorithms[START_REF] Prokof | Worm algorithms for classical statistical models[END_REF] to speed up our calculations. As a future project and as an opening proposal, we wish to include quantum fluctuations, such as those described in Bogoliubov theory, within our classical-field-based approach, with the hope of describing in a quantitatively correct way both the quantum fluctuations and the thermal fluctuations in a weakly interacting Bose gas.
Acknowledgments
Chapter 4
Low temperature states: Plane Wave and Stripe Phase
Introduction and motivations
In Chapter III, we have studied the phase diagram of a two-dimensional SOCed Bose gas. For anisotropic SOC, we have identified low- and high-temperature phases separated by a transition that we have shown to be within the Kosterlitz-Thouless class. As we have seen in Chapter I, mean-field theory predicts exotic many-body ground states, depending in particular on the strength of interactions between same and different spins (defined in Eq. (1.38)): the mean-field ground state is either given by a single Plane Wave state (PW), or by a linear superposition of two Plane Waves with opposite momenta, the Stripe Phase (SP).
Fluctuations of the density at low temperature
As shown in figure 2.5, at low temperature, the interaction energy, g n 2 /2, dominates over the kinetic energy, ∝ nλ 2 . For simplicity, let us consider the system in absence of SOC, κ = 0, where the density distribution at low temperature is approximately given by
Therefore, fluctuations of the density around its mean value, n = 〈n〉 ≡ µ/g , are Gaussian distributed, with mean-square fluctuations
and highly suppressed for large phase space density, Δn²/n² ∼ [g nλ²]^{-1} → 0, in contrast to the non-interacting case. Approaching zero temperature, density fluctuations smoothly vanish. At high temperature, fluctuations around the density are much larger, since the kinetic energy can never be neglected. In this regime, the Fourier modes of the fields are Gaussian distributed, leading to qualitatively different density fluctuations, ⟨n²⟩ = 2n².
Abstract
In this thesis, we theoretically study the occurrence of exotic phases in a dilute two-component (spin) Bose gas with artificial spin-orbit coupling (SOC) between the two internal states.
Including spin-orbit coupling in classical field Monte Carlo calculations, we show that this method can be used for reliable, quantitative predictions of the finite temperature phase diagram. In particular, we have focused on SOCed bosons in two spatial dimensions and established the phase diagram for isotropic and anisotropic SOC and interparticle interactions.
In the case of anisotropic SOC, the system undergoes a Berenzinskii-Kosterlitz-Thouless transition from a normal to a superfluid state at low temperature. The spin order of the quasicondensate in the low temperature superfluid phase is driven by the spin dependence of the interparticle interaction, favoring either the occurrence of a single plane wave state at non-vanishing momentum (PW) or a linear superposition of two plane waves with opposite momenta, called stripe phase (SP). For spin-independent interparticle interaction, our simulations indicate a fractionalized quasicondensate where PW and SP remain degenerate. For isotropic SOC, our calculations indicate that no true phase transition at finite temperature occurs in the thermodynamic limit, but a cross-over behavior remains visible for a large but finite number of atoms.
Résumé
This thesis is devoted to the theoretical study of exotic phases in a dilute gas of bosons with two components (spins) in the presence of a spin-orbit coupling (SOC) between these two internal states. By adding this coupling to a classical-field description of our system, we show that this method makes it possible to predict the finite-temperature phase diagram in a quantitative, efficient and reliable way. Our study deals in particular with a system of two-dimensional bosons with SOC, whose phase diagram we draw as a function of the anisotropy of the SOC as well as of the interactions. |
00175925 | en | [
"shs.eco",
"sde.es"
] | 2024/03/05 22:32:10 | 2007 | https://shs.hal.science/halshs-00175925/file/Flachaire_Hollard_07b.pdf | Keywords: starting point bias, preference uncertainty, contingent valuation JEL Classification: Q26, C81
Introduction
The NOAA panel recommends the use of a dichotomous choice format in contingent valuation (CV) surveys. This format has several advantages: it is incentive-compatible, simple and cognitively manageable. Furthermore, respondents face a familiar task, similar to real referenda. The use of a single valuation question, however, presents the inconvenience of providing the researcher with only limited information. To gather more information, [START_REF] Hanemann | Statistical efficiency of doublebounded dichotomous choice contingent valuation[END_REF] proposed adding a follow-up question. This is the double-bounded model. This format, however, has been proved to be sensitive to starting point bias, that is, respondents anchor their willingness-to-pay (WTP) to the bids. It implies that WTP estimates may vary as a function of the bids. Many authors propose some specific models to handle this problem [START_REF] Herriges | Starting point bias in dichotomous choice valuation with follow-up questioning[END_REF][START_REF] Alberini | Modeling response incentive effects in dichotomous choice valuation data[END_REF][START_REF] Deshazo | Designing transactions without framing effects in iterative question formats[END_REF][START_REF] Whitehead | Incentive incompatibility and starting-point bias in iterative valuation questions[END_REF][START_REF] Cooper | One and one-half bids for contingent valuation[END_REF][START_REF] Flachaire | Controlling starting-point bias in double-bounded contingent valuation surveys[END_REF]. The behavioral assumption behind these models is that respondents hold a unique and precise willingness-to-pay prior to the survey. Observed biases are interpreted as a distortion of this initial willingness-to-pay during the survey.
Independently, several studies document the fact that individuals are rather unsure of their own willingness-to-pay [START_REF] Li | Discrete choice under preference uncertainty: an improved structural model for contingent valuation[END_REF][START_REF] Ready | Contingent valuation when respondents are ambivalent[END_REF], 2001[START_REF] Welsh | Elicitation effects in contingent valuation: comparisons to a multiple bounded discrete choice approach[END_REF][START_REF] Van Kooten | Preference uncertainty in non-market valuation: a fuzzy approach[END_REF][START_REF] Hanley | What's it worth? Exploring value uncertainty using interval questions in contingent valuation[END_REF][START_REF] Alberini | Analysis of contingent valuation data with multiple bids and response options allowing respondents to express uncertainty[END_REF].
To account for such uncertainty, these studies allow respondents to use additional answers to valuation questions. Rather than the usual "yes", "no" and "don't know" alternatives, intermediate responses, such as "probably yes" or "probably no", are allowed. Alternatively, an additional question asks respondents how certain they are of their answers and provides a graduated scale.
In contingent valuation, starting-point bias and respondent's uncertainty have been handled in separate studies. In this article we develop a dichotomous choice modelhereafter called the Range model -in which individuals hold a range of acceptable values, rather than a precisely defined value of their willingness-to-pay. The range model is drawn from the principle of coherent arbitrariness, suggested by Ariely et al. (2003b). Prior to the survey, the true willingness to pay is assumed to be uncertain in an interval with upper and lower bounds. Confronted with the first valuation question, respondents select a value and then act on the basis of that selected value. Because of this initial uncertainty, the initial choice is subject to starting point bias. In contrast, the subsequent choices are no longer sensitive to the bid offers. A clear-cut prediction follows: biases occur within a given range and affect the first answer only. The Range model thus provides an alternative interpretation of the starting point bias in the dichotomous choice valuation surveys.
An empirical study is presented to compare various models, using the well-known Exxon Valdez contingent valuation survey. Results show that a special case of the proposed Range model, in which a "yes" response is given when the bid value falls within the range of acceptable values, is supported by the data, i.e. when uncertain, individuals tend to say "yes".
The article is organized as follows. The following section presents the Range model and the respondent's decision process. The subsequent sections provide estimation details, give further interpretation and present an application. Conclusions appear in the final section.
The Range model
The Range model derives from the principle of "coherent arbitrariness" (Ariely et al. 2003b). These authors conducted a series of valuation experiments (i.e. experiments in which the subjects have to set values for objects they are not familiar with). They observed that "preferences are initially malleable but become imprinted (i.e. precisely defined and largely invariant) after the individual is called upon to make an initial decision". But, prior to imprinting, preferences are "arbitrary, meaning that they are highly responsive to both positive and normative influences".
In a double-bounded CV survey, two questions are presented to respondents. The first question is "Would you agree to pay x$?". The second, or follow-up, question is similar but asks for a higher bid offer if the initial answer is yes and a lower bid offer otherwise. Confronted with these iterative questions, with two successive bids proposed, the principle of coherent arbitrariness leads us to consider a three-step decision process:
1. Prior to a valuation question, the respondent holds a range of acceptable values 2. Confronted with a first valuation question, the respondent selects a value inside that range 3. The respondent answers the questions according to the selected value.
The following subsections detail each step.
A range of acceptable values
At first, let us assume that a respondent i does not hold a precise willingness-to-pay but rather an interval of acceptable values:
$wtp_i \in \left[\,\underline{W}_i,\ \overline{W}_i\,\right]$ with $\overline{W}_i - \underline{W}_i = \delta$.   (1)
The lower bound and the upper bound are different for each respondent, but we assume the width of the range δ to be constant across individuals.1
Several psychological and economic applications support this idea. For instance, [START_REF] Tversky | Judgment under uncertainty: Heuristics and biases[END_REF] and Ariely et al. (2003bAriely et al. ( , 2003a) ) suggest the existence of such an interval. In addition, several studies in contingent valuation explore response formats that allow for the expression of uncertainty, among others see [START_REF] Li | Discrete choice under preference uncertainty: an improved structural model for contingent valuation[END_REF], [START_REF] Ready | Contingent valuation when respondents are ambivalent[END_REF], Welsh andPoe (1998), van Kooten et al. (2001), [START_REF] Hanley | What's it worth? Exploring value uncertainty using interval questions in contingent valuation[END_REF] and [START_REF] Alberini | Analysis of contingent valuation data with multiple bids and response options allowing respondents to express uncertainty[END_REF]. These studies also conclude that there is a range of values for which respondents are uncertain.
Selection of a particular value
Confronted with a first bid offer $b_{1i}$, a respondent $i$ selects a specific value inside his range of acceptable values $[\underline{W}_i, \overline{W}_i]$. The selection rule can take different forms. We propose a selection rule in which the respondent selects a value so as to minimize the distance between his range of willingness-to-pay and the proposed bid:

$W_i = \arg\min_{wtp_i} |wtp_i - b_{1i}|$ with $wtp_i \in [\underline{W}_i, \overline{W}_i]$.   (2)
This selection rule has attractive features. It is very simple and tractable. It is also in accordance with the literature on anchoring, which states that the proposed bid induces subjects to revise their willingness-to-pay as if the proposed bid conveyed some information about the "right" value [START_REF] Chapman | Anchoring, activation, and the construction of values[END_REF]. At a more general level, the literature on cognitive dissonance suggests that subjects act so as to minimize the gap between their own opinion and the one conveyed by new information.
In this range model the first bid plays the role of an anchor: it attracts the willingness-to-pay. A different $b_{1i}$ results in the selection of a different value $W_i$. Thus, this selection rule should exhibit a sensitivity of the first answer to the first bid, that is, an anchoring effect. Consequently, it is expected to produce anomalies such as starting point bias.
Answers to questions
The last step of the decision process deals with the respondent's answer to questions. It is straightforward that a respondent will answer yes if the bid is less than the lower bound of his range of acceptable value W i . And he will answer no if the bid is higher than the upper bound of his range W i . However, it is less clear what is happening when the bid belongs to the interval of acceptable values.
-Answers to the first question -
A respondent $i$ will agree to pay any amount below $\underline{W}_i$ and refuse to pay any amount that exceeds $\overline{W}_i$. When the first bid belongs to his interval of acceptable values, he may accept or refuse the bid offer. Here, we do not impose a precise rule: respondents can answer yes or no with any probability when the bid offer belongs to the interval. If the bid belongs to the range of acceptable values, respondents answer yes to the first question with a probability ξ and no with a probability 1 - ξ. Thus, the probability that a respondent $i$ will answer yes to the first question is equal to:2

$P(\text{yes}) = P(b_{1i} < \underline{W}_i) + \xi\, P(\underline{W}_i < b_{1i} < \overline{W}_i)$ with $\xi \in [0, 1]$.   (3)
In other words, a respondent's first answer is yes with a probability 1 if the bid is below his range of acceptable values and with a probability ξ if the bid belongs to his range. A ξ close enough to 1 (resp. 0) means that the respondent tends to answer yes (resp. no) when the bid belongs to the range of acceptable values. Estimation of the model will provide an estimate of ξ.
-Answers to follow-up questions -
The uncertainty that arises in the first answer disappears in the follow-up answers. A respondent answers yes to the follow-up question if the bid b 2i is below his willingness-topay, W i > b 2i ; and no if the bid is above his willingness-to-pay, W i < b 2i (by definition, the follow-up bid is higher or smaller than the first bid, that is b 2i = b 1i ).
Estimation
In this section, we present in detail how to estimate the Range model. It is assumed that if the first bid $b_{1i}$ belongs to the interval of acceptable values of respondent $i$, $[\underline{W}_i; \overline{W}_i]$, he will answer yes with a probability ξ and no with a probability 1 - ξ. We can write these two probabilities as follows:

$\xi = \dfrac{P(\underline{W}_i < b_{1i} < W^\xi_i)}{P(\underline{W}_i < b_{1i} < \overline{W}_i)}$ and $1 - \xi = \dfrac{P(W^\xi_i < b_{1i} < \overline{W}_i)}{P(\underline{W}_i < b_{1i} < \overline{W}_i)}$,   (4)

with $W^\xi_i \in [\underline{W}_i; \overline{W}_i]$.
Note that, when ξ = 0 we have $W^\xi_i = \underline{W}_i$, and when ξ = 1 we have $W^\xi_i = \overline{W}_i$. From (4) and (3), the respondent $i$ answers yes or no to the first question with the following probabilities
$P(\text{yes}) = P(W^\xi_i > b_{1i})$ and $P(\text{no}) = P(W^\xi_i \le b_{1i})$.   (5)
It is worth noting that these probabilities are similar to the probabilities derived from a single-bounded model with W ξ i assumed to be the willingness-to-pay of respondent i. It follows that the mean value of WTPs obtained with a single-bounded model would correspond to the mean of the W ξ i in our model, for i = 1, . . . , n. The use of follow-up questions will lead us to identify and estimate ξ and to provide a range of values rather than a single mean of WTPs.
If the initial bid belongs to his range of acceptable values, respondent $i$ selects the value $W_i = b_{1i}$, see (2). If his first answer is yes, a follow-up higher bid $b^h_{2i} > b_{1i}$ is proposed and his second answer is necessarily no, because $W_i < b^h_{2i}$. Conversely, if his first answer is no, a follow-up lower bid $b^l_{2i} < b_{1i}$ is proposed and his second answer is necessarily yes, because $W_i > b^l_{2i}$. It follows that, if the first and the second answers are similar, the first bid is necessarily outside the interval $[\underline{W}_i; \overline{W}_i]$ and the probabilities of answering no-no and yes-yes are respectively equal to

$P(\text{no, no}) = P(\overline{W}_i < b^l_{2i})$ and $P(\text{yes, yes}) = P(\underline{W}_i > b^h_{2i})$.   (6)

If the answers to the initial and the follow-up questions are respectively yes and no, two cases are possible: the first bid is below the range of acceptable values and the second bid is higher than the selected value $W_i = \underline{W}_i$, otherwise the first bid belongs to the range of values. We have

$P(\text{yes, no}) = P(b_{1i} < \underline{W}_i < b^h_{2i}) + \xi\, P(\underline{W}_i < b_{1i} < \overline{W}_i)$   (7)
$\qquad = P(b_{1i} < \underline{W}_i < b^h_{2i}) + P(\underline{W}_i < b_{1i} < W^\xi_i)$   (8)
$\qquad = P(\underline{W}_i < b^h_{2i}) - P(W^\xi_i < b_{1i})$.   (9)

Similarly, the probability that respondent $i$ will answer successively no and yes is:

$P(\text{no, yes}) = P(b^l_{2i} < \overline{W}_i < b_{1i}) + (1 - \xi)\, P(\underline{W}_i < b_{1i} < \overline{W}_i)$   (10)
$\qquad = P(W^\xi_i < b_{1i}) - P(\overline{W}_i < b^l_{2i})$.   (11)
To make the estimation possible, a solution would be to rewrite all the probabilities in terms of $W^\xi_i$. In our model, we assume that the range of acceptable values has a width which is the same for all respondents. It allows us to define two parameters:

$\delta_1 = \underline{W}_i - W^\xi_i$ and $\delta_2 = \overline{W}_i - W^\xi_i$.   (12)

Note that $\delta_1 \le 0$ and $\delta_2 \ge 0$ because $W^\xi_i \in [\underline{W}_i; \overline{W}_i]$. Using (12) in (6), (9) and (11), we have

$P(\text{no, no}) = P(W^\xi_i < b^l_{2i} - \delta_2)$,  $P(\text{no, yes}) = P(b^l_{2i} - \delta_2 < W^\xi_i < b_{1i})$,   (13)
$P(\text{yes, yes}) = P(W^\xi_i > b^h_{2i} - \delta_1)$,  $P(\text{yes, no}) = P(b_{1i} < W^\xi_i < b^h_{2i} - \delta_1)$.   (14)
Let us consider that the willingness-to-pay is defined as

$W^\xi_i = \alpha + X_i \beta + u_i, \qquad u_i \sim N(0, \sigma^2)$,   (15)

where the unknown parameters β, α and σ² are respectively a k × 1 vector and two scalars, and $X_i$ is a 1 × k vector of explanatory variables. The number of observations is equal to n and the error term $u_i$ is Normally distributed with a mean of zero and a variance of σ². This model can easily be estimated by maximum likelihood, using the log-likelihood function

$\ell(y, \beta) = \sum_{i=1}^{n} \big[\, r_{1i} r_{2i} \log P(\text{yes, yes}) + r_{1i}(1 - r_{2i}) \log P(\text{yes, no}) + (1 - r_{1i}) r_{2i} \log P(\text{no, yes}) + (1 - r_{1i})(1 - r_{2i}) \log P(\text{no, no}) \,\big]$,   (16)
where r 1 (resp. r 2 ) is a dummy variable which is equal to 1 if the answer to the first bid (resp. to the second) is yes, and is equal to 0 if the answer is no. To estimate our model, we can derive from ( 13) and ( 14) the probabilities that should be used:
$P(\text{no, no}) = \Phi[(b^l_{2i} - \delta_2 - \alpha - X_i\beta)/\sigma]$,   (17)
$P(\text{no, yes}) = \Phi[(b_{1i} - \alpha - X_i\beta)/\sigma] - \Phi[(b^l_{2i} - \delta_2 - \alpha - X_i\beta)/\sigma]$,   (18)
$P(\text{yes, no}) = \Phi[(b^h_{2i} - \delta_1 - \alpha - X_i\beta)/\sigma] - \Phi[(b_{1i} - \alpha - X_i\beta)/\sigma]$,   (19)
$P(\text{yes, yes}) = 1 - \Phi[(b^h_{2i} - \delta_1 - \alpha - X_i\beta)/\sigma]$.   (20)
Non-negativity of the probabilities (18) and (19) requires respectively $b_{1i} > b^l_{2i} - \delta_2$ and $b^h_{2i} - \delta_1 > b_{1i}$. We have defined $\delta_1 \le 0$ and $\delta_2 \ge 0$, see (12): in such cases the probabilities (18) and (19) are necessarily positive. However, the restrictions $\delta_1 \le 0$ and $\delta_2 \ge 0$ are not automatically satisfied in the estimation. To overcome this problem, we can consider a more general model, for which our Range model becomes a special case.
Interrelation with the Shift model
It is worth noting that the probabilities (13) and (14) are quite similar to the probabilities derived from a Shift model [START_REF] Alberini | Modeling response incentive effects in dichotomous choice valuation data[END_REF], but in which we consider two different shifts. Indeed, in a Shift model, respondents are supposed to answer the first question with a prior willingness-to-pay $W_i$ and the second question with an updated willingness-to-pay defined as:
$W'_i = W_i + \delta$.   (21)
The probability of answering successively yes and no is:
P (yes, no) = P (b 1i < W i ∩ W ′ i < b h 2i ) = P (b 1i < W i < b h 2i -δ), (22)
which is equal to the corresponding probability in ( 14) with δ = δ 1 . Similar calculations can be made for the other probabilities, to show that the Range model can be estimated as a model with two different shifts in ascending/descending sequences. The underlying decision process is very different from the one developed in the Range model. In the Shift model, respondents answer questions according to two different values of WTP, W i and W ′ i . The first bid offer is interpreted as providing information about the cost or the quality of the object. Indeed, a respondent can interpret a higher bid offer as paying more for the same object and a lower bid offer as paying less for a lower quality object. Alternatively, a higher bid can make no sense to the individual, if delivery was promised at the lower bid. [START_REF] Cameron | Estimation using contingent valuation data from a dichotomous choice with follow-up questionnaire[END_REF] propose taking into account the dynamic aspect of followup questions: they suggest specification allowing the initial and follow-up answers to be based on two different WTP values. The WTP is broken down in two parts, a fixed component and a varying component over repeated questions. The random effect model can be written:
Random-effect model
$W_{1i} = W^\star_i + \varepsilon_{1i}$, $\quad W_{2i} = W^\star_i + \varepsilon_{2i}$, where $W^\star_i = \alpha + X_i\beta + \nu_i$.   (23)
The difference between the two WTP values is due to the random shocks $\varepsilon_{1i}$ and $\varepsilon_{2i}$, assumed to be independent. The fixed component $W^\star_i$ can be split into two parts. $X_i\beta$ represents the part of the willingness-to-pay due to observed individual specific characteristics. $\nu_i$ varies with the individual, but remains fixed over the individual's responses: it reflects unobserved individual heterogeneity and introduces a correlation between $W_{1i}$ and $W_{2i}$. The correlation is high (resp. low) if the variance of the fixed component is large (resp. small) relative to the variance of the varying component, see [START_REF] Alberini | Modeling response incentive effects in dichotomous choice valuation data[END_REF] for more details. At the limit, if the two WTP values are identical, $W_{1i} = W_{2i}$, the correlation coefficient is equal to one, ρ = 1. [START_REF] Alberini | Modeling response incentive effects in dichotomous choice valuation data[END_REF] have modified this random-effect model to the case of the Shift model. Since the Range model can be estimated as a model with two different shifts in ascending/descending sequences (see above), the use of a random-effect model in the case of the Range model is straightforward. From equations (17), (18), (19) and (20), we can write the probability that the individual i answers yes to the j th question, j = 1, 2:
P (W ji > b ji ) = Φ [(α + X i β -b ji + δ 1 D j r 1i + δ 2 D j (1 -r 1i )) /σ] , (24)
where D 1 = 0, D 2 = 1, and r 1i equals 1 if the answer to the first question is yes and 0 otherwise. Consequently, the Range model can be estimated from the following bivariate probit model:
P (yes, yes) = Φ [α 1 + X i θ + γ b 1i ; α 2 + X i θ + γ b 2i + λ r 1i ; ρ ] . (25)
The parameters are interrelated according to:
$\alpha = -\alpha_1/\gamma$, $\beta = -\theta/\gamma$, $\sigma = -1/\gamma$, $\delta_1 = -\lambda/\gamma$ and $\delta_2 = (\alpha_1 - \alpha_2)/\gamma$.   (26)
Estimation with a bivariate probit model based on equation ( 25) does not impose any restriction on the parameters. The Range model is obtained if δ 1 ≤ 0 and δ 2 ≥ 0; the Shift model is obtained if δ 1 = δ 2 . It is clear that the Range model and the Shift model are non-nested; they can be tested through (25).
Interpretation
We have seen above that the estimation of the Range model derives from a general model, that also encompasses the Shift model proposed by [START_REF] Alberini | Modeling response incentive effects in dichotomous choice valuation data[END_REF], see ( 25) and ( 26). Estimation of model ( 15), based on equation ( 25), provides estimates of α, β, σ, δ 1 and δ 2 , from which we can estimate a mean µ ξ and a dispersion σ of the willingness-to-pay - [START_REF] Hanemann | The statistical analysis of discrete response CV data[END_REF] by
$\mu_\xi = n^{-1}\sum_{i=1}^{n} W^\xi_i = n^{-1}\sum_{i=1}^{n} (\alpha + X_i\beta)$.   (27)
Additional information can be obtained from the use of follow-up questions: estimates of δ 1 and δ 2 allows us to estimate a range of means of WTPs. The mean value of WTPs estimated from our model µ ξ is the mean of the estimates of W ξ i for all the respondents, i = 1, . . . , n. From ( 12), we can derive the lower bounds of the range of acceptable values for all respondents and a mean of WTPs associated with it:
$\mu_0 = n^{-1}\sum_{i=1}^{n} \underline{W}_i = n^{-1}\sum_{i=1}^{n} (W^\xi_i + \delta_1) = \mu_\xi + \delta_1, \qquad \delta_1 \le 0$.   (28)
It would be the mean of WTPs when respondents always answer no if the bid belongs to their range of acceptable value. Similarly, we can derive the upper bounds of their range,
$\mu_1 = n^{-1}\sum_{i=1}^{n} \overline{W}_i = n^{-1}\sum_{i=1}^{n} (W^\xi_i + \delta_2) = \mu_\xi + \delta_2, \qquad \delta_2 \ge 0$.   (29)
It follows that we can provide a range of means of WTPs
[µ 0 ; µ 1 ] = [µ ξ + δ 1 ; µ ξ + δ 2 ]
with δ 1 ≤ 0, and δ 2 ≥ 0.
This range can be estimated with μξ , δ1 and δ2 . The lower bound µ 0 corresponds to the case where respondents always answer no if the bid belongs to the range of acceptable values (ξ = 0). Conversely, the upper bound µ 1 corresponds to the case where respondents always answer yes if the bid belongs to the range of acceptable values (ξ = 1). How respondents answer the question when the bid belongs to the range of acceptable values can be tested as follows:
• respondents always answer no corresponds to the null hypothesis H 0 : δ 1 = 0,
• respondents always answer yes corresponds to the null hypothesis H 0 : δ 2 = 0.
Finally, an estimation of the probability ξ would be useful. For instance, we could conclude that when the first bid belongs to the range of acceptable values, respondents answer yes in (100 ξ) % of cases. If the first bids are drawn randomly from a probability distribution, ξ can be rewritten
$\xi = \dfrac{P(\mu_0 < b_{1i} < \mu_\xi)}{P(\mu_0 < b_{1i} < \mu_1)}$.   (31)
In addition, if the set of first bids are drawn from a uniform distribution by the surveyors, it can be estimated by ξ = δ1 /( δ1 -δ2 ).
Application
Since independent variables other than the bid are not needed to estimate the Range model, we can use data from previously published papers on this topic. In this application, we use data from the well-known Exxon Valdez contingent valuation survey. 3 The willingness-to-pay question asked how the respondent would vote on a plan to prevent another oil spill similar in magnitude to the Exxon Valdez spill. Details about the Exxon Valdez oil spill and the contingent valuation survey can be found in [START_REF] Carson | Contingent valuation and lost passive use: Damages from the Exxon Valdez oil spill[END_REF]
Results
With the assumption that the distribution of WTP is lognormal, results in Alberini et al. show evidence of a downward shift. Here, we consider the more general model given in ( 25) from which the Double-bounded, Shift and Range models are special cases.
Estimation results are given in Table 1. We use the same model as in Alberini et al.: there are no covariates and the distribution of the WTP is assumed lognormal (θ = 0 and $b_{ij}$ are replaced by $\log b_{ij}$ in (25)). The mean of log WTP is given by $\alpha = -\alpha_1/\gamma$ and the median of WTP is given by exp(α). Estimation results of the Single-bounded model are obtained from a probit model. Estimation results obtained from a bivariate probit model with no restrictions in (25) are presented in column M; the Double-bounded model is obtained with $\delta_1 = \delta_2 = 0$; the Shift model is obtained with $\delta_1 = \delta_2$ and the Range yes model is obtained with $\delta_2 = 0$.
From Table 1, we can see that the estimates of the mean of log WTP in the Single-bounded and Double-bounded models are very different (3.727 vs. 3.080). Such inconsistent results lead us to consider the Shift model to control for such effects. It is clear that the estimates of the mean of log WTP in the Single-bounded model and in the Shift model are very close (3.727 vs. 3.754), and that the Double-bounded model does not fit the data as well as the Shift model. Indeed, we reject the null hypothesis δ 1 = 0 from a likelihood-ratio test (LR = 84.68 and P < 0.0001). 4 To go further, we consider estimation results obtained from the model defined in (25) with no restrictions (column M). On the one hand, we reject the null hypothesis δ 1 = δ 2 from a likelihood-ratio test (LR = 4.08 and P = 0.043). It suggests that the Shift model does not fit the data as well as model M. On the other hand, we cannot reject the null hypothesis δ 2 = 0 (LR = 0.26 and P = 0.61). It leads us to select the Range yes model hereafter.
The estimated values of the parameters δ 1 and δ 2 allow us to interpret the model as a Range model (δ 1 ≤ 0, δ 2 = 0). Respondents are unsure of their willingness-to-pay in an interval; they answer yes if the initial bid offer belongs to their interval of acceptable values. We compute an interval of the median WTP:
$[\exp(\hat\alpha + \hat\delta_1);\ \exp(\hat\alpha + \hat\delta_2)] = [9.45;\ 44.21]$.   (32)
This interval suggests that, if the respondents answer no if the initial bid belongs to their range of acceptable values, the median WTP is equal to 9.45; if the respondents answer yes if the initial bid belongs to their range of acceptable values, the median WTP is equal to 44.21 (see Section 4).
Main findings
From these empirical results, we select the Range yes model, with an interval of the median WTP [9.45;44.21]. Previous researchers have also found that, when uncertain, individuals tend to say yes [START_REF] Ready | How do respondents with uncertain willingness to pay answer contingent valuation questions?[END_REF]. New with the Range model is the fact that no additional question such as "how certain are you of your answer?" is required.
From our results, several conclusions can be drawn:
1. From the Range yes model, we cannot reject the null hypothesis ρ = 1.5 This result has an important implication. It suggests that the underlying decision process defined in the Range model is supported by the data. Confronted with an initial bid, respondents select a value, then they answer both the first and the second questions according to the same value (see Sections 2 and 3.2). This is in sharp contrast to the existing literature that explains anomalies by the fact that respondents use two different values to answer the first and follow-up questions. 6 The Range model supports the view that anomalies can be explained by a specific respondent's behavior prior to the first question, rather than by a change between the first and the second questions.
2. As long as the Range yes model is selected, the Single bounded model is expected to elicit the upper bound of the individual's range of acceptable WTP values. Indeed, in the case of Exxon Valdez, the estimated median WTP is equal to exp(α) = 41.55. This value is very close to the upper bound provided by the interval of the median WTP in the Range yes model, i.e. 44.21. The discrete choice format is then likely to overestimate means or medians compared to other surveys' formats. It confirms previous research showing that, with the same underlying assumptions, the discrete choice format leads to a systematically higher estimated mean WTP than the openended format [START_REF] Green | Referendum contingent valuation, anchoring, and willingness to pay for public goods[END_REF] or the payment card format [START_REF] Ready | How do respondents with uncertain willingness to pay answer contingent valuation questions?[END_REF].
3. Existing results suggest that anomalies occur in ascending sequences only (i.e. after a yes to the initial bid).7 DeShazo ( 2002) offers a prospect-theory explanation, interpreting the first bid as playing the role of a reference point. The Range model offers an alternative explanation: anomalies come from the fact that, when uncertain, respondents tend to answer yes. Indeed, if the bid belongs to his range of acceptable values, a respondent answers yes to the first question and necessarily no to the second question (see Section 2). This specific behavior occurs in ascending sequences only. Such asymmetry can be viewed from the estimation of the model too, since the Range model can be estimated as a model with two different shift parameters in ascending/descending sequences (see Section 3.1).
All in all, based on Exxon Valdez data, the Range model: (1) confirms existing findings on the effect of respondent uncertainty; (2) offers an alternative explanation to anomalies in CV surveys.
Conclusion
In this article, we develop a model that allows us to deal with respondent uncertainty and starting-point bias in the same framework. This model is based on the principle of coherent arbitrariness, put forward by Ariely et al. (2003b). It allows for respondent uncertainty without having to rely on follow-up questions explicitly designed to measure the degree of that uncertainty (e.g., "How certain are you of your response?"). It provides an alternative interpretation of the fact the some of the responses to the second bid may be inconsistent with the responses to the first bid. This anomaly is explained by respondents' uncertainty, rather than anomalies in respondent behavior. Using the well-known Exxon Valdez survey, our empirical results suggest that, when uncertain, respondents tend to answer yes.
Table 1: Exxon Valdez Oil Spill Survey: Random-effect models

Parameter   Single     Double           Shift          M          Range yes
                       (δ1 = δ2 = 0)    (δ1 = δ2)      (n.c.)     (δ2 = 0)
α           3.727      3.080            3.754          3.797      3.789
            (0.124)    (0.145)          (0.127)        (0.129)    (0.134)
σ           3.149      3.594            3.236          3.298      3.459
            (0.432)    (0.493)          (0.421)        (0.387)    (0.272)
δ1                                      -1.108         -1.424     -1.583
                                        (0.212)        (0.356)    (0.222)
δ2                                                     -0.062
                                                       (0.114)
ρ                      0.694            0.770          0.997      0.998
                       (0.047)          (0.045)        (0.010)    (0.014)
ℓ           -695.51    -1345.70         -1303.36       -1301.32   -1301.45

Note: standard errors are in parentheses; n.c.: no constraints.
It would be interesting to consider a model in which δ varies across individuals. Some variables that are proved to play a role in individual value assessment (such as repeated exposure to the good or representation of the good (Flachaire and Hollard 2007)) may also influence the length of the range. This requires a particular treatment which is beyond the scope of this paper.
$P(\text{yes}) = P(\text{yes}\,|\,b_{1i} < \underline{W}_i)P(b_{1i} < \underline{W}_i) + P(\text{yes}\,|\,\underline{W}_i < b_{1i} < \overline{W}_i)P(\underline{W}_i < b_{1i} < \overline{W}_i) + P(\text{yes}\,|\,b_{1i} > \overline{W}_i)P(b_{1i} > \overline{W}_i)$, where the conditional probabilities are respectively equal to 1, ξ and 0.
The bid values are given in Alberini, Kanninen, and Carson (1997, Table 1).
A LR test is equal to twice the difference between the maximized value of the loglikelihood functions (given in the last line ℓ); it is asymptotically distributed as a Chi-squared distribution.
Estimation results of the Range and of the Range yes models obtained by using the constraint ρ = 1 are not reported: they are similar to those obtained without imposing this constraint and the estimates of the loglikelihood functions are identical.
[START_REF] Cameron | Estimation using contingent valuation data from a dichotomous choice with follow-up questionnaire[END_REF][START_REF] Kanninen | Bias in discrete response contingent valuation[END_REF][START_REF] Herriges | Starting point bias in dichotomous choice valuation with follow-up questioning[END_REF][START_REF] Alberini | Modeling response incentive effects in dichotomous choice valuation data[END_REF][START_REF] Whitehead | Incentive incompatibility and starting-point bias in iterative valuation questions[END_REF]
[START_REF] Deshazo | Designing transactions without framing effects in iterative question formats[END_REF][START_REF] Cooper | One and one-half bids for contingent valuation[END_REF][START_REF] Flachaire | Controlling starting-point bias in double-bounded contingent valuation surveys[END_REF] |
sic_01759274 | en | [
"shs.info.hype"
] | 2024/03/05 22:32:10 | 2012 | https://archivesic.ccsd.cnrs.fr/sic_01759274/file/new%20musical%20organology%20-%20videogames.pdf | Hervé Zénouda
email: zenouda@univ-tln.fr
NEW MUSICAL ORGANOLOGY: THE AUDIO GAMES THE QUESTION OF "A-MUSICOLOGICAL" INTERFACES
Keywords: Audio-games, video games, computer generated music, gameplay, interactivity, synesthesia, sound interfaces, relationships image/sound, audiovisual music
This article aims to shed light on a new and emerging creative field: "Audio Games," a crossroad between video games and computer music. Today, a plethora of tiny applications, which propose entertaining audio-visual experiences with a preponderant sound dimension, are available for game consoles, computers, and mobile phones. These experiences represent a new universe where the gameplay of video games is applied to musical composition, hence creating new links between these two fields. In proposing to manipulate what we refer to as "a-musicological" 1 representations (i.e. using symbols not normally associated with traditional musicology to manipulate and produce sound), this creative aspect raises new questions about representations of sound and musical structures and requires new instrumental gestures and approaches to composing music which will be examined. Furthermore, these objects play a role in the rise of a new amateur profile, already put forth by authors like Vilèm Flusser (Flusser, 1996), with regards to photography.
After having defined the characteristics and the limits of this field and highlighting a few of the historical milestones (abstract cinema, gaming theory in music, graphic actions and scores), we will study a few examples of musical games and propose areas for further research to devise analytical tools for these new objects.
1/ Introduction
In this article, we would like to shed some light on an emerging creative field : "Audio Games," a crossroad between video games and computer music. Today, a plethora of tiny applications, which propose entertaining audio-visual experiences with a preponderant sound dimension, are available not only for game consoles and computers, but mobile phones as well. These experiences represent a new universe where the notion of gameplay derived from video games can facilitate the link with musical composition. In proposing to manipulate what we refer to as "a-musicological"2 representations (i.e. using symbols not normally associated with traditional musicology to manipulate and produce sound), this creative aspect raises new questions about representations of sound and musical structures and requires new instrumental gestures and approaches to composing music which will be examined. In an original way, he thus questions the issue of usability with respect to creativity. Indeed, a new organology is emerging, profoundly renewing musical relationships (abstract synesthesiatic representations, 3D manipulations, simulation of physical events like rebounds or elasticity…). After having defined the characteristics and the limits of this field and highlighting a few of the historical milestones (abstract cinema, gaming theory in music, graphic actions and scores), we will study a few examples of musical games and propose areas for further research to devise analytical tools for these new objects.
2/ Definition
A quick search on the Internet gives us many terms describing this new field and its multiple sub-genres: audio-games, music video-games, music memory-games, rhythm-games, pitch-games, volume-games, eidetic music-games, generative music-games… To avoid contention, we prefer to accept the broadest of terms, which takes into account all of the situations where games and musical production are combined, the main particularity of the field being the layering of these two universes, which have points of contact but also different cognitive objectives. Thus, it is useful to set aside, from the outset, any and all ambiguity by specifying that the object of our research does not directly deal with the field of sound in video games (a field which deals primarily with interactive sound design), but rather with a new organology which uses fun video activities to produce sound and extensively uses digital interactivity, creating a new relationship with images. The « audio game », with respect to the dimension of sound, is rooted in a realm between a musical instrument, a music box, and musical automatons. Moreover, the question of balance between the freedoms and constraints of players is crucial to the success of an entertaining sound-based video game. Therefore, the richer and more complex the rules are, the more musically interesting the game is, even if it requires a lot of time to learn how to use. On the contrary, the more the constraints are preset, the easier the game is to use, the user managing almost immediately, but with the undeniable risk of producing either mediocre or repetitive sound. A second important characteristic of this new cultural object that we are trying to define is the development of a new relation between image and sound. Indeed, the overlapping of two universes (games and sound production) is materialized in the interface and interactive manipulation, which often do not refer to musicology, but rather to rules specific to games (fighting opponents, manipulating objects…). It is above all the use of interfaces that we refer to as "a-musicological", as do Francis Rousseau and Alain Bonardi, which appears to be the main originality of this new organology.
3/ Gameplay and Instrumental games
A link exists between video games and sound in the notion of "playing", a notion which deserves our attention. In the world of video games, the terms playability and gameplay are the most often used to describe this dimension. For some (and in Canada in particular), the two terms are considered as synonyms and "jouabilité" is used as the French Translation of "game-play". For others, the two terms refer to different realities, despite being very closely related. The first is considered as a subgroup of the second. The first term refers to the pleasure of playing while the second term represents the principles of the game. Both notions being similar (and each directly depending on the other), we prefer to use the French term of "jouabilité" to describe all of the elements in the ludic video experience: the rules, interface, and maneuverability as well as player appropriation (the pleasure procured and the ease of appropriation). What does the playability of video games and instrumental games have in common? If we define playability as the characteristic of that which is playable, which can be played, we can therefore find a musical link in the notion of musicality, that which is "well composed" for a given instrument and which "sounds" good. We can then speak of "playability" for a piano score in reference to the pleasure procured by the pianist when playing this composition and his or her ability to incorporate, personify, express his or her interpretation. As regards the aspect of applying the rules of the game, there is no direct equivalent of the player that can be found in the traditional roles associated with music : neither a true composer, nor solely an instrumentalist, nor simple listener. Yet, one composer has remarkably shifted how these roles were assigned and enabled the development of a ludic dimension. Indeed, by the end of the 1950s, John Cage (1912-1992) made headway in his research on chance and indetermination by transforming the score into a game with rules that the musician had to incorporate and above all personalize. The score thus changed from notations of musical parameters to that of actions. For example, in "Variations I" (1958), "Variations II" (1961) or "Cartridge Music" (1960), the notes were written as relative values (the size of the points reflect the intensity and length) on transparent sheets the musician placed over other transparent sheets containing the rules of the game (geometric shapes, a clock face…). The overlapping of sheets containing information on different levels (notes and rules) produces a score which must be interpreted. As shown below, the musician must implement a plan to produce the musical information before playing a single note, a "task" normally reserved for composers themselves. This new role of instrumentalists embodies the two English terms which describe games: "game" and "play". The term "game" refers to the notion of structure (a system of rules which the player must respect to successfully carry out an action), while the term "play" denotes a ludic attitude the player adopts ("actions of the player") (Genvo, 2002). In modifying the role of the musician, John Cage considerably increased the amount of freedom and creativity of the musician, two fundamental elements of a ludic attitude. The link between composition and the theory of games can be more explicitly found in the famous performance "Reunion" (1968), featuring Marcel Duchamp playing a game of chess where each move produces musical events more or less randomly. 
Game theory can also be found in the work of other composers such as Iannis Xenakis ("Duel" (1959), "Stratégie" (1962) and "Linaia Agon" (1972)), Mauricio Kagel ("Match" (1964), inspired by a game of tennis) or John Zorn ("Cobra" (1984)).
If these approaches involve a dialogue between two specialists (the composer and the musician), the ludic video objects are particular in nature in that they mainly address a more general amateur public. This mainstream ludic attitude embodies the work of Vilèm Flusser (Flusser, 1996), on photography, writings which particularly shed light on the particular nature of our research: Flusser in fact defines the camera as a structurally complex toy but functionally simple, enabling you to take pictures without any particular skills, making this activity accessible to a vast amateur public : "The camera is not a tool, but a toy and the photographer is not a worker, but a player: not "homo faber" but "homo ludens". 3 And furthermore: "While the camera is based on complex scientific and technical principles, it is very easy to operate. It's a structurally complex toy, but functionally simple. In that way, it is the contrary of a game of chess, which is structurally simple, but functionally complex: Its rules are simple, but it is very difficult to play chess well. A person who operates a camera can very well take excellent pictures without necessarily having the slightest inclination of the complexity of the processes that take place when he or she pushes the button". 4 This transfer of specialized skills (the photographer or the composer) to an amateur appears to be the founding paradigm of this new class of cultural objects. A paradigm which is part of a broader movement linked to new information technologies. Jean Louis Weissberg (Weissberg, 2001), in his article on the emerging amateur figure clearly shows new intermediary spaces which appear between production and reception, between professionals and amateurs. The author puts forth the notion of graduated skills and highlights the political dimension of these new autonomy improving postures. In transferring this capacity, the user-friendliness of the interfaces is clearly essential. If it facilitates the transfer of skills and endeavors to make complex operations more intuitive, we argue that in the field application of musical composition we are studying, the question of user-friendliness is directly linked to the key issue of the representation of sound and musical structures in its ability to make concrete and visible that which is abstract and sound based.
4/ A few historical milestones
We can comprehend the recent emergence of "audio-games" by linking it to the closing gap between graphic arts and musical arts which arose toward the end of the nineteenth century and experienced rapid growth throughout the twentieth century (Bosseur, 1998). As the scale of such historical dynamics exceeds the framework of this article (Zénouda, 2008), we shall very briefly introduce three milestones:
-"Audiovisual music": The expression of audiovisual music progressed in the beginning of the 1940s5 thanks to John Withney6 , who defined the characteristics by studying the question of tempo in relation to specific films (combinations of time and space as well as shape and color). Nevertheless, audiovisual music existed long before, as early as the 1920s, with directors like Oskar Fishinger7 or Len Lye8 and is a part of the circle of influence regarding questions of connections between the arts and synesthesia so dear to abstract painters like Kandinsky or Paul Klee. The will of these directors was to find a specific film language "free" of any narration and forms inspired by novels or plays. In doing so, filmmakers strayed from cinema's primary function of recording reality by favoring to use non figurative forms and searching for construction models for their work in music which is the most abstract of arts. With duration and rhythm as a common denominator, these filmmakers sought to produce "completely" synesthetic works playing with the subtle connections between different senses. In the 1970s, the composer Iannis Xenakis, also interested in the connection between image and audio, proposed a graphic interface for the composition of music dubbed UPIC 10 11 . Thanks to a graphic palette, the composer can draw waveforms and amplitude envelopes and control both the structure of the sound and the general form of the work. More recently, Golan Levin 12 , a researcher at MIT 13 , a series of instruments 14 which closely combined image, sound, and gestures. A developer of graphic palettes to which he adds real-time computer generated sounds, he associates movement parameters, like the direction and speed of movement or the pressure of an electronic pencil, with sound parameters, like timbre, pitch, panning, and with graphic parameters, like color, the thickness of a stroke or direction. For Golan Levin, the notions of interactivity and generativity are closely linked: Images and sound are produced in real time as the result of a user's movement, hence creating a "generative and interactive malleable audiovisual substance". 15 In France, the company Blue Yeti 16 proposes a dual screen musical drawing "Grapholine" system based on the transformation of sound samples (via a standard but also customizable sound database) by granular synthesis and offers a large range of manipulations and relations between image and sound (speed of the stroke, transparency, luminosity, color, pencil pressure…). At the UTC de Compiègne, two students 17 in their final year of computer science studies, proposed a virtual reality application (Immersive Music Painter, 2010) 18 in which the user, with his or her gestures, draws curves of different colors and thinknesses to which sound or melodies of controllable pitch, panning, and volume are associated.
These three fields are part of the history of representing sound and interacting with sound, and concern the specialized experimental cinema audience for the first field, and composers and musicians for the latter fields. "Audio-games" add a ludic dimension to these two aspects and, in doing so, raise the question of the user-friendliness of the interface, thus making this type of application available to a larger amateur audience, regardless of their knowledge of music.
5/ Typology of "audio-games"
We aim to develop an extremely simple and general typology that incorporates all types of "audio-games" as well as all types of music produced, regardless of style and level of user expertise. In doing so, we concentrate on the question of representation (set or dynamic) of both types of sound: the basic elements (paradigmatic axis) and elements created through manipulations (playability) proposed via games (syntagmatic axis).
-The vertical paradigmatic axis pertains to the representation of basic sound elements that the audio-game provides the player. The graphic representations of these basic elements range from the classic sol-fa to graphical creations without any direct link to sound, using abstract representations related to synesthesia (and any type of connection with different sound parameters). Note that the distinction between an elementary sound and one that is created depends on the type of game and interaction proposed. Thus, what is considered as composed in one audio-game may be considered as elementary in another. The items on this axis may therefore be modified depending on the audio-game that is studied and placed according to their distance or proximity to the traditional sol-fa.
-The horizontal syntagmatic axis pertains to the representation of sound objects created through the game's playability. It regards the representations of second-level sound creations. These representations may be set and close to classic musical manipulations (representations of repetitions, transposition, reversals…) or dynamic (simulations of physical phenomena like rebounding, explosions, accumulations), or describe the dynamics of "musicological" or "a-musicological" playability. The first can find direct musical analogies (moving around within space, mirroring…); the second have no musical analogy whatsoever and imply arbitrary and ad-hoc relations linked to the diegesis of the game (fighting games…). As for the paradigmatic axis, the items on the syntagmatic axis can be modified depending on the audio-game studied and will be positioned according to their distance or proximity to the traditional sol-fa.
6/ A brief presentation of a few "audio-games"
-"Aura" (on iPhone and iPad) 19 [2,1]20 , very close to the aesthetics of a work of Oskar Fishinger such as "Allegretto" (1936) or Jordan Belson's "Allures" (1961), allows users to create their own melodies over a computer-generated musical background. Always in harmony with the music in the background, the user can select the notes, timbre and volume using simple colored shapes on the screen. The audio-visual creation process is fluid and uninterrupted. The musical and visual background changes simply by shaking your iPhone. The user produces music and abstract shapes, in what is presented as an interactive generator of background music (and images) available at the touch of a button. An application like "SynthPond"21 (iPhone and iPad) [2,1] allows you to place different shapes on circles of varying diameters. Notes are played when they collide with waves generated by the user or with other nodes. Contrary to "Aura", "SynthPond" produces repetitive musical loops, making it possible to visually anticipate the output by following the movement of the nodes as they get closer to different junctions. The player can select the pitch, timbre, and tempo of the loops, which thus allows you to produce potentially complex melodies.
-"Rez" (Tetsuya Mizuguchi, 2001, Sega)22 and "Child of Eden" (Tetsuya Mizuguchi, 2010, Xbox360)23 [2,4] are more similar to traditional games in that they preserve a significant sound dimension. "Rez" is like a fight in which the musical elements are dynamically created by the player's acts. Each time the player's or enemy's vessel shoots, a sound is produced which adapts in rhythm and in harmony. The game experience is therefore quite original: the ludic mission (to destroy the enemy vessels) is the source of the images and sound that are produced. The arguments of the designer, referring to Kandinsky and synaesthesia, link this game to those previously mentioned. To heighten the quest for synaesthesia, multiple simultaneous sensations are brought on by the vibration of the joystick when the images and sound synchronise. In "Child of Eden", Tetsuya Mizuguchi continues to improve playability by generating sound using state-of-the-art technological innovations (HD images, 5.1 surround sound, Kinect motion sensors which enable you to play without a mouse or joystick). Motion detectors enable direct interaction with images as well as new types of interactivity (for example, clapping your hands enables you to change weapons during the game).
The visual aesthetics are far more dazzling and psychedelic than in "Rez", and the ludic mission is to eradicate a virus which jeopardizes the project to recreate a human within Eden (the archive of humanity). Each level corresponds to a step in this archive of humanity and the last stage incorporates the personal contributions of players with pictures of their happiest moments. -"Metris"24 (Mark Havryliv, Terumi Narushima, Java Applet) [2,3] adds a musical creation dimension to the famous game "Tetris". A synthetic bell sound is generated in real time each time a block is moved. When the player moves or rotates blocks the pitch changes. The way in which the blocks fit together produces different chords. All of the different possibilities enable players to produce sophisticated micro-tonal compositions without losing the interest of the original "Tetris" game.
-"Pasy02"25 (iPhone and iPad) [2,2] is laid out as a grid that you can reshape and stretch as much as you would like and which comes back to normal with an elastic effect. These physical modifications influence the pitch, tempo, timbre of the musical loop played by a synthesizer with waveforms (sine, triangles, squares, sawtooth) that can be chosen by the user to produce diverse melodies. The simple application offers an original and live production of sound. -"Elektro-Plankton"26 (Toshio Iwai, Nintendo) is an emblematic game which offers ten different plankton themed interfaces with various musical situations enhanced with ludic graphics. Certain interfaces emphasize the strong link between musical gestures and pictorial gestures. Furthermore, with "Tracys," [3,3], the player draws lines (straight or curved) the plankton follow while playing piano notes in rhythm with the graphic shapes created. Yet, others display a labyrinthine and rhizomic dimension of music (using a complete range of possible notes): with "Luminaria" [3,3], several plankton move around on a grid of nodes and links, each link following the arrows. The player can change the connections between the nodes and hence change the direction the plankton take. In doing so, the player modifies the notes that are played. Others use a device which is close to what could be referred to as a sound installation. The "Hanenbrows" [3,2], for example, are projected on the screen and produce musical notes when they bounce off leaves. In changing the angle of the leaves, the player can influence the melodies that are produced. Each time a leaf is hit, it changes color and the sound it makes. A screen like "Nanocarps" [3,2], uses plankton, each with their own behaviour and direction. A sound is produced when the plankton hit a wave and using the microphone allows the player to reorganize them. In the same way, the application "Volvoice" [3,1] uses a computer microphone to record sounds which can then be modified (pitch, speed, filter) as much as you'd like by simply changing the shape of the plankton on the screen. Finally, the screen "Sun-Animalcule" [3,1] proposes for you to plant plankton anywhere on the screen. The musical note depends on where you plant it. The light produced by a day/night cycle hatches the seeds and produces the corresponding musical note. As the plankton embryos get bigger, their musical behaviour changes. Using the same graphical elements, each of the three productions make use of a particular aspect of music: Legato uses audio mixing of melodic lines on a particular theme in a loop, Cellos aligns different preset melodies, Moon tribe allows you to synchronize rhythmic loops. The structure itself of the interaction is copied or transposed on aspects of music such as harmony, counterpoint, melodic structure, synchronization of rhythms. The graphics possess their own aesthetic coherence and arbitrary sounds. In the same way, each visual and audio mode, possesses its own tempo. The tempo of the graphics and that of the music do not always perfectly overlap. They produce visual and audio temporal loop delays. The gesture does not merge the two modes, but coordinates them. It's the junction, the border between the two worlds. If this relationship leads to sensory fusion, it is only elusive, unambiguous, and subject to interpretation that varies greatly from one user to another. 
This fusion is not directly part of the technical process, but rather accomplished in the course of the gesture, as a matter of user appropriation. It is impossible to say whether our gesture is guided by our interest in producing sound or pleasant visual graphics. We imperceptibly emphasize one or the other, thus combining the three dimensions: visual, audio, and gestural. These examples fit into our typology as follows:
« Aura » [2,1], « SynthPond » [2,1], « Rez » [2,4] , « Child of Eden »[2,4] , « Metris » [2,3], « Pasy02 » [2,2] , « Tracys » [3,3], « Luminaria » [3,3], « Hanenbrows » [3,2], « Nanocarps » [3,2], « Volvoice » [3,1] , « Sun-Animalcule » [3,1], « Flying puppet » [3,4]
7/ Towards an analysis grid for "audio games"
In addition to a classification table of these new objects, we propose some ideas for developing an analysis grid for "audio-games":
-The distinction in relations between image and sound: three modes interact and slide from one to another within the same ludic audio-video production: sound for the overall benefit of images (derived from traditional sound and audiovisual illustrations), images for the overall benefit of sound (derived from computer-based music, where images aim to represent and manipulate sound), and images and sound of equal importance (more specific to "audio-games" and hypermedia), producing perceptible fusion effects between the two modes.
-The distinction between sound-producing modes: some sounds are produced by direct intervention of the user, for example when the user moves the mouse, clicks on or goes over an icon. Other sounds are automatically generated by the system without any user intervention, such as automatic background sounds. Yet others are generated automatically by the system but linked to user intervention, like a particular path through the application or an amount of time spent on an interaction. These different means of producing sound have a tendency to interfere with each other and lead to a certain degree of confusion. Thus, studying precisely how the author manages the untimely audio mixing of the different layers of sound is essential for a detailed analysis of the relationships between images and sound in an interactive situation.
-Taking the graphic/audio/gestural triptych into account: this is expressed with the notion of mapping (transcribing information that has been received in one register, the movement or graphic manipulation, into another register, here the musical one). Several types of mapping can generally be distinguished: the relationship in which one parameter of a field corresponds to one parameter of another (one-to-one), the situation in which the parameter of one field is associated with several parameters of the other (one-to-many), and finally the relationship in which several parameters of one field are associated with one parameter of the other (many-to-one); a schematic illustration is given at the end of this list. In these multisensory associations, the choice of which parameters of each modality are associated and the manner in which they are associated is essential. Indeed, we note which audio and visual dimensions are affected by the interaction and what perceptible effects they produce. Regarding the sound: the note (pitch, length, intensity), the timbre (envelope, frequency…), the rules of manipulating musical structure, the audio mixing of several tracks, the general parameters like tempo… and to which graphic parameters these are assigned (color, shape, opacity, sharpness, frame, level of iconicity…). What sensory effects are produced by the multiple combinations of image/sound/gestures?
-The analysis of different cognitive objectives: "Audio-games" present specific situations where a user's gesture controls and produces images and sound at the same time while taking part in another logic, the game itself (for example, destroying space vessels in "Rez" or placing obstacles to manage how balls rebound in "Elektroplankton"). We have demonstrated that this specific situation produces complex perceptible effects where the effects of synchronisation and fusion of images and sound are enhanced by gestures and different cognitive stakes. The main difficulty is associating these two levels (a game and musical production) in a meaningful way (neither a confrontational relationship, nor one that is too distant or anecdotal). Ideally, the ludic layer would give rise to new meaningful instrumental gestures in sound production and ultimately innovate music. To obtain optimal results, the game's rules and missions must therefore clarify the musical structures when played, while keeping their impact and coherence as a game. We can see here the difficulty of this desired balance.
-These new objects thus stress the necessity of developing multimodal semiotic approaches to analysis which simultaneously take into account what can be seen and heard as well as gestures. A few tools might help us to make headway:
-In 1961, Abraham Moles [START_REF] Moles | Théorie de l'information et perception esthétique[END_REF] proposed a scale of iconicity with thirteen levels for ranking images according to how closely they resemble the real object they represent. This progressive axis went from the most concrete and figurative representations to the most abstract representations like natural languages or artificial languages (mathematics etc.). This scale of iconicity can be applied to sound by developing two axes: that which goes from concrete to musical and that which goes from recorded to simulated (computer-generated sound).
-Inspired by Peirce's sign theory, the composer François Bayle [START_REF] Bayle | Musique acousmatique : propositions … positions[END_REF] defines three properties of sound linked to the attention of the listener: the "icon" (isomorphic image or im-sound), where the object is indicated by all of its characteristics; the "index" (indexed image or di-sound), where certain graphic traits denote the object; and the "symbol" (a metaphor or me-sound), where the image represents the object through associative properties. These three kinds of signs give rise to three levels of hearing: one in which sounds are heard as corresponding to directly identifiable referents of reality (quality: quali-sign); one in which the relationship is more abstract and the sound becomes a significant element of something else, a specialized listening in which sounds are heard as having been transformed (filtering, transposition, insert…), as indications of musical composition (singularity: syn-sign); and finally one in which the sign is governed by a known law which is independent from the sign itself (rebounds, oscillation…), a listening oriented towards a sense of organisation, formal law… (stability: legi-sign).
-Conceived by a team of musical researchers in Marseilles, an Unité Sémiotique Temporelle is "a musical segment which, even out of context, possesses a precise temporal signification thanks to its morphological structure" 30 . Nineteen USTs were identified and labelled: A word or a literary expression which most directly describes the manner in which the energy of sound is deployed over time [START_REF] Chute | Par vagues, Qui avance, Qui tourne, Qui veut démarrer, Sans direction par divergence d'information[END_REF] , most often with the help of a "morphological appellation, a qualifier which often refers to something extramusical. 32 This extramusical reference is a first step towards a generalization of these labels to other modalities. In this way, we can emphasize that all of these labels refer to an energetic or spatial movement which naturally connect them to gestures and images.
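To make the mapping vocabulary used above concrete, here is a purely illustrative sketch in Python; it is not taken from any of the audio-games discussed, and every parameter name in it is invented for the example. It simply shows the three kinds of gesture-to-sound mapping as functions from a gesture description to sound parameters:

    # Illustrative sketch of gesture-to-sound mappings (hypothetical parameters).
    def one_to_one(gesture):
        # one gesture parameter drives one sound parameter
        return {"pitch": gesture["speed"]}

    def one_to_many(gesture):
        # one gesture parameter drives several sound parameters
        return {"pitch": gesture["pressure"], "volume": gesture["pressure"]}

    def many_to_one(gesture):
        # several gesture parameters combine into one sound parameter
        return {"timbre": 0.5 * (gesture["speed"] + gesture["direction"])}

    print(one_to_many({"speed": 0.3, "pressure": 0.8, "direction": 0.1}))

In an actual audio-game the chosen mapping, and the perceptible effects it produces, would of course be far richer than this toy dictionary-based example.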
8/ To conclude: Towards what new instrumental and compositional gestures?
It is currently difficult to foresee new instrumental and compositional gestures resulting from these types of interfaces. Nevertheless, we can note that they are part of a general movement which is creating completely new situations of musical interaction: real-time musical devices are transforming musical gestures by increasing or reshaping them, which results in separating gestures from the sound produced. Professional computer-generated sound software 33 more and more frequently adds network representations, physical simulations (rebounds, soundclouds…), shapes moving through labyrinths of notes, and ludic approaches to its traditional representations (treble clef and bass clef scores, midi data note grids, player piano music rolls, displayed waveforms…) (Vinet, 1999). « Audio-games » play a role in renewing musical gestures as well as the representation of sound and musical structures. They make playing and composing music easier for a broad audience regardless of their knowledge of music. They make the musical gestures of musicians on stage more visible and comprehensible. Furthermore, they are likely to make the relationship between audiences and composers more explicit thanks to interactivity.
RECOMMENDED WEBSITES:
-http://www.centerforvisualmusic.org/
-Allures: http://www.mediafire.com/?fy920bhvu6q6b1v
FIGURES:
Figure 1: John Cage (1960), Cartridge Music
Figure 2: John Cage (1968), Reunion (featuring Marcel Duchamp in Toronto)
Figure 3: Jordan Belson (1961), Allures (http://www.mediafire.com/?fy920bhvu6q6b1v)
Figure 4: Cornelius Cardew (1963-1967), Treatise
Figure 5: Typology of « Audio-games »
Figure 6: « Aura »
Figure 7: « Synthpond »
Figure 8: « Rez »
Figure 9: « Child of Eden »
Figure 10: « Pasy02 »
Figure 11: « ElectroPlankton »
Figure 12: « Moon Tribe »
Figure 13: Typology of « Audio-games »
RECOMMENDED LITERATURE:
-Cage J. (1976), Pour les oiseaux, Belfond.
-Bonardi A., Rousseau F. (2003), « Music-ripping » : des pratiques qui provoquent la musicologie, ICHIM 03.
-Bosseur J. Y. (1998), Musique et arts graphiques, Interactions au XXième siècle, Minerve, Paris.
-Bosseur J. Y. (2005), Du son au signe, Alternatives, Paris.
-Collectif (2004), Jouable. Art, jeu et interactivité, Workshop/Colloque, Haute école d'arts appliqués HES, Ecole Nationale des Arts Décoratifs, Ciren, Université Paris 8 -Genève, Centre pour l'image contemporaine.
-Flusser V. (1996), Pour une philosophie de la photographie, Circé, Paris.
-Genvo S. (2002), « Le game design de jeux vidéo : quels types de narration ? » in « Transmédialité de la narration vidéoludique : quels outils d'analyse ? », Comparaison, Peter Lang, 2002, p.103-112.
-Havryliv M. (2005), « Playing with Audio : The Relationship between music and games », Master of arts, University Of Wollongong, USA.
-Kandinsky V. (1969), Du spirituel dans l'art, et dans la peinture en particulier, Denoël-Gonthier, Paris.
-Manovitch L. (2001), The language of new media, MIT Press, Cambridge.
-Natkin S. (2004), Jeux vidéo et médias du XXIe siècle : quels modèles pour les nouveaux loisirs numériques, Paris, Vuibert.
-Stranska L. (2001), Les partitions graphiques dans le contexte de la création musicale Tchèque et Slovaque de la seconde moitié du vingtième siècle, Thèse de Musicologie, Paris IV.
-Vinet H. (1999), Concepts d'interfaces graphiques pour la production musicale et sonore, in Interfaces homme-machine et création musicale, Hermes, Paris.
-Weissberg J.L. (2001), L'amateur, l'émergence d'une nouvelle figure politique, http://multitudes.samizdat.net/L-amateuremergence-d-une-figure
-Zénouda H. (2008), Les images et les sons dans les hypermédias artistiques contemporains : de la correspondance à la fusion, L'Harmattan, Paris.
10 Unité Polyagogique Informatique du CEMAMu (Centre d'Etudes de Mathématiques et Automatique Musicales)
11 http://www.youtube.com/watch?v=yztoaNakKok
12 http://acg.media.mit.edu/people/golan/
13 Massachusetts Institute of Technology (USA)
14 « Aurora » (1999), « Floo » (1999), « Yellowtail » (1999), « Loom » (1999), « Warbo » (2000)
15 « an inexhaustible audiovisual substance which is created and manipulated through gestural mark-making », Golan Levin, Painterly Interfaces for Audiovisual Performance, B.S. Art and Design, [LEVIN 1994], p.19.
16 http://www.blueyeti.fr/Grapholine.html
17 Camille Barot and Kevin Carpentier.
18 http://www.utc.fr/imp/
-Golan Levin : http://acg.media.mit.edu/people/golan/ -Blue Yeti : http://www.blueyeti.fr/Grapholine.html -Aura : http://www.youtube.com/watch?v=rb-9AWP9RXw&feature=related -Synthpond : http://www.youtube.com/watch?v=mN4Rig_A8lc&feature=related -REZ : http://www.youtube.com/watch?v=2a1qsp9hXMw -Child Of Eden : http://www.youtube.com/watch?v=xuYWLYjOa_0&feature=fvst -Trope : http://www.youtube.com/watch?v=dlgV0X_GMPw -Pasy02 : http://www.youtube.com/watch?v=JmqdvxLpj6g&feature=related -Sonic Wire : http://www.youtube.com/watch?v=ji4VHWTk8TQ&feature=related -Electroplankton : http://www.youtube.com/watch?v=aPkPGcANAIg -Audio table : http://www.youtube.com/watch?v=vHvH-nWH3QM -Nicolas Clauss : http://www.flyingpuppet.com -Cubase : http://www.steinberg.fr/fr/produits/cubase/start.html -Nodal : http://www.csse.monash.edu.au/~cema/nodal/
We borrow the term « a-musicological » from Francis Rousseau and Alain Bonardi (Bonardi, Rousseau, 2003).
Referenced above (p.35)
Referenced above (p.78)
Five film exercices (1943 -1944) (http://www.my-os.net/blog/index.php?2006/06/20/330-john-whitney)
Withney J. (1980), Digital harmony on the complementary of music and visual arts, Bytes Books, New Hampshire.
« Studie Nr 7. Poème visuel » (1929-1930), « Studie Nr » (1931) … 8 « A Colour Box » (1935) …
John Cage but also Earle Brown, Pierre Boulez, André Boucourechliev among others….
http://www.youtube.com/watch?v=rb-9AWP9RXw&feature=related
[2,1] = 2 on the paradigmatic axis and 1 on the syntagmatic axis
http://www.youtube.com/watch?v=mN4Rig_A8lc&feature=related
http://www.youtube.com/watch?v=2a1qsp9hXMw
http://www.youtube.com/watch?v=xuYWLYjOa_0&feature=fvst
Havryliv Mark, Narushima Terumi, « Metris: a game environment for music performance », http://ro.uow.edu.au/era/313/
http://www.youtube.com/watch?v=JmqdvxLpj6g&feature=related
http://www.youtube.com/watch?v=aPkPGcANAIg
http://www.flyingpuppet.com/ |
01759278 | en | [
"phys.cond.gas"
] | 2024/03/05 22:32:10 | 2017 | https://theses.hal.science/tel-01759278/file/LANG_2017_diffusion.pdf
Correlations in low-dimensional quantum gases
Thèse soutenue publiquement le 27 octobre 2017, devant le jury composé de :
Jean-Christian Angles D'auriac
M. Grigori
M. Dimitri Gangardt, Professeur
M. Jean-Sébastien Caux, Professeur
Mme Anna Minguzzi
A few collaborations have not been as rewarding in terms of concrete achievements but taught me much as well, I would like to thank in particular Eiji Kawasaki for taking time to think together about spin-orbit coupling in low dimensions, Luigi Amico for private lectures on the XYZ model and his kind offers of collaboration, and Davide Rossini for his kind welcome in Pisa.
Thanks and greetings to all former and current LPMMC members, permanent researchers, visitors, interns, PhD students and Postdocs. I really enjoyed these months or years spent together, your presence to internal seminars and challenging questions, your availability and friendship. In particular, I would like to thank those to whom I felt a little closer: Marco Cominotti first, then Eiji Kawasaki with whom I shared the office and spent good moments out of the lab, Malo Tarpin with whom it always was a pleasure to discuss, and Katharina Rojan who has been so kind to all of us. Thanks to all of those who took time to discuss during conferences and summer schools, in particular Bruno Naylor, David Clément and Thierry Giamarchi, they who made interesting remarks helping improve my works or shared ideas on theirs, in particular Fabio Franchini, Martón Kormos, Maxim Olshanii, Sylvain Prolhac, Zoran Ristivojevic and Giulia de Rosi. Thanks also to my jury members for accepting to attend my defence, and for their useful comments and kind remarks. I also have a thought for my advisor Frank Hekking, who would have been glad to attend my defence as well. I always admired him for his skills as a researcher, a teacher and for his qualities as a human being in general, and would like to devote this work to his memory, and more generally, to all talented people who devoted (part of) their life to the common adventure of science. May their example keep inspiring new generations.
Aside from my research activity, I have devoted time and energy to teaching as well; in this respect, I would like to thank Jean-Pierre Demange for allowing me to replace him a couple of times, Michel Faye for giving me the opportunity to teach at Louis le Grand for a few months, then my colleagues at Université Grenoble Alpes, in particular Sylvie Zanier and, most of all, Christophe Rambaud with whom it was a pleasure to collaborate. Now that I have moved to full-time teaching, thanks to all of my colleagues at Auxerre for their kind welcome, in particular to the CPGE teachers with whom I interact most, Clément Dunand for his charisma and friendship, and Fanny Bavouzet who is always willing to help me and give me good advice.
To finish with, my way to this point would not have been the same without my family and their support, nor without wonderful people whom I met on the road, among others Lorène, Michel, Joëlle, Jean-Guillaume and Marie-Anne, without whom I would have stopped way before, then Charles-Arthur, Guillaume, Thibault, Cécile, Sébastien, Pierre, Nicolas, Vincent, Clélia, Delphine, Amélie, Iulia, Élodie, Cynthia, Félix and Ariane. Special thanks to Paul and Marc who have been my best friends all along this sneaky and tortuous way to the present.
Most of all, my thoughts go to my sun and stars, pillar and joy of my life, friend and soulmate. Thank you so much, Claire! With love, G. Lang.
This thesis consists of an introductory text, followed by a summary of my research. A significant proportion of the original results presented has been published in the following articles:
(i) Guillaume Lang, Frank Hekking and Anna Minguzzi, Dynamic structure factor and drag force in a one-dimensional Bose gas at finite temperature, Phys. Rev. A 91, 063619 (2015), Ref. [START_REF] Lang | Dynamic structure factor and drag force in a one-dimensional Bose gas at finite temperature[END_REF] (ii) Guillaume Lang, Frank Hekking and Anna Minguzzi, Dimensional crossover in a Fermi gas and a cross-dimensional Tomonaga-Luttinger model, Phys. Rev. A 93, 013603 (2016), Ref. [START_REF] Lang | Dimensional crossover in a Fermi gas and a cross-dimensional Tomonaga-Luttinger model[END_REF] (iii) Guillaume Lang, Frank Hekking and Anna Minguzzi, Ground-state energy and excitation spectrum of the Lieb-Liniger model : accurate analytical results and conjectures about the exact solution, SciPost Phys. 3, 003 (2017), Ref. [START_REF] Lang | Ground-state energy and excitation spectrum of the Lieb-Liniger model : accurate analytical results and conjectures about the exact solution[END_REF] (iv) Guillaume Lang, Patrizia Vignolo and Anna Minguzzi, Tan's contact of a harmonically trapped one-dimensional Bose gas: strong-coupling expansion and conjectural approach at arbitrary interactions, Eur. Phys. J. Special Topics 226, 1583-1591 (2017), Ref. [START_REF] Lang | Tan's contact of a harmonically trapped onedimensional Bose gas: strong-coupling expansion and conjectural approach at arbitrary interactions[END_REF] (v) Maxim Olshanii, Vanja Dunjko, Anna Minguzzi and Guillaume Lang, Connection between nonlocal one-body and local three-body correlations of the Lieb-Liniger model, Phys. Rev. A 96, 033624 (2017), Ref. [START_REF] Olshanii | Connection between nonlocal one-body and local three-body correlations of the Lieb-Liniger model[END_REF] Other publication by the author, not presented in this thesis:
(vi) Guillaume Lang and Emilia Witkowska, Thermodynamics of a spin-1 Bose gas with fixed magnetization, Phys. Rev. A 90, 043609 (2014), Ref. [START_REF] Lang | Thermodynamics of a spin-1 Bose gas with fixed magnetization[END_REF]

New topics constantly appear, that bring researchers away from old problems. Mastering the latter, precisely because they have been so much studied, requires an ever increasing effort of understanding, and this is unpleasing. It turns out that most researchers prefer considering new, less developed problems, that require less knowledge, even if they are not challenging. Nothing can be done against it, but formatting old topics with good references... so that later developments may follow, if destiny decides so.
Felix Klein, translation from German by the author
Chapter I Introduction: this thesis/cette thèse
This theoretical thesis summarizes the research activity I have performed during the three years of my PhD studies at Laboratoire de Physique et Modélisation des Milieux Condensés (LPMMC) in Grenoble, as a student of the École doctorale de Physique of Université Grenoble Alpes, under the supervision of Dr. Anna Minguzzi, and late Prof. Frank Hekking.
My work deals with ultracold atom physics [START_REF] Bloch | Many-body physics with ultracold gases[END_REF][START_REF] Giorgini | Theory of ultracold atomic Fermi gases[END_REF], where the high versatility of current experiments allows to probe phase diagrams of various systems in detail. I put the emphasis on low-dimensional setups, in particular degenerate quantum gases kinetically confined to one spatial dimension (1D gases), that became available in the early years of the twenty-first century [START_REF] Cazalilla | One dimensional Bosons: From Condensed Matter Systems to Ultracold gases[END_REF], but had already been studied as toy models since the early days of quantum physics.
I have focused on analytical methods and techniques, sometimes on the verge of mathematical physics, and left aside advanced numerical tools in spite of their increasing importance in modern theoretical physics. Experimental aspects are secondary in this manuscript, but have been a guideline to my investigations, as I have taken part in a joint program with an experimental group at Laboratoire de Physique et des Lasers (LPL) in Villetaneuse, the SuperRing project.
The key notion of this thesis is that of strongly-correlated systems, which cannot be described in terms of weakly-interacting parts. Solving models that feature strong correlations is among the most challenging problems encountered in theoretical physics, since the strong-coupling regime is not amenable to perturbative techniques. In this respect, reduction of dimensionality is of great help as it makes some problems analytically amenable, thanks to powerful tools such as Bethe Ansatz (BA), bosonization [START_REF] Giamarchi | Quantum Physics in One Dimension[END_REF] or conformal field theory (CFT) [START_REF] Francesco | Conformal Field theory[END_REF]. Another interesting point is that parallels between high-energy, condensed-matter and statistical physics are especially strong nowadays, since the theoretical tools involved are of the same nature [START_REF] Gogolin | Bosonization and Strongly Correlated Systems[END_REF][START_REF] Witten | Three Lectures On Topological Phases Of Matter[END_REF][START_REF] Wilczek | Particle physics and condensed matter: the saga continues[END_REF]. I focus on the low-energy sector and use a condensed-matter language, but readers from other communities may find interest in the techniques all the same. I tackle various aspects of the many-body problem, with auto-correlation functions of the many-body wavefunction as a common denominator, and a means to characterize low-dimensional ultracold atoms. The manuscript is composed of four main parts, whose outline follows:
Chapter II is a general introduction to various experimental and theoretical aspects of the many-body problem in reduced dimension. I give a brief account of the main specificities of one-dimensional gases, and introduce correlation functions as a suitable observable to characterize such fluids. Experimental and theoretical studies that allowed this reduction of dimensionality are summarized. I present powerful theoretical tools that are commonly used to solve integrable models, such as Bethe Ansatz, bosonization in the framework of Luttinger liquid theory and Conformal Field Theory. Their common features are put into light and simple illustrations are given. To finish with, I present the main known methods to increase the effective dimension of a system, as an introduction to the vast topic of dimensional crossovers.
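For readers unfamiliar with bosonization, the low-energy effective model referred to here can be recalled schematically; the following standard form is quoted from the general literature (conventions for the fields and for the Luttinger parameter vary between references) and is not reproduced from the thesis itself:

H_{TL} = \frac{\hbar v_s}{2\pi} \int dx \left[ K (\partial_x \theta)^2 + \frac{1}{K} (\partial_x \phi)^2 \right],

where v_s is the sound velocity, K the Luttinger parameter, and \phi and \theta are the conjugate density and phase fields.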
Chapter III deals with local and non-local equal-time equilibrium correlations of the Lieb-Liniger model. The latter is the paradigmatic model to describe 1D Bose gases, and has the property of being integrable. It has a long history, and this chapter may serve as an introduction to the topic, but deals with advanced aspects as well. In particular, I have made a few contributions towards the analytical exact ground-state energy, based on the analysis of mathematical structures that emerge in weak-and strong-coupling expansions. Then, I delve into the issue of correlation functions and the means to construct them from integrability in a systematic way. I introduce the notion of connection, that binds together in a single formalism a wide variety of relationships between correlation functions and integrals of motion. Keeping in mind that most experiments involve a trap that confines the atoms, I then show how the Bethe Ansatz formalism can be combined to the local density approximation (LDA) to describe trapped interacting gases in the non-integrable regime of inhomogeneous density, through the so-called BALDA (Bethe Ansatz LDA) formalism.
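For concreteness, the local density approximation invoked here amounts to the standard prescription (recalled schematically, not derived in this summary):

\mu[n(x)] = \mu_0 - V_{ext}(x), \qquad \int dx\, n(x) = N,

where \mu[n] is the equation of state of the homogeneous gas obtained from Bethe Ansatz, V_{ext} is the trapping potential, and the global chemical potential \mu_0 is fixed by normalization to the total particle number N.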
Chapter IV is devoted to the dynamical correlations of the Lieb-Liniger model. They are investigated in order to discuss the notion of superfluidity, through the concept of drag force induced by a potential barrier stirred in the fluid. The drag force criterion states that a superfluid flow is linked to the absence of a drag force under a critical velocity, generalizing Landau's criterion for superfluidity. Computing the drag force in linear response theory requires a good knowledge of the dynamical structure factor, an observable worth studying for itself as well since it is experimentally accessible by Bragg scattering and quite sensitive to interactions and dimensionality. This gives me an opportunity to investigate the validity range of the Tomonaga-Luttinger liquid theory in the dynamical regime, and tackle a few finite-temperature aspects. I also study the effect of a finite width of the barrier on the drag force, putting into light a decrease of the drag force, hinting at a quasi-superfluid regime at supersonic flows.
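Schematically, and up to convention-dependent prefactors (this linear-response expression is recalled from the general literature for orientation and is not quoted from the thesis), the drag force exerted by a weak barrier of Fourier transform U_b(q) moving at velocity v reads

F(v) \simeq \int_0^{\infty} \frac{dq}{2\pi}\, q\, |U_b(q)|^2\, S(q, qv),

so that, within this criterion, the flow is superfluid whenever the dynamical structure factor S(q,\omega) carries no weight along the line \omega = qv.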
In chapter V, I study the dimensional crossover from 1D to higher dimensions. A simple case, provided by noninteracting fermions in a box trap, is treated exactly and in detail. The effect of dimensionality on the dynamical structure factor and drag force is investigated directly and through multi-mode structures, the effect of a harmonic trap is treated in the local density approximation.
After a general conclusion, a few Appendices provide details of calculations and introduce transverse issues. I did not reproduce all the derivations published in my articles, the interested reader can find them there and in references therein.
Cette thèse théorique résume les principaux résultats que j'ai obtenus au cours de mes trois années de doctorat au LPMMC, à Grenoble, sous la direction d'Anna Minguzzi et de feu Frank Hekking.
Elle s'inscrit dans le cadre de la physique de la matière condensée, et plus particulièrement des atomes ultrafroids, qui suscite l'intérêt de par la possibilité qu'offrent ces systèmes de simuler toutes sortes de modèles et d'étudier en détail les diagrammes de phase qui leurs sont associés. Je m'intéresse plus particulièrement à des gaz dégénérés dont les degrés de liberté spatiaux transversaux sont entravés par des pièges au point que leur dynamique est strictement unidimensionnelle. Bien qu'ils aient fait l'objet d'études théoriques depuis des décennies, de tels systèmes ont été réalisés pour la première fois au tournant du XXI-ème siècle, ravivant l'intérêt pour ces derniers.
Parmi les nombreuses méthodes disponibles pour décrire les gaz quantiques unidimensionnels, j'ai plus particulièrement porté mon attention sur les techniques analytiques, délaissant volontairement les aspects numériques pour lesquels, en dépit de leur importance croissante et de leur intérêt indéniable, je n'ai pas d'affinité particulière. Je n'insiste pas non plus outre mesure sur les aspects expérimentaux, dont je suis loin d'être expert, mais ils restent présents en toile de fond comme source d'inspiration. En particulier, certaines thématiques que j'ai abordées l'ont été dans le cadre du projet SuperRing, conjoint avec des expérimentateurs du LPL à Villetaneuse.
La notion de système fortement corrélé joue un rôle essentiel dans mon projet de recherche. De tels systèmes ne peuvent être appréhendés en toute généralité par les méthodes perturbatives usuelles, qui ne s'appliquent pas dans le régime de couplage fort. De ce fait, ils constituent un formidable défi pour la physique théorique actuelle. La réduction de dimension le rend abordable, mais pas trivial pour autant, loin s'en faut. Les outils phares qui en permettent l'étude analytique sont connus sous les noms d'Ansatz de Bethe, de bosonisation et de théorie des champs conforme. Une particularité qui me tient particulièrement à coeur est le parallèle fort qui existe actuellement entre la physique des hautes énergies, de la matière condensée et la physique statistique du fait de leurs emprunts mutuels de formalisme et de techniques. Bien que je m'intéresse ici plus spécifiquement à la physique de basse énergie, des chercheurs d'autres communautés sont susceptibles de trouver un intérêt pour les techniques et le formalisme employés. J'aborde divers aspects du problème à N corps, centrés autour des multiples fonctions de corrélation qu'on peut définir à partir de la seule fonction d'onde, qui constituent un formidable outil pour caractériser les systèmes d'atomes froids en basse dimension. J'ai décidé de les présenter dans quatre parties distinctes, qui constituent chacune un chapitre de ce manuscrit.
Le Chapitre II consistue une introduction générale au problème à N corps quantique en dimension réduite. J'y présente quelques caractéristiques spécifiques aux gaz unidimensionnels, puis explique comment les efforts conjoints des théoriciens et expérimentateurs ont permis leur réalisation. Certains modèles phares des basses dimensions s'avèrent être intégrables, aussi je présente les méthodes analytiques qui permettent d'en étudier de manière exacte les propriétés thermodynamiques et les fonctions de corrélation, à savoir l'Ansatz de Bethe, la bosonisation appliquée aux liquides de Tomonaga-Luttinger et la théorie conforme des champs. Ces techniques sont en partie complémentaires, mais j'insiste également sur leurs similarités. Enfin, à rebours de la démarche qui consiste à chercher à réduire la dimension d'un système, je m'intéresse au problème opposé, qui consiste à augmenter la dimension effective de manière graduelle, et présente les quelques méthodes éprouvées à ce jour.
Le Chapitre III traite des corrélations locales et non-locales dans l'espace mais à temps égaux et à l'équilibre du modèle de Lieb et Liniger. Il s'agit là d'un paradigme couramment appliqué pour décrire les gaz de Bose unidimensionnels, et les techniques présentées au chapitre précédent s'y appliquent car ce modèle est intégrable. De par sa longue histoire et le bon millier d'articles qui lui ont été consacrés, il constitue à lui seul un vaste sujet dont ma présentation peut faire guise d'introduction. J'y aborde également des aspects techniques avancés concernant l'énergie exacte du gaz dans son état fondamental. J'ai notamment amélioré les estimations analytiques de cette dernière par une étude fine des structures mathématiques apparaissant dans les développements en couplage fort et faible. Cette étude préliminaire débouche sur celle des fonctions de corrélation, et notamment la fonction de corrélation à un corps que je m'emploie à construire de façon systématique en me fondant sur l'intégrabilité du modèle de Lieb et Liniger. En explicitant les premières étapes de cette construction, j'ai été amené à introduire la notion de connexion, qui englobe dans un formalisme unique l'ensemble des formules connues actuellement qui lient les fonctions de corrélations et les intégrales du mouvement. En fait, la plupart des expériences actuelles font intervenir un piège pour confiner les atomes, ce qui rend le gaz inhomogène et prive le modèle de sa propriété d'intégrabilité. Toutefois, une astu-cieuse combinaison de l'approximation de la densité locale et de l'Ansatz de Bethe permet d'accéder quand même à la solution exacte moyennant des calculs plus élaborés.
Dans le Chapitre IV, je m'intéresse aux corrélations dynamiques du modèle de Lieb et Liniger, qui apportent des informations sur les propriétés de superfluidité à travers le concept de force de traînée induite par une barrière de potentiel mobile. Le critère de superfluidité associé à la force de traînée stipule qu'un écoulement superfluide est associé à une force de traînée rigoureusement nulle. Cette dernière peut être évaluée dans le formalisme de la réponse linéaire, à condition de connaître le facteur de structure dynamique du gaz, une autre observable traditionnellement mesurée par diffusion de Bragg, et très sensible à l'intensité des interactions ainsi qu'à la dimensionnalité. Cette étude me donne une opportunité de discuter du domaine de validité de la théorie des liquides de Tomonaga-Luttinger dans le régime dynamique, et de m'intéresser à quelques aspects thermiques. Enfin, en étudiant plus spécifiquement l'effet de l'épaisseur de la barrière de potentiel sur la force de traînée, je mets en évidence la possibilité d'un régime supersonique particulier, qu'on pourrait qualifier de quasi-superfluide.
Dans le Chapitre V, j'étudie la transition progressive d'un gaz unidimensionnel vers un gaz de dimension supérieur à travers l'exemple, conceptuellement simple, de fermions sans interaction placés dans un piège parallélépipédique. La simplicité du modèle autorise un traitement analytique exact de bout en bout, qui met en évidence les effets dimensionnels sur les observables déjà étudiées dans le chapitre précédent, le facteur de structure dynamique et la force de traînée, tant de façon directe que par la prise en compte d'une structure multimodale en énergie obtenue par ouverture graduelle du piège. L'effet d'un piège harmonique est traîté ultérieurement, toujours à travers l'approximation de la densité locale.
Après une conclusion globale, quelques appendices complètent cette vision d'ensemble en proposant des digressions vers des sujets transverses ou en approfondissant quelques détails techniques inédits.
Chapter II
From 3D to 1D and back to 2D II.1 Introduction
We perceive the world as what mathematicians call three-dimensional (3D) Euclidian space, providing a firm natural framework for geometry and physics until the modern times. Higher-dimensional real and abstract spaces have pervaded physics in the course of the twentieth century, through statistical physics where the number of degrees of freedom considered is comparable to the Avogadro number, quantum physics where huge Hilbert spaces are often involved, general relativity where in addition to a fourth space-time dimension one considers curvature of a Riemannian manifold, or string theory where more dimensions are considered before compactification.
Visualizing a higher-dimensional space requires huge efforts of imagination, for a pedagogical illustration the reader is encouraged to read the visionary novel Flatland [START_REF] Abbott | Flatland: A Romance of Many Dimensions[END_REF]. As a general rule, adding dimensions has dramatic effects due to the addition of degrees of freedom, that we do not necessarily apprehend intuitively. The unit ball has maximum volume in 5D, for instance. This is not the point I would like to emphasize however, but rather ask this seemingly innocent, less debated question: we are obviously able to figure out lower-dimensional spaces, ranging from 0D to 3D, but do we really have a good intuition of them and of the qualitative differences involved? As an example, a random walker comes back to its starting point in finite time in 1D and 2D, but in 3D this is not always the case. One of the aims of this thesis is to point out such qualitative differences in ultracold gases, that will manifest themselves in their correlation functions. To put specific phenomena into light, I will come back and forth from the three-dimensional Euclidian space, to a one-dimensional line-world.
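To make the random-walk example slightly more quantitative (a classical result of Pólya, added here as a reminder and not taken from the original text): for the simple random walk on a hypercubic lattice, the probability of ever returning to the starting point is

P_{return}(d=1) = P_{return}(d=2) = 1, \qquad P_{return}(d=3) \approx 0.34,

which is one precise sense in which three dimensions differ qualitatively from one and two.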
As far as dimension is concerned, there is a deep dichotomy between the experimental point of view, where reaching a low-dimensional regime is quite challenging, and the theoretical side, where 1D models are far easier to deal with, while powerful techniques are scarce in 3D. Actually, current convergence of experimental and theoretical physics in this field concerns multi-mode quasi-one dimensional systems and dimensional crossovers from 1D to 2D or vice-versa. This introductory, general chapter is organized as follows: first, I present a few peculiarities of 1D quantum systems and introduce the concept of correlation functions as an appropriate tool to characterize them, then I present a few experimental breakthroughs involving low-dimensional gases, and the main analytical tools I have used during my thesis to investigate such systems. To finish with, I present a few approaches to the issue of dimensional crossovers to higher dimension.
The dimension of a space corresponds to the number of independent directions that characterize it. In this sense, the way we perceive the world through our senses naturally leads us to model it as a three-dimensional Euclidean space. The possibility of considering (real or abstract) spaces of higher dimension has made its way from mathematics to physics, where this idea is now commonplace in modern theories. In Hamiltonian mechanics and statistical physics, the number of degrees of freedom involved is of the order of Avogadro's number; quantum physics calls upon Hilbert spaces of large dimension, while general relativity considers a four-dimensional space-time whose local curvature plays a fundamental role, and string theory contemplates even more spatial dimensions before the final step of compactification.

These higher-dimensional spaces raise the far-from-trivial issue of their visualization. On this subject I recommend reading the visionary novel entitled Flatland, which invites such meditation. For French-speaking readers interested in a more formal approach, I also recommend reference [START_REF]L'espace physique entre mathématiques et philosophie[END_REF]. As a general rule, an increase in the dimension of space comes with important and not necessarily trivial effects, owing to the concomitant growth of the number of degrees of freedom. Some of these effects cannot be grasped intuitively; an example I like is the fact that a random walk returns to its starting point in finite time in one and two dimensions even if space is infinite, which is not necessarily the case in three dimensions. One of the aims of this thesis is to highlight non-trivial dimensional effects in ultracold atomic gases, in particular concerning the auto-correlation functions associated with the wavefunction. To understand them, it will be necessary to consider both a linear, one-dimensional world and Euclidean spaces of higher dimension.

The issue of the dimension of a system is approached rather differently depending on whether one is an experimentalist or a theorist. In experiments, it is difficult to reduce the dimension of a system, whereas on the theoretical side, one-dimensional models are much easier to treat than 3D ones, owing to the limited number of efficient methods in the latter case. We are currently witnessing a convergence of theoretical and experimental questions around the crossover from 1D to 2D and vice-versa, through the notion of multi-mode quasi-1D systems.

This chapter is organized as follows: first, I present a few peculiarities of 1D quantum systems and explain that their correlations characterize them, then I summarize the main theoretical and experimental advances in this field, after which I outline the theoretical techniques I have used in my thesis. Finally, I address the issue of increasing the dimension of a system through the few techniques known to date.
II.2 Welcome to Lineland
II.2.1 Generalities on one-dimensional systems
It is quite intuitive that many-particle physics in one dimension must be qualitatively different from any higher dimension whatsoever, since particles do not have the possibility of passing each other without colliding. This topological constraint has exceptionally strong effects on systems of non-ideal particles, however weakly they may interact, and the resulting collectivization of motion holds in both classical and quantum theories.
An additional effect of this crossing constraint is specific to the degenerate regime and concerns quantum statistics. While in three dimensions particles are either bosons or fermions, in lower dimension the situation is more intricate. To understand why, we shall bear in mind that statistics is defined through the symmetry of the many-body wavefunction under two-particle exchange: it is symmetric for bosons and antisymmetric for fermions. Such a characterization at the most elementary level is experimentally challenging [START_REF] Roos | Revealing quantum statistics with a pair of distant atoms[END_REF], but quite appropriate for a Gedankenexperiment. In order to directly probe the symmetry of the many-body wavefunction, one shall engineer a physical process responsible for the interchange of two particles, that would not otherwise disturb the system. A necessary condition is that the particles be kept apart enough to avoid the influence of interaction effects.
In two dimensions, this operation is possible provided that interactions are short-ranged, although performing the exchange clockwise or counter-clockwise is not equivalent, leading to the (theoretical) possibility of intermediate particle statistics [START_REF] Leinaas | On the theory of identical particles[END_REF][START_REF] Wilczek | Quantum Mechanics of Fractional-Spin Particles[END_REF][START_REF] Haldane | Fractional statistics" in arbitrary dimensions: A generalization of the Pauli principle[END_REF]. The corresponding particles are called anyons, as they can have any statistics between fermionic and bosonic, and are defined through the symmetry of their many-body wavefunction under exchange as

\psi(\dots, x_i, \dots, x_j, \dots) = e^{i\chi}\, \psi(\dots, x_j, \dots, x_i, \dots), \quad (II.1)

where χ is real.
In one dimension, such an exchange process is utterly forbidden by the crossing constraint, making particle statistics and interactions deeply intertwined: the phase shifts due to scattering and statistics merge, arguably removing all meaning from the very concept of statistics. I will nonetheless, in what follows, consider particles as fermions or bosons, retaining by convention the name they would be given if they were free in 3D (for instance, 87 Rb atoms are bosons and 40 K atoms are fermions). The concept of 1D anyons is more tricky and at the core of recent theoretical investigations [START_REF] Kundu | Exact Solution of Double δ Function Bose Gas through an Interacting Anyon Gas[END_REF][START_REF] Batchelor | The Bethe ansatz for 1D interacting anyons[END_REF][START_REF] Pâţu | Correlation functions of one-dimensional Lieb-Liniger anyons[END_REF][START_REF] Calabrese | Correlation functions of one-dimensional anyonic fluids[END_REF][START_REF] Piroli | Exact dynamics following an interaction quench in a one-dimensional anyonic gas[END_REF], but I leave this issue aside.
The origin of the conceptual difficulty associated with statistics in 1D is the fact that we are too accustomed to noninteracting particles in 3D. Many properties that are fully equivalent in the three-dimensional Euclidean space, and may unconsciously be put on equal footing, are not equivalent anymore in lower dimension. For instance, in 3D, bosons and fermions are associated to Bose-Einstein and Fermi-Dirac statistics respectively. Fermions obey the Pauli principle (stating that two or more identical fermions cannot occupy the same quantum state simultaneously), the spin-statistics theorem implies that bosons have integer spin and fermions a half-integer one [START_REF] Pauli | The Connection Between Spin and Statistics[END_REF], and any of these properties looks as fundamental as any other. In one dimension however, strongly-interacting bosons possess a Fermi sea structure and can experience a kind of Pauli principle due to interactions. These manifestations of a statistical transmutation compel us to revise, or at least revisit, our conception of statistics in arbitrary dimension.
For fermions with spin, the collision constraint has an even more dramatic effect. A single fermionic excitation has to split into a collective excitation carrying charge (a 'chargon', the analog of a sound wave) and another one carrying spin (called spin wave, or 'spinon'). They have different velocities, meaning that electrons, that are fundamental objects in 3D, break into two elementary excitations. As a consequence, in one dimension there is a complete separation between charge and spin degrees of freedom. Stated more formally, the Hilbert space is represented as a product of charge and spin sectors, whose parameters are different. This phenomenon is known as 'spin-charge separation' [START_REF] Kollath | Spin-Charge Separation in Cold Fermi Gases: A Real Time Analysis[END_REF], and is expected in bosonic systems as well [START_REF] Kleine | Spin-charge separation in two-component Bose gases[END_REF].
These basic facts should be sufficient to get a feeling that 1D is special. We will see many other concrete illustrations in the following in much more details, but to make physical predictions that illustrate peculiarities of 1D systems and characterize them, it is first necessary to select a framework and a set of observables. Actually, the intertwined effect of interactions and reduced dimensionality is especially manifest on correlation functions.
II.2.2 Correlation functions as a universal probe for many-body quantum systems
Theoretical study of condensed-matter physics in three dimensions took off after the laws of many-body quantum mechanics were established on firm enough grounds to give birth to powerful paradigms. A major achievement in this respect is Landau's theory of phase transitions. In this framework, information on a system is encoded in its phase diagram, obtained by identifying order parameters that take a zero value in one phase and are finite in the other phase, and studying their response to variations of external parameters such as temperature or a magnetic field in the thermodynamic limit. Landau's theory is a versatile paradigm, that has been revisited over the years to encompass notions linked to symmetry described through the theory of linear Lie groups. It turns out that symmetry breaking is the key notion underneath, as in particle physics, where the Higgs mechanism plays a significant role.
In one dimension, however, far fewer finite-temperature phase transitions are expected, and none in systems with short-range interactions. This is a consequence of the celebrated Mermin-Wagner-Hohenberg theorem, that states the impossibility of spontaneous breakdown of a continuous symmetry in 1D quantum systems with short-range interactions at finite temperature [START_REF] Mermin | Absence of Ferromagnetism or Antiferromagnetism in One-or Two-Dimensional Isotropic Heisenberg Models[END_REF], thus forbidding formation of off-diagonal long-range order.
In particular, according to the definition proposed by Yang [START_REF] Yang | Concept of Off-Diagonal Long-Range Order and the Quantum Phases of Liquid He and of Superconductors[END_REF], this prevents Bose-Einstein condensation in uniform systems, while this phenomenon is stable to weak interactions in higher dimensions. This example hints at the fact that Landau's theory of phase transitions may not be adapted in most cases of interest involving low-dimensional systems, and a shift of paradigm should be operated to characterize them efficiently.
An interesting, complementary viewpoint suggested by the remark above relies on the study of correlation functions of the many-body wavefunction in space-time and momentum-energy space. In mathematics, the notion of correlation appears in the field of statistics and probabilities as a tool to characterize stochastic processes. It comes as no surprise that correlations have become central in physics as well, since quantum processes are random, and extremely huge numbers of particles are dealt with in statistical physics.
The paradigm of correlation functions first pervaded astrophysics with the Hanbury Brown and Twiss experiment [START_REF] Brown | A Test of a new type of stellar interferometer on Sirius[END_REF], and has taken a central position in optics, with Michelson, Mach-Zehnder and Sagnac interferometers as fundamental setups, where typically electric field or intensity temporal correlations are probed, to quantify the coherence between two light-beams and probe the statistics of intensity fluctuations respectively.
In parallel, this formalism has been successfully transposed and developed to characterize condensed-matter systems, where its modern form partly relies on the formalism of linear response theory, whose underlying idea is the following: in many experimental configurations, the system is probed with light or neutrons, that put it slightly out of equilibrium. Through the response to such external excitations, one can reconstruct equilibrium correlations [START_REF] Pottier | Nonequilibrium Statistical Physics, Linear Irreversible Processes[END_REF].
Actually, the paradigm of correlation functions allows a full and efficient characterization of 1D quantum gases. In particular, it is quite usual to probe how the many-body wavefunction is correlated with itself. For instance, one may be interested in density-density correlations, or their Fourier transform known as the dynamical structure factor. It is natural to figure out, and calculations confirm it, that the structure of correlation functions in 1D is actually much different from what one would expect in higher dimensions. At zero temperature, in critical systems correlation functions decay algebraically in space instead of tending to a finite value or even of decaying exponentially, while in energy-momentum space low-energy regions can be kinematically forbidden, and power-law divergences can occur at their thresholds. These hallmarks of 1D systems are an efficient way to probe their effective dimension, and will be investigated in much detail throughout this thesis. However, recent developments such as far-from-equilibrium dynamics [START_REF] Kinoshita | A quantum Newton's cradle[END_REF], thermalization or its absence after a quench [START_REF] Calabrese | Time Dependence of Correlation Functions Following a Quantum Quench[END_REF][START_REF] Rigol | Relaxation in a Completely Integrable Many-Body Quantum System: An Ab Initio Study of the Dynamics of the Highly Excited States of 1D Lattice Hard-Core Bosons[END_REF][START_REF] Caux | Time Evolution of Local Observables After Quenching to an Integrable Model[END_REF][START_REF] De Nardis | Solution for an interaction quench in the Lieb-Liniger Bose gas[END_REF][START_REF] Atas | Collective many-body bounce in the breathing-mode oscillations of a Tonks-Girardeau gas[END_REF][START_REF] Nardis | Exact correlations in the Lieb-Liniger model and detailed balance out of equilibrium[END_REF] or periodic driving to a non-equilibrium steady state [START_REF] Eckardt | Colloquium: Atomic quantum gases in periodically driven optical lattices[END_REF] are beyond its scope. More recent paradigms, such as topological matter and information theory (with entanglement entropy as a central notion [START_REF] Eisert | Colloquium: Area laws for the entanglement entropy[END_REF]), will not be tackled either. I proceed to describe dimensional reduction in ultracold atom systems and the possibilities offered by the crossover from 3D to 1D.
II.3 From 3D to 1D in experiments
While low-dimensional models have had the status of toy models in the early decades of quantum physics, they are currently realized to a good approximation in a wide variety of condensed-matter experimental setups. The main classes of known 1D systems are spin chains, some electronic wires, ultracold atoms in a tight waveguide, edge states (for instance in the Quantum Hall Effect), and magnetic insulators. Their first representatives have been experimentally investigated in the 1980's, when the so-called Bechgaard salts have provided examples of one-dimensional conductors and superconductors [START_REF] Jérome | Superconductivity in a synthetic organic conductor (TMTSF)2PF 6[END_REF]. As far as 2D materials are concerned, the most remarkable realizations are high-temperature superconductors [START_REF] Bednorz | Possible high Tc superconductivity in the Ba-La-Cu-O system[END_REF], graphene [START_REF] Novoselov | Electric field effect in atomically thin carbon films[END_REF] and topological insulators [START_REF] Moore | Topological invariants of time-reversal-invariant band structures[END_REF].
A revolution came later on from the field of ultracold atoms, starting in the 1990's. The main advantage of ultracold atom gases over traditional condensed-matter systems is that, thanks to an exceptional control over all parameters of the gaseous state, they offer new observables and tunable parameters, allowing for exhaustive exploration of phase diagrams, to investigate macroscopic manifestations of quantum effects such as superfluidity, and clean realizations of quantum phase transitions (such transitions between quantum phases occur at zero temperature by varying a parameter in the Hamiltonian, and are driven by quantum fluctuations, contrary to 'thermal' ones where thermal fluctuations play a major role [START_REF] Sachdev | Quantum phase transitions[END_REF]). Ultracold gases are a wonderful platform for the simulation of condensed-matter systems [START_REF] Lewenstein | Ultracold atomic gases in optical lattices: mimicking condensed matter physics and beyond[END_REF] and theoretical toy-models, opening the field of quantum simulation [START_REF] Bloch | Quantum simulations with ultracold quantum gases[END_REF], where experiments are designed to realize models, thus reversing the standard hierarchy between theory and experiment [START_REF] Feynman | Simulating Physics with Computers[END_REF].
With ultracold atoms, the number of particles and density are under control, allowing one, for instance, to construct a Fermi sea atom by atom [START_REF] Wenz | From Few to Many: Observing the Formation of a Fermi Sea One Atom at a Time[END_REF]. The strength and type of interactions can be modified as well: tuning the power of the lasers gives direct control over the hopping parameters in each direction of an optical lattice, whereas Feshbach resonance allows to tune the interaction strength [START_REF] Courteille | Observation of a Feshbach Resonance in Cold Atom Scattering[END_REF][START_REF] Chin | Feshbach resonances in ultracold gases[END_REF]. Neutral atoms interact through a short-ranged potential, while dipolar atoms and molecules feature long-range interactions [START_REF] Griesmaier | Bose-Einstein Condensation of Chromium[END_REF].
Particles are either bosons or fermions, but any mixture of different species is a priori feasible. Recently, a mixture of degenerate bosons and fermions has been realized using the lithium-6 and lithium-7 isotopes [START_REF] Ferrier-Barbut | A mixture of Bose and Fermi superfluids[END_REF], and in lower dimensions, anyons may become experimentally relevant. Internal atomic degrees of freedom can be used to produce multicomponent systems in optical traps, the so-called spinor gases, where a total spin F leads to 2F + 1 components [START_REF] Ho | Spinor Bose Condensates in Optical Traps[END_REF][START_REF] Stamper-Kurn | Optical Confinement of a Bose-Einstein Condensate[END_REF].
Current trapping techniques allow to modify the geometry of the gas through lattices, i.e. artificial periodic potentials similar to the ones created by ions in real solids, or rings and nearly-square boxes that reproduce ideal theoretical situations and create periodic and open boundary conditions respectively [START_REF] Gaunt | Bose-Einstein Condensation of Atoms in a Uniform Potential[END_REF]. Although (nearly) harmonic traps prevail, double-well potentials and more exotic configurations yield all kinds of inhomogeneous density profiles. On top of that, disorder can be tailored, from a single impurity [START_REF] Zipkes | A trapped single ion inside a Bose-Einstein condensate[END_REF] to many [START_REF] Palzer | Quantum Transport through a Tonks-Girardeau Gas[END_REF], to explore Anderson localization [START_REF] Anderson | Absence of Diffusion in Certain Random Lattices[END_REF][START_REF] Billy | Direct observation of Anderson localization of matter-waves in a controlled disorder[END_REF] or many-body localization [START_REF] Basko | Metal-insulator transition in a weakly interacting many-electron system with localized single-particle states[END_REF][START_REF] Choi | Exploring the many-body localization transition in two dimensions[END_REF].
As far as thermal effects are concerned, in condensed-matter systems room temperature is usually one or two orders of magnitude lower than the Fermi temperature, so one can consider T = 0 as a very good approximation. In ultracold atom systems, however, temperature scales are much lower and span several decades, so that one can either probe thermal fluctuations, or nearly suppress them at will to investigate purely quantum fluctuations [START_REF] Dettmer | Observation of Phase Fluctuations in Elongated Bose-Einstein Condensates[END_REF][START_REF] Esteve | Observations of Density Fluctuations in an Elongated Bose Gas: Ideal Gas and Quasicondensate Regimes[END_REF].
Recently, artificial gauge fields similar to real magnetic fields for electrons could be applied to these systems [START_REF] Lin | Synthetic magnetic fields for ultracold neutral atoms[END_REF][START_REF] Dalibard | Colloquium: Artificial gauge potentials for neutral atoms[END_REF], giving access to the physics of ladders [START_REF] Atala | Observation of chiral currents with ultracold atoms in bosonic ladders[END_REF], quantum Hall effect [START_REF] Price | Four-Dimensional Quantum Hall Effect with Ultracold Atoms[END_REF] and spin-orbit coupling [START_REF] Galitski | Spin-orbit coupling in quantum gases[END_REF].
The most famous experimental breakthrough in the field of ultracold atoms is the demonstration of Bose-Einstein condensation, a phenomenon linked to Bose statistics where the lowest energy state is macroscopically occupied [START_REF] Anderson | Observation of Bose-Einstein Condensation in a Dilute Atomic Vapor[END_REF][START_REF] Davis | Bose-Einstein Condensation in a Gas of Sodium Atoms[END_REF][START_REF] Bradley | Evidence of Bose-Einstein Condensation in an Atomic Gas with Attractive Interactions[END_REF], 70 years after its prediction [START_REF] Einstein | Quantentheorie des einatomigen idealen Gases[END_REF][START_REF] Einstein | Quantentheorie des einatomigen idealen Gases[END_REF]. This tour de force has been allowed by continuous progress in cooling techniques (essentially by laser and evaporation [START_REF] Ketterle | Evaporative Cooling of Trapped Atoms[END_REF]) and confinement. Other significant advances are the observation of the superfluid-Mott insulator transition in an optical lattice [START_REF] Greiner | Quantum phase transition from a superfluid to a Mott insulator in a gas of ultracold atoms[END_REF], degenerate fermions [START_REF] Demarco | Onset of Fermi Degeneracy in a Trapped Atomic Gas[END_REF], the BEC-BCS crossover [START_REF] Regal | Observation of Resonance Condensation of Fermionic Atom Pairs[END_REF], and of topological defects such as quantized vortices [START_REF] Matthews | Vortices in a Bose-Einstein Condensate[END_REF][START_REF] Madison | Vortex Formation in a Stirred Bose-Einstein Condensate[END_REF] or solitons [START_REF] Fleischer | Observation of two-dimensional discrete solitons in optically induced nonlinear photonic lattices[END_REF].
Interesting correlated phases appear both in two-dimensional and in one-dimensional systems, where the most celebrated achievements are the observation of the Berezinskii-Kosterlitz-Thouless (BKT) transition [START_REF] Hadzibabic | Berezinskii-Kosterlitz-Thouless crossover in a trapped atomic gas[END_REF], an unconventional phase transition in 2D that does not break any continuous symmetry [START_REF] Kosterlitz | Ordering, metastability and phase transitions in two-dimensional systems[END_REF], and the realization of the fermionized, strongly-correlated regime of impenetrable bosons in one dimension [START_REF] Paredes | Tonks-Girardeau gas of ultracold atoms in an optical lattice[END_REF][START_REF] Kinoshita | Observation of a one-dimensional Tonks-Girardeau gas[END_REF], the so-called Tonks-Girardeau gas [START_REF] Girardeau | Relationship between Systems of Impenetrable Bosons and Fermions in One Dimension[END_REF].
Such low-dimensional gases are obtained by a strong confinement along one (2D gas) or two (1D gas) directions, in situations where all energy scales of the problem are smaller than the transverse confinement energy, limiting the transverse motion of atoms to zero point oscillations. This tight confinement is experimentally realized through very anisotropic trapping potentials.
The crossover from a 2D trapped gas to a 1D one has been theoretically investigated in [START_REF] Olshanii | Atomic Scattering in the Presence of an External Confinement and a Gas of Impenetrable Bosons[END_REF], under the following assumptions: the waveguide potential is replaced by an axially symmetric two-dimensional harmonic potential of frequency ω ⊥ , and the forces created by the potential act along the x-y plane. The atomic motion along the z-axis is free, in other words no longitudinal trapping is considered. As usual with ultracold atoms, interactions between the atoms are modeled by Huang's pseudopotential [START_REF] Huang | Quantum-Mechanical Many-Body Problem with Hard-Sphere Interaction[END_REF]
U(\mathbf{r}) = g\, \delta(\mathbf{r})\, \frac{\partial}{\partial r}(r\, \cdot\,), \quad (II.2)

where g = 4\pi\hbar^2 a_s/m, a_s being the s-wave scattering length for the true interaction potential, δ the Dirac function and m the mass of an atom. The regularization operator \frac{\partial}{\partial r}(r\, \cdot\,), that removes the 1/r divergence from the scattered wave, plays an important role in the derivation. The atomic motion is cooled down below the transverse vibrational energy ℏω_⊥. Then, at low velocities the atoms collide in the presence of the waveguide and the system is equivalent to a 1D gas subject to the interaction potential U_{1D}(z) = g_{1D}\, \delta(z), whose interaction strength is given by [START_REF] Olshanii | Atomic Scattering in the Presence of an External Confinement and a Gas of Impenetrable Bosons[END_REF]

g_{1D} = \frac{2\hbar^2}{m a_\perp}\, \frac{a_s/a_\perp}{1 - C\, a_s/a_\perp}. \quad (II.3)

In subsequent studies, the more technical issue of the crossover from 3D to 1D for a trapped Bose gas has also been discussed [START_REF] Lieb | One-Dimensional Behavior of Dilute, Trapped Bose Gases[END_REF][START_REF] Lieb | The Mathematics of the Bose Gas and its Condensation[END_REF]. Recently, the dimensional crossover from 3D to 2D in a bosonic gas through strengthening of the transverse confinement has been studied by renormalization group techniques [START_REF] Lammers | Dimensional crossover of nonrelativistic bosons[END_REF].
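Coming back to Eq. (II.3), it is easy to evaluate numerically. The following minimal sketch is an illustration only: the value of the constant C is not specified above, so C = 1 is an arbitrary placeholder. It computes the coupling in the natural unit 2ℏ²/(m a_⊥) as a function of the ratio a_s/a_⊥, and makes the divergence at C a_s/a_⊥ = 1, the confinement-induced resonance, apparent.

```python
# Minimal sketch of Eq. (II.3): reduced 1D coupling
#   g_1D / (2*hbar**2 / (m*a_perp)) = x / (1 - C*x),   with x = a_s / a_perp.
# The numerical value of C is not given in the text above; C = 1 is an
# arbitrary placeholder used purely for illustration.
def g1d_reduced(x, C=1.0):
    """Dimensionless coupling g_1D * m * a_perp / (2*hbar^2)."""
    return x / (1.0 - C * x)

for x in (0.05, 0.2, 0.5, 0.9, 0.99):
    print(f"a_s/a_perp = {x:4.2f}  ->  reduced g_1D = {g1d_reduced(x):8.3f}")
# The divergence as C*a_s/a_perp -> 1 signals the confinement-induced resonance.
```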
The experimental realization of the necessary strongly-anisotropic confinement potentials is most commonly achieved via two schemes. In the first one, atoms are trapped in 2D optical lattices that are created by two orthogonal standing waves of light, each of them obtained by superimposing two counter-propagating laser beams. The dipole force acting on the atoms localizes them in the intensity extrema of the light wave, yielding an array of tightly-confining 1D potential tubes [START_REF] Bloch | Ultracold quantum gases in optical lattices[END_REF].
In the second scheme, atoms are magnetically trapped on an atom chip [START_REF] Van Amerongen | Thermodynamics on an Atom Chip[END_REF], where magnetic fields are created via a current flowing in microscopic wires and electrodes, that are micro-fabricated on a carrier substrate. The precision in the fabrication of such structures allows for a very good control of the generated magnetic field, that designs the potential landscape via the Zeeman force acting on the atoms. In this configuration, a single 1D sample is produced, instead of an array of several copies as in the case of an optical lattice.
Both techniques are used all around the world. The wire configuration thereby obtained corresponds to open boundary conditions, but there is also current interest and huge progress in the ring geometry, associated to periodic boundary conditions. This difference can have a dramatic impact on observables at the mesoscopic scale, especially if there are only a few particles. The effect of boundary conditions is expected to vanish in the thermodynamic limit.
The ring geometry has already attracted interest in condensed-matter physics in the course of last decades: supercurrents in superconducting coils are used on a daily basis to produce strong magnetic fields (reaching several teslas), and superconducting quantum interference devices (SQUIDs), based on two parallel Josephson junctions in a loop, allow to measure magnetic flux quanta [START_REF] Doll | Experimental Proof of Magnetic Flux Quantization in a Superconducting Ring[END_REF]. In normal (as opposed to superconducting) systems, mesoscopic rings have been used to demonstrate the Aharonov-Bohm effect [START_REF] Webb | Observation of h e Aharonov-Bohm Oscillations in Normal-Metal Rings[END_REF] (a charged particle is affected by an electromagnetic potential despite being confined to a space region where both magnetic and electric fields are zero, as predicted by quantum physics [START_REF] Aharonov | Significance of Electromagnetic Potentials in the Quantum Theory[END_REF]), and persistent currents [START_REF] Bluhm | Persistent Currents in Normal Metal Rings[END_REF].
Ring geometries are now investigated in ultracold gases as well. The construction of ring-shaped traps and the study of the superfluid properties of an annular gas are receiving increasing attention from various groups worldwide. The driving force behind this development is its potential for future applications in the fields of quantum metrology and quantum information technology, with the goal of realising high-precision atom interferometry [START_REF] Arnold | Large magnetic storage ring for Bose-Einstein condensates[END_REF] and quantum simulators based on quantum engineering of persistent current states [START_REF] Cominotti | Optimal Persistent Currents for Interacting Bosons on a Ring with a Gauge Field[END_REF], opening the field of 'atomtronics' [START_REF] Seaman | Atomtronics: Ultracold-atom analogs of electronic devices[END_REF].
Among the ring traps proposed or realized so far, two main categories can be distinguished. In a first kind of setup, a cloud of atoms is trapped in a circular magnetic guide of a few centimeters [START_REF] Sauer | Storage Ring for Neutral Atoms[END_REF], or millimeters in diameter [START_REF] Gupta | Bose-Einstein Condensation in a Circular Waveguide[END_REF][START_REF] Pritchard | Demonstration of an inductively coupled ring trap for cold atoms[END_REF]. Such large rings can be described as annular wave-guides. They can be used as storage rings, and are preferred when it comes to developing guided-atom, large-area interferometers designed to measure rotations.
The second kind of ring traps, designed to study quantum fluid dynamics, has a more recent history, and associated experiments started with the first observation of a persistent atomic flow [START_REF] Ryu | Observation of Persistent Flow of a Bose-Einstein Condensate in a Toroidal Trap[END_REF]. To maintain well-defined phase coherence over the whole cloud, the explored radii are much smaller than in the previous configuration. A magnetic trap is pierced by a laser beam, resulting in a radius of typically 10 to 20µm [START_REF] Ramanathan | Superflow in a Toroidal Bose-Einstein Condensate: An Atom Circuit with a Tunable Weak Link[END_REF][START_REF] Ryu | Experimental Realization of Josephson Junctions for an Atom SQUID[END_REF][START_REF] Kumar | Minimally destructive, Doppler measurement of a quantized flow in a ring-shaped Bose-Einstein condensate[END_REF]. The most advanced experiments of this category rely mostly on purely optical traps, combining a vertical confinement due to a first laser beam, independent of the radial confinement realized with another beam propagating in the vertical direction, in a hollow mode [START_REF] Moulder | Quantized supercurrent decay in an annular Bose-Einstein condensate[END_REF][START_REF] Marti | Collective excitation interferometry with a toroidal Bose-Einstein condensate[END_REF].
Other traps make use of a combination of magnetic, optical and radio-frequency fields [START_REF] Morizot | Ring trap for ultracold atoms[END_REF][START_REF] Gildemeister | Trapping ultracold atoms in a time-averaged adiabatic potential[END_REF]113,[START_REF] Bell | Bose-Einstein condensation in large time-averaged optical ring potentials[END_REF][START_REF] Chakraborty | A toroidal trap for cold 87 Rb atoms using an rf-dressed quadrupole trap[END_REF][START_REF] Navez | Matterwave interferometers using TAAP rings[END_REF]. They can explore radii between 20 and 500µm, bridging the gap between optical traps and circular waveguides. As an illustration, Fig. II.1 shows in-situ images of trapped gases obtained by the techniques presented above. The tunable parameters in radio-frequency traps are the ring radius and its ellipticity. Moreover, vertical and radial trapping frequencies can be adjusted independently, allowing to explore both the 2D and 1D regime.
In the following, I shall always consider a ring geometry, though in most cases it will be of no importance whatsoever once the thermodynamic limit is taken. In the next section, I present the main analytical tools I have used to study 1D gases on a ring.
II.4 Analytical methods to solve 1D quantum models
Condensed-matter theorists are confronted with the tremendously challenging issue of the description of many-body interacting systems. In three dimensions, in some cases one may eliminate the main complicated terms in a many-electron problem and merely incorporate the effect of interactions into parameters (such as the mass) of new excitations called quasiparticles, which otherwise behave like noninteracting fermions. This adiabatic mapping is the essence of Landau's Fermi liquid theory [START_REF] Landau | The theory of a Fermi liquid[END_REF][START_REF] Landau | Oscillations in a Fermi liquid[END_REF][START_REF] Lifschitz | Statistical Physics Part 2, Theory of Condensed Matter[END_REF], which has been the cornerstone of theoretical solid-state physics for a good part of the 20th century. This approach provides the basis for understanding metals in terms of weakly-interacting electron-like particles, describes superconductivity and superfluidity, but is restricted to fermions and breaks down in 1D [START_REF] Voit | One-Dimensional Fermi liquids[END_REF]. For these reasons, other tools are needed to study low-dimensional strongly-correlated gases, a fortiori bosonic ones.
Actually, there are three main theoretical approaches to one-dimensional strongly-correlated systems. One either tries to find exact solutions of many-body theories, typically using Bethe Ansatz techniques, or reformulates complicated interacting models in such a way that they become weakly-interacting, which is the idea at the basis of bosonization. These techniques are complementary; both will be used throughout this thesis. The third approach is the use of powerful numerical tools and will not be tackled here. Let me only mention that a major breakthrough in this field over the recent years has been the spectacular development of the density matrix renormalization group (DMRG) method [START_REF] White | Density matrix formulation for quantum renormalization groups[END_REF]. It is an iterative, variational method within the space of matrix product states, that reduces effective degrees of freedom to those most important for a target state, giving access to the ground state of 1D models, see e.g. [START_REF] Schollwöck | The density-matrix renormalization group[END_REF]. To study finite-temperature properties and large systems, quantum Monte Carlo (QMC) remains at the forefront of available numerical methods.
In this section, I give an introduction to the notion of (quantum) integrability, a feature shared by many low-dimensional models, including some spin chains and quantum field theories in the continuum in 1D, as well as classical statistical physics models in 2D. There are basically two levels of understanding, corresponding to coordinate Bethe Ansatz and algebraic Bethe Ansatz, that yield the exact thermodynamics and correlation functions respectively. Then, I consider noninteracting systems separately, as trivial examples of integrable systems. They are especially relevant in 1D due to an exact mapping between the Bose gas with infinitely strong repulsive interactions and a gas of noninteracting fermions. I also give a short introduction to the non-perturbative theory of Tomonaga-Luttinger liquids. It is an integrable effective field theory that yields the universal asymptotics of correlation functions of gapless models at large distances and low energies. To finish with, I present conformal field theory as another generic class of integrable models, providing a complementary viewpoint to the Tomonaga-Luttinger liquid theory, and put the emphasis on parallels between these formalisms.
II.4.1 Quantum integrability and Bethe Ansatz techniques
One can ask, what is good in 1+1-dimensional models, when our spacetime is 3+1-dimensional. There are several particular answers to this question.
(a) The toy models in 1+1 dimension can teach us about the realistic field-theoretical models in a nonperturbative way. Indeed such phenomena as renormalisation, asymptotic freedom, dimensional transmutation (i.e. the appearance of mass via the regularisation parameters) hold in integrable models and can be described exactly.
(b) There are numerous physical applications of the 1+1 dimensional models in condensedmatter physics.
(c) [. . . ] conformal field theory models are special massless limits of integrable models.
(d) The theory of integrable models teaches us about new phenomena, which were not appreciated in the previous developments of Quantum Field Theory, especially in connection with the mass spectrum.
(e) [. . . ] working with the integrable models is a delightful pastime. They proved also to be very successful tool for educational purposes.
Ludwig Faddeev
Quantum field theory (QFT) is a generic denomination for theories based on the application of quantum mechanics to fields, and is a cornerstone of modern particle and condensed-matter physics. Such theories describe systems of several particles and possess a huge (often infinite) number of degrees of freedom. For this reason, in general they can not be treated exactly, but are amenable to perturbative methods, based on expansions in the coupling constant. Paradigmatic examples are provided by quantum electrodynamics, the relativistic quantum field theory of electrodynamics that describes how light and matter interact, where expansions are made in the fine structure constant, and quantum chromodynamics, the theory of strong interaction, a fundamental force describing the interactions between quarks and gluons, where high-energy asymptotics are obtained by expansions in the strong coupling constant.
One of the main challenges offered by QFT is the quest for exact, thus non-perturbative, methods to circumvent the limitations of perturbation theory, such as the difficulty of obtaining high-order corrections (renormalization tools are needed beyond the lowest orders, and the number of processes to take into account increases dramatically) or of controlling approximations, which restricts its validity range. In this respect, the concept of integrability turns out to be extremely powerful. If a model is integrable, then it is possible to calculate exactly quantities like the energy spectrum, the scattering matrix that relates initial and final states of a scattering process, or the partition function and critical exponents in the case of a statistical model.
The theoretical tool allowing to solve quantum integrable models is called Bethe Ansatz, which could be translated as 'Bethe's educated guess'. Its discovery coincides with the early days of quantum field theory, when Bethe found the exact eigenspectrum of the 1D Heisenberg model (the isotropic quantum spin-1/2 chain with nearest-neighbor interactions, a.k.a. the XXX spin chain), using an Ansatz for the wavefunction [START_REF] Bethe | Zur Theorie der Metalle I. Eigenwerte und Eigenfunktionen der linearen Atomkette[END_REF]. This solution, provided analytically in closed form, is highly impressive if one bears in mind that the Hamiltonian of the system is a 2^N × 2^N matrix, technically impossible to diagonalize by brute force for long chains. Bethe's breakthrough was followed by a multitude of exact solutions to other 1D models, especially flourishing in the 1960's. Most of them fall into three main categories: quantum 1D spin chains, low-dimensional QFTs in the continuum or on a lattice, and classical 2D statistical models.
The typical form for the Hamiltonian of spin chains with nearest-neighbor interactions is
\hat{H}_{SC} = -\sum_{i=1}^{N}\left(J_x\, \hat{S}^x_i \hat{S}^x_{i+1} + J_y\, \hat{S}^y_i \hat{S}^y_{i+1} + J_z\, \hat{S}^z_i \hat{S}^z_{i+1}\right), \quad (II.4)
where the spin operators satisfy local commutations
\left[\hat{S}^a_k, \hat{S}^b_l\right] = i\hbar\, \delta_{k,l}\, \epsilon_{a,b,c}\, \hat{S}^c_k, \quad (II.5)
with δ and ε the Kronecker and Levi-Civita symbols respectively (ε_{a,b,c} takes the value 0 if there are repeated indices, 1 if (a, b, c) is obtained by an even permutation of (1, 2, 3) and -1 if the permutation is odd).
In the case of a spin-1/2 chain, spin operators are usually represented by the Pauli matrices. The XXX spin chain solved by Bethe corresponds to the special case where J x = J y = J z in Eq. (II.4), and the anisotropic XXZ model, solved later on by Yang and Yang [START_REF] Yang | One-Dimensional Chain of Anisotropic Spin-Spin Interactions. I. Proof of Bethe's Hypothesis for Ground State in a Finite System[END_REF][START_REF] Yang | One-Dimensional Chain of Anisotropic Spin-Spin Interactions. II. Properties of the Ground-State Energy Per Lattice Site for an Infinite System[END_REF], to J x = J y . A separate thread of development began with Onsager's solution of the two-dimensional, square-lattice Ising model [START_REF] Onsager | Crystal Statistics. I. A Two-Dimensional Model with an Order-Disorder Transition[END_REF]. Actually, this solution consists of a Jordan-Wigner transformation to convert Pauli matrices into fermionic operators, followed by a Bogoliubov rotation to diagonalize the quadratic form thereby obtained [START_REF] Schultz | Two-Dimensional Ising Model as a Soluble Problem of Many Fermions[END_REF]. Similar techniques allow to diagonalize the XY spin chain Hamiltonian, where J z = 0 [START_REF] Lieb | Two soluble models of an antiferromagnetic chain[END_REF].
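To make the exponential growth of the Hilbert space mentioned above concrete, the following sketch builds the Hamiltonian (II.4) for a short isotropic chain (J_x = J_y = J_z = J, periodic boundary conditions) and diagonalizes it by brute force; the chain length and coupling are illustrative choices, not values taken from the text, and already at N = 8 the matrix has 2^8 = 256 rows.

```python
import numpy as np

# Minimal sketch: brute-force diagonalization of the spin-1/2 XXX chain, Eq. (II.4),
# with J_x = J_y = J_z = J and periodic boundary conditions (illustrative choices).
sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]], dtype=complex) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2
id2 = np.eye(2, dtype=complex)

def site_op(op, i, N):
    """Embed a single-site operator acting on site i into the 2**N-dimensional space."""
    out = np.array([[1.0 + 0j]])
    for j in range(N):
        out = np.kron(out, op if j == i else id2)
    return out

def xxx_hamiltonian(N, J=1.0):
    H = np.zeros((2**N, 2**N), dtype=complex)
    for i in range(N):
        j = (i + 1) % N                      # periodic boundary conditions
        for s in (sx, sy, sz):
            H -= J * site_op(s, i, N) @ site_op(s, j, N)
    return H

N = 8                                        # already a 256 x 256 matrix
energies = np.linalg.eigvalsh(xxx_hamiltonian(N))
print("ground-state energy per site:", energies[0] / N)
```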
As far as QFT models in the continuum are concerned, the most general Hamiltonian for spinless bosons interacting through a two-body potential is
\hat{H}_{SB} = \sum_{i=1}^{N}\left[\frac{\hat{p}_i^2}{2m} + V_{ext}(\hat{x}_i)\right] + \sum_{\{i\neq j\}} V_{int}(\hat{x}_i - \hat{x}_j), \quad (II.6)
where \hat{p}_i and \hat{x}_i are the momentum and position operators, V_{ext} is an external potential, while V_{int} represents inter-particle interactions. A few integrable cases have been given special names. The most famous ones are perhaps the Lieb-Liniger model [START_REF] Lieb | Exact Analysis of an Interacting Bose Gas. I. The General Solution and the Ground State[END_REF], defined by
V^{LL}_{ext}(x) = 0, \quad V^{LL}_{int}(x) = g_{1D}\, \delta(x), \quad (II.7)

with δ the Dirac function and g_{1D} the interaction strength, and the Calogero-Moser model [START_REF] Calogero | Solution of a Three-Body Problem in One Dimension[END_REF][START_REF] Calogero | Solution of the One-Dimensional N-Body Problems with Quadratic and/or Inversely Quadratic Pair Potentials[END_REF], associated to the problem of particles interacting pairwise through inverse cube forces ('centrifugal potential') in addition to linear forces ('harmonic potential'), i.e. such that

V^{CM}_{ext}(x) = \frac{1}{2} m\omega^2 x^2, \quad V^{CM}_{int}(x) = \frac{g}{x^2}. \quad (II.8)
The Lieb-Liniger model has been further investigated soon after by McGuire [START_REF] Mcguire | Study of Exactly Soluble One-Dimensional N-Body Problems[END_REF] and Berezin et al. [START_REF] Berezin | Schroedinger equation for the system of one-dimensional particles with point interaction[END_REF]. Its spin-1/2 fermionic analog has been studied in terms of the number M of spins flipped from the ferromagnetic state, in which they would all be aligned. The case M = 1 was solved by McGuire [START_REF] Mcguire | Interacting Fermions in One Dimension. I. Repulsive Potential[END_REF], M = 2 by Flicker and Lieb [START_REF] Flicker | Delta-Function Fermi Gas with Two-Spin Deviates[END_REF], and the arbitrary M case by Gaudin [START_REF] Gaudin | Un système à une dimension de fermions en interaction[END_REF] and Yang [START_REF] Yang | Some Exact Results for the Many-Body Problem in one Dimension with Repulsive Delta-Function Interaction[END_REF][START_REF] Yang | S-Matrix for the One-Dimensional N-Body Problem with Repulsive or Attractive δ-Function Interaction[END_REF], which is the reason why spin-1/2 fermions with contact interactions in 1D are known as the Yang-Gaudin model. Higher-spin Fermi gases have been investigated by Sutherland [START_REF] Sutherland | Further Results for the Many-Body Problem in One Dimension[END_REF].
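As a glimpse of what the coordinate Bethe Ansatz solution of the model (II.7) looks like in practice, the short sketch below solves numerically the standard logarithmic Bethe equations for the ground state of a few bosons on a ring, k_j L = 2π I_j − 2 Σ_{l≠j} arctan[(k_j − k_l)/c] with c = m g_{1D}/ℏ². These equations are quoted from the literature rather than from the text above, so their form (and the choice N = 5, L = 1) should be read as an assumption of the sketch. The quasi-momenta interpolate between the collapsed, free-boson values at weak repulsion and the free-fermion values 2πI_j/L at strong repulsion.

```python
import numpy as np
from scipy.optimize import fsolve

# Minimal sketch (assumed standard form of the Lieb-Liniger Bethe equations,
# not quoted from the text): ground state of N bosons on a ring of length L,
#   k_j * L = 2*pi*I_j - 2 * sum_{l != j} arctan((k_j - k_l) / c).
N, L = 5, 1.0
I = np.arange(N) - (N - 1) / 2               # ground-state Bethe numbers

def bethe_equations(k, c):
    phase = np.sum(np.arctan((k[:, None] - k[None, :]) / c), axis=1)
    return k * L - 2 * np.pi * I + 2 * phase

k_free_fermions = 2 * np.pi * I / L          # Tonks-Girardeau (c -> infinity) limit
for c in (1.0, 10.0, 100.0, 1e4):
    k = fsolve(bethe_equations, k_free_fermions, args=(c,))
    print(f"c = {c:8.1f}:  E / E_TG = {np.sum(k**2) / np.sum(k_free_fermions**2):.4f}")
# The energy ratio tends to 1 as c grows, recovering the impenetrable-boson limit.
```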
The models presented so far in the continuum are Galilean-invariant, but Bethe Ansatz can be adapted to models with Lorentz symmetry as well, as shown by its use to treat certain relativistic field theories, such as the massive Thirring model [START_REF] Bergknoff | Structure and solution of the massive Thirring model[END_REF], and the equivalent quantum sine-Gordon model [START_REF] Coleman | Quantum sine-Gordon equation as the massive Thirring model[END_REF], as well as the Gross-Neveu [START_REF] Andrei | Diagonalization of the Chiral-Invariant Gross-Neveu Hamiltonian[END_REF] (a toy-model for quantum chromodynamics) and SU(2)-Thirring models [START_REF] Belavin | Exact solution of the two-dimensional model with asymptotic freedom[END_REF]. A recent study of the non-relativistic limit of such models shows the ubiquity of Lieb-Liniger-like models for non-relativistic particles with local interactions [START_REF] Bastianello | Non-relativistic limit of integrable QFT and Lieb-Liniger models[END_REF][START_REF] Bastianello | Non relativistic limit of integrable QFT with fermionic excitations[END_REF].
The last category, i.e. classical statistical physics models in 2D, is essentially composed of classical 2D spin models, and of ice-type models. When water freezes, each oxygen atom is surrounded by four hydrogen ions. Each of them is closer to one of its two neighboring oxygen atoms, a situation conveniently encoded by an arrow on the corresponding bond of the lattice. Due to the ice rule, each vertex is surrounded by two arrows pointing towards it, and two away: this constraint limits the number of possible vertex configurations to six, thus the model is known as the 6-vertex model. Its solution has been obtained stepwise [START_REF] Lieb | Residual Entropy of Square Ice[END_REF][START_REF] Yang | Exact Solution of a Model of Two-Dimensional Ferroelectrics in an Arbitrary External Electric Field[END_REF]. Baxter's solution of the 8-vertex model includes most of these results [START_REF] Baxter | Eight-Vertex Model in Lattice Statistics[END_REF] and also solves the XYZ spin chain, that belongs to the first category.
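The counting behind the name of the 6-vertex model is elementary and can be checked by direct enumeration: of the 2⁴ = 16 possible arrow configurations on the four bonds meeting at a vertex, only those with two arrows pointing in and two pointing out survive the ice rule.

```python
from itertools import product

# Minimal sketch: enumerate the arrow configurations around one vertex of the
# square lattice (+1 = arrow pointing towards the vertex, -1 = pointing away)
# and keep those obeying the ice rule (two in, two out).
configs = list(product((+1, -1), repeat=4))
allowed = [c for c in configs if sum(c) == 0]
print(len(configs), "configurations,", len(allowed), "satisfy the ice rule")
# -> 16 configurations, 6 satisfy the ice rule
```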
The general approach introduced by Hans Bethe and refined in the many works cited above is known as coordinate Bethe Ansatz. It provides the excitation spectrum, as well as some elements of thermodynamics. The non-trivial fact that Bethe Ansatz provides solutions to both 1D quantum and 2D classical models is due to an exact, general mapping between dD quantum models at zero temperature and (d+1)D classical models with infinitely many sites, since the imaginary time τ in the path integral description of quantum systems plays the role of an extra dimension [START_REF] Suzuki | Relationship between d-Dimensional Quantal Spin Systems and (d+1)-Dimensional Ising Systems[END_REF]. This quantum-classical mapping implies that studying quantum fluctuations in 1D quantum systems amounts to studying thermal fluctuations in 2D classical ones, and is especially useful as it allows to solve quantum models with numerical methods designed for classical ones.
Computing the exact correlation functions of quantum-integrable models is a fundamental problem in order to enlarge their possibilities of application, and the next step towards solving them completely. Unfortunately, coordinate Bethe Ansatz does not provide a simple answer to this question, as the many-body wavefunction becomes extremely complicated when the number of particles increases, due to summations over all possible permutations.
The problem of the construction of correlation functions from integrability actually opened a new area in the field in the 1970's, based on algebraic Bethe Ansatz, that is essentially a second-quantized form of the coordinate one. A major step in the mathematical discussion of quantum integrability was the introduction of the quantum inverse scattering method (QISM) [START_REF] Korepin | Quantum Inverse Scattering Method and Correlation Functions[END_REF] by the Leningrad group of Faddeev [START_REF] Sklyanin | Quantum version of the method of inverse scattering problem[END_REF]. Roughly, this method relies on a spectrum-generating algebra, i.e. operators that generate the eigenvectors of the Hamiltonian by successive action on a pseudo-vacuum state, and provides an algebraic framework for quantum-integrable models. Its development has been fostered by an advanced understanding of classical integrability (for an introduction to this topic, see e.g. [START_REF] Torrielli | Classical Integrability[END_REF]), soliton theory (see e.g. [START_REF] Dauxois | Physics of Solitons[END_REF]), and a will to transpose them to quantum systems.
The original work of Gardner, Greene, Kruskal and Miura [START_REF] Gardner | Method for Solving the Korteweg-de Vries Equation[END_REF] has shown that the initial value problem for the nonlinear Korteweg-de Vries equation of hydrodynamics (describing a wave profile in shallow water) can be reduced to a sequence of linear problems. The relationship between integrability, conservation laws, and soliton behavior was clearly exhibited by this technique. Subsequent works revealed that the inverse scattering method is applicable to a variety of non-linear equations, including the classical versions of the non-linear Schrödinger [START_REF] Zakharov | Exact Theory of Two-Dimensional Self-Focusing and One-Dimensional Self-Modulation of Waves in Nonlinear Media[END_REF] and sine-Gordon [START_REF] Ablowitz | Method for Solving the Sine-Gordon Equation[END_REF] equations. The fact that the quantum non-linear Schrödinger equation could also be exactly solved by Bethe Ansatz suggested a deep connection between inverse scattering and Bethe Ansatz. This domain of research soon became an extraordinary arena of interactions between various branches of theoretical physics, and has strong links with several fields of mathematics as well, such as knot invariants [START_REF] Wu | Knot theory and statistical mechanics[END_REF], topology of low-dimensional manifolds, quantum groups [START_REF] Drinfeld | Quantum groups[END_REF] and non-commutative geometry.
I will only try and give a glimpse of this incredibly vast and complicated topic, without entering into technical details. To do so, following [START_REF] Witten | Integrable Lattice Models From Gauge Theory[END_REF], I will focus on integrable models that belong to the class of continuum quantum field theories in 1D.
Figure II.3 shows a spacetime diagram, where a particle of constant velocity is represented by a straight line. It shows the immediate vicinity of a collision process involving two particles. Due to energy and momentum conservation, after scattering, the outgoing particles go off at the same velocities as the incoming ones. In a typical relativistic quantum field theory (such theories are sometimes relevant to condensed matter), particle production processes may be allowed by these symmetries. In a N = 2 → N = 3 scattering event (where N represents the number of particles), the incoming and outgoing lines can be assumed to all end or begin at a common point in spacetime. However, integrable models have extra conserved quantities that commute with the velocity, but move a particle in space by an amount that depends on its velocity.
Starting with a spacetime history in which the incoming and outgoing lines meet at a common point in space-time, a symmetry that moves the incoming and outgoing lines by a velocity-dependent amount creates a history in which the outgoing particles could have had no common origin in spacetime, leading to a contradiction. This means that particle production is not allowed in integrable models. By contrast, two-particle scattering events happen even in integrable systems, but are purely elastic, in the sense that the initial and final particles have the same masses. Otherwise, the initial and final velocities would be different, and considering a symmetry that moves particles in a velocity-dependent way would again lead to a contradiction. In other words, the nature of particles is also unchanged during scattering processes in integrable models.
The situation becomes more interesting when one considers three particles in both the initial and final state. Since the lines can be moved relative to each other, leaving their slopes fixed, the scattering process is only composed of pairwise collisions. There are two ways to do this, as shown in Fig. II.4, and both must yield the same result. More formally, the equivalence of these pictures is encoded in the celebrated Yang-Baxter equation [START_REF] Baxter | Eight-Vertex Model in Lattice Statistics[END_REF], that schematically reads
S(1, 2, 3) = S(1, 2)S(1, 3)S(2, 3) = S(2, 3)S(1, 3)S(1, 2), (II.9)
in terms of scattering matrices, where S(1, 2, . . . ) is the coefficient relating the final- and initial-state wavefunctions in the collision process involving particles 1, 2 . . . The Yang-Baxter equation (II.9) guarantees that a multi-body scattering process can be factorized as a product of two-body scattering events, in other words, that scattering is not diffractive. Two-body reducible dynamics (i.e. absence of diffraction for models in the continuum) is a key point to quantum integrability, and may actually be the most appropriate definition of this concept [160].
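Equation (II.9) can be checked explicitly on any known solution. The sketch below does so numerically for the rational R-matrix R(u) = u·1 + iP, with P the permutation operator on C²⊗C², written with spectral-parameter differences; this particular R-matrix (associated with the XXX chain) is not introduced in the text and serves only as a standard example.

```python
import numpy as np

# Minimal numerical check of the Yang-Baxter equation (II.9) in the form
#   R_12(u-v) R_13(u-w) R_23(v-w) = R_23(v-w) R_13(u-w) R_12(u-v),
# for the rational R-matrix R(u) = u*Id + i*P (standard example, assumed here).
P = np.zeros((4, 4), dtype=complex)          # permutation operator on C^2 (x) C^2
for a in range(2):
    for b in range(2):
        P[2 * a + b, 2 * b + a] = 1.0

def R(u):
    return u * np.eye(4, dtype=complex) + 1j * P

id2 = np.eye(2, dtype=complex)
swap23 = np.kron(id2, P)                     # exchanges the second and third spaces

def R12(u): return np.kron(R(u), id2)
def R23(u): return np.kron(id2, R(u))
def R13(u): return swap23 @ R12(u) @ swap23

u, v, w = 0.7, -0.3, 1.9                     # arbitrary spectral parameters
lhs = R12(u - v) @ R13(u - w) @ R23(v - w)
rhs = R23(v - w) @ R13(u - w) @ R12(u - v)
print("Yang-Baxter equation satisfied:", np.allclose(lhs, rhs))
```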
To sum up, an N-particle model is quantum-integrable if the number and nature of particles are unchanged after a scattering event, i.e. if its S-matrix can be factorized into a product of N(N-1)/2 two-body scattering matrices, and satisfies the Yang-Baxter equation (II.9). I proceed to consider the most trivial example of integrable model: a gas of noninteracting particles, whose relevance in 1D stems from an exact mapping involving a strongly-interacting gas.
II.4.2 Exact solution of the Tonks-Girardeau model and Bose-Fermi mapping
In the introduction to this section devoted to analytical tools, I mentioned that a possible strategy to solve a strongly-interacting model is to try and transform it into a noninteracting problem. Actually, there is a case where such a transformation is exact, known as the Bose-Fermi mapping [START_REF] Girardeau | Relationship between Systems of Impenetrable Bosons and Fermions in One Dimension[END_REF]. It was put into light by Girardeau in the case of a one-dimensional gas of hard-core bosons, the so-called Tonks-Girardeau gas (prior to Girardeau, Lewi Tonks had studied the classical gas of hard spheres [161]). Hard-core bosons can not pass each other in 1D, and a fortiori can not exchange places. The resulting motion can be compared to a traffic jam, or rather to a set of 1D adjacent billiards whose sizes vary with time, containing one boson each.
The infinitely strong contact repulsion between the bosons imposes a constraint to the many-body wave function of the Tonks-Girardeau gas, that must vanish whenever two particles meet. As pointed out by Girardeau, this constraint can be implemented by writing the many-body wavefunction as follows:
\psi_{TG}(x_1, \dots, x_N) = A(x_1, \dots, x_N)\, \psi_F(x_1, \dots, x_N), \quad (II.10)
where

A(x_1, \dots, x_N) = \prod_{\{i>j\}} \mathrm{sign}(x_i - x_j), \quad (II.11)
where ψ F is the many-body wavefunction of a fictitious gas of noninteracting, spinless fermions. The antisymmetric function A takes values in {-1, 1} and compensates the sign change of ψ F whenever two particles are exchanged, yielding a wavefunction that obeys Bose permutation symmetry, as expected. Furthermore, eigenstates of the Tonks-Girardeau Hamiltonian must satisfy the same Schrödinger equation as the ones of a noninteracting spinless Fermi gas when all coordinates are different. The ground-state wavefunction of the free Fermi gas is a Slater determinant of plane waves, leading to a Vandermonde determinant in 1D, hence the pair-product, Bijl-Jastrow form [START_REF] Girardeau | Relationship between Systems of Impenetrable Bosons and Fermions in One Dimension[END_REF]
\psi_{TG}(x_1, \dots, x_N) = \sqrt{\frac{2^{N(N-1)}}{N!\, L^N}} \prod_{\{i>j\}} \left|\sin\left[\frac{\pi}{L}(x_i - x_j)\right]\right|. \quad (II.12)
This form is actually generic of various 1D models in the limit of infinitely strong repulsion, such as the Lieb-Liniger model [START_REF] Lieb | Exact Analysis of an Interacting Bose Gas. I. The General Solution and the Ground State[END_REF].
The ground-state energy of the Tonks-Girardeau gas in the thermodynamic limit is then [START_REF] Girardeau | Relationship between Systems of Impenetrable Bosons and Fermions in One Dimension[END_REF]
E_0^{TG} = \frac{N(\pi\hbar n_0)^2}{6m} = E_0^{F}, \quad (II.13)
thus it coincides with the one of N noninteracting spinless fermions, which is another important feature of the Bose-Fermi mapping. More generally, their thermodynamics are utterly equivalent. Even the excitation spectrum of the Tonks-Girardeau gas, i.e. the set of its excitations above the ground state, coincides with the one of a noninteracting spinless Fermi gas. The total momentum P and energy E of the model are given by P = \hbar\sum_{j=1}^{N} k_j and E = \frac{\hbar^2}{2m}\sum_{j=1}^{N} k_j^2 respectively, where the set of quasi-momenta \{k_j\}_{j=1,\dots,N} satisfies

k_j = \frac{2\pi}{L} I_j. \quad (II.14)
The Bethe numbers \{I_j\}_{j=1,\dots,N} are integer for odd values of N and half-odd if N is even. The quasi-momenta can be ordered in such a way that k_1 < k_2 < \dots < k_N, or equivalently I_1 < I_2 < \dots < I_N. The ground state corresponds to I_j = -\frac{N+1}{2} + j, and its total momentum is P_{GS} = 0. I use the notations p = P - P_{GS} and ε = E - E_{GS} to denote the total momentum and energy of an excitation with respect to the ground state, so that the excitation spectrum is given by ε(p). For symmetry reasons, I only consider excitations such that p ≥ 0, those with -p having the same energy.
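Before turning to excitations, note that the ground-state quasi-momenta just introduced make the Bose-Fermi mapping easy to verify numerically: building the Slater determinant of plane waves with the quasi-momenta (II.14) and taking its modulus should reproduce the Bijl-Jastrow expression (II.12) at any set of positions. The sketch below performs this check for arbitrary illustrative values of N and L.

```python
import numpy as np
from math import factorial

# Minimal sketch: |Slater determinant of plane waves| versus the Bijl-Jastrow
# form (II.12) for the Tonks-Girardeau ground state on a ring (illustrative N, L).
N, L = 5, 1.0
I = np.arange(N) - (N - 1) / 2               # ground-state Bethe numbers
k = 2 * np.pi * I / L                        # quasi-momenta, Eq. (II.14)

rng = np.random.default_rng(0)
x = rng.uniform(0, L, size=N)                # random particle positions

# Free-fermion ground state: determinant of plane waves exp(i k_j x_l) / sqrt(L)
slater = np.linalg.det(np.exp(1j * np.outer(k, x)) / np.sqrt(L)) / np.sqrt(factorial(N))
psi_F_abs = abs(slater)

# Pair-product form, Eq. (II.12)
prefactor = np.sqrt(2.0 ** (N * (N - 1)) / (factorial(N) * L ** N))
pair_product = np.prod([abs(np.sin(np.pi * (x[i] - x[j]) / L))
                        for i in range(N) for j in range(i)])
psi_TG = prefactor * pair_product

print(psi_F_abs, psi_TG)                     # the two numbers coincide
```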
The Tonks-Girardeau gas features two extremal excitation branches, traditionally called type I and type II. Type-I excitations occur when the highest-energy particle, with Bethe number $I_N = (N-1)/2$, gains a momentum $p_n = 2\pi\hbar n/L$ and an energy $\epsilon_n^{I} = \frac{\hbar^2\pi^2}{2mL^2}\left[(N-1+2n)^2 - (N-1)^2\right]$. The corresponding continuous dispersion relation is [162]
$$\epsilon^{I}(p) = \frac{1}{2m}\left[2 p_F\, p\left(1 - \frac{1}{N}\right) + p^2\right], \qquad \mathrm{(II.15)}$$
where $p_F = \pi\hbar N/L$ is the Fermi momentum.

Type-II excitations correspond to the case where a particle inside the Fermi sphere is excited to occupy the lowest energy state available, carrying a momentum $p_n = 2\pi\hbar n/L$. This type of excitation amounts to shifting all the quasi-momenta with $j > n$ by $2\pi/L$, thus leaving a hole in the Fermi sea. This corresponds to an excitation energy $\epsilon_n^{II} = \frac{\hbar^2\pi^2}{2mL^2}\left[(N+1)^2 - (N+1-2n)^2\right]$, yielding the type-II excitation branch [START_REF] Brand | A density-functional approach to fermionization in the 1D Bose gas[END_REF]
$$\epsilon^{II}(p) = \frac{1}{2m}\left[2 p_F\, p\left(1 + \frac{1}{N}\right) - p^2\right], \qquad \mathrm{(II.16)}$$
that acquires the symmetry $p \leftrightarrow 2p_F - p$ at large number of bosons. Any combination of one-particle and one-hole excitations is also possible, giving rise to intermediate excitation energies between $\epsilon^{I}(p)$ and $\epsilon^{II}(p)$, that form a continuum in the thermodynamic limit, known as the particle-hole continuum. Figure II.5 shows the type-I and type-II excitation spectra of the Tonks-Girardeau gas. Below $\epsilon^{II}$, excitations are kinematically forbidden, which is another peculiarity of dimension one.

Figure II.5 – Excitation energy of the Tonks-Girardeau gas in units of the Fermi energy in the thermodynamic limit, as a function of the excitation momentum in units of the Fermi momentum, for N = 4 (black squares), N = 10 (brown triangles) and N = 100 hard-core bosons (red dots). The last case is quasi-indistinguishable from the thermodynamic limit (solid, blue).
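Both branches are easily tabulated at finite N; the short sketch below (Python, ℏ = m = 1, density n₀ = 1; plotting is left out) evaluates Eqs. (II.15) and (II.16) on the discrete momenta p_n = 2πn/L, for the particle numbers shown in Figure II.5.

```python
import numpy as np

def tg_excitation_branches(N, n0=1.0):
    """Type-I and type-II excitation branches, Eqs. (II.15)-(II.16), hbar = m = 1."""
    L = N / n0
    pF = np.pi * N / L                       # Fermi momentum p_F = pi*hbar*N/L
    eF = pF ** 2 / 2                         # Fermi energy, used as energy unit
    p = 2 * np.pi * np.arange(N + 1) / L     # discrete momenta p_n = 2*pi*n/L
    eps_I = (2 * pF * p * (1 - 1.0 / N) + p ** 2) / 2
    eps_II = (2 * pF * p * (1 + 1.0 / N) - p ** 2) / 2
    return p / pF, eps_I / eF, eps_II / eF

for N in (4, 10, 100):                       # the cases shown in Figure II.5
    pr, eI, eII = tg_excitation_branches(N)
    print(N, round(eII.max(), 3))            # maximum of the type-II branch, near p = p_F
```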
The Bose-Fermi mapping offers a possibility of investigating exactly and relatively easily a peculiar point of the phase diagram of 1D models, and in particular of calculating even-order auto-correlation functions of the wavefunction. I illustrate this point on the example of the density-density correlation function of the Tonks-Girardeau gas, using the mapping onto noninteracting fermions in the form
$$\psi_{TG} = |\psi_F|. \qquad \mathrm{(II.17)}$$
In particular,
$$(n_{TG})^k = |\psi_{TG}|^{2k} = |\psi_F|^{2k} = (n_F)^k, \qquad \mathrm{(II.18)}$$
where n is the density. As a consequence of Wick's theorem [START_REF] Wick | The Evaluation of the Collision Matrix[END_REF], the quantum-statistical average of the equal-time density correlations of a Tonks-Girardeau gas at zero temperature is
$$\langle n(x)n(0)\rangle_{TG} = n_0^2 + \frac{1}{L^2}\sum_{k,k'} e^{-i(k-k')x}\,\Theta(k_F - |k|)\,\Theta(|k'| - k_F), \qquad \mathrm{(II.19)}$$
where x is the distance between the two probed points, k and k′ are the quantized momenta, integer multiples of 2π/L, Θ is the Heaviside step function, n₀ = N/L is the density of the homogeneous gas, and k_F = πn₀ the norm of the Fermi wavevector.
In the thermodynamic limit, Eq. (II.19) transforms into
$$\frac{\langle n(x)n(0)\rangle_{TG}}{n_0^2} = 1 + \frac{1}{(2k_F)^2}\int_{-k_F}^{k_F}\! dk\, e^{-ikx}\left[\int_{-\infty}^{-k_F} + \int_{k_F}^{+\infty}\right]\! dk'\, e^{ik'x} = 1 - \frac{\sin^2(k_F x)}{(k_F x)^2}. \qquad \mathrm{(II.20)}$$
This quantity represents the probability of observing simultaneously two atoms separated by a distance x. The fact that it vanishes at x = 0 is a consequence of Pauli's principle, known as the Pauli hole, and the oscillating structure is typical of Friedel oscillations.
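As a quick numerical illustration of Eq. (II.20) (a sketch in Python, with the density fixed to n₀ = 1), one can check the Pauli hole at contact and the oscillations of period 1/n₀:

```python
import numpy as np

def g2_tg(x, n0=1.0):
    """Equal-time pair correlation of the TG gas over n0^2, Eq. (II.20)."""
    kF = np.pi * n0
    # np.sinc(z) = sin(pi z)/(pi z), so sin(kF x)/(kF x) = sinc(kF x / pi)
    return 1.0 - np.sinc(kF * x / np.pi) ** 2

x = np.array([0.0, 0.5, 1.0, 2.0, 10.0])
print(g2_tg(x))   # vanishes at x = 0 (Pauli hole), oscillates at 2k_F, tends to 1
```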
Actually, one can even go a step further and treat time-dependent correlations, since the Bose-Fermi mapping remains exact even in the time-dependent problem [START_REF] Yukalov | Fermi-Bose mapping for one-dimensional Bose gases[END_REF][START_REF] Minguzzi | Exact Coherent States of a Harmonically Confined Tonks-Girardeau Gas[END_REF]. It yields
$$\langle n(x,t)n(0,0)\rangle_{TG} = n_0^2 + \frac{1}{L^2}\sum_{k,k'} e^{-i(k-k')x}\, e^{i\frac{\hbar t}{2m}(k^2 - k'^2)}\,\Theta(k_F - |k|)\,\Theta(|k'| - k_F), \qquad \mathrm{(II.21)}$$
and in the thermodynamic limit I obtain
$$\frac{\langle n(x,t)n(0,0)\rangle_{TG}}{n_0^2} = 1 + \frac{1}{4k_F^2}\int_{-k_F}^{k_F}\! dk\, e^{i\left(\frac{\hbar k^2 t}{2m} - kx\right)}\left[\int_{-\infty}^{+\infty}\! dk'\, e^{-i\left(\frac{\hbar k'^2 t}{2m} - k'x\right)} - \int_{-k_F}^{k_F}\! dk'\, e^{-i\left(\frac{\hbar k'^2 t}{2m} - k'x\right)}\right]. \qquad \mathrm{(II.22)}$$
To evaluate it, I define
$$I(x,t) = \int_{-\infty}^{+\infty}\! dk\, e^{-i\left(\frac{\hbar k^2}{2m}t - kx\right)}, \qquad J(x,t) = \int_{-k_F}^{k_F}\! dk\, e^{i\left(\frac{\hbar k^2}{2m}t - kx\right)}. \qquad \mathrm{(II.23)}$$
Then, doing natural changes of variables and using the property
$$\int_{-\infty}^{+\infty}\! du\, e^{-iau^2} = \sqrt{\frac{\pi}{ia}}, \qquad a>0, \qquad \mathrm{(II.24)}$$
one obtains
$$I(x,t) = \sqrt{\frac{2\pi m}{i\hbar t}}\; e^{i\frac{mx^2}{2\hbar t}}. \qquad \mathrm{(II.25)}$$
This term represents a decaying wave packet, and is equal to 2π times the propagator of free fermions.
The total correlation function can be split into two parts, one 'regular' and real-valued, the other complex and associated to the wave packet, such that
$$\langle n(x,t)n(0,0)\rangle_{TG} = \langle n(x,t)n(0,0)\rangle_{TG}^{\mathrm{reg}} + \langle n(x,t)n(0,0)\rangle_{TG}^{\mathrm{wp}}. \qquad \mathrm{(II.26)}$$
Then, defining the Fresnel integrals as
$$S(x) = \frac{2}{\sqrt{2\pi}}\int_0^x\! dt\, \sin(t^2), \qquad C(x) = \frac{2}{\sqrt{2\pi}}\int_0^x\! dt\, \cos(t^2), \qquad \mathrm{(II.27)}$$
focusing on the regular part I find
$$\frac{\langle n(x,t)n(0,0)\rangle_{TG}^{\mathrm{reg}}}{n_0^2} = 1 - \frac{1}{4k_F^2}|J(x,t)|^2 = 1 - \frac{\pi}{8}\frac{1}{\omega_F t}\left\{C\!\left[\sqrt{\tfrac{m}{2\hbar t}}(x+v_F t)\right] - C\!\left[\sqrt{\tfrac{m}{2\hbar t}}(x-v_F t)\right]\right\}^2 - \frac{\pi}{8}\frac{1}{\omega_F t}\left\{S\!\left[\sqrt{\tfrac{m}{2\hbar t}}(x+v_F t)\right] - S\!\left[\sqrt{\tfrac{m}{2\hbar t}}(x-v_F t)\right]\right\}^2, \qquad \mathrm{(II.28)}$$
where $v_F = \hbar k_F/m$ is the Fermi velocity and $\omega_F = \hbar k_F^2/(2m)$. The Tonks-Girardeau case will serve as a comparison point several times in the following, as the only case where an exact closed-form solution is available. One should keep in mind, however, that any observable involving the phase of the wavefunction, although it remains considerably less involved than the general finitely-interacting case, is not as easy to obtain.
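For later comparisons it is convenient to have Eq. (II.28) in numerical form. The sketch below (Python with scipy, ℏ = m = 1, n₀ = 1; names are illustrative) evaluates the regular part; note that scipy's Fresnel integrals use the convention sin(πt²/2), so their argument is rescaled to match the definition (II.27).

```python
import numpy as np
from scipy.special import fresnel

def S(x):
    # Fresnel integral in the convention of Eq. (II.27):
    # S(x) = (2/sqrt(2*pi)) * integral_0^x sin(t^2) dt
    s, _ = fresnel(x * np.sqrt(2 / np.pi))
    return s

def C(x):
    _, c = fresnel(x * np.sqrt(2 / np.pi))
    return c

def g2_reg_tg(x, t, n0=1.0):
    """Regular part of <n(x,t)n(0,0)>/n0^2 for the TG gas, Eq. (II.28)."""
    kF = np.pi * n0
    vF, wF = kF, kF ** 2 / 2
    a = np.sqrt(1.0 / (2 * t))               # sqrt(m / (2 hbar t)) with hbar = m = 1
    dC = C(a * (x + vF * t)) - C(a * (x - vF * t))
    dS = S(a * (x + vF * t)) - S(a * (x - vF * t))
    return 1.0 - (np.pi / 8) * (dC ** 2 + dS ** 2) / (wF * t)

# approaches the static value 1 - sin^2(kF x)/(kF x)^2 as t -> 0
print(g2_reg_tg(x=2.5, t=0.1))
```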
Another advantage of the Bose-Fermi mapping is that it holds even in the presence of any kind of trapping, in particular if the hard-core bosons are harmonically trapped, a situation that will be encountered below, in chapters III and V. It has also been extended to spinor fermions [START_REF] Girardeau | Tonks-Girardeau and super-Tonks-Girardeau states of a trapped one-dimensional spinor Bose gas[END_REF] and Bose-Fermi mixtures [START_REF] Girardeau | Soluble Models of Strongly Interacting Ultracold Gas Mixtures in Tight Waveguides[END_REF], and another generalization maps interacting bosons with a two-body δ-function interaction onto interacting fermions, except that the roles of strong and weak couplings are reversed [START_REF] Cheon | Fermion-Boson Duality of One-dimensional Quantum Particles with Generalized Contact Interaction[END_REF]. This theorem is peculiarly important, in the sense that any result obtained for bosons is also valid for fermions. An extension to anyons has also been considered [START_REF] Girardeau | Anyon-Fermion Mapping and Applications to Ultracold Gases in Tight Waveguides[END_REF].
To sum up, the hard-core Bose gas is known as the Tonks-Girardeau gas, and according to the Bose-Fermi mapping, it is partially equivalent to the noninteracting spinless Fermi gas, in the sense that their ground-state wavefunctions differ only by a multiplicative function that assumes two values, ±1. Their energies, excitation spectra and density correlation functions are identical as well.
Along with this exact mapping, another technique exists, where the mapping from an interacting to a noninteracting problem is only approximate, and is called bosonization. I proceed to study its application to interacting fermions and bosons in 1D, yielding the formalism of Tomonaga-Luttinger liquids.
II.4.3 Bosonization and Tomonaga-Luttinger liquids
The first attempts to solve many-body, strongly-correlated problems in one dimension have focused on fermions. It turns out that a non-perturbative solution can be obtained by summing an infinite number of diverging Feynman diagrams, that correspond to particle-hole and particle-particle scattering [170], in the so-called Parquet approximation. This tour de force, supplemented by renormalization group techniques [START_REF] Shankar | Renormalization-group approach to interacting fermions[END_REF], is known as the Dzyaloshinskii-Larkin solution.
There is, actually, a much simpler approach to this problem. It is based on a procedure called bosonization, introduced independently in condensed-matter physics [START_REF] Luther | Single-particle states, Kohn anomaly, and pairing fluctuations in one dimension[END_REF] and particle physics [START_REF] Mandelstam | Soliton operators for the quantized sine-Gordon equation[END_REF] in the 1970's. In a nutshell, bosonization consists in a reformulation of the Hamiltonian in a more convenient basis involving free bosonic operators (hence the name of the method), that keeps a completely equivalent physical content. To understand the utility of bosonization, one should bear in mind that interaction terms in the fermionic problem are difficult to treat as they involve four fermionic operators. The product of two fermions being a boson, it seems interesting to expand fermions on a bosonic basis to obtain a quadratic, and thus diagonalizable, Hamiltonian.
The main reason for bosonization's popularity is that some problems that look intractable when formulated in terms of fermions become easy, and sometimes even trivial, when formulated in terms of bosonic fields. The importance and depth of this change of viewpoint is such that it has been compared to the Copernican revolution [START_REF] Gogolin | Bosonization and Strongly Correlated Systems[END_REF], and to date bosonization remains one of the most powerful non-perturbative approaches to many-body quantum systems.
Contrary to the exact methods discussed above, bosonization is only an effective theory, but has non-negligible advantages as a bosonized model is often far easier to solve than the original one when the latter is integrable. Moreover, the bosonization technique yields valuable complementary information to Bethe Ansatz, about its universal features (i.e., those that do not depend on microscopic details), and allows to describe a wide class of non-integrable models as well.
Tomonaga was the first to identify boson-like behavior of certain elementary excitations in a 1D theory of interacting fermions [START_REF] Tomonaga | Remarks on Bloch's Method of Sound Waves applied to Many-Fermion Problems[END_REF]. A precise definition of these bosonic excitations in terms of bare fermions was given by Mattis and Lieb [START_REF] Mattis | Exact Solution of a Many-Fermion System and Its Associated Boson Field[END_REF], who took the first step towards the full solution of a model of interacting 1D fermions proposed by Luttinger [START_REF] Luttinger | An Exactly Soluble Model of Many-Fermion System[END_REF]. The completion of this formalism was realized later on by Haldane [START_REF] Haldane | Luttinger liquid theory' of one-dimensional quantum fluids. I. Properties of the Luttinger model and their extension to the general 1D interacting spinless Fermi gas[END_REF], who coined the expression 'Luttinger liquid' to describe the model introduced by Luttinger, exactly solved by bosonization thanks to its linear dispersion relation.
Actually, 1D systems with a Luttinger liquid structure range from interacting spinless fermions to fermions with spin, and interacting Bose fluids. Condensed-matter experiments have proved its wide range of applicability, from Bechgaard salts [START_REF] Schwartz | On-chain electrodynamics of metallic (T M T SF ) 2 X salts: Observation of Tomonaga-Luttinger liquid response[END_REF] to atomic chains and spin ladders [START_REF] Klanjšek | Controlling Luttinger Liquid Physics in Spin Ladders under a Magnetic Field[END_REF][START_REF] Bouillot | Statics and dynamics of weakly coupled antiferromagnetic spin-1/2 ladders in a magnetic field[END_REF][START_REF] Jeong | Dichotomy between Attractive and Repulsive Tomonaga-Luttinger Liquids in Spin Ladders[END_REF], edge states in the fractional quantum Hall effect [START_REF] Wen | Chiral Luttinger liquid and the edge excitations in the fractional quantum Hall states[END_REF][START_REF] Milliken | Indications of a Luttinger liquid in the fractional quantum Hall regime[END_REF], carbon [START_REF] Bockrath | Luttinger-liquid behaviour in carbon nanotubes[END_REF] and metallic [START_REF] Gao | Evidence for Luttingerliquid behavior in crossed metallic single-wall nanotubes[END_REF] nanotubes, nanowires [START_REF] Levy | Experimental evidence for Luttinger liquid behavior in sufficiently long GaAs V-groove quantum wires[END_REF], organic conductors [START_REF] Dardel | Possible Observation of a Luttinger-Liquid Behaviour from Photoemission Spectroscopy of One-Dimensional Organic Conductors[END_REF][START_REF] Lebed | The Physics of Organic Superconductors and Conductors[END_REF] and magnetic insulators [189].
The harmonic fluid approach to bosonic Luttinger liquids, following Cazalilla [START_REF] Cazalilla | Bosonizing one-dimensional cold atomic gases[END_REF], is adapted to any statistics at the price of tiny modifications in the fermionic case. This approach operates a change of viewpoint compared to the historical development, as it defines a Tomonaga-Luttinger liquid as a 1D model described by the generic Hamiltonian
$$H_{TL} = \frac{\hbar v_s}{2\pi}\int_0^L\! dx\left[K\,(\partial_x\Phi)^2 + \frac{1}{K}\,(\partial_x\theta)^2\right], \qquad \mathrm{(II.29)}$$
where θ and Φ are scalar fields that satisfy the commutation relation
$$[\partial_x\theta(x), \Phi(x')] = i\pi\delta(x - x'). \qquad \mathrm{(II.30)}$$
Note that this commutation relation is anomalous, as it involves a partial derivative of one of the fields. The motivation behind this definition is that the Tomonaga-Luttinger Hamiltonian is the simplest that can be obtained by expanding an interaction energy in the deviations from constant density and zero current. The cross-term ∂ x θ∂ x φ does not arise, as it could be removed by Galilean transformation. Usually, the fields ∂ x θ and Φ correspond to local density fluctuations around the mean value,
$$\frac{1}{\pi}\,\partial_x\theta(x) = n(x) - n_0, \qquad \mathrm{(II.31)}$$
and to phase respectively. The positive coefficients K and v s in Eq. (II.29) are phenomenological, model-dependent parameters. To be more specific, K is a dimensionless stiffness and v s represents the sound velocity.
Qualitatively, two limiting regimes are expected. If K is large, density fluctuations are important and phase fluctuations are reduced. It corresponds to a classical regime that looks like a BEC phase, but can not be so due to the impossibility of symmetry breaking [START_REF] Mermin | Absence of Ferromagnetism or Antiferromagnetism in One-or Two-Dimensional Isotropic Heisenberg Models[END_REF]. If K is small, the system looks like a crystal to some extent. Note also the Φ ↔ θ and K ↔ 1/K duality, suggesting that the value K = 1 has a special meaning. Actually, it corresponds to noninteracting fermions, as will be shown below.
Since the Tomonaga-Luttinger Hamiltonian (II.29) is a bilinear form, it can be diagonalized. A convenient basis is provided by bosonic creation and annihilation operators with standard commutation relations, $[b_q, b_{q'}^{\dagger}] = \delta_{q,q'}$. Neglecting topological terms that are crucial at the mesoscopic scale but irrelevant in the thermodynamic limit [START_REF] Cazalilla | Bosonizing one-dimensional cold atomic gases[END_REF] and using periodic boundary conditions, the original fields are expressed in this basis as
$$\theta(x) = \frac{1}{2}\sum_{q\neq 0}\left|\frac{2\pi K}{qL}\right|^{1/2}\left(e^{iqx}b_q + e^{-iqx}b_q^{\dagger}\right) \qquad \mathrm{(II.32)}$$
and
$$\Phi(x) = \frac{1}{2}\sum_{q\neq 0}\left|\frac{2\pi}{qLK}\right|^{1/2}\mathrm{sign}(q)\left(e^{iqx}b_q + e^{-iqx}b_q^{\dagger}\right). \qquad \mathrm{(II.33)}$$
Inserting these normal mode expansions into Eq. (II.29) and using the bosonic commutation relations yields the diagonalized form of the Hamiltonian,
$$H_{TL} = \sum_{q\neq 0}\hbar\omega(q)\, b_q^{\dagger}b_q, \qquad \mathrm{(II.34)}$$
where
$$\omega(q) = v_s|q|. \qquad \mathrm{(II.35)}$$
Another striking point in Eq. (II.35) is that it does not explicitly depend on the parameter K, suggesting that v_s and K are linked together. This property can be shown using the density-phase approach. Writing the wavefunction as
$$\psi(x) = \sqrt{n(x)}\, e^{i\Phi(x)}, \qquad \mathrm{(II.36)}$$
and inserting this identity into the kinetic part of the microscopic Hamiltonian, that reads
$$H_{kin} = \frac{\hbar^2}{2m}\int\! dx\, \partial_x\psi^{\dagger}\,\partial_x\psi,$$
yields the relation between v_s and K. The density-phase representation (II.36) [START_REF] Schotte | Tomonaga's Model and the Threshold Singularity of X-Ray Spectra of Metals[END_REF] is another key ingredient of bosonization. In principle, it allows one to justify the form of Eq. (II.29) for quantum field theories in the continuum, starting from the microscopic level.
To illustrate the predictive power of the Tomonaga-Luttinger liquid formalism on a concrete example, I proceed to study its density correlations. It is well-known (I refer to Appendix A.1 for a detailed derivation) that the density-density correlations of a Tomonaga-Luttinger liquid have the following structure in the thermodynamic limit [START_REF] Efetov | Correlation functions in one-dimensional systems with a strong interaction[END_REF][START_REF] Haldane | Effective Harmonic-Fluid Approach to Low-Energy Properties of One-Dimensional Quantum Fluids[END_REF]:
$$\frac{\langle n(x)n(0)\rangle_{TL}}{n_0^2} = 1 - \frac{K}{2}\,\frac{1}{(k_F x)^2} + \sum_{m=1}^{+\infty} A_m(K)\,\frac{\cos(2mk_F x)}{(k_F x)^{2Km^2}}, \qquad \mathrm{(II.41)}$$
where {A m (K)} m>0 is an infinite set of model-dependent functions of K called form factors. Equation (II.41) is one of the greatest successes of the Tomonaga-Luttinger liquid theory, as it explicitly yields the large-distance structure of a non-local correlation function, that would otherwise be difficult to obtain by Bethe Ansatz techniques.
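In practice one often keeps only the m = 1 harmonic; the snippet below (Python) encodes this truncation of Eq. (II.41), with the Luttinger parameter K and the form factor A₁ treated as external inputs that must be supplied by other means (Bethe Ansatz, DMRG, experiment). The finite-K values used in the example are purely illustrative; only K = 1, A₁ = 1/2, which as discussed next reproduces the Tonks-Girardeau result, is exact.

```python
import numpy as np

def g2_tl(x, K, A1, n0=1.0):
    """m = 1 truncation of Eq. (II.41); K and A1 are external inputs."""
    kFx = np.pi * n0 * x
    return 1.0 - K / (2 * kFx ** 2) + A1 * np.cos(2 * kFx) / kFx ** (2 * K)

x = np.linspace(0.5, 5.0, 5)
print(g2_tl(x, K=1.0, A1=0.5))    # Tonks-Girardeau limit, identical to Eq. (II.20)
print(g2_tl(x, K=1.3, A1=0.2))    # illustrative finite-interaction values only
```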
Since Eq. (II.29) is an effective Hamiltonian, its validity range is not clear when it is used to describe a given model, such as the Lieb-Liniger model. A good starting point to investigate this crucial issue is to check whether Eq. (II.41) is compatible with the known exact result in the Tonks-Girardeau regime. Although Eq. (II.41) looks far more complicated than Eq. (II.20), setting K = 1, A₁(1) = 1/2, and A_m(1) = 0, ∀m > 1, yields
$$\frac{\langle n(x)n(0)\rangle_{TL}^{K=1}}{n_0^2} = 1 - \frac{1}{k_F^2x^2}\,\frac{1-\cos(2k_Fx)}{2} = 1 - \frac{\sin^2(k_Fx)}{(k_Fx)^2} = \frac{\langle n(x)n(0)\rangle_{TG}}{n_0^2}. \qquad \mathrm{(II.42)}$$
Therefore, the Tomonaga-Luttinger liquid theory is able to reproduce the exact static density correlations of a Tonks-Girardeau gas, or equivalently a gas of noninteracting fermions, albeit at the price of fine-tuning an infinite set of coefficients. Note that one could have found the value of the Luttinger parameter K associated to noninteracting fermions by deriving the Luttinger Hamiltonian for the latter. Another possibility is to use Eq. (II.38) and bear in mind that v s = v F , which is even more straightforward. None of these approaches, however, yields the values of {A m }.
This fine-tuning is assuredly a considerable shortcoming, unless it is possible to find the whole, infinite set of unknown coefficients in non-trivial cases (i.e. at finite interaction strength), by a systematic procedure that does not require thorough knowledge of the exact solution. Fortunately, the large-distance decay of the power law contributions in Eq. (II.41) becomes faster with increasing order m, and coefficients {A m (K)} m>1 are known to be negligible compared to A 1 (K) in the thermodynamic limit for the Lieb-Liniger model with repulsive interactions [START_REF] Shashi | Nonuniversal prefactors in the correlation functions of one-dimensional quantum liquids[END_REF][START_REF] Shashi | Exact prefactors in static and dynamic correlation functions of one-dimensional quantum integrable models: Applications to the Calogero-Sutherland, Lieb-Liniger, and XXZ models[END_REF]. Thus, only two coefficients are needed in practice to describe the statics at large distances: the Luttinger parameter K, and the first form factor A 1 (K).
The explicit expression of K in the effective Hamiltonian in terms of the microscopic parameters of the model it is derived from is sometimes found constructively, e.g. for noninteracting fermions [START_REF] Haldane | Luttinger liquid theory' of one-dimensional quantum fluids. I. Properties of the Luttinger model and their extension to the general 1D interacting spinless Fermi gas[END_REF] or interacting fermions in the g-ology context [START_REF] Sólyom | The Fermi gas model of one-dimensional conductors[END_REF], where Eq. (II.29) can be obtained from a more fundamental analysis, starting from the microscopic Hamiltonian. It has also been derived, in particular, from the hydrodynamic Hamiltonian of a one-dimensional liquid in the weakly-interacting case [START_REF] Lifschitz | Statistical Physics Part 2, Theory of Condensed Matter[END_REF][START_REF] Bovo | Nonlinear Bosonization and Refermionization in One Dimension with the Keldysh Functional Integral[END_REF]. In most other contexts, a constructive derivation is lacking, but is not required to make quantitative predictions, as long as the two necessary parameters can be obtained from outside considerations, stemming from Bethe Ansatz, DMRG or experiments. As an example, for the Lieb-Liniger model, K can be extracted by coordinate Bethe Ansatz using thermodynamic relations [START_REF] Lieb | Exact Analysis of an Interacting Bose Gas. I. The General Solution and the Ground State[END_REF]. For this model, it varies between K = 1 in the infinitely-interacting regime and K → +∞ for vanishing interactions. The form factor A₁(K) has been obtained in the repulsive regime, based on algebraic Bethe Ansatz [START_REF] Shashi | Nonuniversal prefactors in the correlation functions of one-dimensional quantum liquids[END_REF][START_REF] Shashi | Exact prefactors in static and dynamic correlation functions of one-dimensional quantum integrable models: Applications to the Calogero-Sutherland, Lieb-Liniger, and XXZ models[END_REF].

I have shown above that the Tomonaga-Luttinger liquid formalism reproduces exactly the static density correlations of the Tonks-Girardeau gas. However, for many purposes, one may be interested in time-dependent correlations as well. Time dependence is taken into account in the Heisenberg picture by $A(x,t) = e^{iHt/\hbar}A(x)e^{-iHt/\hbar}$, where A is any observable and H is the Hamiltonian. Using the equations of motion or the Baker-Campbell-Hausdorff lemma, from Eqs. (II.32), (II.34) and (II.35), one finds
$$\theta(x,t) = \frac{1}{2}\sum_{q\neq 0}\left|\frac{2\pi K}{qL}\right|^{1/2}\left(e^{i[qx-\omega(q)t]}b_q + e^{-i[qx-\omega(q)t]}b_q^{\dagger}\right), \qquad \mathrm{(II.43)}$$
and after some algebra (details can be found again in Appendix A.1),
$$\frac{\langle n(x,t)n(0,0)\rangle_{TL}}{n_0^2} = 1 - \frac{K}{4k_F^2}\left[\frac{1}{(x-v_st)^2} + \frac{1}{(x+v_st)^2}\right] + \sum_{m=1}^{+\infty} A_m(K)\,\frac{\cos(2mk_Fx)}{k_F^{2Km^2}\,(x^2 - v_s^2t^2)^{Km^2}}. \qquad \mathrm{(II.44)}$$
When truncated to any finite order, Eq. (II.44) is divergent on the mass shell, defined by x = ±v_st, and is usually regularized as x = ±(v_st − iε), where ε is a short-distance cut-off that mimics a lattice regularization. Sometimes, the term light-cone is also used instead of mass shell, in analogy with special relativity. Indeed, the bosons describing the dispersion are massless, since they verify the relativistic dispersion $\epsilon(p) = c\sqrt{M^2c^2 + p^2}$, where $\epsilon(q) = \hbar\omega(q)$, with a mass term M = 0, and v_s plays the same role as c, the speed of light.
In the Tonks-Girardeau regime, corresponding to K = 1, the whole set of coefficients {A m } m≥1 has already been obtained from the static treatment. Equation (II.44) then reduces to
$$\frac{\langle n(x,t)n(0,0)\rangle_{TL}^{K=1}}{n_0^2} = 1 - \frac{1}{4k_F^2}\left[\frac{1}{(x-v_Ft)^2} + \frac{1}{(x+v_Ft)^2}\right] + \frac{1}{2}\,\frac{\cos(2k_Fx)}{k_F^2(x^2 - v_F^2t^2)}. \qquad \mathrm{(II.45)}$$
The conclusion of this first-order study is that the Tomonaga-Luttinger liquid theory captures the short-time, long-distance dynamics of the Tonks-Girardeau gas, i.e. far from the 'light-cone', except for the contribution of the decaying wave packet, that vanishes at long times. A quantitative validity criterion is thus $k_F|x| \gg \omega_F|t|,\,1$, and in a relativistic language Eq. (II.45) holds deep in the space-like region. Recalling the splitting of the Tonks-Girardeau correlation function into regular and wavepacket part in Eq. (II.26), I find that the Tomonaga-Luttinger liquid fails to describe the regular part of the time-dependent density-density correlations of the Tonks-Girardeau gas at larger time scales, as can be seen at next order already. Using expansions of the Fresnel integrals S and C to higher orders around t = 0 in Eq. (II.28), keeping terms in the light-cone variables u = x − v_Ft and v = x + v_Ft to same order, I find
$$\frac{\langle n(x,t)n(0,0)\rangle_{TG}^{\mathrm{reg}}}{n_0^2} = \frac{\langle n(x,t)n(0,0)\rangle_{TL,TG}}{n_0^2} + \frac{\omega_F t}{k_F^4}\,\frac{\sin(2k_Fx)}{x^2-v_F^2t^2}\left(\frac{1}{v^2}-\frac{1}{u^2}\right) + \frac{2(\omega_Ft)^2}{k_F^6}\left[\frac{5}{2}\left(\frac{1}{v^6}+\frac{1}{u^6}\right) + \frac{\cos(2k_Fx)}{(x^2-v_F^2t^2)^3} - \frac{3\cos(2k_Fx)}{x^2-v_F^2t^2}\left(\frac{1}{v^4}+\frac{1}{u^4}\right)\right] + \frac{12(\omega_Ft)^3}{k_F^8}\left[-\frac{5\sin(2k_Fx)}{x^2-v_F^2t^2}\left(\frac{1}{v^6}-\frac{1}{u^6}\right) + \frac{\sin(2k_Fx)}{(x^2-v_F^2t^2)^3}\left(\frac{1}{v^2}-\frac{1}{u^2}\right)\right] + \ldots \qquad \mathrm{(II.46)}$$
I have checked that Eq. (II.46) is equivalent to the series expansion obtained in [START_REF] Its | Differential Equations for Quantum Correlation Functions[END_REF].
The new terms in the density-density correlations compared to the first-order expansion described by the Tomonaga-Luttinger theory are all proportional to a power of ω F t and, as such, vanish at equal times, as expected. None of them is reproduced by the effective field theory.
To obtain better agreement with the exact expansion, a generalized effective theory should predict higher-order terms as well. To do so, it would be natural to include nonlinearities in the Tomonaga-Luttinger Hamiltonian, that correspond to curvature of the dispersion relation. Note, however, that the expression Eq. (II.46) diverges at all orders on the mass shell, except when they are all resummed, plaguing this approach at the perturbative level in the vicinity of the mass shell [START_REF] Aristov | Luttinger liquids with curvature: Density correlations and Coulomb drag effect[END_REF]. Similar conclusions have been drawn from the comparison of the Tomonaga-Luttinger liquid theory and exact Tonks-Girardeau results, focusing on other observables, such as the Green function [START_REF] Pereira | Long time correlations of nonlinear Luttinger liquids[END_REF].
The Tomonaga-Luttinger result Eq. (II.45) also misses the part of the exact density-density correlation function associated to the wave packet, whose expansion reads
$$\frac{\langle n(x,t)n(0,0)\rangle_{TG}^{\mathrm{wp}}}{n_0^2} = \frac{\pi}{4}\,\frac{e^{-i\pi/4}}{\sqrt{2}}\,\frac{1}{\omega_Ft}\left\{C\!\left[\sqrt{\tfrac{m}{2\hbar t}}\,v\right] - C\!\left[\sqrt{\tfrac{m}{2\hbar t}}\,u\right] + i\left(S\!\left[\sqrt{\tfrac{m}{2\hbar t}}\,v\right] - S\!\left[\sqrt{\tfrac{m}{2\hbar t}}\,u\right]\right)\right\} \simeq \frac{i\sqrt{\pi}}{4}\,e^{-i\pi/4}\,\frac{1}{\sqrt{\omega_Ft}}\, e^{-i\frac{(k_Fx)^2}{4\omega_Ft}}\left[\frac{e^{i(k_Fx-\omega_Ft)}}{k_Fu} - \frac{e^{-i(k_Fx+\omega_Ft)}}{k_Fv}\right] + \ldots \qquad \mathrm{(II.47)}$$
It has been shown in [START_REF] Kozlowski | Long-time and large-distance asymptotic behavior of the current-current correlators in the non-linear Schrödinger model[END_REF] that in the general case (i.e. for arbitrary interaction strengths in the microscopic model), the wave packet term coincides with the saddle point.
The unpleasant conclusion is that, as far as dynamics of density-density correlations is concerned, the standard Tomonaga-Luttinger liquid approach presented here misses on the one hand an infinite number of regular terms, and on the other hand the wave packet term. Thus, it is not adapted to investigate long-time dynamics.
To sum up, in this section I have presented the Tomonaga-Luttinger Hamiltonian, its diagonalization via the bosonization procedure, and given the structure of its density-density correlation function. Even at zero temperature, correlators decay as power laws, which indicates the absence of a characteristic length scale. The tendency towards certain ordering is defined by the most weakly-decaying correlation and this, in turn, is determined by the sole Luttinger parameter K, renormalized by interactions as would be the case for a Fermi liquid. However, the Tomonaga-Luttinger liquid is a paradigmatic example of a non-Fermi liquid [202], and unlike the latter applies to bosons and insulating magnetic materials as well.
The main conundrums of bosonization are the built-in ultraviolet cut-off, that calls for external form-factor calculations, and its limitation to low energy due to the linear spectrum assumption. These points will be investigated in chapter IV, partly devoted to the dynamical correlations of Tomonaga-Luttinger liquids in momentum-energy space.
To circumvent the limitation to low energies, an intuitive approach would be to try and include terms describing curvature of the dispersion relation. However, upon closer inspection, such terms would break Lorentzian invariance and doom this technique at the perturbative level. The extended Tomonaga-Luttinger model that has emerged as the mainstream paradigm in the first decade of the twenty-first century is the Imambekov-Glazman formalism of 'beyond Luttinger liquids' (see [START_REF] Imambekov | One-dimensional quantum liquids: Beyond the Luttinger liquid paradigm[END_REF] for a review), that is based on a multiband structure and an impurity formalism, instead of including curvature. In particle physics, bosonization has also been extended to new formalisms where the bosons of the new basis are interacting, and non-abelian bosonization has been developed [START_REF] Witten | Non-Abelian Bosonization in Two Dimensions[END_REF].
Another major problem of the Tomonaga-Luttinger liquid theory in its standard form is that proving the validity of the bosonization formalism in explicit detail and ironing out its subtleties is considerably harder than merely applying it. This issue has been widely ignored and may look even more obsolete nowadays regarding the success of the Imambekov-Glazman paradigm, but has not been investigated deeply enough, in my opinion. In the next chapters, I shall come back to this issue regularly and try and fill a few gaps in the previous literature.
The lack of obvious generalization to higher dimensions is also often deplored. Despite numerous efforts and reflexions in this direction [START_REF] Wen | Metallic non-Fermi-liquid fixed point in two and higher dimensions[END_REF][START_REF] Bartosch | Correlation functions of higher-dimensional Luttinger liquids[END_REF], a general construction of an efficient Tomonaga-Luttinger liquid formalism in higher dimensions is still lacking. I shall come back to this issue in chapter V, where I construct a higher-dimensional Tomonaga-Luttinger model in a peculiar case.
II.4.4 Conformal Field Theory
To conclude this section on theoretical tools, I give the basics of conformal field theory (CFT). This research topic is extremely wide and active, so I will not attempt to introduce all of its (even elementary) aspects, but rather select those, that are useful to my purpose, i.e. essentially the ones linked to finite-size and temperature thermodynamics and correlation functions.
My first motivation is that conformal invariant systems are a subclass of integrable models, the second is that CFT provides an alternative to bosonization when it comes to evaluate finite-size and finite-temperature effects on correlation functions. Conformal field theory has also become essential in its role of a complementary tool to numerical methods, as it enables to extrapolate results obtained at finite particle number (typically from exact diagonalization) to the thermodynamic limit.
As a starting point, let me introduce the notion of conformal transformation. Since it has a geometric nature, the tensorial approach to differential geometry, also used in general relativity, provides compact notations for a general discussion [START_REF] Francesco | Conformal Field theory[END_REF]. In arbitrary dimension, the space-time interval is written in terms of the metric tensor g_{μν} as
$$ds^2 = g_{\mu\nu}\,dx^{\mu}dx^{\nu}, \qquad \mathrm{(II.48)}$$
where I use Einstein's convention that a pair of identical covariant and contravariant indices represents a summation over all values of the index. The metric tensor is assumed to be symmetric, g_{μν} = g_{νμ}, and non-degenerate, det(g_{μν}) ≠ 0, thus the pointwise metric tensor has an inverse, g^{μν}(x), such that
$$g^{\mu\nu}(x)\,g_{\nu\lambda}(x) = \delta^{\mu}_{\ \lambda}, \qquad \mathrm{(II.49)}$$
where $\delta^{\mu}_{\ \lambda}$ represents the identity tensor. A coordinate transformation x → x′ yields a covariant transformation of the metric tensor,
$$g_{\mu\nu}(x') = \frac{\partial x^{\alpha}}{\partial x'^{\mu}}\,\frac{\partial x^{\beta}}{\partial x'^{\nu}}\,g_{\alpha\beta}(x). \qquad \mathrm{(II.50)}$$
An infinitesimal transformation of the coordinates $x^{\mu} \to x'^{\mu} = x^{\mu} + \epsilon^{\mu}(x)$ can be inverted as
$$x^{\mu} = x'^{\mu} - \epsilon^{\mu}(x') + O(\epsilon^2), \qquad \text{hence} \qquad \frac{\partial x^{\rho}}{\partial x'^{\mu}} = \delta^{\rho}_{\ \mu} - \partial_{\mu}\epsilon^{\rho}, \qquad \mathrm{(II.51)}$$
transforming the metric according to
$$g_{\mu\nu} \to g'_{\mu\nu} = g_{\mu\nu} + \delta g_{\mu\nu} = g_{\mu\nu} - (\partial_{\mu}\epsilon_{\nu} + \partial_{\nu}\epsilon_{\mu}). \qquad \mathrm{(II.52)}$$
By definition, a conformal transformation preserves the angle between two vectors, thus it must leave the metric invariant up to a local scale factor:
$$g_{\mu\nu}(x') = \Omega(x)\,g_{\mu\nu}(x). \qquad \mathrm{(II.53)}$$
For this condition to be realized, the transformation described by Eq. (II.52) must be such that the variation of the metric is proportional to the original metric itself. A more explicit expression of this constraint is obtained by taking the trace, that corresponds to a contraction in the tensorial formalism:
$$g^{\mu\nu}g_{\mu\nu} = D, \qquad \mathrm{(II.54)}$$
where D = d+1 is the space-time dimension, yielding
$$g^{\mu\nu}(\partial_{\mu}\epsilon_{\nu} + \partial_{\nu}\epsilon_{\mu}) = 2\,\partial_{\nu}\epsilon^{\nu}. \qquad \mathrm{(II.55)}$$
In the end, the constraint Eq. (II.53) has been transformed into
$$\partial_{\mu}\epsilon_{\nu} + \partial_{\nu}\epsilon_{\mu} = \frac{2}{D}\,\partial_{\rho}\epsilon^{\rho}\, g_{\mu\nu}, \qquad \mathrm{(II.56)}$$
the so-called conformal Killing equation. Its solutions in Euclidean space, the conformal Killing vectors, are of the form
$$\epsilon^{\mu} = a^{\mu} + \omega^{\mu}_{\ \nu}x^{\nu} + \lambda x^{\mu} + b^{\mu}x^2 - 2(b\cdot x)\,x^{\mu}, \qquad \mathrm{(II.57)}$$
where ω µν is antisymmetric. It can be shown that in space-time dimension strictly larger than two, the allowed conformal transformations, found by exponentiation of infinitesimal ones described by Eq. (II.57), are of four types.
They correspond to translations, such that
$$x'^{\mu} = x^{\mu} + a^{\mu}, \qquad \mathrm{(II.58)}$$
dilations such that
$$x'^{\mu} = \lambda x^{\mu}, \qquad \mathrm{(II.59)}$$
where λ is a non-negative number, rotations
$$x'^{\mu} = (\delta^{\mu}_{\ \nu} + \omega^{\mu}_{\ \nu})\,x^{\nu} = M^{\mu}_{\ \nu}\,x^{\nu}, \qquad \mathrm{(II.60)}$$
and the less intuitive 'special conformal transformations' that correspond to a concatenation of inversion, translation and inversion:
$$x'^{\mu} = \frac{x^{\mu} - b^{\mu}x^2}{1 - 2\,b\cdot x + b^2x^2}. \qquad \mathrm{(II.61)}$$
The conclusion is that the group of conformal transformations is finite in this case.
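As a quick consistency check on Eq. (II.61) (a short calculation, with the notation y^μ for the intermediate inverted coordinates), composing an inversion, a translation by −b^μ and a second inversion indeed reproduces the special conformal transformation:
$$y^{\mu} = \frac{x^{\mu}}{x^{2}}, \qquad y'^{\mu} = y^{\mu} - b^{\mu}, \qquad x'^{\mu} = \frac{y'^{\mu}}{y'^{2}}, \qquad \text{with} \qquad y'^{2} = \frac{1 - 2\,b\cdot x + b^{2}x^{2}}{x^{2}},$$
so that
$$x'^{\mu} = \left(\frac{x^{\mu}}{x^{2}} - b^{\mu}\right)\frac{x^{2}}{1 - 2\,b\cdot x + b^{2}x^{2}} = \frac{x^{\mu} - b^{\mu}x^{2}}{1 - 2\,b\cdot x + b^{2}x^{2}}.$$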
Space-time dimension two, however, appears to be special, as the constraint (II.53) reduces to
$$\partial_1\epsilon_1 = \partial_2\epsilon_2, \qquad \partial_1\epsilon_2 = -\partial_2\epsilon_1. \qquad \mathrm{(II.62)}$$
Equation (II.62) is nothing else than the well-known Cauchy-Riemann condition that appears in complex analysis, and characterizes holomorphic functions. In other words, since any holomorphic function generates a conformal transformation in a (1+1)D QFT, the dimension of the conformal group is infinite. Actually, this property is of considerable help to solve models that feature conformal invariance.
In field theory, the interest started in 1984 when Belavin, Polyakov and Zamolodchikov introduced CFT as a unified approach to models featuring a gapless linear spectrum in (1+1)D [START_REF] Belavin | Infinite Conformal Symmetry in Two-Dimensional Quantum Field Theory[END_REF]. This property implies that this formalism shares its validity range with the Tomonaga-Luttinger liquid theory, hinting at an intimate link between CFT and bosonization, as first noticed in [START_REF] Dotsenko | Conformal algebra and multipoint correlation functions in 2D statistical models[END_REF]. The infinite-dimensional conformal symmetry actually stems from the spectrum linearity. From the point of view of integrability, the most important result of CFT is that correlation functions of critical systems obey an infinite number of so-called Ward identities. Their solution uniquely determines all correlation functions, and in this respect CFT is a substitute to the Hamiltonian formalism to exactly solve a gapless model.
Let us follow for a while the analogy with bosonization. Within the CFT formalism, correlation functions are represented in terms of correlators of bosonic fields. The two-point correlation function is defined from the action S in the path-integral formalism as
$$\langle\phi_1(x_1)\phi_2(x_2)\rangle = \frac{1}{Z}\int\! \mathcal{D}[\phi]\; \phi_1(x_1)\phi_2(x_2)\, e^{-S[\phi]}, \qquad \mathrm{(II.63)}$$
where $Z = \int\!\mathcal{D}[\phi]\, e^{-S[\phi]}$ is the partition function of the model. Equation (II.63) is then simplified, using the properties of the conformal transformations listed above [START_REF] Polyakov | Conformal Symmetry of Critical Fluctuations[END_REF].
Enforcing rotational and translational invariance imposes that the two-point correlation function depends only on |x 1 -x 2 |. Scale invariance in turn yields
$$\langle\phi_1(x_1)\phi_2(x_2)\rangle = \lambda^{\Delta_1+\Delta_2}\,\langle\phi_1(\lambda x_1)\phi_2(\lambda x_2)\rangle, \qquad \mathrm{(II.64)}$$
where ∆ 1,2 are the dimensions of the fields φ 1,2 , and combining these invariances yields
$$\langle\phi_1(x_1)\phi_2(x_2)\rangle = \frac{C_{12}}{|x_1-x_2|^{\Delta_1+\Delta_2}}. \qquad \mathrm{(II.65)}$$
Applying also conformal invariance, one obtains
$$\langle\phi_1(x_1)\phi_2(x_2)\rangle = \delta(\Delta_1-\Delta_2)\,\frac{C_{12}}{|x_1-x_2|^{2\Delta_1}}. \qquad \mathrm{(II.66)}$$
More generally, all correlation functions of a model described by CFT decay like power law at large distance, as in the Tomonaga-Luttinger framework, as a consequence of the operator product expansion.
Equation (II.66) would be of little importance, however, if it were not supplemented by an extremely useful result. There is a connection between finite-size scaling effects and conformal invariance [START_REF] Cardy | Conformal invariance and universality in finite-size scaling[END_REF][START_REF] Blöte | Conformal invariance, the central charge, and universal finite-size amplitudes at criticality[END_REF], allowing to investigate mesoscopic effects from the knowledge of the thermodynamic limit. For instance, the first-order finite-size correction to the energy with respect to the thermodynamic limit is
$$\delta E = -\frac{\pi c\,\hbar v_s}{6L}, \qquad \mathrm{(II.67)}$$
where c is another key concept of CFT known as the conformal charge, interpreted in this context as the model-dependent proportionality constant in the Casimir effect.
Actually, conformal field theories are classified through the conformal dimensions of their primary fields, {∆ i }, and their conformal charge. When 0 < c < 1, critical exponents of the correlation functions are known exactly, and due to unitarity conformal charge can only take quantized, rational values [START_REF] Friedan | Conformal Invariance, Unitarity, and Critical Exponents in Two Dimensions[END_REF],
$$c = 1 - \frac{6}{m(m+1)}, \qquad m \geq 3. \qquad \mathrm{(II.68)}$$
When c ≥ 1 exponents of the large-distance asymptotics of the correlation functions may depend on the parameters of the model. This implies that Tomonaga-Luttinger liquids enter this category. As central charge also corresponds to the effective number of gapless degrees of freedom, Tomonaga-Luttinger liquids have a central charge c = 1, and lie in the universality class of free fermions and bosons.
As far as correlation functions are concerned, primary fields are defined by their transformation
$$\phi(z,\bar{z}) \to \left(\frac{\partial w}{\partial z}\right)^{\Delta}\left(\frac{\partial \bar{w}}{\partial \bar{z}}\right)^{\bar{\Delta}}\phi\left[w(z),\bar{w}(\bar{z})\right], \qquad \mathrm{(II.69)}$$
under conformal transformations of the complex variable w = v_sτ + ix. For instance, finite-size effects are obtained through the transformation from the infinite punctured z-plane to the w-cylinder:
$$w(z) = \frac{L}{2\pi}\ln(z) \quad \longleftrightarrow \quad z = e^{\frac{2\pi w}{L}}. \qquad \mathrm{(II.70)}$$
Mesoscopic physics is, however, not always far from the macroscopic one. More interestingly, this correspondence also yields finite-temperature corrections. In particular, CFT allows one to evaluate finite-size and finite-temperature correlations of a Tomonaga-Luttinger liquid. The most relevant terms read [START_REF] Cazalilla | Bosonizing one-dimensional cold atomic gases[END_REF]
$$\frac{\langle n(x,t)n(0,0)\rangle_{TL}^{L<+\infty}}{n_0^2} = 1 - \frac{K}{4}\left(\frac{\pi}{k_FL}\right)^2\left[\frac{1}{\sin^2\!\frac{\pi(x-v_st)}{L}} + \frac{1}{\sin^2\!\frac{\pi(x+v_st)}{L}}\right] + A_1(K)\left(\frac{\pi}{k_FL}\right)^{2K}\frac{\cos(2k_Fx)}{\sin^K\!\frac{\pi(x-v_st)}{L}\,\sin^K\!\frac{\pi(x+v_st)}{L}} \qquad \mathrm{(II.71)}$$
at finite size, and at finite temperature [START_REF] Giamarchi | Quantum Physics in One Dimension[END_REF],
$$\frac{\langle n(x,t)n(0,0)\rangle_{TL}^{T>0}}{n_0^2} = 1 - \frac{K}{4}\left(\frac{\pi}{k_FL_T}\right)^2\left[\frac{1}{\sinh^2\!\frac{\pi(x-v_st)}{L_T}} + \frac{1}{\sinh^2\!\frac{\pi(x+v_st)}{L_T}}\right] + A_1(K)\left(\frac{\pi}{k_FL_T}\right)^{2K}\frac{\cos(2k_Fx)}{\sinh^K\!\frac{\pi(x-v_st)}{L_T}\,\sinh^K\!\frac{\pi(x+v_st)}{L_T}}, \qquad \mathrm{(II.72)}$$
where $L_T = \hbar\beta v_s$ plays the role of a thermal length, with β = 1/k_BT the inverse temperature.
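As an elementary numerical check (Python sketch, with ℏ = k_B = m = 1, n₀ = 1, and the Tonks-Girardeau values K = 1, A₁ = 1/2, v_s = v_F = π; all parameter values are illustrative), the equal-time version of Eq. (II.72) should approach the zero-temperature result (II.20) when L_T greatly exceeds the probed distance.

```python
import numpy as np

def g2_tl_finite_T(x, t, T, K=1.0, A1=0.5, n0=1.0, vs=np.pi):
    """Finite-temperature TL density-density correlation, Eq. (II.72)."""
    kF = np.pi * n0
    LT = vs / T                               # thermal length L_T = hbar*beta*v_s
    u, v = x - vs * t, x + vs * t
    su, sv = np.sinh(np.pi * u / LT), np.sinh(np.pi * v / LT)
    return (1.0 - K / 4 * (np.pi / (kF * LT)) ** 2 * (1 / su ** 2 + 1 / sv ** 2)
            + A1 * (np.pi / (kF * LT)) ** (2 * K) * np.cos(2 * kF * x) / (su * sv) ** K)

x = 2.3
for T in (1.0, 0.1, 0.01):                    # lowering T at fixed distance
    print(T, g2_tl_finite_T(x, t=0.0, T=T))
print(1 - np.sin(np.pi * x) ** 2 / (np.pi * x) ** 2)   # zero-T Tonks-Girardeau value
```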
Equations (II.71) and (II.72) are written in a way that puts the emphasis on their similar structure, with the correspondence L ↔ L_T and sin ↔ sinh, as expected from the mirror principle. In both cases a strip in the complex plane is mapped onto a cylinder; the introduction of an imaginary time and the property sin(ix) = i sinh(x) are the reasons for the similitudes and the slight differences between the two expressions. Note that I have not specified the dependence of A₁ on L and T. Actually, the theory does not predict whether one should write A₁(K,T), A₁[K(T)] or A₁[K(T),T] at finite temperature, for instance.
I have also recovered Eqs. (II.71) and (II.72) by generalizations of the bosonization procedure to finite system size and temperature [START_REF] Lang | Dynamic structure factor and drag force in a one-dimensional Bose gas at finite temperature[END_REF] (elements of derivation are given in Appendix A.2). These equations are valid in the scaling limit, i.e. for $x \gg \epsilon$ and $L_{(T)} - x \gg \epsilon$, where ε is the short-distance cut-off, of the order of 1/n₀, and $L_{(T)} \gg x$. Far from the light-cone, Eq. (II.72) scales exponentially, but in both cases the thermodynamic limit result is recovered at short distance and time. Actually, CFT even allows one to go a step further, and investigate the intertwined effects of finite temperature and system size. This topic is far more advanced, however. The underlying idea consists in folding one of the cylinders at finite size or temperature into a torus, as illustrated in Fig. II.6. One can anticipate that the structure of the density-density correlation function will be similar to Eq. (II.71), with the sine function replaced by a doubly-periodic function. This double periodicity is at the heart of the field of elliptic functions, a wide subclass of special functions.
The final result reads (cf Appendix A.3 for elements of derivation)
$$\frac{\langle n(x,t)n(0,0)\rangle_{TL}^{L<+\infty,\,T>0}}{n_0^2} = 1 - \frac{K}{4}\left(\frac{\pi}{k_FL}\right)^2\left[\frac{\theta_1'\!\left(\frac{\pi u}{L}, e^{-\pi\frac{L_T}{L}}\right)^2}{\theta_1\!\left(\frac{\pi u}{L}, e^{-\pi\frac{L_T}{L}}\right)^2} - \frac{\theta_1''\!\left(\frac{\pi u}{L}, e^{-\pi\frac{L_T}{L}}\right)}{\theta_1\!\left(\frac{\pi u}{L}, e^{-\pi\frac{L_T}{L}}\right)} + (u \leftrightarrow v)\right] + A_1(K)\left(\frac{\pi}{k_FL}\right)^{2K}\frac{\theta_1'\!\left(0, e^{-\pi\frac{L_T}{L}}\right)^{2K}\cos(2k_Fx)}{\left[\theta_1\!\left(\frac{\pi u}{L}, e^{-\pi\frac{L_T}{L}}\right)\theta_1\!\left(\frac{\pi v}{L}, e^{-\pi\frac{L_T}{L}}\right)\right]^K}, \qquad \mathrm{(II.73)}$$
where u = x-v s t and v = x+v s t are the light-cone coordinates,
$$\theta_1(z,q) = 2q^{1/4}\sum_{k=0}^{+\infty}(-1)^k q^{k(k+1)}\sin[(2k+1)z], \qquad |q|<1, \qquad \mathrm{(II.74)}$$
is the first elliptic theta function, and ′ denotes differentiation with respect to the variable z. The first two terms in Eq. (II.73) agree with the result of [START_REF] Del Maestro | 4 He Luttinger Liquid in Nanopores[END_REF] as can be checked by easy algebraic manipulations. Note that all the expressions above have been obtained assuming periodic boundary conditions, i.e. a ring geometry, but space-time correlations depend on boundary conditions at finite size [START_REF] Cazalilla | Bosonizing one-dimensional cold atomic gases[END_REF].

To conclude, in this section I have presented conformal field theory as a formalism that allows one to deal with critical 2D classical or 1D quantum models, where physics is scale-invariant. It can be viewed as an alternative to the bosonization procedure to derive the correlation functions of Luttinger liquids at finite size and temperature. Beyond this basic aspect, CFT remains central to the current understanding of fundamental physics. In recent years, attention has switched to conformal field theories in higher dimensions. The conformal bootstrap provides the most accurate evaluations of the critical exponents of the 3D Ising model [START_REF] El-Showk | Solving the 3d Ising Model with the Conformal Bootstrap II. c-Minimization and Precise Critical Exponents[END_REF], and the discovery of the AdS/CFT duality has provoked a revolution in theoretical high-energy physics [START_REF] Maldacena | The Large N Limit of Superconformal Field Theories and Supergravity[END_REF][START_REF] Witten | Anti De Sitter Space And Holography[END_REF] that may propagate to condensed-matter physics as well [START_REF] Zaanen | Holographic Duality in Condensed Matter Physics[END_REF].
II.5 From Lineland to Flatland: multi-component systems and dimensional crossovers
To finish the dimensional roundabout, I shall come back to higher dimensions. Actually, in the aforementioned issue of bridging 1D and 2D, theory and experiment face opposite difficulties. In an experimental context, reducing the dimension of a system of ultracold atoms by lowering temperature and strengthening confinement is challenging. In theoretical physics, both analytically and numerically, one-dimensional models are considerably easier to deal with, as the number of available tools is far greater. These techniques can often be adapted to multi-component systems, obtained by adding a degree of freedom, at the price of much more effort and cumbersome summation indices in the equations. Actually, multi-component models are often better suited to describe experiments than strictly-1D models.
Two deep questions are in order: first, can this degree of freedom realize or simulate an additional space dimension? Second, is it possible to deal with the limit where the parameter associated to this degree of freedom allows to approximate a higher-dimensional system? This phenomenon is generically called dimensional crossover, and from a theoretical perspective, it would be highly appreciated if it would give access to a regime where no efficient analytical tool is available to tackle the problem in a direct way.
One of the aims of studying dimensional crossovers is to gain insight into dimension-dependent phenomena. The two most relevant examples I have in mind are the following: one-dimensional waveguides are not a privileged place to observe Bose-Einstein condensation, but the latter is allowed in 3D. If one-dimensional atomic wires are created so that they are dense enough in space, then Bose-Einstein condensation occurs at a critical density [START_REF] Vogler | Dimensional Phase Transition from an Array of 1D Luttinger Liquids to a 3D Bose-Einstein Condensate[END_REF][START_REF] Irsigler | Dimensionally induced one-dimensional to threedimensional phase transition of the weakly interacting ultracold Bose gas[END_REF]. As a second illustration, in two and three dimensions, interacting fermions are often described by the Fermi liquid theory. The latter fails in 1D, where low-energy physics is described by Luttinger liquid theory, that fails in higher dimensions. Dimensional crossovers may shed light on the subtleties of the Luttinger-Fermi liquid crossover. I will give a simple, qualitative explanation in chapter V.
There are actually many approaches and tools to treat the problem of dimensional crossovers, and not all of them are equivalent. One can think of gradually changing the dimension of the system, in such a way that its dimension d takes non-integer values. This tomographic approach is taken formally within renormalization group techniques, but in this context non-integer values do not really have a physical meaning. This approach has been used to investigate the Luttinger-Fermi liquid transition [START_REF] Castellani | Dimensional crossover from Fermi to Luttinger liquid[END_REF][START_REF] Bellucci | Crossover from marginal Fermi liquid to Luttinger liquid behavior in carbon nanotubes[END_REF]. One can also imagine that the fractal dimension of the system is tunable. Although fractal systems are receiving increasing attention [222,[START_REF] Aidelsburger | Realization of the Hofstadter Hamiltonian with Ultracold Atoms in Optical Lattices[END_REF], this possibility does not sound realistic with current experimental techniques.
More promising is the idea of coupling one-dimensional gases, and paving space to obtain a higher-dimensional system in the limit where the number of components becomes infinite. This seductive idea, perhaps the most suited to explore the Luttinger to Fermi liquid crossover [START_REF] Guinea | Luttinger liquids in higher dimensions[END_REF][START_REF] Arrigoni | Crossover from Luttinger-to Fermi-Liquid Behavior in Strongly Anisotropic Systems in Large Dimensions[END_REF][START_REF] Biermann | Deconfinement Transition and Luttinger to Fermi Liquid Crossover in Quasi-One-Dimensional Systems[END_REF], is actually quite difficult to formalize, but this interaction-driven scenario has been implemented experimentally [START_REF] Armijo | Mapping out the quasicondensate transition through the dimensional crossover from one to three dimensions[END_REF][START_REF] Revelle | 1D to 3D Crossover of a Spin-Imbalanced Fermi Gas[END_REF].
The most attractive trend in recent years is to use an internal degree of freedom, such as spin or orbital angular momentum, to simulate higher-dimensional systems through the so-called synthetic dimensions [START_REF] Boada | Quantum Simulation of an Extra Dimension[END_REF][START_REF] Celi | Synthetic Gauge Fields in Synthetic Dimensions[END_REF][START_REF] Zeng | Charge Pumping of Interacting Fermion Atoms in the Synthetic Dimension[END_REF][START_REF] Luo | Quantum simulation of 2D topological physics in a 1D array of optical cavities[END_REF][START_REF] Barbarino | Synthetic gauge fields in synthetic dimensions: interactions and chiral edge modes[END_REF]. A variant relies on the use of a dynamical process to simulate an additional dimension [START_REF] Price | Synthetic dimensions for cold atoms from shaking a harmonic trap[END_REF].
The approach I will use in chapter V is different, and consists in releasing a transverse confinement to generate new degrees of freedom [START_REF] Cheng | Fermi gases in the twodimensional to quasi-two-dimensional crossover[END_REF], associated to a multi-mode structure in energy space [START_REF] Lang | Dimensional crossover in a Fermi gas and a cross-dimensional Tomonaga-Luttinger model[END_REF].
II.6 Summary of this chapter/résumé du chapitre
In this introductory chapter, I have tried to give a glimpse of the conceptual difficulties associated to the notion of space dimension in physics. While dimensions larger than three are obviously difficult to apprehend, the lower number of degrees of freedom in a 1D world also leads to fascinating effects on ultracold gases, such as collectivization of motion, fermionization of bosons, or spin-charge separation. The Mermin-Wagner theorem prevents thermal phase transitions to occur in 1D, urging for a shift of paradigm to describe low-dimensional gases, better characterized through their correlation functions.
Ultracold atoms are a versatile tool to simulate condensed-matter physics, as virtually every parameter is tunable, from particle number and density, to the strength and type of interactions, geometry of the gas and internal degrees of freedom. Moreover, temperature scales span several decades, allowing to probe thermal or purely quantum fluctuations and their intertwined effects. Their effects are enhanced in low-dimensional gases, obtained through highly-anisotropic trapping potentials, and the ring geometry is at the core of current attention.
I have presented the main analytical tools that allow to study this experimental configuration. A fair number of low-dimension models are integrable, from quantum onedimensional spin chains and quantum field theories, to two-dimensional classical spin chains and ice-type models, allowing to obtain their exact thermodynamics and excitation spectrum by coordinate Bethe Ansatz, or their correlation functions by algebraic Bethe Ansatz.
A trivial peculiar case of integrable model is provided by noninteracting spinless fermions. They are in a one-to-one correspondence to the Tonks-Girardeau gas of hardcore bosons according to Girardeau's Bose-Fermi mapping, allowing to obtain the exact energy, even-order correlation functions and excitation spectrum of this stronglyinteracting model, even in the non-integrable case of a harmonic trap.
At finite interaction strength, bosonization provides a means to obtain the exact largedistance asymptotics of correlation functions of gapless models. I have applied it to the Tomonaga-Luttinger liquid, an effective model that universally describes many gapless microscopic models in 1D, and have discussed its validity range on the example of timedependent density-density correlations of the Tonks-Girardeau gas. For this observable, the Tomonaga-Luttinger prediction is exact in the static case, and correct to first order far from the mass shell in the dynamical case.
I have also given a brief introduction to conformal field theory, as a peculiar case of integrable theory, and an alternative formalism to obtain the finite-size and finitetemperature correlation functions of Tomonaga-Luttinger liquids.
To finish with, I have introduced the problematics of dimensional crossovers to higher dimensions, and the different ways to realize them in ultracold atom setups, often based on multi-mode structures.
Dans ce chapitre introductif, j'ai laissé entrevoir certaines des difficultés conceptuelles soulevées par la notion de dimension en physique. Le nombre restreint de degrés de liberté dans un espace unidimensionnel donne lieu à de fascinants effets, parfois non-triviaux, dans les systèmes de gaz ultrafroids, parmi lesquels l'émergence d'un comportement dynamique collectif, la fermionisation des bosons et la séparation des excitations de densité et de spin. Le théorème de Mermin-Wagner empêche les transitions de phase traditionnelles dans grand nombre de systèmes unidimensionnels, appelant à changer de paradigme pour décrire au mieux les gaz de basse dimension, qui sont caractérisés bien plus efficacement, dans l'ensemble, par leurs fonctions de corrélation que par leur diagramme de phase.
Les gaz ultrafroids constituent un support polyvalent pour la simulation de systèmes issus de la physique de la matière condensée, dans la mesure où chaque paramètre y est ajustable, du nombre de particules à la densité, en passant par le type et l'intensité des interactions, ainsi que la géométrie du gaz et ses degrés de liberté internes. Qui plus est, les échelles de température mises en jeu varient sur plusieurs décades, ce qui permet de sonder aussi bien les fluctuations thermiques que quantiques, en modifiant leur rapport. Ces fluctuations sont encore plus intenses dans les gaz de basse dimension, obtenus par un confinement hautement anisotrope, qui permet désormais d'accéder à des géométries annulaires.
Un certain nombre de modèles sont intégrables en basse dimension, des chaînes de spins quantiques aux théories quantiques des champs unidimensionnelles, mais aussi des chaînes de spins classiques et des modèles de glace issus de la physique statistique, ce qui permet d'obtenir leurs propriétés thermodynamiques, ainsi que leur spectre d'excitation, par Ansatz de Bethe. La version algébrique de cet outil donne même accès aux fonctions de corrélation.
Le gaz de fermions libres est un exemple trivial de modèle intégrable. Il est en bijection avec le gaz de bosons de coeur dur, dit de Tonks-Girardeau, pour un certain nombre d'observables sujettes à la correspondance bosons-fermions, qui donne facilement accès à l'énergie, au spectre d'excitations et aux fonctions de corrélation en densité de ce modèle fortement corrélé, et ce même en présence d'un piège harmonique.
Quand l'intensité des interactions est finie dans un modèle sans gap, on peut obtenir la forme exacte de ses corrélations asymptotiques par bosonisation. J'ai appliqué cette méthode au modèle de Tomonaga-Luttinger, qui est un modèle effectif universel, et discuté son domaine de validité concernant les corrélations en densité. Le modèle de Tomonaga-Luttinger s'avère exact pour les corrélations statiques, et correct au premier ordre loin de la couche de masse pour les corrélations dynamiques. J'ai aussi donné une brève introduction à la théorie conforme des champs, que j'envisage en tant qu'exemple de modèle intégrable et de formalisme alternatif pour obtenir les fonctions de corrélation des liquides de Tomonaga-Luttinger à taille et température finies. Pour finir, j'ai introduit la problématique de l'augmentation progressive de la dimension d'un gaz ultrafroid, et exposé les différentes manières d'y parvenir, qui se fondent le plus souvent sur une structure multi-mode.
Chapter III
Ground-state static correlation functions of the Lieb-Liniger model

III.1 Introduction
In this chapter, I characterize a strongly-correlated, ultracold one-dimensional Bose gas on a ring through its equilibrium, static correlation functions. The gas is described by the Lieb-Liniger model, that corresponds to contact interactions. This model is arguably the most conceptually simple in the class of continuum quantum field theories, and the most studied. It is integrable, equivalent to the Tonks-Girardeau gas in the strongly-interacting regime, its low-energy sector lies in the universality class of Tomonaga-Luttinger liquids, and it can be seen as a conformal field theory with unit central charge. These properties allow for a quite thorough theoretical investigation, involving all the analytical tools presented in chapter II.

This chapter is organized as follows: first, I present the Lieb-Liniger model, and explain the main steps of its solution by the coordinate Bethe Ansatz technique at finite number of bosons. This method yields the exact many-body wavefunction, and a set of coupled equations (called Bethe Ansatz equations), whose solution yields the exact ground-state energy. The Bethe Ansatz equations can be solved numerically up to a few hundreds of bosons. In the thermodynamic limit, the infinite set of coupled equations can be recast in closed form as a set of three integral equations. Not only are they amenable to numerical techniques, but approximate analytical solutions can be obtained in a systematic way in the weak- and strong-coupling regimes. However, finding the exact ground-state energy at arbitrary interaction strength is a long-standing open problem. More pragmatically, a reasonable aim was to bridge the weak- and strong-coupling expansions at intermediate interaction strengths, with an accuracy that would compete with state-of-the-art numerical methods. I summarize the main historical breakthroughs in both regimes, and my own contributions to the problem. Once the energy is known with satisfying accuracy, various thermodynamic quantities can be extracted through thermodynamic relations.
In a second step, I delve into the issue of correlation functions. The simplest ones are the local auto-correlations of the many-body wavefunction. Actually, one does not need to know the many-body wavefunction explicitly to evaluate them, as they are related to the moments of the density of pseudo-momenta, a quantity already evaluated to obtain the ground-state energy. This allows me to investigate the local first-, second- and third-order correlation functions, which are experimentally relevant with current methods.
Adding one level of complexity, I address the issue of non-local correlations at short and large distance. I focus on the one-body correlation function, whose asymptotics are known to relatively high orders in the Tonks-Girardeau regime. In the general case of finite interaction strength, the Tomonaga-Luttinger liquid theory makes it possible to tackle the large-distance regime. I construct short-distance expansions using Bethe Ansatz techniques, through relations that I have called 'connections'.
The Fourier transform of the one-body correlation function, known as the momentum distribution, is also accessible in ultracold atom experiments, through ballistic expansion. Once again, its asymptotics can be calculated exactly, and the dominant term of the large-momentum tail is universal, as it always corresponds to an inverse quartic power law at finite interaction strength. Its numerical coefficient, however, depends on the interaction strength and is known as Tan's contact. I use this observable to illustrate an extension of the Bethe Ansatz technique to the inhomogeneous, harmonically-trapped system, whose integrability is broken, by combining it with the local-density approximation scheme.
All along the discussion, several technical details, transverse issues and interesting alternative approaches are left aside, but a few of them are evoked in a series of appendices.
III.2 Exact ground-state energy of the Lieb-Liniger model
I start by reviewing a few known results concerning the Lieb-Liniger model. For introductory texts and reviews, I refer to [START_REF] Panfil | Density fluctuations in the 1D Bose gas[END_REF]237,[START_REF] Franchini | An Introduction to Integrable Techniques for One-Dimensional Quantum Systems[END_REF].
III.2.1 Ground-state energy in the finite-N problem
The Lieb-Liniger model describes a given number N of identical bosons, confined to one spatial dimension. It assumes that they are point-like and interact through a two-body, zero-range, time- and velocity-independent potential. If m denotes the mass of each boson, and {x_i}_{i=1,...,N} label their positions, then the dynamics of the system is given by the Lieb-Liniger Hamiltonian, which reads [START_REF] Lieb | Exact Analysis of an Interacting Bose Gas. I. The General Solution and the Ground State[END_REF]
H_{LL} = \sum_{i=1}^{N} \left[ -\frac{\hbar^{2}}{2m}\frac{\partial^{2}}{\partial x_{i}^{2}} + g_{1D} \sum_{\{j \neq i\}} \delta(x_{i}-x_{j}) \right] .    (III.1)
The first term is associated to the kinetic energy, the second one represents the contact interactions, where g 1D is the interaction strength or coupling constant, whose sign is positive if interactions are repulsive, as in the case considered by Lieb and Liniger, and negative otherwise.
I will not consider attractive interactions in what follows, so let me only give a brief account of the main known results. The attractive regime is unstable, owing to its negative ground-state energy, and does not possess a proper thermodynamic limit [START_REF] Mcguire | Study of Exactly Soluble One-Dimensional N-Body Problems[END_REF]. However, the first excited state, known as the super Tonks-Girardeau (sTG) gas [START_REF] Astrakharchik | Beyond the Tonks-Girardeau Gas: Strongly Correlated Regime in Quasi-One-Dimensional Bose Gases[END_REF][START_REF] Batchelor | Evidence for the super Tonks-Girardeau gas[END_REF], has attracted enough attention to be realized experimentally [241]. In the cold atom context, this metastable state is mainly studied in quench protocols, where thermalization is the question at stake. The sTG gas also maps onto the ground state of attractive fermions, which is stable [START_REF] Chen | Realization of effective super Tonks-Girardeau gases via strongly attractive one-dimensional Fermi gases[END_REF], and signatures of a sTG regime are expected in dipolar gases [START_REF] Astrakharchik | Super-Tonks-Girardeau regime in trapped one-dimensional dipolar gases[END_REF][START_REF] Girardeau | Super-Tonks-Girardeau State in an Attractive One-Dimensional Dipolar Gas[END_REF]. More generally, the attractive regime of the Lieb-Liniger model hosts a variety of mappings, onto a Bardeen-Cooper-Schrieffer (BCS) model [START_REF] Fuchs | Exactly Solvable Model of the BCS-BEC Crossover[END_REF][START_REF] Batchelor | Ground state of 1D bosons with delta interaction: link to the BCS model[END_REF][START_REF] Iida | Exact Analysis of δ-Function Attractive Fermions and Repulsive Bosons in One-Dimension[END_REF], the Kardar-Parisi-Zhang (KPZ) model [START_REF] Calabrese | Interaction quench in a Lieb-Liniger model and the KPZ equation with flat initial conditions[END_REF], directed polymers [START_REF] De Luca | Crossing probability for directed polymers in random media[END_REF] or three-dimensional black holes [START_REF] Panchenko | The Lieb-Liniger model at the critical point as toy model for Black Holes[END_REF]. Moreover, in a peculiar limit, the attractive Bose gas becomes stable and features the Douglas-Kazakov, third-order phase transition [START_REF] Flassig | Large-N ground state of the Lieb-Liniger model and Yang-Mills theory on a two-sphere[END_REF][START_REF] Piroli | Local correlations in the attractive one-dimensional Bose gas: From Bethe ansatz to the Gross-Pitaevskii equation[END_REF].
Let us come back to the case of repulsive interactions. I use units where \hbar^{2}/(2m) = 1, and the mathematical-physics notation c for the interaction strength, instead of g_{1D}. The time-independent Schrödinger equation associated with the Hamiltonian Eq. (III.1) is

H_{LL}\, \psi_N(x) = E_0\, \psi_N(x) ,    (III.2)

where x = (x_1, ..., x_N). As an eigenvalue problem, Eq. (III.2) can be solved exactly by explicit construction of \psi_N. To do so, Lieb and Liniger applied, for the first time to a model defined in the continuum, the coordinate Bethe Ansatz [START_REF] Lieb | Exact Analysis of an Interacting Bose Gas. I. The General Solution and the Ground State[END_REF].
According to Eq. (III.1), interactions only occur when two bosons are in contact. Otherwise, one can split the support of the N-body wavefunction into N! sectors, which correspond to all possible spatial orderings of N particles along a line. Since the wavefunction is symmetric with respect to any permutation of the bosons, let us arbitrarily consider the fundamental simplex R, such that 0 < x_1 < x_2 < \cdots < x_N < L,
where L is the length of the atomic waveguide.
In R, the original Schrödinger equation (III.2) is replaced by a Helmholtz equation for the wavefunction \psi_N|_R restricted to the fundamental simplex, namely

-\sum_{i=1}^{N} \frac{\partial^{2} \psi_N|_R}{\partial x_{i}^{2}} = E_{0}\, \psi_N|_R ,    (III.3)
together with the Bethe-Peierls boundary conditions,
\left( \frac{\partial}{\partial x_{j+1}} - \frac{\partial}{\partial x_{j}} \right) \psi_N \Big|_{x_{j+1}=x_{j}} = c\, \psi_N \Big|_{x_{j+1}=x_{j}} .    (III.4)
The latter are mixed boundary conditions, whose role is to keep track of the interactions at the internal boundaries of R. They are obtained by integration of Eq. (III.2) over an infinitesimal interval around a contact [START_REF] Gaudin | La fonction d'onde de Bethe[END_REF].
Boundary conditions, assumed to be periodic here due to the ring geometry, are taken into account through
\psi_N(0, x_2, \ldots, x_N) = \psi_N(L, x_2, \ldots, x_N) = \psi_N(x_2, \ldots, x_N, L) ,    (III.5)
where the exchange of coordinates is performed to stay in the simplex R. There is also a continuity condition on the derivatives:
\frac{\partial}{\partial x} \psi_N(x, x_2, \ldots, x_N)\Big|_{x=0} = \frac{\partial}{\partial x} \psi_N(x_2, \ldots, x_N, x)\Big|_{x=L} .    (III.6)
Equations (III.4), (III.5) and (III.6) represent the full set of boundary conditions associated with the differential equation (III.3), so that the problem is now well defined, and simpler than the original Schrödinger equation (III.2).
To solve it, the starting point (Ansatz) consists in guessing the structure of the wavefunction inside the fundamental simplex:
\psi_N|_R = \sum_{P \in S_N} a(P)\, e^{\,i \sum_{j=1}^{N} k_{P(j)} x_{j}} ,    (III.7)
where the P are elements of the symmetric group S_N, i.e. permutations of N elements, {k_i}_{i=1,...,N} are the pseudo-momenta carried by the individual bosons (called so because they are not observable and should not be confused with the physical momentum), and a(P) are scalar coefficients that take interactions into account. In other words, one postulates that the wavefunction can be written as a weighted sum of plane waves (in analogy with the noninteracting problem), and {a(P)} and {k_i} are then determined so as to satisfy Eqs. (III.3), (III.4), (III.5) and (III.6).
The two-body scattering matrix S is defined through
a(P') = S\, a(P) ,    (III.8)

where P' is the permutation obtained by exchanging P(j) and P(j+1), i.e.

P' = \{P(1), \ldots, P(j-1), P(j+1), P(j), P(j+2), \ldots, P(N)\} .    (III.9)
The cusp condition Eq. (III.4) is satisfied provided
a(P') = \frac{k_{P(j)} - k_{P(j+1)} + ic}{k_{P(j)} - k_{P(j+1)} - ic}\, a(P) .    (III.10)
Thus, this peculiar scattering process leads to an antisymmetric phase shift, and the scattering matrix, that has unit modulus according to Eq. (III.10), can be written as a pure phase. It reads
S = e^{-i\theta[k_{P(j)} - k_{P(j+1)}; c]} ,    (III.11)
where
\theta(k; c) = 2 \arctan\!\left(\frac{k}{c}\right)    (III.12)
is the function associated with the phase shift due to a contact interaction. In the limit c → +∞, the scattering phase is the same as that of noninteracting fermions, which is a signature of the Bose-Fermi mapping, and of a Tonks-Girardeau regime. Furthermore, as a consequence of Eq. (III.11), the Yang-Baxter equation (II.9) is satisfied. Thus, the Lieb-Liniger model is integrable.
Actually, the pseudo-momenta {k i } i=1,...,N satisfy the following set of equations [START_REF] Cazalilla | Bosonizing one-dimensional cold atomic gases[END_REF]:
e^{i k_i L} = \prod_{\{j \neq i\}} \frac{k_i - k_j + ic}{k_i - k_j - ic} = -\prod_{j=1}^{N} \frac{k_i - k_j + ic}{k_i - k_j - ic} ,    (III.13)
where the global minus sign in the right-hand side is a signature of the periodic boundary condition, and would become a plus sign for an anti-periodic one. Using the property \arctan(x) = \frac{i}{2}\ln\frac{i+x}{i-x} and a few algebraic transformations, Eq. (III.13) is then recast in logarithmic form in terms of the phase-shift function θ as
\frac{2\pi}{L} I_i = k_i + \frac{1}{L} \sum_{j=1}^{N} \theta(k_i - k_j; c) .    (III.14)
The N coupled equations (III.14), where the unknowns are the pseudo-momenta, are the Bethe Ansatz equations. The Bethe numbers {I_i}_{i=1,...,N} are integers if the number of bosons is odd and half-odd integers if N is even. They play the role of quantum numbers, and characterize the state uniquely.
The Bethe Ansatz equations (III.14) are physically interpreted as follows [254]: a particle i moving along the circle of circumference L to which the gas is confined acquires, during one turn, a phase determined by its momentum k_i, as well as a scattering phase from interactions with the N-1 other bosons on the ring. Since scattering is diffractionless, as a consequence of the Yang-Baxter equation, the whole scattering phase is a sum of two-body phase shifts. Rephrased once more, in order to satisfy the periodicity condition, the phase associated with the momentum plus the total scattering phase must add up to 2π times a (half-odd) integer.
In the limit c → +∞, Eq. (III.14) simplifies dramatically and it becomes obvious that if two quantum numbers are equal, say I i = I j , then their corresponding quasi-momenta coincide as well, i.e. k i = k j . Since in such case the Bethe wavefunction vanishes, the Bethe numbers must be distinct to avoid it. As a consequence, the ground state, that minimizes energy and momentum, corresponds to a symmetric distribution of quantum numbers without holes, i.e. a Fermi sea distribution, and then
I_i = -\frac{N+1}{2} + i ,    (III.15)
as already obtained in the previous chapter for the Tonks-Girardeau gas.
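As a simple illustration (an example added here, not in the original text), Eq. (III.15) gives {I_i} = {-2, -1, 0, 1, 2} for N = 5 and {I_i} = {-3/2, -1/2, 1/2, 3/2} for N = 4: a symmetric, gapless distribution of (half-odd) integers around zero.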
If the coupling c becomes finite, a scattering phase is slowly turned on so that, for fixed I i , the solution {k i } i=1,...,N to the Bethe Ansatz equations (III.14) moves away from the regular distribution. However, since level crossings are forbidden (there is no symmetry to protect a degeneracy and accidental ones can not happen in an integrable model), the state defined by (III.15) remains the lowest-energy one at arbitrary interaction strength. Changing c modifies the quasi-momenta, but has no effect on the quantum numbers, that are quantized. Each choice of quantum numbers yields an eigenstate, provided that all Bethe numbers are different. This rule confers a fermionic nature to the Bethe Ansatz solution in quasi-momentum space whenever c > 0, although the system is purely bosonic in real space.
The momentum and energy of the Lieb-Liniger model in the ground state are obtained by summing over pseudo-momenta, or equivalently over Bethe numbers:
P_0 = \sum_{i=1}^{N} k_i = \frac{2\pi}{L} \sum_{i=1}^{N} I_i = 0 .    (III.16)
The second equality follows from the Bethe Ansatz equations (III.14) and the property \theta(-k) = -\theta(k),
showing that momentum is quantized, and independent of the interaction strength. The last equality is a direct consequence of Eq. (III.15). Analogously, according to Eq. (III.3) the ground-state energy is given by
E_0 = \sum_{i=1}^{N} k_i^{2} ,    (III.17)
and the eigenvalue problem is solved, after extension of the wavefunction to the full domain x ∈ [0, L]^N, obtained by symmetrization of \psi_N|_R:

\psi_N(x) = \frac{\prod_{j>i} (k_j - k_i)}{\sqrt{N! \prod_{j>i} [(k_j - k_i)^{2} + c^{2}]}} \sum_{P \in S_N} \prod_{j>i} \left[ 1 - \frac{ic\, \mathrm{sign}(x_j - x_i)}{k_{P(j)} - k_{P(i)}} \right] \prod_{j=1}^{N} e^{i k_{P(j)} x_j} .    (III.18)
Note that the derivation, as presented here, does not prove that the Bethe Ansatz form of the wavefunction, Eq. (III.7), minimizes the energy. This important point has been checked in [START_REF] Dorlas | Orthogonality and Completeness of the Bethe Ansatz Eigenstates of the nonlinear Schrödinger model[END_REF], where the construction of Lieb and Liniger has been rigorously justified.
Equations (III.14) and (III.17) yield the exact ground-state properties of the finite-N problem. The equations are transcendental (i.e., not equivalent to finding the roots of polynomials with integer coefficients), but can be solved numerically at arbitrary interaction strength up to the order of a hundred bosons [START_REF] Cazalilla | Bosonizing one-dimensional cold atomic gases[END_REF][START_REF] Sakmann | Exact ground state of finite Bose-Einstein condensates on a ring[END_REF]. Actually, the thermodynamic limit is directly amenable to Bethe Ansatz. The construction, also due to Lieb and Liniger, is the object of the next section.
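To make the finite-N procedure concrete, here is a minimal numerical sketch (not part of the original text; the function names and the use of scipy.optimize.fsolve are illustrative choices) that solves the logarithmic Bethe Ansatz equations (III.14) for the ground-state Bethe numbers (III.15) and evaluates the energy (III.17), in units where \hbar^{2}/(2m) = 1:

import numpy as np
from scipy.optimize import fsolve

def bethe_equations(k, N, L, c):
    # Residuals of Eq. (III.14): k_i + (1/L) sum_j theta(k_i - k_j; c) - 2*pi*I_i/L = 0
    I = np.arange(N) - (N - 1) / 2                         # ground-state Bethe numbers, Eq. (III.15)
    theta = 2 * np.arctan((k[:, None] - k[None, :]) / c)   # phase shifts, Eq. (III.12)
    return k + theta.sum(axis=1) / L - 2 * np.pi * I / L

def ground_state(N=10, n0=1.0, c=1.0):
    L = N / n0
    k0 = 2 * np.pi * (np.arange(N) - (N - 1) / 2) / L      # free-fermion initial guess
    k = fsolve(lambda x: bethe_equations(x, N, L, c), k0)
    return k, np.sum(k**2)                                  # pseudo-momenta and E_0, Eq. (III.17)

k, E0 = ground_state(N=10, n0=1.0, c=1.0)
print(E0 / 10)   # ground-state energy per particle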
Before proceeding, I shall summarize a few arguments and interpret them in the more general context of integrable systems. In a one-dimensional setting, when two particles scatter, conservation of energy and momentum constrain the outgoing momenta to be equal to the incoming ones. Thus, the effect of interaction is reduced to adding a phase shift to the wavefunction.
The first step of the resolution consists in identifying the two-particle phase-shift, given here by Eq. (III.12). Having determined the two-particle scattering phase, one checks that the Yang-Baxter equation holds, by verifying that an ansatz wavefunction constructed as a superposition of plane-wave modes with unknown quasi-momenta as in Eq. (III.7), is an eigenstate of the Hamiltonian. The Yang-Baxter equation constrains the coefficients of the superposition, so that the eigenstate depends uniquely on the quasi-momenta.
One also needs to specify boundary conditions. For a system of N particles, the choice of periodic boundary conditions generates a series of consistency conditions for the quasi-momenta of the eigenstate, known as the Bethe Ansatz equations (III.14). This set of N algebraic equations depends on as many quantum numbers, that specify uniquely the quantum state of the system. For each choice of these quantum numbers, one solves the set of Bethe Ansatz equations (being algebraic, they constitute a much lighter task than solving the original partial derivative Schrödinger equation) to obtain the quasi-momenta, and thus the eigenstate wavefunction. These states have a fermionic nature, in that all quantum numbers have to be distinct. This is a general feature of the Bethe Ansatz solution.
Further simplifications are obtained by considering the thermodynamic limit. Then, one is interested in the density of quasi-momenta, and the set of algebraic equations (III.14) can be recast into the form of an integral equation for this distribution. The problem then looks deceptively simpler, as I will show in the next paragraph.
III.2.2 Ground-state energy in the thermodynamic limit
In second-quantized form, more appropriate to deal with the thermodynamic limit, the Lieb-Liniger Hamiltonian Eq. (III.1) becomes:
H_{LL}[\hat\psi] = \frac{\hbar^{2}}{2m} \int_{0}^{L} dx\, \frac{\partial \hat\psi^{\dagger}}{\partial x} \frac{\partial \hat\psi}{\partial x} + \frac{g_{1D}}{2} \int_{0}^{L} dx\, \hat\psi^{\dagger} \hat\psi^{\dagger} \hat\psi \hat\psi ,    (III.19)
where ψ is a bosonic field operator that satisfies the canonical equal-time commutation relations with its Hermitian conjugate:
[\hat\psi(x), \hat\psi^{\dagger}(x')] = \delta(x - x'), \qquad [\hat\psi(x), \hat\psi(x')] = [\hat\psi^{\dagger}(x), \hat\psi^{\dagger}(x')] = 0 .    (III.20)
The ground-state properties of the Lieb-Liniger model depend on a unique dimensionless parameter, measuring the interaction strength. It is usual, following Lieb and Liniger, to define this coupling as
\gamma = \frac{m g_{1D}}{\hbar^{2} n_{0}} ,    (III.21)
where n_0 represents the linear density of the homogeneous gas. The density appears in the denominator, which is rather counter-intuitive compared to bosons in higher dimensions. In particular, this means that diluting the gas increases the coupling: at fixed g_{1D}, halving n_0 doubles γ. This is a key aspect in approaching the Tonks-Girardeau regime, which corresponds to the limit γ → +∞.
In the regime of weak interactions, the bosons do not undergo Bose-Einstein condensation, since long-range order is prevented by fluctuations. Nonetheless, one can expect that a large proportion of them is in the zero-momentum state, forming a quasi-condensate. Under this assumption, the problem can be treated semi-classically. The operator \hat\psi is replaced by a complex scalar field ψ(x), and the Euler-Lagrange equation stemming from Eq. (III.19) is the 1D Gross-Pitaevskii equation [START_REF] Gross | Structure of a quantized vortex in boson systems[END_REF][START_REF] Pitaevskii | Vortex lines in an imperfect Bose gas[END_REF],
i \frac{\partial \psi}{\partial t} = -\frac{\partial^{2} \psi}{\partial x^{2}} + 2c\, \psi^{*} \psi \psi .    (III.22)
However, since this method is of mean-field type, one can expect its validity range to be quite limited in low dimension. The exact solution by Bethe Ansatz provides a rare opportunity to study this validity range quantitatively.
To obtain the exact solution, let us consider the Bethe Ansatz equations (III.14), and take the thermodynamic limit, i.e. N → +∞, L → +∞, while keeping n 0 = N/L fixed and finite. After ordering the Bethe numbers {I i } (or equivalently the pseudo-momenta {k i }, that are real if γ > 0 [START_REF] Lieb | Exact Analysis of an Interacting Bose Gas. I. The General Solution and the Ground State[END_REF]), one can rewrite the Bethe Ansatz equations as
k_i - \frac{1}{L} \sum_{j=1}^{N} \theta(k_i - k_j) = y(k_i) ,    (III.23)
where y is a 'counting function', constrained by two properties: it is strictly increasing and satisfies the Bethe Ansatz equations at any of the quasi-momenta, i.e., according to Eq. (III.14), such that
y(k_i) = \frac{2\pi}{L} I_i .    (III.24)
The aim is then to go from the discrete to the continuum, defining a density of pseudomomenta such that
\rho(k_i) = \lim_{N, L \to +\infty,\, N/L = n_0} \frac{1}{L (k_{i+1} - k_i)} .    (III.25)
It is strictly positive as expected, thanks to the ordering convention. In the thermodynamic limit, the sum in Eq. (III.23) becomes an integral over k,
\sum_{j=1}^{N} \to L \int dk\, \rho(k) ,    (III.26)
and the derivative of y with respect to k,
y'(k_i) = \lim_{N, L \to +\infty,\, N/L = n_0} \frac{y(k_{i+1}) - y(k_i)}{k_{i+1} - k_i} = 2\pi \rho(k_i) ,    (III.27)

so that

\frac{1}{2\pi}\, y(k) = \int^{k} dk'\, \rho(k') .    (III.28)
With these definitions, the set of Bethe Ansatz equations (III.14) becomes a single integral equation, relating the counting function to the distribution of quasi-momenta:
y(k) = k - \int_{k_{\min}}^{k_{\max}} dk'\, \theta(k - k')\, \rho(k') ,    (III.29)
where k min and k max represent the lowest and highest quasi-momenta allowed by the Fermi sea structure. They are finite in the ground state, and the limits of integration are symmetric as a consequence of Eq. (III.15): k min = -k max . Differentiating Eq. (III.29) with respect to k yields, by combination with Eq. (III.28),
\rho(k) = \frac{1}{2\pi} - \frac{1}{2\pi} \int_{-k_{\max}}^{k_{\max}} dk'\, K(k - k')\, \rho(k') ,    (III.30)

where K(k) = \theta'(k) = -\frac{2c}{c^{2} + k^{2}}.
In view of a mathematical treatment of Eq. (III.30), it is convenient to perform the following rescalings:
k = k_{\max}\, z, \qquad c = k_{\max}\, \alpha, \qquad \rho(k) = g(z; \alpha) ,    (III.31)
where z is the pseudo-momentum in reduced units such that its maximal value is 1, α is a non-negative parameter, and g(z; α) denotes the distribution of quasi-momenta expressed in these reduced units. Finally, in the thermodynamic limit, the set of Bethe Ansatz equations (III.14) boils down to a set of three equations only, namely
g(z; \alpha) - \frac{1}{2\pi} \int_{-1}^{1} dy\, \frac{2\alpha\, g(y; \alpha)}{\alpha^{2} + (y - z)^{2}} = \frac{1}{2\pi} ,    (III.32)
where α is in one-to-one correspondence with the Lieb parameter γ introduced in Eq. (III.21) via a second equation,
\gamma \int_{-1}^{1} dy\, g(y; \alpha) = \alpha .    (III.33)
The third equation yields the dimensionless average ground-state energy per particle e, linked to the total ground-state energy E 0 expressed in the original units, and to the reduced density of pseudo-momenta g by
e(\gamma) = \frac{2m}{\hbar^{2}} \frac{E_0(\gamma)}{N n_0^{2}} = \frac{\int_{-1}^{1} dy\, g[y; \alpha(\gamma)]\, y^{2}}{\left\{ \int_{-1}^{1} dy\, g[y; \alpha(\gamma)] \right\}^{3}} .    (III.34)

Interestingly, Eq. (III.32) is decoupled from Eqs. (III.33) and (III.34), which is specific to the ground state [START_REF] Yang | Thermodynamics of a One Dimensional System of Bosons with Repulsive Delta Function Interaction[END_REF]. It is a homogeneous Fredholm integral equation of the second kind with a Lorentzian kernel, whose closed-form, exact solution is unknown but which is amenable to various approximation methods.
Before solving these equations, it is convenient to recall a few general properties of the density of pseudo-momenta.
(i) The function g is unique [START_REF] Lieb | Exact Analysis of an Interacting Bose Gas. I. The General Solution and the Ground State[END_REF].
(ii) It is an even function of z, in agreement with the particle-hole symmetry noticed above. To see this, it is convenient to rewrite Eq. (III.32) as
g(z; \alpha) = \frac{1}{2\pi} \left\{ 1 + 2\alpha \left[ \int_{0}^{1} dy\, \frac{g(y; \alpha)}{\alpha^{2} + (y - z)^{2}} + \int_{0}^{1} dy\, \frac{g(-y; \alpha)}{\alpha^{2} + (y + z)^{2}} \right] \right\} .    (III.35)
Then, introducing
g_s(z; \alpha) = \frac{g(z; \alpha) + g(-z; \alpha)}{2} ,    (III.36)
it is easy to check that g_s(z; α) and g(z; α) are both solutions to the Lieb equation. However, according to (i) the solution is unique, imposing g(-z; α) = g(z; α).
(iii) The function g is infinitely differentiable (analytic) in z if α > 0 [START_REF] Lieb | Exact Analysis of an Interacting Bose Gas. I. The General Solution and the Ground State[END_REF]. This implies in particular that it has an extremum (which turns out to be a minimum) at z = 1. The non-analyticity in the interaction strength at α = 0 is a signature of the absence of adiabatic continuation in 1D between ideal bosons and interacting ones.
(iv) ∀z ∈ [-1, 1], g(z; α) > 0, as a consequence of Eq. (III.25), as expected for a density. Moreover,
∀α > 0, ∀z ∈ [-1, 1], g(z; α) > g_{TG}(z) = \frac{1}{2\pi}. This property directly follows from the discussion below Eq. (B.7) and the mapping between Love's equation (B.2) and the Lieb equation (III.32).
(v) ∀z ∈ [-1, 1], g is bounded from above if α > 0 [START_REF] Lieb | Exact Analysis of an Interacting Bose Gas. I. The General Solution and the Ground State[END_REF], as expected from the Mermin-Wagner-Hohenberg theorem, that prevents true condensation.
In order to determine the ground-state energy in the thermodynamic limit, the most crucial step is to solve the Lieb equation (III.32). This was done numerically by Lieb and Liniger for a few values of α, spanning several decades. The procedure relies on the following steps: an arbitrary (positive) value is fixed for α, and Eq. (III.32) is solved, i.e. the reduced density of pseudo-momenta g(z; α) is evaluated with the required accuracy as a function of z in the interval [-1, 1]. Then, Eq. (III.33) yields γ(α), subsequently inverted to obtain α(γ). In doing so, one notices that γ(α) is an increasing function, so interaction regimes are defined the same way for both variables. The ground-state energy is then obtained from Eq. (III.34), as well as many interesting observables, which are combinations of its derivatives. They all depend on the sole Lieb parameter γ, which is the key to the conceptual simplicity of the Lieb-Liniger model.
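As an illustration of this procedure, here is a minimal numerical sketch (not part of the original text; the Gauss-Legendre discretization, the grid size and the function name are choices of this example) that solves Eq. (III.32) by a Nyström method and then evaluates Eqs. (III.33) and (III.34):

import numpy as np

def lieb_equation(alpha, n=200):
    # Discretize the Lieb equation (III.32) on Gauss-Legendre nodes in [-1, 1]
    z, w = np.polynomial.legendre.leggauss(n)
    kernel = (alpha / np.pi) / (alpha**2 + (z[:, None] - z[None, :])**2)
    g = np.linalg.solve(np.eye(n) - kernel * w[None, :], np.full(n, 1 / (2 * np.pi)))
    gamma = alpha / np.sum(w * g)                      # Eq. (III.33)
    e = np.sum(w * g * z**2) / np.sum(w * g)**3        # Eq. (III.34)
    return gamma, e

for alpha in (0.1, 1.0, 10.0):
    gamma, e = lieb_equation(alpha)
    print(f"alpha={alpha:5.1f}  gamma={gamma:8.4f}  e={e:8.5f}")

In the Tonks-Girardeau limit (large alpha) the output approaches e = π²/3, as expected.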
III.2.3 Weak-coupling regime
Analytical breakthroughs towards the exact solution of the Bethe Ansatz equations have been quite scarce since Lieb and Liniger derived them. I see three possible explanations: first, the Bethe Ansatz equations (III.14) or (III.32) are easily amenable to numerical calculations in a wide range of interaction strengths. Furthermore, simple approximate expressions reach a global 10% accuracy [START_REF] Cazalilla | Bosonizing one-dimensional cold atomic gases[END_REF], comparable to error bars in the first generation of ultracold atom experiments. Finally, the set of Lieb equations is actually especially difficult to tackle analytically in a unified way. Indeed, one should keep in mind that, although it consists of only three equations, the latter are just a convenient and compact way to rewrite an infinite set of coupled ones.
In the weakly-interacting regime, finding accurate approximate solutions of Eq. (III.32) at very small values of the parameter α is quite an involved task, both numerically and analytically. This is a consequence of the singularity of the function g at α = 0, whose physical interpretation is that noninteracting bosons are not stable in 1D.
A guess function was proposed by Lieb and Liniger [START_REF] Lieb | Exact Analysis of an Interacting Bose Gas. I. The General Solution and the Ground State[END_REF], namely
g(z; \alpha) \underset{\alpha \ll 1}{\simeq} \frac{\sqrt{1 - z^{2}}}{2\pi\alpha} .    (III.37)
It is a semi-circle law, rigorously justified in [START_REF] Hutson | The circular plate condenser at small separations[END_REF], and suggests a link with random-matrix theory [START_REF] Mehta | Random Matrices[END_REF]. Lieb and Liniger have also shown, adding to the list of constraints on the density of pseudo-momenta g, that the semi-circular law Eq. (III.37) is a strict lower bound for the latter [START_REF] Lieb | Exact Analysis of an Interacting Bose Gas. I. The General Solution and the Ground State[END_REF].
Heuristic arguments have suggested the following correction, valid far from the edges in the variable z [START_REF] Hutson | The circular plate condenser at small separations[END_REF][262]:

g(z; \alpha) \underset{\alpha \ll 1,\ \alpha \ll |1 \pm z|}{\simeq} \frac{\sqrt{1 - z^{2}}}{2\pi\alpha} + \frac{1}{4\pi^{2} \sqrt{1 - z^{2}}} \left[ z \ln\!\left(\frac{1 - z}{1 + z}\right) + \ln\!\left(\frac{16\pi}{\alpha}\right) + 1 \right] ,    (III.38)
rigorously derived much later in [START_REF] Wadati | Solutions of the Lieb-Liniger integral equation[END_REF]. To my knowledge, no further correction is explicitly known to date in this regime. I will not delve into the inversion step γ(α) ↔ α(γ) in the main text, but refer to Appendix B.1, where a link with a classical problem is discussed, namely the calculation of the exact capacitance of a circular plate capacitor.
As far as the ground-state energy is concerned, in their seminal article Lieb and Liniger showed that e(γ) ≤ γ, and obtained the correction
e(\gamma) \underset{\gamma \ll 1}{\simeq} \gamma - \frac{4}{3\pi} \gamma^{3/2}    (III.39)
from a Bogoliubov expansion [START_REF] Bogoliubov | On the theory of superfluidity[END_REF]. This approximation predicts negative energies at high coupling, and must then be discarded, but it works surprisingly well at very small interaction strengths (γ ≪ 1), given the fact that there is no Bose-Einstein condensation. Actually, the Bogoliubov expansion Eq. (III.39) coincides with the approximate result obtained by inserting Eq. (III.38) in the Lieb equation, as confirmed later in [START_REF] Gaudin | Boundary energy of a Bose gas in one dimension[END_REF], and detailed in [START_REF] Wadati | Solutions of the Lieb-Liniger integral equation[END_REF][266].
It was then inferred on numerical grounds that the next order is such that [START_REF] Takahashi | On the Validity of Collective Variable Description of Bose Systems[END_REF]

e(\gamma) = \gamma - \frac{4}{3\pi} \gamma^{3/2} + \left[ \frac{1}{6} - \frac{1}{\pi^{2}} \right] \gamma^{2} + o(\gamma^{2}) ,    (III.40)
a result that agrees with later indirect numerical calculations (where by 'indirect' I mean that the technique involved does not rely on the Lieb equation) performed in [START_REF] Lee | Ground-State Energy of a Many-Particle Boson System[END_REF], and in [START_REF] Lee | Ground-State Energy of a one-dimensional many-boson system[END_REF] where the value 0.06535 is found for the coefficient of γ². Equation (III.40) was derived quasi-rigorously much later in [START_REF] Tracy | On the ground state energy of the δ-function Bose gas[END_REF], also by indirect means. Actually, no fully analytical calculation based on Bethe Ansatz has confirmed this term yet, as the quite technical derivation in Ref. [START_REF] Kaminaka | Higher order solutions of Lieb-Liniger integral equation[END_REF] apparently contains an unidentified mistake.
The next step is

e(\gamma) = \gamma - \frac{4}{3\pi} \gamma^{3/2} + \left[ \frac{1}{6} - \frac{1}{\pi^{2}} \right] \gamma^{2} + a_3 \gamma^{5/2} + O(\gamma^{3}) ,    (III.41)
where the exact fourth term, derived in closed form as multiple integrals by indirect means, was numerically evaluated to a_3 ≃ -0.001597 in [START_REF] Lee | Ground-State Energy of a one-dimensional many-boson system[END_REF]. A similar value was then recovered by fitting accurate numerical data [START_REF] Emig | Probability distributions of line lattices in random media from the 1D Bose gas[END_REF]: a_3 ≃ -0.001588, and another had been obtained previously in [START_REF] Takahashi | On the Validity of Collective Variable Description of Bose Systems[END_REF]: a_3 ≃ -0.0018.
The general structure of the weak-coupling series is very likely to be [START_REF] Tracy | On the ground state energy of the δ-function Bose gas[END_REF]
e(\gamma) = \sum_{k=0}^{+\infty} a_k\, \gamma^{1 + k/2} ,    (III.42)
but until quite recently it was doubtful that the exact value of the coefficient a_3 would be identified in the near future. Ground-breaking numerical results have been obtained in [START_REF] Prolhac | Ground state energy of the δ-Bose and Fermi gas at weak coupling from double extrapolation[END_REF], where the next few unknown coefficients a_{k≥3} have been evaluated with high accuracy, such as

a_3 ≃ -0.00158769986550594498929,
a_4 ≃ -0.00016846018782773903545,
a_5 ≃ -0.0000208649733584017408,    (III.43)

up to a_{10} included, by an appropriate sampling in the numerical integration of the Lieb equation, and a method that accelerates the convergence. In particular, a_4 is in relatively good agreement with the approximate value a_4 ≃ -0.000171 previously obtained in [START_REF] Emig | Probability distributions of line lattices in random media from the 1D Bose gas[END_REF].
The remarkable accuracy of Eq. (III.43) both enabled and was further improved by an advance in experimental analytical number theory over the three subsequent arXiv versions of Ref. [START_REF] Prolhac | Ground state energy of the δ-Bose and Fermi gas at weak coupling from double extrapolation[END_REF].
I guessed the value
a_3 = \left[ \frac{3}{8} \zeta(3) - \frac{1}{2} \right] \frac{1}{\pi^{3}} ,    (III.44)
based on the following heuristic grounds: in [START_REF] Lee | Ground-State Energy of a one-dimensional many-boson system[END_REF], an overall factor 1/π³ is found, and a_3 is written as a sum of two integrals, possibly corresponding to a sum of two types of terms. Also, in the first arXiv version of [START_REF] Prolhac | Ground state energy of the δ-Bose and Fermi gas at weak coupling from double extrapolation[END_REF], Prolhac wrote that he could not identify a_3 as a low-order polynomial in 1/π with rational coefficients. Combining the two previous items, and in view of the relative simplicity of a_0, a_1 and a_2, one can legitimately infer that a_3 combines an overall factor 1/π³ with a rational linear combination of 1 and ζ(3), which leads to Eq. (III.44). Such guesses can be checked, and further coefficients sought, using a code that identifies the rational coefficients of a linear combination of particular values of the zeta function when given a target value [START_REF] Prolhac | Ground state energy of the δ-Bose and Fermi gas at weak coupling from double extrapolation[END_REF]. In principle, numerical values of the next coefficients could be obtained by iterating the procedure further. However, guessing other exact numbers without further insight seems difficult, as the relative accuracy of their numerical values decreases at each step, while the required precision is expected to increase, in view of the apparently increasing number of terms involved in the linear combination. Although the generating function of the exact coefficients of the weakly-interacting expansion still remains quite obscure, it seems reasonable to guess that a_k naturally contains a factor 1/π^k at all orders k, so that
e(\gamma) = \sum_{k=0}^{+\infty} \frac{\tilde a_k}{\pi^{k}}\, \gamma^{1 + k/2} .    (III.47)
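As an illustration of the identification step described above, the following sketch (not from the original text; the use of mpmath's pslq routine and the chosen tolerance are assumptions of this example) searches for an integer relation between the numerical value of a_3 quoted in Eq. (III.43), π³ and ζ(3):

from mpmath import mp, mpf, pi, zeta, pslq

mp.dps = 25
a3 = mpf('-0.00158769986550594498929')   # numerical value from Eq. (III.43)

# look for small integers (n0, n1, n2) such that n0*(a3*pi^3) + n1*1 + n2*zeta(3) = 0
relation = pslq([a3 * pi**3, mpf(1), zeta(3)], tol=mpf(10)**-18)
print(relation)   # expected: [8, 4, -3], i.e. a3 = (3*zeta(3)/8 - 1/2)/pi^3, Eq. (III.44)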
III.2.4 Strong- to intermediate-coupling regime
While the weak-coupling regime is tremendously difficult to tackle in the close vicinity of the singularity, at strong coupling the problem, though far from trivial, is much easier to deal with in comparison. In the Tonks-Girardeau regime (γ → +∞), the reduced dimensionless energy is
e_{TG} = \frac{\pi^{2}}{3} ,    (III.48)
and coincides with the well-known result for spinless noninteracting fermions, Eq. (II.13), due to the Bose-Fermi mapping. It corresponds to a uniform distribution of pseudo-momenta,

g_{TG}(z) = \frac{1}{2\pi}\, \Theta(1 - |z|) .    (III.49)
For z ∈ [-1, 1], i.e. inside the pseudo-Fermi sea, finite-interaction corrections to Eq. (III.49) can be expressed as
g(z; \alpha) \underset{\alpha \gg 1}{\simeq} \sum_{k=0}^{k_m} \frac{P_k(z)}{\alpha^{k}} ,    (III.50)
where {P k } k=0,...,km are polynomials and k m is a cut-off. Then, this truncated expansion can be used to approximate α(γ), inverted in γ(α), and yields the corresponding expansion of the ground-state energy:
e(\gamma) \underset{\gamma \gg 1}{\simeq} \sum_{k=0}^{k_m} \frac{e_k}{\gamma^{k}} ,    (III.51)
where {e k } k=0,...,km are real coefficients. Surprisingly, few non-trivial corrections to the Tonks-Girardeau limit are available in the literature, in view of the relative simplicity of the first few steps. In [START_REF] Zvonarev | Correlations in 1D boson and fermion systems: exact results[END_REF], this procedure has been pushed to sixth order.
A systematic method was proposed by Ristivojevic in [START_REF] Ristivojevic | Excitation Spectrum of the Lieb-Liniger Model[END_REF], where it was used to generate corrections to the Tonks-Girardeau regime up to order 8. In [START_REF] Lang | Ground-state energy and excitation spectrum of the Lieb-Liniger model : accurate analytical results and conjectures about the exact solution[END_REF], I have studied this method in detail, and used it to obtain analytical expressions up to order 20 in 1/γ. In a few words, the method, detailed in Appendix B.2, yields an approximation to the density of pseudo-momenta of the form
g(z; \alpha, M) = \sum_{k=0}^{2M+2} \sum_{j=0}^{M} \frac{g_{jk}\, z^{2j}}{\alpha^{k}} ,    (III.52)
where the matrix coefficients g jk are, by construction, polynomials in 1/π with rational coefficients, and M is an integer cut-off such that the truncated density of pseudo-momenta g(z; α, M ) converges to g(z; α) as M → +∞.
The original article [START_REF] Ristivojevic | Excitation Spectrum of the Lieb-Liniger Model[END_REF] and ours [START_REF] Lang | Ground-state energy and excitation spectrum of the Lieb-Liniger model : accurate analytical results and conjectures about the exact solution[END_REF], together, give a faithful account of the strengths and weaknesses of this method. A major positive trait is that it yields two orders of perturbation theory in 1/α at each step, and is automatically consistent to all orders. The lowest interaction strength attainable within this expansion is α = 2, since the procedure relies on a peculiar expansion of the Lorentzian kernel in Eq. (III.32). This corresponds to a Lieb parameter γ(α = 2) ≃ 4.527, which is an intermediate interaction strength. A priori, this value is small enough to connect with the available expansions in the weakly-interacting regime, and thus obtain accurate estimates of the ground-state energy over the whole range of repulsive interactions. However, it could as well be seen as a strong limitation of the method, all the more so as, convergence with M to the exact solution being slow, one cannot reasonably expect to obtain reliable results below γ ≃ 5 even with a huge number of corrections. This drawback stems from the fact that capturing the correct behavior of the density of pseudo-momenta g as a function of z in the whole interval [-1, 1] is crucial to obtain accurate expressions of the ground-state energy, whereas the approximation Eq. (III.52) converges slowly to the exact value close to z = ±1, since the Taylor expansion is performed at the origin. This is reflected in the fact that the maximum exponent of z² varies more slowly with M than that of 1/α. What is more, if one is interested in explicit analytical results, the method quickly yields unwieldy expressions for the function g at increasing M, as it generates 1 + (M + 1)(M + 2)(M + 3)/3 terms. Finally, it is difficult to evaluate the accuracy of a given approximation in a rigorous and systematic way.
Consistency at all orders is obviously the main quality of this method; the other points are rather drawbacks. After bringing them to light, I have developed various methods to circumvent them during my thesis, but could not fix them all simultaneously. The main improvements I have proposed are the following: a) the huge number of corrections needed to reach α ≃ 2 with good accuracy close to the Fermi surface z = ±1 seems prohibitive at first, but I have noticed that the arithmetic average of two consecutive orders in M, denoted by
g_m(z; \alpha, M) = \frac{g(z; \alpha, M) + g(z; \alpha, M-1)}{2} ,    (III.53)
dramatically increases the precision. Figure III.1 illustrates the excellent agreement, at M = 9, with numerical calculations for α ∈ [2, +∞[. Another approach consists in truncating expansions to their highest odd order in 1/α, more accurate than the even one, as already pointed out in [START_REF] Rao | Capacity of the circular plate condenser: analytical solutions for large gaps between the plates[END_REF].
b) I have also found a way to avoid expanding the Lorentzian kernel in Eq. (III.32), and to adapt Ristivojevic's method to the whole range of repulsive interaction strengths ]0, +∞[, as detailed in Appendix B.3. However, self-consistency at all orders, which was a quality of the method, is lost. Furthermore, the analytical expressions obtained are quite complicated, calling for a numerical evaluation at the very last step of the procedure. I have pushed the method to order 50 in z, yielding the ground-state energy with machine precision over a very wide range of strong and intermediate interaction strengths, and an interesting comparison point for other analytical and numerical approaches. c) Last but not least, I have looked for compact analytical expressions, by identifying structures in the bare result of Ristivojevic's method. As far as the density of pseudo-momenta g is concerned, I refer to [START_REF] Lang | Ground-state energy and excitation spectrum of the Lieb-Liniger model : accurate analytical results and conjectures about the exact solution[END_REF] for a detailed account of these compact notations. My approach opens a new line of research, but I have not investigated it deeply enough to obtain fully satisfying expressions. Comparatively, the same method proved quite powerful when applied to the ground-state energy.
Once again, the idea is based on experimental number theory. This time, the aim is not to guess numerical values of unknown coefficients as in the weakly-interacting regime, but rather to bring regular patterns to light by scrutinizing the first few terms, and to guess subsequent ones without actually computing them. To do so, I have considered all the operations that yield the strong-coupling expansion of e(γ), Eq. (III.51), as a black box, and focused on the result. A bit of reflection hints at writing
\frac{e(\gamma)}{e_{TG}} = \sum_{n=0}^{+\infty} \tilde e_n(\gamma) ,    (III.54)
where the index n denotes a somewhat elusive notion of complexity, corresponding to the level of difficulty in identifying the pattern that defines \tilde e_n.
Focusing on the strong-coupling expansion Eq. (III.51), I have identified a first sequence of terms, conjectured that they appear at all higher orders as well, and resummed the series. I obtained
\tilde e_0(\gamma) = \sum_{k=0}^{+\infty} \binom{k+1}{1} \left( -\frac{2}{\gamma} \right)^{k} = \frac{\gamma^{2}}{(2 + \gamma)^{2}} ,    (III.55)

and noticed that the final expression for \tilde e_0 corresponds to Lieb and Liniger's approximate solution, which assumes a uniform density of pseudo-momenta in Eq. (III.32) [START_REF] Lieb | Exact Analysis of an Interacting Bose Gas. I. The General Solution and the Ground State[END_REF]. I have also found that the intermediate step in Eq. (III.55) appears in an appendix of Ref. [START_REF] Astrakharchik | Low-dimensional weakly interacting Bose gases: Nonuniversal equations of state[END_REF], but written with k+1 instead of the equivalent binomial interpretation, which is my main step forward, as will be seen below.
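For completeness (this derivation is not spelled out in the text), the closed form in Eq. (III.55) follows from differentiating the geometric series:

\sum_{k=0}^{+\infty} (k+1)\, x^{k} = \frac{d}{dx} \sum_{k=0}^{+\infty} x^{k+1} = \frac{1}{(1-x)^{2}}, \qquad |x| < 1 ,

so that, with x = -2/γ (convergent for γ > 2),

\tilde e_0(\gamma) = \frac{1}{(1 + 2/\gamma)^{2}} = \frac{\gamma^{2}}{(2+\gamma)^{2}} .

The same negative-binomial resummation, with exponent 3n+2, yields the property (III.56) used just below.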
Using the strong-coupling expansion of e(γ) up to 20th order in 1/γ, and guided by the property

\sum_{k=0}^{+\infty} \binom{k+3n+1}{3n+1} \left( -\frac{2}{\gamma} \right)^{k} = \left( \frac{\gamma}{\gamma + 2} \right)^{3n+2} ,    (III.56)
I have conjectured that the structure of the terms of complexity n ≥ 1 is
\tilde e_n(\gamma) = \pi^{2n}\, \frac{\gamma^{2}\, L_n(\gamma)}{(2 + \gamma)^{3n+2}} ,    (III.57)
where L_n is a polynomial of degree n-1 with non-zero, rational coefficients of alternate signs. The complexity turns out to be naturally related to the index n in the right-hand side of this equation, and can be re-defined, a posteriori, from the latter. I have identified the first few polynomials, starting with

L_1(\gamma) = \frac{32}{15}, \quad \ldots    (III.58)

I have also conjectured that the coefficient of the highest-degree monomial of L_n, written as

L_n(X) = \sum_{k=0}^{n-1} l_k X^{k} ,    (III.59)

is

l_{n-1} = \frac{3 \times (-1)^{n+1} \times 2^{2n+3}}{(n + 2)(2n + 1)(2n + 3)} .    (III.60)
Consulting the literature once more at this stage, it appears that the first correction ẽ1 had been rigorously predicted in [266], supporting my conjecture on the structure of e(γ) in the strong-coupling regime, Eq. (III.57). Later on, Prolhac checked numerically that all coefficients in Eq. (III.58) are correct, and that Eq. (III.60) is still valid at larger values of n [279].
Innocent as it may look (after all, it is just another way of writing the strong-coupling expansion), the structure provided by Eq. (III.57) has huge advantages. Although the structure of Eq. (III.57) was not obvious at first, now that it has been found, identifying the polynomials L_n from Eq. (III.51) to all accessible orders becomes a trivial task. The expressions thereby obtained are more compact than the strong-coupling expansion, Eq. (III.51), and correspond to a partial resummation of the asymptotic series. Last but not least, contrary to the 1/γ expansion, the combination of Eqs. (III.54) and (III.57), truncated to the maximal order to which the polynomials are known in Eq. (III.58), does not diverge at low γ. This fact considerably widens the validity range of the expansion.
Nonetheless, a few aspects are not satisfying so far. Progressively higher-order expansions in 1/γ are needed to identify the polynomials in Eq. (III.58). The expansion Eq. (III.57) remains conjectural and, although a proof could be given using the techniques of [266], this direct approach looks tremendously complicated. Finally, I did not manage to identify the generating function of the polynomials in Eq. (III.58), except for the first coefficient, Eq. (III.60), which prevents one from inferring higher-order polynomials in Eq. (III.58) without relying on the 1/γ expansion, Eq. (III.51). Identifying this generating function may allow for a full resummation of the series, and thus for explicitly obtaining the exact ground-state energy of the Lieb-Liniger model.
III.2.5 Illustrations
Bridging the weak- and strong-coupling expansions, I have obtained the ground-state energy of the Lieb-Liniger model with good accuracy over the whole range of repulsive interactions, as illustrated in Fig. III.2.
It can be split into two parts, corresponding to the kinetic and the interaction energy respectively, as [START_REF] Lieb | Exact Analysis of an Interacting Bose Gas. I. The General Solution and the Ground State[END_REF]

e(\gamma) = e_{\mathrm{kin}}(\gamma) + e_{\mathrm{int}}(\gamma), \qquad e_{\mathrm{int}}(\gamma) = \gamma\, e'(\gamma) .    (III.61)

Figure III.2 - Dimensionless ground-state energy per particle e normalized to its value in the Tonks-Girardeau limit e_{TG} = π²/3 (dotted, blue), as a function of the dimensionless interaction strength γ: conjectural expansion at large γ (solid, red) as given by Eq. (III.57) to sixth order, small-γ expansion (dashed, black) and numerics (blue points).

Figure III.3 - Dimensionless ground-state kinetic energy per particle (red) and interaction energy per particle (black), normalized to the total energy per particle in the Tonks-Girardeau limit e_{TG}, as a function of the dimensionless interaction strength γ. The horizontal line (dotted, blue) is a guide to the eye. The results are indistinguishable from the numerical estimation of Ref. [START_REF] Xu | Universal scaling of density and momentum distributions in Lieb-Liniger gases[END_REF].

As Fig. III.3 suggests, the interaction energy is a non-monotonic function of γ. This fact can be qualitatively understood on physical grounds. When γ = 0, i.e. for a noninteracting Bose gas, the density of quasi-momenta is a Dirac-delta function, the bosons are individually at rest, and the kinetic energy of the gas is zero. The interaction energy is null too, by definition. Switching on interactions adiabatically, the interaction energy increases abruptly, but can be treated perturbatively. In the opposite, Tonks-Girardeau limit, the gas is equivalent in k-space to noninteracting fermions due to the Bose-Fermi mapping, so its interaction energy is also zero. Decreasing the interactions, the gas becomes equivalent to weakly-interacting fermions [START_REF] Cheon | Fermion-Boson Duality of One-dimensional Quantum Particles with Generalized Contact Interaction[END_REF], so the interaction energy increases, but at a slow pace due to the remnant artificial Pauli principle. By continuity, at intermediate γ the interaction energy must reach a maximum, which corresponds to a subtle interplay of statistics in k-space and interactions in real space. Since the interaction energy is at its apex, this must be the regime where perturbative approaches are least adapted, which explains the counter-intuitive fact that intermediate interactions are the least amenable to analytical methods.
An alternative interpretation is based entirely on the density of quasi-momenta in the original units, using Fig. III.4. This quantity interpolates between a top-hat function in the Tonks-Girardeau regime and a Dirac delta for noninteracting bosons. The density of quasi-momenta is relatively flat over a wide range of large interaction strengths, which explains why the approximation e(γ) ≃ \tilde e_0(γ) works so well from the Tonks-Girardeau regime down to intermediate interaction strengths.
To sum up, in this section I have explained how coordinate Bethe Ansatz gives access to the exact ground-state energy of the Lieb-Liniger model, both at finite boson number and in the thermodynamic limit, as a set of coupled equations. The latter is easily solved numerically for a given value of the interaction strength, but its analytical, exact solution is still unknown. In recent years, both the weak- and strong-coupling regimes have been addressed theoretically in a systematic way, allowing in principle to obtain exact expansions to arbitrary order, but the procedures remain quite complicated when high orders are required.
Our understanding of the exact solution is improving too. I have performed a tentative partial resummation of the strong-coupling series expansion, and the weak-coupling one seems to contain a rich and interesting structure involving the Riemann zeta function at odd arguments. These expressions are known with high enough accuracy to match numerically at intermediate coupling. The relative error is of the order of a few parts per thousand over the whole range of interaction strengths, and semi-analytical techniques allow one to reach machine precision if needed. Hence, while the Lieb-Liniger model was 'solved' from the very beginning in the sense that its exact ground-state energy was expressed in closed form as the solution of a set of equations, it is now solved in a stronger sense, both analytically and numerically. Strictly speaking, however, the problem of the ground-state energy is still open.
An even stronger definition of 'solving' a model includes the knowledge of correlation functions, an issue that I tackle from the next section on.
III.3 Local correlation functions
One of the main limitations of the coordinate Bethe Ansatz approach is that it provides only implicit knowledge of the wavefunction close to the thermodynamic limit, since the latter is a superposition of an exponentially large number of terms as a function of the particle number. Direct calculation of correlation functions based on the explicit many-body wavefunction remains a formidable task and, for many practical purposes, is not attainable.
The algebraic Bethe Ansatz approach, on the other hand, provides a more compact expression for the eigenstates. This formulation in turn makes it possible to express many correlation functions as Fredholm determinants. Although very elegant, these expressions still require some work to provide useful results. Actually, thanks to interesting mathematical properties, the explicit knowledge of the many-body wavefunction is not necessary to obtain the local correlation functions of the Lieb-Liniger model, and coordinate Bethe Ansatz is sufficient.
The k-body, local correlation function in the ground state is defined as
g_k = \frac{\langle [\hat\psi^{\dagger}(0)]^{k} [\hat\psi(0)]^{k} \rangle}{n_0^{k}} ,    (III.62)

where ⟨·⟩ represents the ground-state average. With this choice of normalization, g_k represents the probability of observing k bosons simultaneously at the same place. As a consequence of this definition, the following equality holds trivially:
g_1(\gamma) = 1 .    (III.63)
Indeed, the numerator of the right-hand side of Eq. (III.62) coincides with the mean linear density since the gas is uniform, and so does its denominator.
It is expected that all higher-order local correlations depend on the interaction strength. The following qualitative properties are quite easily obtained: in the non-interacting gas, i.e. if γ = 0, g_k = 1 to all orders k. In the Tonks-Girardeau regime, since interactions mimic the Pauli principle, two bosons cannot be observed simultaneously at the same place. A fortiori, larger local clusters are forbidden too, so g_k^{TG} = 0 to all orders k > 1. In between, at finite interaction strength, the probability to observe k+1 particles at the same place is strictly lower than that of observing k particles, thus 0 < g_{k+1}(γ) < g_k(γ) for any finite γ. One also expects |g_{k+1}'(γ)| > |g_k'(γ)| > 0 (every local correlation function, except g_1, is a strictly-decreasing function of γ). These properties imply that high-order correlation functions are difficult to measure by in-situ observations, in particular close to the Tonks-Girardeau regime.
After these general comments, I turn to more specific cases. I will focus on g_2 and g_3, which are the most experimentally-relevant local correlation functions [START_REF] Kinoshita | Local Pair Correlations in One-Dimensional Bose Gases[END_REF][START_REF] Tolra | Observation of Reduced Three-Body Recombination in a Correlated 1D Degenerate Bose Gas[END_REF][START_REF] Armijo | Probing Three-Body Correlations in a Quantum Gas Using the Measurement of the Third Moment of Density Fluctuations[END_REF][START_REF] Haller | Three-Body Correlation Functions and Recombination Rates for Bosons in Three Dimensions and One Dimension[END_REF]. From the theoretical point of view, the second-order correlation function is easily obtained from the ground-state energy, as the Hellmann-Feynman theorem yields [START_REF] Gangardt | Stability and Phase Coherence of Trapped 1D Bose Gases[END_REF]

g_2(\gamma) = e'(\gamma) .    (III.64)
According to this equation, the fact that e is an increasing function of the Lieb parameter γ is a direct consequence of the positivity of g 2 .
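For the reader's convenience, here is a short check of Eq. (III.64) (not spelled out in the text), using the second-quantized Hamiltonian (III.19) together with the definitions (III.21) and (III.34):

\frac{\partial E_0}{\partial g_{1D}} = \Big\langle \frac{\partial H_{LL}}{\partial g_{1D}} \Big\rangle = \frac{1}{2} \int_0^L dx\, \langle \hat\psi^{\dagger}\hat\psi^{\dagger}\hat\psi\hat\psi \rangle = \frac{L}{2}\, n_0^{2}\, g_2 ,

while E_0 = \frac{\hbar^{2}}{2m} N n_0^{2}\, e(\gamma) and \gamma = m g_{1D}/(\hbar^{2} n_0) give

\frac{\partial E_0}{\partial g_{1D}} = \frac{\hbar^{2}}{2m} N n_0^{2}\, e'(\gamma)\, \frac{m}{\hbar^{2} n_0} = \frac{N n_0}{2}\, e'(\gamma) .

Equating the two expressions, with N = n_0 L, yields g_2(\gamma) = e'(\gamma).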
As a general rule, all local correlation functions are expected to be related (possibly in a fairly non-trivial way) to moments of the density of pseudo-momenta, defined as
e_k(\gamma) = \frac{\int_{-1}^{1} dz\, z^{k}\, g[z; \alpha(\gamma)]}{\left\{ \int_{-1}^{1} dz\, g[z; \alpha(\gamma)] \right\}^{k+1}} .    (III.65)
Note that odd-order moments are null, since g is an even function of z, that e_0 = 1, and that e_2(γ) = e(γ). The third-order local correlation function is of particular interest, as it governs the rates of inelastic processes, such as three-body recombination and photoassociation in pair collisions. It is expressed in terms of the first two non-trivial moments as [START_REF] Cheianov | Exact results for three-body correlations in a degenerate one-dimensional Bose gas[END_REF][287]
g_3(\gamma) = \frac{3}{2\gamma} \frac{d e_4}{d\gamma} - \frac{5 e_4}{\gamma^{2}} + \left( 1 + \frac{\gamma}{2} \right) \frac{d e_2}{d\gamma} - 2 \frac{e_2}{\gamma} - 3 \frac{e_2}{\gamma} \frac{d e_2}{d\gamma} + 9 \frac{e_2^{2}}{\gamma^{2}} .    (III.66)
This expression is significantly more complicated than Eq. (III.64), and the situation is not likely to improve at higher orders, where similar expressions are still unknown.
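As a purely illustrative sketch (not part of the original text; the function names, the Nyström discretization and the finite-difference step are choices of this example), Eqs. (III.64)-(III.66) can be evaluated numerically from the Lieb equation:

import numpy as np

def moments(alpha, n=200):
    # Solve the Lieb equation (III.32) on Gauss-Legendre nodes and return
    # gamma(alpha), Eq. (III.33), and the moments e_2, e_4 of Eq. (III.65).
    z, w = np.polynomial.legendre.leggauss(n)
    kernel = (alpha / np.pi) / (alpha**2 + (z[:, None] - z[None, :])**2)
    g = np.linalg.solve(np.eye(n) - kernel * w[None, :], np.full(n, 1 / (2 * np.pi)))
    norm = np.sum(w * g)
    gamma = alpha / norm
    e2 = np.sum(w * g * z**2) / norm**3
    e4 = np.sum(w * g * z**4) / norm**5
    return gamma, e2, e4

def g2_g3(alpha, dalpha=1e-3):
    # Central differences in gamma, obtained through neighbouring values of alpha.
    gam_m, e2m, e4m = moments(alpha - dalpha)
    gam, e2, e4 = moments(alpha)
    gam_p, e2p, e4p = moments(alpha + dalpha)
    de2 = (e2p - e2m) / (gam_p - gam_m)
    de4 = (e4p - e4m) / (gam_p - gam_m)
    g2 = de2                                                        # Eq. (III.64)
    g3 = (3 / (2 * gam)) * de4 - 5 * e4 / gam**2 + (1 + gam / 2) * de2 \
         - 2 * e2 / gam - 3 * e2 * de2 / gam + 9 * e2**2 / gam**2   # Eq. (III.66)
    return gam, g2, g3

for alpha in (0.5, 2.0, 10.0):
    gamma, g2, g3 = g2_g3(alpha)
    print(f"gamma={gamma:8.4f}  g2={g2:7.4f}  g3={g3:7.4f}")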
In [START_REF] Cheianov | Exact results for three-body correlations in a degenerate one-dimensional Bose gas[END_REF], the solution to the Bethe Ansatz equations has been found numerically, and useful approximations to the three-body local correlation function have been obtained by fitting this numerical solution, namely

g_3(\gamma) \simeq \frac{1 - 6\pi^{-1}\gamma^{1/2} + 1.2656\,\gamma - 0.2959\,\gamma^{3/2}}{1 - 0.2262\,\gamma - 0.1981\,\gamma^{3/2}} , \qquad 0 \le \gamma \le 1 ,    (III.67)

g_3(\gamma) \simeq \frac{0.705 - 0.107\,\gamma + 5.08 \times 10^{-3}\,\gamma^{2}}{1 + 3.41\,\gamma + 0.903\,\gamma^{2} + 0.495\,\gamma^{3}} , \qquad 1 \le \gamma \le \ldots ,    (III.68)

with a relative error lower than 2% according to the authors. For γ ≥ 30, it is tacitly assumed that the available strong-coupling expansions of g_3 are at least as accurate.
Actually, the dominant term of the strong-coupling asymptotic expansion of all local correlation functions is known, and reads

g_k \underset{\gamma \gg 1}{\simeq} \frac{k!}{2^{k}} \left( \frac{\pi}{\gamma} \right)^{k(k-1)} I_k ,    (III.70)
where
I_k = \int_{-1}^{1} dk_1 \cdots \int_{-1}^{1} dk_k \prod_{\{i<j\le k\}} (k_i - k_j)^{2} ,    (III.71)
and has even been generalized to [START_REF] Nandani | Higher-order local and non-local correlations for 1D strongly interacting Bose gas[END_REF]:
g_k \underset{\gamma \gg 1}{=} \frac{\prod_{j=1}^{k} (j!)^{2}}{\prod_{j=1}^{k-1} [(2j-1)!!]^{2}\, (2k-1)!!} \left( 1 - \frac{2}{\gamma} \right)^{k^{2}-1} \left( \frac{\pi}{\gamma} \right)^{k(k-1)} + \ldots    (III.72)
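As a quick numerical cross-check (an illustration added here, not in the original), the prefactor k!/2^k × I_k of Eq. (III.70) can be compared with the closed-form prefactor appearing in Eq. (III.72), estimating I_k of Eq. (III.71) by Monte Carlo:

import numpy as np
from math import factorial

def I_k_mc(k, samples=2_000_000, seed=0):
    # Monte Carlo estimate of Eq. (III.71): volume of [-1,1]^k times the mean integrand.
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1, 1, size=(samples, k))
    prod = np.ones(samples)
    for i in range(k):
        for j in range(i + 1, k):
            prod *= (x[:, i] - x[:, j])**2
    return 2**k * prod.mean()

def dfact(n):
    # double factorial, with the convention (-1)!! = 1
    return 1 if n <= 0 else n * dfact(n - 2)

for k in (2, 3):
    lhs = factorial(k) / 2**k * I_k_mc(k)
    rhs = np.prod([factorial(j)**2 for j in range(1, k + 1)]) \
        / (np.prod([dfact(2 * j - 1)**2 for j in range(1, k)]) * dfact(2 * k - 1))
    print(k, round(lhs, 4), round(rhs, 4))   # both approach 4/3 for k=2 and 16/15 for k=3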
The fourth-order local correlation g 4 (γ) has been constructed using a different approach in [START_REF] Pozsgay | Local correlations in the 1D Bose gas from a scaling limit of the XXZ chain[END_REF] (see Appendix B.4), but this correlation function has not been probed experimentally yet.
As one can see from the previous examples, a key ingredient to evaluate a local correlation function g_k by coordinate Bethe Ansatz is to evaluate the moments of the density of pseudo-momenta, given by Eq. (III.65), to the corresponding order. Thanks to my good knowledge of g(z; α), I have access to their strongly-interacting expansion. In particular, I could evaluate e_4, which enters g_3 through Eq. (III.66). Based once more on strong-coupling expansions to order 20, I have conjectured that
e_{2k}(γ) = \left(\frac{γ}{2+γ}\right)^{2k} \sum_{i=0}^{+∞} \frac{π^{2(k+i)}}{(2+γ)^{3i}}\, L_{2k,i}(γ),    (III.73)
where L 2k,i are polynomials with rational coefficients, such that
L_{2k,0} = \frac{1}{2k+1},    (III.74)
and L_{2k,i≥1} is of degree i-1. This generalizes the corresponding conjecture for e_2, Eq. (III.57). In particular, I have identified L_{4,1}(γ) = 32.
In the weakly-interacting regime, I have conjectured that the even moments have the following structure:
e_{2k}(γ) = \sum_{i=0}^{+∞} \frac{\tilde{a}_{2k,i}}{π^i}\, γ^{k+i/2},    (III.76)
with
\tilde{a}_{2k,0} = \binom{2k}{k} - \binom{2k}{k+1} = \frac{1}{k+1}\binom{2k}{k} = C_k,    (III.77)
where C k denotes the k-th Catalan number. This is in agreement with a well-known result in random-matrix theory. I have also obtained
\tilde{a}_{2k,1} = \binom{2k}{k} - \frac{2^{4k}}{\binom{2k+1}{k}}\frac{1}{k+1}\sum_{i=0}^{k}\left[\frac{1}{2^{2i}}\binom{2i}{i}\right]^2,    (III.78)
but when both k and i are strictly larger than one, the exact coefficients \tilde{a}_{2k,i} are still unknown. In the end, to lowest order the k-body local correlation function reads [288]
g k γ 1 1 - k(k -1) π √ γ.
(III.79)
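Before moving on, the statement below Eq. (III.77) that the leading weak-coupling moments reduce to Catalan numbers echoes a standard random-matrix fact: the even moments of the Wigner semicircle density are exactly the Catalan numbers. A small numerical check (my own illustration, unrelated to the Bethe Ansatz machinery itself):

from math import comb
import numpy as np
from scipy.integrate import quad

def semicircle_moment(k):
    # 2k-th moment of the Wigner semicircle density sqrt(4 - x^2)/(2 pi) on [-2, 2].
    val, _ = quad(lambda x: x**(2 * k) * np.sqrt(4.0 - x**2) / (2.0 * np.pi), -2.0, 2.0)
    return val

for k in range(1, 6):
    catalan = comb(2 * k, k) // (k + 1)
    print(k, catalan, round(semicircle_moment(k), 4))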
III.4 Non-local correlation functions, notion of connection
By essence, local correlations are far from providing as much information on a system as non-local ones, i.e. at finite spatial separation. It is usual to investigate the k-body density matrices, defined as
ρ_k(x_1, …, x_k; x'_1, …, x'_k) = \int dx_{k+1} \cdots dx_N\, ψ^*_N(x_1, …, x_k, x_{k+1}, …, x_N)\, ψ_N(x'_1, …, x'_k, x_{k+1}, …, x_N),    (III.80)
and related to the local correlation functions through the relation
g_k = \frac{N!}{(N-k)!}\, \frac{ρ_k(0, …, 0; 0, …, 0)}{n_0^k}.    (III.81)
Traditionally in condensed-matter physics, one is more interested in their large-distance behavior, since it characterizes the type of ordering. In particular, the one-body correlation function g 1 acquires a non-trivial structure in the relative coordinate, that depends on the interaction strength. As an introduction to this topic, I sum up the main results in the Tonks-Girardeau regime.
III.4.1 One-body non-local correlation function in the Tonks-
Girardeau regime
The one-body, non-local correlation function of a translation-invariant system reads
g_1(x) = \frac{⟨ψ^†(x)\, ψ(0)⟩}{n_0},    (III.82)
where x denotes the relative coordinate, i.e. the distance between two points. Even in the Tonks-Girardeau regime, its exact closed-form expression is unknown, but it can be studied asymptotically. I use the notation z = k_F x, where k_F = πn_0 is the Fermi wavevector in 1D. I recall the large-distance expansion derived in [292] (with signs of the coefficients corrected as in [Gangardt]):
g_1^{TG}(z) = \frac{G(3/2)^4}{\sqrt{2|z|}}\left[1 - \frac{1}{32z^2} - \frac{\cos(2z)}{8z^2} - \frac{3}{16}\frac{\sin(2z)}{z^3} + \dots\right],    (III.83)
where G is the Barnes function, defined by G(1) = 1 and the functional relation G(z+1) = Γ(z)G(z), Γ being the Euler Gamma function. Since g_1^{TG}(z) → 0 as z → +∞, there is no long-range order. The decay is algebraic, and rather slow, so one speaks of quasi-long-range order.
The general large-distance structure has been identified as [Vaidya and Tracy]
g_1^{TG}(z) = \frac{G(3/2)^4}{\sqrt{2|z|}}\left[1 + \sum_{n=1}^{+∞}\frac{c_{2n}}{z^{2n}} + \sum_{m=1}^{+∞}\frac{\cos(2mz)}{z^{2m^2}}\sum_{n=0}^{+∞}\frac{c_{2n,m}}{z^{2n}} + \sum_{m=1}^{+∞}\frac{\sin(2mz)}{z^{2m^2+1}}\sum_{n=0}^{+∞}\frac{c'_{2n,m}}{z^{2n}}\right],    (III.84)
in agreement with the fact that g 1 (z) is an even function in a Galilean-invariant model. Few coefficients have been explicitly identified, however, and Eq. (III.83) remains the reference to date.
At short distances, using the same technique as in [Forrester et al.] to solve the sixth Painlevé equation, I have obtained the following expansion, where I have added six orders compared to previously published results:
g_1^{TG}(z) = \sum_{k=0}^{8} \frac{(-1)^k z^{2k}}{(2k+1)!} + \frac{|z|^3}{9π} - \frac{11|z|^5}{1350π} + \frac{61|z|^7}{264600π} + \frac{z^8}{24300π^2} - \frac{253|z|^9}{71442000π} - \frac{163\,z^{10}}{59535000π^2} + \frac{7141|z|^{11}}{207467568000π} + \dots    (III.85)
The first sum is the truncated Taylor series associated with the function sin(z)/z, and corresponds to the one-body correlation function of noninteracting fermions,
g_1^F(z) = \frac{\sin(z)}{z},    (III.86)
while the additional terms are specific to bosons with contact interactions. The one-body correlation function of Tonks-Girardeau bosons differs from the one of a Fermi gas due to the fact that it depends on the phase of the wavefunction, in addition to its modulus. The full structure, that would be the short-distance equivalent of Eq. (III.84), is still unknown.
In the end, expansions at short and large distances are known at high enough orders to overlap at intermediate distances [292,[START_REF] Vaidya | One-Particle Reduced Density Matrix of Impenetrable Bosons in One Dimension at Zero Temperature[END_REF][START_REF] Lenard | Momentum Distribution in the Ground State of the One-Dimensional System of Impenetrable Bosons[END_REF], as can be seen in Fig. III.7.
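This matching is easy to visualize numerically. The sketch below (an illustration of mine, using only the terms quoted above; the Barnes value G(3/2) is taken from mpmath) evaluates the truncated short-distance series of Eq. (III.85) and the first terms of the large-distance expansion of Eq. (III.83); around z ≈ 2–3 the two truncations approach each other at the per-cent level, while the short-distance series visibly breaks down beyond that.

import numpy as np
from math import factorial
from mpmath import barnesg

A = float(barnesg(1.5))**4 / np.sqrt(2.0)          # prefactor G(3/2)^4 / sqrt(2)

def g1_short(z):
    # Truncated short-distance expansion, Eq. (III.85), terms as quoted in the text.
    s = sum((-1)**k * z**(2 * k) / factorial(2 * k + 1) for k in range(9))
    s += abs(z)**3 / (9 * np.pi) - 11 * abs(z)**5 / (1350 * np.pi) \
         + 61 * abs(z)**7 / (264600 * np.pi) + z**8 / (24300 * np.pi**2) \
         - 253 * abs(z)**9 / (71442000 * np.pi) - 163 * z**10 / (59535000 * np.pi**2) \
         + 7141 * abs(z)**11 / (207467568000 * np.pi)
    return s

def g1_large(z):
    # First terms of the large-distance expansion, Eq. (III.83).
    return A / np.sqrt(abs(z)) * (1.0 - 1.0 / (32 * z**2) - np.cos(2 * z) / (8 * z**2))

for z in (2.0, 2.5, 3.0):
    print(f"z = {z:3.1f}   short: {g1_short(z):.4f}   large: {g1_large(z):.4f}")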
I turn to the case of finite interactions, where the large-distance regime is amenable to Luttinger liquid theory and its generalizations.
III.4.2 Large-distance, one-body correlation function at finite interaction strength from the Tomonaga-Luttinger liquid formalism
The Tomonaga-Luttinger liquid theory is a suitable framework to obtain the large-distance, one-body correlation function. The result reads [Haldane]
g_1^{TL}(z) = \frac{1}{|z|^{\frac{1}{2K}}} \sum_{m=0}^{+∞} B_m \frac{\cos(2mz)}{z^{2Km^2}}.    (III.87)
We already know that the case K = 1 corresponds to the Tonks-Girardeau regime. By comparison with Eq. (III.84) above, it appears that the Tomonaga-Luttinger approach, although it correctly predicts the behaviour of the dominant term, is not able to provide the full structure. This is in contrast with the two-body correlation function, whose equal-time structure is exact in the Tonks-Girardeau regime, as shown in chapter II.
However, Equation (III.84) has been generalized to finite interaction strengths in [Didier et al.], by regularization of the Tomonaga-Luttinger liquid formalism, predicting that
g_1^{RTL}(z) = \left[\frac{G(3/2)^4}{\sqrt{2|z|}}\right]^{\frac{1}{K}} \left[1 + \sum_{n=1}^{+∞}\frac{c_n(K)}{z^{2n}} + \sum_{m=1}^{+∞}\frac{\cos(2mz)}{z^{2Km^2}}\sum_{n=0}^{+∞}\frac{c_{n,m}}{z^{2n}} + \sum_{m=1}^{+∞}\frac{\sin(2mz)}{z^{2Km^2+1}}\sum_{n=0}^{+∞}\frac{c'_{n,m}}{z^{2n}}\right].    (III.88)
The problem of the extraction of the amplitudes B_m, or {c_n, c_{n,m}, c'_{n,m}}, by Bethe Ansatz or alternative techniques is once more the main difficulty. A few of them have been obtained semi-analytically in [Shashi et al.].
III.4.3 Short-distance, one-body correlation function from integrability, notion of connection
At arbitrary interaction strength, due to Galilean invariance, the short-distance series expansion of the one-body density matrix,
ρ_1(x, x'; γ) = \int dx_2 \cdots dx_N\, ψ^*_N(x, x_2, …, x_N)\, ψ_N(x', x_2, …, x_N),    (III.89)
can be written as
ρ_1(x, x'; γ) = \frac{1}{L}\sum_{l=0}^{+∞} c_l(γ)\,(n_0|x - x'|)^l.    (III.90)
The list of coefficients {c l } can be constructed from integrability at arbitrary interaction strength. The procedure relies on conservation laws. The most common are the number of particles, total momentum and energy, that are eigenvalues of their associated operators: particle number, momentum and Hamiltonian. In an integrable model, these quantities are conserved too, as well as infinitely many others, called higher energies and written E n , that are eigenvalues of peculiar operators called higher Hamiltonians, written Ĥn , that have the same Bethe eigenvector ψ N as the Hamiltonian. To obtain the results presented in Ref. [START_REF] Olshanii | Connection between nonlocal one-body and local three-body correlations of the Lieb-Liniger model[END_REF], I have used and compared several strategies, sketched in [START_REF] Gutkin | Conservation laws for the nonlinear Schrödinger equation[END_REF][START_REF] Davies | Higher conservation laws from the quantum non-linear Schrödinger equation[END_REF][START_REF] Davies | Higher conservation laws from the quantum non-linear Schrödinger equation[END_REF]. All of them are quite technical, but a systematic procedure and a few general properties have emerged in the course of the derivation.
I have defined the notion of connection for the one-body density matrix as a functional relation F that connects one of the coefficients c l of Eq. (III.90) to a local correlation function, via moments of the density of pseudo-momenta and their derivatives, that reads
F\big[c_l(γ), g_k(γ), \{e_{2n}(γ), e'_{2n}(γ), \dots\}, γ\big] = 0.    (III.91)
Connections encompass many relationships scattered throughout the literature in a unified description. Each of them is unambiguously denoted by the pair of indices (l, k), where by convention an index is set to 0 if the corresponding quantity does not appear in Eq. (III.91). This compact notation is convenient, as it allows to list and classify the connections.
To illustrate this point, I recall the first few connections, obtained from conservation laws. I find
c_0 = g_1 = e_0 = 1,    (III.92)
yielding the connections (0,0) and (0,1), as well as
c_1 = 0,    (III.93)
denoted by (1,0). The connection (2,2) is
-2c_2 + γ g_2 = e_2,    (III.94)
while (0,2) is obtained by applying the Hellmann-Feynman theorem to the Lieb-Liniger Hamiltonian Eq. (III.1), and is nothing else than Eq. (III.64). Then, combining the connections (2,2) and (0,2) yields (2,0), that reads
c_2 = \frac{1}{2}\left(γ e_2' - e_2\right).    (III.95)
The main result of [START_REF] Olshanii | Connection between nonlocal one-body and local three-body correlations of the Lieb-Liniger model[END_REF] is the derivation of the connection (4,3), that reads 24c 4 -2γ 2 g 3 = e 4 -γe 4 .
(III. [START_REF] Webb | Observation of h e Aharonov-Bohm Oscillations in Normal-Metal Rings[END_REF] This derivation involves an operator Ĥ4 that yields, when applied to a Bethe eigenstate ψ N , the fourth integral of motion E 4 , such that
E 4 = N i=1 k 4 i .
(III.97)
The higher Hamiltonian Ĥ_4 can be written explicitly as [Davies; Olshanii et al.]
Ĥ_4 = \sum_{i=1}^{N}\frac{∂^4}{∂x_i^4} + 12c^2\sum_{i=1}^{N-2}\sum_{j=i+1}^{N-1}\sum_{k=j+1}^{N} δ(x_i - x_j)\,δ(x_j - x_k)
  - 2c\sum_{i=1}^{N-1}\sum_{j=i+1}^{N}\left[\left(\frac{∂^2}{∂x_i^2} + \frac{∂^2}{∂x_j^2} + \frac{∂^2}{∂x_i ∂x_j}\right)δ(x_i - x_j) + δ(x_i - x_j)\left(\frac{∂^2}{∂x_i^2} + \frac{∂^2}{∂x_j^2} + \frac{∂^2}{∂x_i ∂x_j}\right)\right]
  + 2c^2\sum_{i=1}^{N-1}\sum_{j=i+1}^{N} δ^2(x_i - x_j)
  = ĥ_4^{(1)} + ĥ_4^{(2)} + ĥ_4^{(3)} + ĥ_4^{(4)}.    (III.98)
Let me comment on the physical meaning of Eq. (III.96) in view of Eq. (III.98), from which it is derived. The fact that g_3 appears in the connection (4,3) stems from ĥ_4^{(2)} in Eq. (III.98), which involves three-body processes provided that N ≥ 3. The coefficient c_4, which stems from ĥ_4^{(1)}, is related to the higher kinetic energy, in that the momentum operator applied to the density matrix generates the coefficients of its Taylor expansion when taken at zero distance.
In the course of the derivation, the requirement that Ĥ_4 is divergence-free, i.e. contains no δ(0) term in spite of ĥ_4^{(4)} being mathematically ill-defined (it contains an operator δ^2), yields the connection (3,2), first obtained in [Olshanii et al.] from asymptotic properties of Fourier transforms:
c_3 = \frac{γ^2}{12}\, g_2.    (III.99)
The connection (3,0) follows naturally by combination with (0,2) and reads
c_3 = \frac{γ^2}{12}\, e_2'.    (III.100)
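The first few connections can be turned into numbers right away. The sketch below (my own illustration; it reuses the same discretization of the Lieb equation as earlier in this chapter and assumes the readings of Eqs. (III.95) and (III.100) given above) evaluates c_2(γ) and c_3(γ): in the Tonks-Girardeau limit c_2 approaches the free-fermion value -π²/6, while c_3 = γ²e_2'/12 stays non-negative since g_2 = e_2' ≥ 0.

import numpy as np

def moments(alpha, n=400):
    # Lieb equation on a Gauss-Legendre grid; returns gamma and e_2 as in Eq. (III.65).
    z, w = np.polynomial.legendre.leggauss(n)
    K = (alpha / np.pi) / (alpha**2 + (z[:, None] - z[None, :])**2)
    g = np.linalg.solve(np.eye(n) - K * w[None, :], np.full(n, 1.0 / (2.0 * np.pi)))
    norm = w @ g
    return alpha / norm, (w @ (z**2 * g)) / norm**3

def c2_c3(alpha, h=1e-4):
    # Connections (2,0) and (3,0): c_2 = (gamma e_2' - e_2)/2 and c_3 = gamma^2 e_2' / 12.
    gamma_m, e_m = moments(alpha - h)
    gamma_p, e_p = moments(alpha + h)
    gamma, e2 = moments(alpha)
    e2_prime = (e_p - e_m) / (gamma_p - gamma_m)
    return gamma, 0.5 * (gamma * e2_prime - e2), gamma**2 * e2_prime / 12.0

for alpha in (0.5, 2.0, 10.0):
    gamma, c2, c3 = c2_c3(alpha)
    print(f"gamma = {gamma:8.3f}   c_2 = {c2:+.5f}   c_3 = {c3:+.5f}")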
In Ref. [START_REF] Olshanii | Connection between nonlocal one-body and local three-body correlations of the Lieb-Liniger model[END_REF], we also proposed an alternative derivation of (3, 2), as a corollary of the more general result:
ρ_k^{(3)}(0, …; 0, …) = \frac{(N-k)\,c^2}{12}\, ρ_{k+1}(0, …; 0, …),    (III.101)
where ρ k is the k-body density matrix expanded as
ρ_k(x_1, …, x_k; x'_1, …, x'_k) = \sum_{m=0}^{+∞} ρ_k^{(m)}\!\left(\frac{x_1+x'_1}{2}, x_2, …, x_k; x_2, …, x_k\right) |x_1 - x'_1|^m,    (III.102)
a form that naturally emerges from the contact condition. Equation (III.101) can be written as
c_3^{(k)} = \frac{γ^2}{12}\, g_{k+1},    (III.103)
where c_3^{(k)} is the third-order coefficient of the Taylor expansion of the k-body density matrix. This provides an example of a generalized connection, a notion that remains in limbo.
As a last step, combining the connection (0,3), Eq. (III.66), with the connection (4,3), Eq. (III.96), yields the connection (4,0) first published in [Dunjko et al.]:
c_4(γ) = \frac{γ e_4'}{12} - \frac{3e_4}{8} + \frac{γ^2(γ+2)}{24}\, e_2' - \frac{γ e_2}{6} - \frac{γ e_2 e_2'}{4} + \frac{3e_2^2}{4}.    (III.104)
More generally, all correlations of the model are encoded in the connections of type (l, 0) and (0, k), as a consequence of integrability.
Combining the results given above, I have access to the first few coefficients {c_l}_{l=0,…,4} of the Taylor expansion of g_1. Contrary to c_0 and c_1, which are constant, and to c_2 and c_3, which are monotonic, c_4(γ) changes sign when the interaction strength takes the numerical value
γ_c = 3.8160616255908…,    (III.105)
obtained with this accuracy by two independent methods, based on a numerical and a semi-analytical solution of the Bethe Ansatz equations respectively. This is illustrated in Figs. III.8 and III.9. It was previously known that c_4 changes sign, as obtained from numerical analysis in [Caux et al.], but the only certitude was that 1 < γ_c < 8.
III.5 Momentum distribution and Tan's contact
In addition to real-space correlations, through ballistic expansion of the atomic cloud, experimentalists have also access to the Fourier transform of the non-local, static correlation functions. Only the first few orders bear specific names and have been investigated by now. The Fourier transform of the one-body correlation function g 1 is the momentum distribution, while the momentum space representation of g 2 is known as the static structure factor. The momentum distribution is measured with ever increasing accuracy in various systems, from a 3D Fermi gas over the whole range of interaction strengths [304, 305] to Bose-Einstein condensates [START_REF] Wild | Measurements of Tan's Contact in an Atomic Bose-Einstein Condensate[END_REF][START_REF] Chang | Momentum-Resolved Observation of Thermal and Quantum Depletion in a Bose Gas[END_REF] and the 1D Bose gas [START_REF] Jacqmin | Momentum distribution of one-dimensional Bose gases at the quasicondensation crossover: Theoretical and experimental investigation[END_REF].
The momentum distribution of the Lieb-Liniger model, defined as
n(p) = n_0 \int_{-∞}^{+∞} dx\, e^{\frac{ipx}{ħ}}\, g_1(x),    (III.106)
is difficult to access from integrability. For this reason, most studies are based on fully numerical methods so far [Xu et al.; Caux et al.; Astrakharchik et al.]. Analytically, it is quite natural, as usual, to treat the Tonks-Girardeau regime as a warm-up. As illustrated in Fig. III.7, expressions for the one-body correlation function g_1^{TG}(z) obtained at small and large distances match at intermediate distances, but neither expansion alone is suitable for a direct Fourier transform.
A step forward is made by noticing that the low- and high-momentum expansions of n(p) can be deduced from the large- and short-distance asymptotics of g_1(z) respectively, according to the following theorem [Bleistein and Handelsman]: if a periodic function f is defined on an interval [-L/2, L/2] and has a singularity of the form f(z) = |z - z_0|^α F(z), where F is a regular function, α > -1 and not an even integer, the leading term of the Fourier transform reads
\int_{-L/2}^{L/2} dz\, e^{-ikz} f(z) \;\underset{|k|→+∞}{=}\; 2\cos\!\left[\frac{π}{2}(α+1)\right] Γ(α+1)\, e^{-ikz_0} F(z_0)\, \frac{1}{|k|^{α+1}} + O\!\left(\frac{1}{|k|^{α+2}}\right).    (III.107)
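As an illustration of this theorem (a toy example of mine, not from the original text), take f(z) = |z|^{1/2} on [-1, 1], so that L = 2, z_0 = 0, α = 1/2 and F ≡ 1. Evaluating the transform at the Fourier wavevectors k = nπ of the periodic extension suppresses the endpoint contributions, and the predicted |k|^{-3/2} tail can be checked directly; the ratio of the two numbers tends to 1 as k grows.

import numpy as np
from scipy.integrate import quad
from scipy.special import gamma as Gamma

alpha = 0.5

def transform(k):
    # int_{-1}^{1} dz exp(-i k z) |z|^(1/2); the integrand is even, only the cosine part survives.
    val, _ = quad(lambda z: np.sqrt(z), 0.0, 1.0, weight="cos", wvar=k)
    return 2.0 * val

def predicted_tail(k):
    # Leading term of Eq. (III.107) with F(z_0) = 1 and z_0 = 0.
    return 2.0 * np.cos(np.pi * (alpha + 1) / 2.0) * Gamma(alpha + 1) / abs(k)**(alpha + 1)

for n in (20, 50, 100):
    k = n * np.pi
    print(f"k = {k:8.2f}   numerical: {transform(k):+.3e}   predicted: {predicted_tail(k):+.3e}")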
A legitimate accuracy requirement is that the expansions of n(p) should overlap at intermediate momenta. It is more or less fulfilled in the Tonks-Girardeau regime, but this is not the case yet at finite interaction strengths. It is known, however, that at small momenta the momentum distribution of the Tonks-Girardeau gas, n^{TG}(p), scales like p^{-1/2}, in strong contrast with a noninteracting Fermi gas, as usual for correlation functions of odd order, linked to the phase observable. This result can be extended to arbitrary interactions using the Tomonaga-Luttinger liquid theory, and one finds that n^{TL}(p) scales like p^{\frac{1}{2K}-1}.
At large momenta and in the Tonks-Girardeau regime, to leading order the momentum distribution scales like 1/p^4 [Minguzzi et al.], again in contrast with a noninteracting Fermi gas, where such a tail does not exist due to the finite Fermi-sea structure. This power law is not affected by the interaction strength, showing its universality, and stems from the |z|^3 non-analyticity in g_1 according to Eq. (III.107). The coefficient of the 1/p^4 tail is called Tan's contact, and is a function of the coupling γ [Olshanii et al.]. As such, it yields an experimental means to evaluate the interaction strength but, as Tan has shown in a series of articles in the case of a Fermi gas [Tan], and others in a Bose gas [Braaten et al.], it also gives much more information about the system. For instance, according to Tan's sweep relation, Tan's contact in 1D is related to the ground-state energy by
C = -\frac{m^2}{πħ^4}\frac{∂E_0}{∂(1/g_{1D})},    (III.108)
which can be rewritten in dimensionless units as
C(γ) = \frac{n_0^4 L}{2π}\, γ^2 g_2(γ),    (III.109)
and is illustrated in Fig. III.10. Written in this form, it becomes clear that this quantity is governed by the two-body correlations, which is a priori surprising for a quantity associated to a one-particle observable.
III.6 Breakdown of integrability, BALDA formalism III.6.1 Effect of a harmonic trap
In current experimental realizations of 1D gases, external trapping along the longitudinal direction often breaks translational invariance, spoiling integrability. Due to this external potential, real systems are inhomogeneous, and their theoretical description requires modifications. Let us assume that the atoms are further confined in the longitudinal direction by an external potential V ext (x), describing the optical or magnetic trapping present in ultracold atom experiments. Then, as a generalization of the Lieb-Liniger model, the Hamiltonian of the system reads
H = \sum_{j=1}^{N}\left[-\frac{ħ^2}{2m}\frac{∂^2}{∂x_j^2} + V_{ext}(x_j) + \frac{g_{1D}}{2}\sum_{l≠j} δ(x_j - x_l)\right].    (III.110)
In the case of a harmonic confinement, the only one I will consider here, V_{ext}(x) = mω_0^2 x^2/2. Introducing the harmonic-oscillator length a_{ho} = \sqrt{ħ/(mω_0)} and the one-dimensional scattering length a_{1D} = -2ħ^2/(g_{1D}m), in the inhomogeneous system the dimensionless LDA parameter corresponding to the Lieb parameter γ of the homogeneous gas is
α_0 = \frac{2a_{ho}}{|a_{1D}|\sqrt{N}}.    (III.111)
Due to the additional term compared to the homogeneous case, new tools are needed to derive the dynamics of a system described by Eq. (III.110). The few-particle problem is exactly solvable for N = 2 [START_REF] Busch | Two Cold Atoms in a Harmonic Trap[END_REF] and N = 3 [START_REF] Brouzos | Construction of Analytical Many-Body Wave Functions for Correlated Bosons in a Harmonic Trap[END_REF] with analytical techniques or using a geometrical ansatz [START_REF] Wilson | A geometric wave function for a few interacting bosons in a harmonic trap[END_REF], but the thermodynamic limit requires a different approach.
III.6.2 Local-density approximation for the density profile in the Tonks-Girardeau regime
In the Tonks-Girardeau regime, characterized by α_0 → +∞, a generalized Bose-Fermi mapping allows for an exact solution of the Schrödinger equation associated to Eq. (III.110) [Girardeau et al.]. However, this exact solution is restricted to infinite interaction strength and is not entirely trivial to handle. It is thus instructive to study an approximate method that is easier to use and generalizable to arbitrary interaction strengths.
The local-density approximation (LDA) provides an approach to this problem. It is expected to be reliable for sufficiently large systems, where finite size corrections and gradient terms in the density profile are negligible. Its interest lies also in its generality, as LDA can be applied to various systems, and does not depend on quantum statistics.
In the Tonks-Girardeau regime, predictions of the LDA can be compared to the exact solution: in particular, it has been checked numerically in [Vignolo et al.] that the Thomas-Fermi density profile predicted by the LDA becomes exact in the thermodynamic limit, as illustrated in Fig. III.11. This exact equivalence can be proven rigorously (see Appendix B.5) and reads
n_{TG}(x; N) = \sum_{n=0}^{N-1} |φ_n(x)|^2 \;\underset{N→+∞}{\sim}\; n_{TF}(x; N) = \frac{1}{πa_{ho}}\sqrt{2N - \left(\frac{x}{a_{ho}}\right)^2},    (III.112)
where the eigenfunctions of the harmonic oscillator are [Yukalov et al.]
φ_n(x) = \frac{e^{-x^2/(2a_{ho}^2)}}{\sqrt{2^n n! \sqrt{π}\, a_{ho}}}\, H_n\!\left(\frac{x}{a_{ho}}\right),    (III.113)
with H_n the Hermite polynomials.
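A direct numerical check of Eq. (III.112) is straightforward (my own sketch; units with a_{ho} = 1). The orbitals are generated by the stable two-term recurrence for normalized harmonic-oscillator wavefunctions, and the resulting density is compared with the Thomas-Fermi profile; near the center the two agree up to small Friedel-like oscillations, while the exact density retains a tail beyond the classical turning point.

import numpy as np

def density_tg(x, N):
    # n_TG(x; N) = sum_{n<N} |phi_n(x)|^2, harmonic-oscillator orbitals in units a_ho = 1.
    phi_prev = np.pi**(-0.25) * np.exp(-x**2 / 2.0)           # phi_0
    dens = phi_prev**2
    phi = np.sqrt(2.0) * x * phi_prev                          # phi_1
    if N > 1:
        dens += phi**2
    for n in range(1, N - 1):                                  # recurrence gives phi_{n+1}
        phi_next = np.sqrt(2.0 / (n + 1)) * x * phi - np.sqrt(n / (n + 1)) * phi_prev
        dens += phi_next**2
        phi_prev, phi = phi, phi_next
    return dens

def density_tf(x, N):
    # Thomas-Fermi profile of Eq. (III.112), set to zero beyond the turning points.
    return np.sqrt(np.clip(2.0 * N - x**2, 0.0, None)) / np.pi

N = 50
x = np.linspace(0.0, np.sqrt(2.0 * N), 6)
for xi, exact, tf in zip(x, density_tg(x, N), density_tf(x, N)):
    print(f"x = {xi:6.2f}   sum over orbitals: {exact:7.4f}   Thomas-Fermi: {tf:7.4f}")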
III.6.3 From local-density approximation to Bethe Ansatz LDA
In order to describe a one-dimensional, harmonically-trapped Bose gas, a possible strategy is to try and combine the local-density approximation, exact for a trapped Tonks-Girardeau gas in the thermodynamic limit, with Bethe Ansatz, exact for the uniform Lieb-Liniger gas at arbitrary interaction strength. This combination leads to the Bethe Ansatz local-density approximation (BALDA) formalism, which predicts the thermodynamics of a trapped gas at arbitrary interaction strength.
To do so, in [START_REF] Lang | Tan's contact of a harmonically trapped onedimensional Bose gas: strong-coupling expansion and conjectural approach at arbitrary interactions[END_REF] we employed the density functional approach, previously developed and illustrated in [START_REF] Volosniev | Strongly interacting confined quantum systems in one dimension[END_REF][START_REF] Decamp | Highmomentum tails as magnetic-structure probes for strongly correlated SU (κ) fermionic mixtures in one-dimensional traps[END_REF] in the fermionic case. In detail, it consists in defining an energy functional E 0 [n] of the local density n(x) which, in the local-density approximation, reads
E_0[n] = \int dx\, \{ ε(n) + [V_{ext}(x) - µ]\, n(x) \},    (III.114)
where ε(n) is the ground-state energy density of the homogeneous gas and µ is its chemical potential. Minimizing this functional, i.e. setting δE_0/δn = 0, yields an implicit equation for the density profile,
\frac{3ħ^2}{2m}\, n^2\, e(γ) - \frac{g_{1D}\, n}{2}\, e'(γ) = µ - V_{ext}(x) = µ\left(1 - \frac{x^2}{R_{TF}^2}\right),    (III.115)
where
R_{TF} = \sqrt{\frac{2µ}{mω_0^2}}    (III.116)
is the Thomas-Fermi radius in a harmonic trap, and the dimensionless average ground-state energy per particle e defined in Eq. (III.34) enters through
ε(n) = \frac{ħ^2}{2m}\, n^3\, e(γ),    (III.117)
as expected from thermodynamics [Lieb and Liniger], supplemented by the normalization condition
N = \int dx\, n(x).    (III.118)
III.6.4 Tan's contact of a trapped Bose gas
To illustrate the BALDA formalism, in Ref. [START_REF] Lang | Tan's contact of a harmonically trapped onedimensional Bose gas: strong-coupling expansion and conjectural approach at arbitrary interactions[END_REF] we have investigated the Tan contact of a trapped one-dimensional Bose gas. Before explaining my own contributions to the problem, let me summarize a few important analytical results previously obtained by other authors.
The high-momentum tail of a harmonically-trapped Bose gas in the Tonks-Girardeau regime scales like p -4 , as in the homogeneous case [START_REF] Minguzzi | High-momentum tail in the Tonks gas under harmonic confinement[END_REF], and this power law still holds at arbitrary interaction strength [START_REF] Olshanii | Short-Distance Correlation Properties of the Lieb-Liniger System and Momentum Distributions of Trapped One-Dimensional Atomic Gases[END_REF]. It has even been shown that such a tail is the exact dominant term at finite temperature in the Tonks-Girardeau regime [START_REF] Vignolo | Universal Contact for a Tonks-Girardeau Gas at Finite Temperature[END_REF].
Motivated by a recent experiment realizing a 1D gas of fermions with SU (κ) symmetry with up to κ = 6 spin components [START_REF] Pagano | A one-dimensional liquid of fermions with tunable spin[END_REF], a step forward has been made for interacting spinbalanced harmonically-trapped Fermi gases of arbitrary spin. The two first corrections to the fermionic Tonks-Girardeau regime have been obtained within the local-density approximation in [START_REF] Decamp | Highmomentum tails as magnetic-structure probes for strongly correlated SU (κ) fermionic mixtures in one-dimensional traps[END_REF]. This readily yields the result for the Lieb-Liniger model with an additional harmonic trap, considering a theorem that states the equivalence between a balanced one-dimensional gas of fermions with SU (κ = +∞) symmetry and a spinless 1D Bose gas [START_REF] Yang | One-dimensional w-component fermions and bosons with repulsive delta function interaction[END_REF]. Another important result of [START_REF] Decamp | Highmomentum tails as magnetic-structure probes for strongly correlated SU (κ) fermionic mixtures in one-dimensional traps[END_REF] relies on comparison of the BALDA result to a numerical exact solution from DMRG, that shows remarkable agreement at large interaction strengths. It was not possible, however, to attain higher orders analytically for this Fermi gas, because the strong-coupling expansion of its ground-state energy is known to third order only [326].
In [START_REF] Lang | Tan's contact of a harmonically trapped onedimensional Bose gas: strong-coupling expansion and conjectural approach at arbitrary interactions[END_REF], I have obtained Tan's contact for the Bose gas to 4 th order in the inverse coupling, and developed a procedure to evaluate this expansion to arbitrary order for a harmonically-trapped gas, from the corresponding asymptotic energy of an homogeneous gas at next order. This procedure can be applied to bosons and fermions alike. Within the LDA, Tan's contact of the inhomogeneous gas reads
C_{LDA} = \frac{g_{1D}^2 m^2}{2πħ^4} \int dx\, n^2(x)\, \left.\frac{∂e}{∂γ}\right|_{n_0 = n(x)}.    (III.120)
This expression readily generalizes the known result for the homogeneous gas, Eq. (III.109).
To perform the calculation explicitly, one needs an equation of state e(γ) for the homogeneous gas. For noninteracting spinless fermions or the Tonks-Girardeau gas, the LDA calculation can be performed this way, but it requires the knowledge of the first correction in 1/γ. In the case of the Lieb-Liniger model with arbitrary coupling constant, I have relied on the strong- and weak-coupling expansions found in Sec. III.2.4.
First, I derived the strong-coupling expansion of Tan's contact for a harmonically-trapped gas, based on the corresponding expansion of the ground-state energy of the homogeneous system, Eq. (III.51). To quantify the interaction strength in the trapped gas, I used the dimensionless unit α_0, such that
g_{1D} = ħω_0 a_{ho} \sqrt{N}\, α_0.    (III.121)
I also introduced the rescaled variables
\bar{n} = \frac{n\, a_{ho}}{\sqrt{N}}, \qquad \bar{µ} = \frac{µ}{Nħω_0}, \qquad \bar{x} = \frac{x}{R_{TF}}.    (III.122)
Combining Eq. (III.115), the normalization condition Eq. (III.118) and these scalings, I obtained the following set of equations:
\frac{π^2}{6}\sum_{k=0}^{+∞} (k+3)\frac{e_k}{α_0^k}\, \bar{n}^{k+2}(\bar{x}; α_0) = (1 - \bar{x}^2)\,\bar{µ}(α_0),    (III.123)
where e k is defined as in Eq. (III.51), as well as
1 = \sqrt{2\bar{µ}} \int_{-1}^{1} d\bar{x}\, \bar{n}(\bar{x}).    (III.124)
Then, I developed an efficient procedure, that allows to calculate the strong-coupling expansion of Tan's contact to arbitrary order. This procedure relies on the following expansions:
\bar{µ} = \sum_{k=0}^{+∞} \frac{c_k}{α_0^k},    (III.125)
and
\bar{n}(\bar{x}) = \sum_{j=0}^{+∞} \frac{b_j}{α_0^j}\, f_j(\bar{x}),    (III.126)
where {c k } k≥0 and {b j } j≥0 are numerical coefficients, and {f j } j≥0 is a set of unknown functions. Injected into Eqs. (III.123) and (III.124), they yield a consistency condition:
b_j f_j(\bar{x}) = \sum_{m=0}^{j} b_{mj}\,(1 - \bar{x}^2)^{(m+1)/2},    (III.127)
where {b_{mj}} are unknown coefficients of an upper triangular matrix, so that the previous equations become
\frac{π^2}{6}\sum_{k=0}^{+∞} (k+3)\frac{e_k}{α_0^k}\left[\sum_{j=0}^{+∞}\frac{1}{α_0^j}\sum_{m=0}^{j} b_{mj}(1-\bar{x}^2)^{(m+1)/2}\right]^{k+2} = (1 - \bar{x}^2)\sum_{k=0}^{+∞}\frac{c_k}{α_0^k}    (III.128)
and
1 = \sqrt{32\sum_{k=0}^{+∞}\frac{c_k}{α_0^k}}\;\sum_{j=0}^{+∞}\frac{1}{α_0^j}\sum_{m=0}^{j} b_{mj}\, 2^m\, B\!\left(\frac{m+3}{2}, \frac{m+3}{2}\right),    (III.129)
where B is the Euler Beta function.
Equations (III.128) and (III.129) are the final set of equations. Solving them when truncated to order n requires the solution at all lower orders. Moreover, at each step Eq. (III.128) splits into n+1 independent equations, obtained by equating the coefficients of (1 - \bar{x}^2)^{(m+1)/2}, m = 0, …, n, in the left- and right-hand sides. One thus needs to solve a set of n+2 equations to obtain c_n and {b_{mn}}_{m=0,…,n}. Fortunately, n of them, giving b_{mj} with m ≥ 1, are fully decoupled.
As a final step, Eq. (III.120) yields Tan's contact. In natural units imposed by the scaling, i.e. taking [Matveeva et al.]
\bar{C}_{LDA} = C_{LDA}\, \frac{a_{ho}^3}{N^{5/2}},    (III.130)
the final equation reads:
\bar{C}_{LDA} = -\frac{π}{3\sqrt{2}}\sqrt{\sum_{k'=0}^{+∞}\frac{c_{k'}}{α_0^{k'}}}\;\sum_{k=0}^{+∞}\frac{(k+1)\,e_{k+1}}{α_0^k}\int_{-1}^{1} d\bar{x}\left[\sum_{j=0}^{+∞}\frac{1}{α_0^j}\sum_{m=0}^{j} b_{mj}(1-\bar{x}^2)^{(m+1)/2}\right]^{k+4}.    (III.131)
In spite of the global minus sign, Tan's contact is a non-negative quantity because e_1 < 0 and the corrections decrease quickly enough. At order n, the condition k + k' + j = n, where j is the power of 1/α_0 in the integrand, shows that the coefficient of order n is a sum of \binom{n+2}{n} integrals. One of them involves e_{n+1}, so e(γ) must be known to order n+1 in 1/γ to obtain the expansion of Tan's contact to order n in 1/α_0.
Following this approach, the strong-coupling expansion reads
\bar{C}_{LDA} = \frac{128\sqrt{2}}{45π^3} + \frac{1}{α_0}\left(-\frac{8192}{81π^5} + \frac{70}{9π^3}\right) + \frac{\sqrt{2}}{α_0^2}\left(\frac{131072}{81π^7} - \frac{30656}{189π^5} - \frac{4096}{525π^3}\right) + O\!\left(\frac{1}{α_0^3}\right).    (III.132)
This expression agrees with the zero-order one obtained in [Olshanii et al.], and with the one derived for a κ-component balanced spinful Fermi gas to order two [Decamp et al.] in the infinite-spin limit.
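The lowest order of this hierarchy can be carried out by hand and makes a useful sanity check of the procedure. The sketch below (my own illustration; it assumes the standard strong-coupling values e_0 = 1 and e_1 = -4 of the homogeneous expansion e = (π²/3)(1 - 4/γ + …), and the readings of Eqs. (III.129) and (III.131) given above): at order zero the profile is the Tonks-Girardeau Thomas-Fermi arch \bar{n}_0 = (\sqrt{2}/π)\sqrt{1-\bar{x}^2} with c_0 = 1, and the leading coefficient 128\sqrt{2}/(45π^3) of Eq. (III.132) is recovered.

import numpy as np
from scipy.integrate import quad
from scipy.special import beta

# Order-0 BALDA solution: (pi^2/2) nbar^2 = (1 - xbar^2) c_0 together with Eq. (III.124)
# give c_0 = 1 and nbar_0(xbar) = (sqrt(2)/pi) * sqrt(1 - xbar^2).
c0 = 1.0
b00 = np.sqrt(2.0 * c0) / np.pi

# Normalization check: 1 = sqrt(32 c_0) * b00 * B(3/2, 3/2), Eq. (III.129) at leading order.
print("normalization:", np.sqrt(32.0 * c0) * b00 * beta(1.5, 1.5))

# Leading Tan contact from Eq. (III.131): keep only k = k' = j = 0, with e_1 = -4 (assumed value).
e1 = -4.0
integral, _ = quad(lambda x: (b00 * np.sqrt(1.0 - x**2))**4, -1.0, 1.0)
contact0 = -np.pi / (3.0 * np.sqrt(2.0)) * np.sqrt(c0) * (1.0 * e1) * integral
print("order-0 contact:", contact0, "  expected:", 128.0 * np.sqrt(2.0) / (45.0 * np.pi**3))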
In the weak-coupling regime, I also derived an expression for Tan's contact by combining the weak-coupling expansion of the homogeneous gas with the local-density approximation. Using the same notations as above, I obtained
\sum_{k=0}^{+∞}\frac{a_k}{4}(4-k)\, \bar{n}^{\frac{2-k}{2}}(\bar{x}; α_0)\, α_0^{\frac{k+2}{2}} = (1 - \bar{x}^2)\,\bar{µ}(α_0).    (III.133)
In this regime, it is not obvious to what order truncation should be performed to obtain a consistent expansion at a given order, nor to find the variable in which to expand, as can be seen by evaluating the first orders.
Considering only the k = 0 term in Eq. (III.133) yields
\bar{n}(\bar{x}) = \left(\frac{9}{32α_0}\right)^{1/3}(1 - \bar{x}^2).    (III.134)
The expansion to next order is problematic. If one retains terms up to k = 1, corresponding to the Bogoliubov approximation, the equation of state becomes negative at sufficiently large density, since the coefficient a_1 is negative; it is then not possible to use it to perform the local-density approximation. One may also recall that the LDA breaks down at very weak interactions, where it is not accurate to neglect the quantum tails in the density profile [Petrov et al.; Gudyma et al.; Choi et al.].
In the end, the weak-coupling expansion to lowest order reads
\bar{C}_{LDA} = \frac{1}{5π}\left(\frac{3}{2}\right)^{2/3} α_0^{5/3},    (III.136)
in agreement with [Olshanii et al.].
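The same kind of sanity check works at the mean-field end. Using the order-zero profile of Eq. (III.134), the chemical potential \bar{µ} = (9α_0^2/32)^{1/3} implied by the normalization (III.124), ∂e/∂γ ≈ 1, and the rescaled form of Eq. (III.120) that follows from the scalings (III.122) (a short derivation of mine, \bar{C}_{LDA} = α_0^2\sqrt{2\bar{µ}}/(2π)\int \bar{n}^2 d\bar{x}), one recovers Eq. (III.136) numerically:

import numpy as np
from scipy.integrate import quad

def contact_weak_numeric(alpha0):
    mu = (9.0 * alpha0**2 / 32.0)**(1.0 / 3.0)                 # from Eqs. (III.124) and (III.134)
    n = lambda x: (9.0 / (32.0 * alpha0))**(1.0 / 3.0) * (1.0 - x**2)
    integral, _ = quad(lambda x: n(x)**2, -1.0, 1.0)
    return alpha0**2 * np.sqrt(2.0 * mu) / (2.0 * np.pi) * integral

def contact_weak_formula(alpha0):
    return (1.5**(2.0 / 3.0) / (5.0 * np.pi)) * alpha0**(5.0 / 3.0)   # Eq. (III.136)

for alpha0 in (0.01, 0.05, 0.1):
    print(alpha0, contact_weak_numeric(alpha0), contact_weak_formula(alpha0))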
Figure III.12 summarizes our results for Tan's contact. Notice that, although the contact is scaled by the overall factor N^{5/2}/a_{ho}^3, it still depends on the number of bosons through the factor α_0/2 = a_{ho}/(|a_{1D}|\sqrt{N}). We have also applied the LDA numerically to the strong-coupling conjecture, Eq. (III.57). The result is extremely close to the one obtained from the numerical solution of the Bethe Ansatz equation of state in [Olshanii et al.]. By comparing the strong-coupling expansion with the results of the full calculation, we notice that the expansion (III.132) is valid down to α_0 ≈ 6, and provides a useful analytical expression for Tan's contact in a harmonic trap. In order to accurately describe the regime of lower interactions, a considerable number of terms would be needed in the strong-coupling expansion of the equation of state. The use of the conjecture (III.57, III.58) is thus a valuable alternative with respect to solving the Bethe Ansatz integral equations, the weak-coupling expansion being applicable only for very weak interactions, α_0 ≲ 0.1.
III.7 Summary of this chapter/résumé du chapitre
This chapter was devoted to the ground-state energy and static correlations of the Lieb-Liniger model. It began with a relatively detailed account of Lieb and Liniger's beautiful derivation of the integral equations that encode all the ground-state properties of their model in the thermodynamic limit. This procedure, based on Bethe Ansatz, does not rely on any approximation whatsoever. The Lieb-Liniger model is thus 'exactly solved', in the sense that its ground-state energy is expressed in closed form as the solution of relatively few equations, but this is not the end of the story, as this solution is not explicit at that stage.
Figure III.12 – Scaled Tan's contact for a 1D Bose gas (in units of N^{5/2}/a_{ho}^3) as a function of the dimensionless interaction strength α_0/2 = a_{ho}/(|a_{1D}|\sqrt{N}). Results from the strong-coupling expansion: Tonks-Girardeau (horizontal long-dashed line, black), 1st-order correction (long-dashed, cyan), 2nd-order correction (short-dashed, purple), 3rd-order correction (dotted, light blue), 4th-order correction (dot-dashed, dark blue). Results at arbitrary interactions: conjecture (blue dots), exact equation of state (data from Ref. [Olshanii et al.], continuous, blue). The weak-coupling expansion is also shown (double-dashed, green).
To make quantitative predictions, the integral equation that governs the exact ground-state energy can be solved numerically, but this is not in the spirit of integrable models, which are supposed to be exactly solvable by fully-analytical techniques. Sophisticated mathematical methods give access to weak- and strong-coupling expansions of the ground-state energy, and are valid at arbitrary order. I have improved on a powerful method based on orthogonal polynomials, designed to study the strongly-interacting regime. I have studied it in detail to bring to light its advantages and drawbacks, and proposed an alternative method, explained in appendix, that is even more powerful.
The Lieb-Liniger model is thus exactly solved in a broader sense, that does not exhaust the problem however. Indeed, evaluating the numerical coefficients of these expansions is quite tedious, past the very first orders. My contributions to the problem are twofold. I have obtained the exact analytical coefficients of the strong-coupling expansion up to order twenty, while former studies stopped at order eight. In the weak-coupling regime, from numerical data available in the literature, I guessed the exact value of the third-order coefficient of the weak-coupling expansion. I also refined Tracy and Widom's conjecture on the structure of the weak-coupling expansion.
As far as the ground-state energy is concerned, the next and last step towards the exact analytical solution would be to identify the generating function of either of these expansions, and sum the series explicitly. I took the first step by identifying patterns in the strong-coupling expansion, which enabled me to conjecture a partially-resummed form, whose validity range and accuracy are considerably enhanced compared to the bare expansions. I even identified one type of coefficients involved. Most of my results are conjectural, which should not be surprising as the mathematical field of analytic number theory is full of conjectures. This solution, in turn, allowed me to obtain the local correlation functions of the Lieb-Liniger model, which give information on the degree of fermionization of the system and on its stability. Generalizing my previous conjectures on the energy to higher moments of the density of pseudo-momenta, I improved on the analytical evaluations of these quantities.
Then, I turned to non-local correlation functions. While long-range correlations are amenable to the Tomonaga-Luttinger liquid framework, their short-range expansion can be obtained systematically by Bethe Ansatz techniques. Focusing on the one-body correlation function, I constructed higher Hamiltonians and conserved quantities up to order four. I introduced the notion of connection to denote equations that relate local correlation functions, coefficients of their short-range series expansion, and moments of the density of pseudo-momenta. I derived most of them up to order four, simplifying some of the existing derivations, and identified them with most of the celebrated results in the literature, now unified in a single formalism. As a new result, I evaluated the fourth-order coefficient of the one-body correlation function semi-analytically, and found the interaction strength at which it changes sign with extraordinary precision.
The Fourier transform of the one-body correlation function is known as the momentum distribution, and is also amenable to perturbative expansions. The coefficient of its high-momentum universal p -4 tail is known as Tan's contact. I used this observable to illustrate an extension of the Bethe Ansatz formalism to an harmonically trapped gas, in a non-integrable regime. This extension is realized by combination with the local-density approximation. I have developed a procedure that yields the expansion of Tan's contact for the trapped Bose gas to arbitrary order in the strongly-interacting regime, and used it to evaluate the corrections to the Tonks-Girardeau regime up to fourth order. Ce chapitre était exclusivement consacré à l'énergie de l'état fondamental du modèle de Lieb et Liniger, ainsi qu'à ses fonctions de corrélation statiques à l'équilibre thermodynamique. J'y ai détaillé dans un premier temps les principales étapes du raisonnement de Lieb et Liniger, fondé sur l'Ansatz de Bethe, pour ramener la solution exacte de leur modèle à celle d'un ensemble restreint d'équations intégrales. Le modèle est dès lors consiéré comme résolu, mais dans une acception restreinte car l'énergie de l'état fondamental n'est pas exprimée de façon explicite à ce stade.
Les équations qui codent cette dernière peuvent être résolues assez facilement par intégration numérique, dès lors qu'on fixe une valeur de l'intensité des interactions. Toutefois, ce n'est pas vraiment l'esprit de la physique des modèles intégrables, dont on vante souvent le mérite qu'ils ont de pouvoir être résolus analytiquement et de façon exacte. Des méthodes mathématiques relativement sophistiquées permettent de construire le développement limité et asymptotique de l'énergie de l'état fondamental en fonction de l'intensité des interactions. Je me suis concentré sur la méthode développée par Zoran Ristivojevic, l'ai étudiée en détail pour en dévoiler les qualités et défauts, puis me suis attelé à la tâche d'y apporter des améliorations et en proposer une alternative plus efficace.
Le modèle de Lieb et Liniger est dès lors résolu dans un sens plus vaste encore, qui ne suffit néanmoins pas pour clore le problème. En effet, il s'avère difficile, en pratique, de calculer les coefficients des différents développements en série analytiquement au-delà des premiers ordres. J'ai poussé le développement à l'ordre vingt dans le régime des fortes interactions, soit douze ordres plus loin que ce qui avait été fait jusque là, et j'ai identifié le coefficient exact du troisième ordre dans le développement à faible interaction en m'appuyant sur des données numériques existantes. L'ultime étape pour obtenir l'énergie exacte de l'état fondamental serait d'identifier la fonction génératrice de l'un ou l'autre des développements en série, et de sommer cette dernière. J'ai entamé cette ascension finale en identifiant certains schémas dans le développement à forte interaction, qui m'ont permis d'effectuer une resommation partielle de la série et, de ce fait, d'accroître considérablement le domaine de validité de l'approximation par troncation à un ordre donné. Ces résultats sont pour la plupart conjecturaux, ce qui n'est pas particulièrement étonnant au vu de leur appartenance à la théorie analytique des nombres.
Ces solutions approchées extrêmement précises m'ont permis d'obtenir avec une précision équivalente les corrélations locales du deuxième et troisième ordre, qui renseignent sur le degré de fermionisation et la stabilité du gaz, après avoir proposé quelques conjectures supplémentaires concernant les moments d'ordre supérieur de la densité de pseudoimpulsions.
J'ai ensuite tourné mon attention vers les corrélations non-locales. Ces dernières sont bien décrites à grande distance par la théorie des liquides de Tomonaga-Luttinger, tandis que leurs corrélations à courte distance sont accessibles, une fois encore, par Ansatz de Bethe. La construction des Hamiltoniens d'ordre supérieur et des quantités conservées associées m'a mis sur la voie du concept de connexion, que j'ai défini comme étant une équation liant une fonction de corrélation locale, un coefficient du développement en série à courte distance d'une fonction de corrélation non-locale (la fonction de corrélation à un corps dans les cas que j'ai traités), et divers moments de la distribution des quasiimpulsions. J'ai obtenu explicitement la majeure partie des connexions d'ordre inférieur ou égal à quatre, et ai reconnu en ces dernières bon nombre de résultats considérés comme importants dans la littérature, qui n'avaient pas encore été envisagés sous l'angle d'un formalisme unique. J'ai notamment pu évaluer l'intensité des interactions pour laquelle le quatrième coefficient de la fonction de corrélation à un corps s'annule et change de signe, avec une précision inégalée.
La transformée de Fourier de la fonction de corrélation à un corps est plus connue sous le nom de distribution des impulsions. Un développement asymptotique de cette dernière donne accès au coefficient de sa décroissance en p -4 à haute impulsion, connu sous le nom de contact de Tan. Je me suis appuyé sur l'exemple de cette observable pour illustrer une extension de l'Ansatz de Bethe au gaz de Lieb-Liniger placé dans un piège harmonique, dans un régime où il n'est pas intégrable, fondée sur une combinaison avec l'approximation de la densité locale. J'ai développé une procédure qui donne le contact de Tan à un ordre arbitraire, et l'ai utilisée pour le calculer jusqu'à l'ordre quatre.
III.8 Outlook of this chapter
This chapter shows that, even at a basic level, the Lieb-Liniger model has not revealed all its secrets yet.
The exact, analytical expression of the ground-state energy is, more than ever before, a thriving open problem. I am not able to quantify the effort still required to understand one of the asymptotic structures well enough to sum the series, but my feeling is that the problem is easier at strong coupling. In particular, a special role seems to be played by the quantity 1+2/γ, associated to a ratio of Fredholm determinants [De Nardis et al.]. However, the new terms identified at weak coupling are far more interesting, as they involve the zeta function, the most celebrated one in the field of analytic number theory.
Several other examples of structures involving zeta functions had already been reported on, in the statistical physics, quantum field theory and string theory literature. This function appears in the Feynman diagrams of quantum electrodynamics, the φ 4 model and correlation functions of the Heisenberg spin chain [START_REF] Boos | Quantum spin chains and Riemann zeta function with odd arguments[END_REF], but its presence in as simple a model as the Lieb-Liniger Bose gas came as a surprise, so I hope that it will attract attention.
Knowledge of the exact ground-state energy might help ingenious mathematicians to prove new theorems on the zeta function. Examples I have in mind are the irrationality of ζ(3)/π^3, which explicitly appears in the weak-coupling expansion of e(γ), and of ζ(2n+1) for n ≥ 2 (it is only known that infinitely many of them are irrational [337], with a few improvements thereof). I conjecture that multiple zeta functions are also involved in this expansion at higher orders, ζ(3)^2 to begin with.
The techniques used in the case of the Lieb-Liniger model could also be useful when applied to closely related models such as the Yang-Gaudin model, or for an extension to the super Tonks-Girardeau regime. In particular, again from numerical data of Ref. [START_REF] Prolhac | Ground state energy of the δ-Bose and Fermi gas at weak coupling from double extrapolation[END_REF], I have conjectured that the weak-coupling expansion of the ground-state energy of the attractive spin-1/2 δ-Fermi gas reads
e(γ) = \sum_{k=0}^{+∞} \tilde{b}_k \frac{γ^k}{π^{2k-2}}.    (III.137)
(III.137)
The first few exact coefficients are known [Tracy and Widom]. Numerical data may also allow one to gain insight into the weak-coupling expansion of the higher moments of the density of pseudo-momenta, i.e. into the coefficients \tilde{a}_{2k,i} in Eq. (III.76).
Another purely theoretical issue that may allow to gain insight in the model and some mathematical aspects is the equivalence between the approach followed in the main text, and the alternative point of view based on a peculiar nonrelativistic limit of the sinh-Gordon model, investigated in the references associated to Appendix B.4, that involves other integral equations. The equivalence of their formulations of the third-order local correlation functions has not been rigorously verified yet. It is not clear either whether the notion of connection can be adapted to this context.
As far as the BALDA formalism is concerned, the exact thermodynamics of a harmonically trapped gas is not explicitly known, but approximations could be improved by guesses and summations as in the homogeneous case. It could also be extended to other types of trapping, as an alternative to the techniques used in Ref. [START_REF] Brun | One-particle density matrix of trapped one-dimensional impenetrable bosons from conformal invariance[END_REF]. In particular, the term beyond Tan's tail of the momentum distribution is still widely unexplored, both in the homogeneous and trapped case. For a homogeneous gas, it should be derived from higher-order connections.
In an experimental perspective, it is also important to investigate finite temperature thermodynamics of the Lieb-Liniger model [START_REF] Wang | Universal local pair correlations of Lieb-Liniger bosons at quantum criticality[END_REF][START_REF] Lang | Dynamic structure factor and drag force in a one-dimensional Bose gas at finite temperature[END_REF][START_REF] De Rosi | Thermodynamic behavior of a one-dimensional Bose gas at low temperature[END_REF], that can be exactly obtained from the thermodynamic Bethe Ansatz approach introduced by Yang and Yang [START_REF] Yang | Thermodynamics of a One Dimensional System of Bosons with Repulsive Delta Function Interaction[END_REF]. Needless to say, analytical approximate solutions are even more difficult to obtain in this case, but the interplay of statistics in k-space and interactions should be more tangible at finite temperature. A series of theoretical works have already tackled thermal correlation functions of the Lieb-Liniger model. Analytical approximate expressions have been obtained for the non-local g 2 correlation function in various regimes, and compared to numerical simulations in [START_REF] Kheruntsyan | Pair Correlations in a Finite-Temperature 1D Bose Gas[END_REF][START_REF] Sykes | Spatial Nonlocal Pair Correlations in a Repulsive 1D Bose Gas[END_REF][START_REF] Deuar | Nonlocal pair correlations in the one-dimensional Bose gas at finite temperature[END_REF], while g 3 has been studied in [START_REF] Kormos | Expectation Values in the Lieb-Liniger Bose Gas[END_REF].
Chapter IV
Dynamical structure factor of the Lieb-Liniger model and drag force due to a potential barrier

IV.1 Introduction
In this chapter, whose original results are mostly based on Refs. [START_REF] Lang | Dynamic structure factor and drag force in a one-dimensional Bose gas at finite temperature[END_REF][START_REF] Lang | Ground-state energy and excitation spectrum of the Lieb-Liniger model : accurate analytical results and conjectures about the exact solution[END_REF], I take the next step towards the full characterization of a 1D Bose gas through its correlation functions. Going beyond static correlation functions, dynamical ones in energy-momentum space provide another possible way to understand a system, but their richer structure makes them harder to evaluate, and their theoretical study may involve fairly advanced techniques. Two observables attract peculiar attention: the Fourier transform of Green's function, a.k.a. the spectral function, and of the density-density correlations, known as the dynamical structure factor. The latter is quite sensitive to both interactions and dimensionality, providing an ideal observable to probe their joint effect.
Another strong motivation lies in the fact that equilibrium dynamical correlation functions yield valuable information about the response of a fluid to a weak external excitation. This response is the central object of linear response theory, that gives insight into slightly out of equilibrium situations. In this perspective, the dynamical structure factor governs the response of a fluid to a weak external potential locally coupled to its density. More precisely, it is linked to the drag force experienced by a single impurity, that characterizes the viscosity of its flow. I take this opportunity to dwell on the issue of superfluidity, a concept associated to the dramatic phenomenon of frictionless flow observed in quantum fluids below a critical velocity. This chapter is organized as follows: first, I recall a few experimental facts related to superfluidity and their historical interpretation, then I present Landau's criterion for superfluidity, and the drag force criterion as a generalization thereof.
Following chapter III, I still consider the Lieb-Liniger model. The Tonks-Girardeau regime is amenable to exact calculations, and at finite interaction strength I use the Tomonaga-Luttinger liquid framework, keeping in mind that its validity range is limited to low energies or small flow velocities. Refining the analysis to get closer to experimental situations, I also investigate finite temperature, as well as the effect of the barrier width on the drag force, putting into light a quasi-superfluid supersonic regime.
To finish with, as a first step towards beyond Luttinger liquid quantitative treatment, I examine the exact excitation spectrum of the Lieb-Liniger model using coordinate Bethe Ansatz techniques, and give quantitative bounds for the validity range of the Tomonaga-Luttinger liquid framework in terms of interaction strength. Dans ce chapitre, je franchis un pas de plus vers la caractérisation complète d'un gaz de Bose unidimensionnel par ses fonctions de corrélation, en considérant une facette supplémentaire de ces dernières, à savoir leur dynamique dans l'espace des énergies et impulsions. Leur structure s'avère plus complexe, et par conséquent plus difficile à obtenir que dans le cas statique. Deux observables attirent particulièrement l'attention: la transformée de Fourier de la fonction de Green, connue sous le nom de fonction spectrale, et celle des corrélations spatio-temporelles en densité, appelée facteur de structure dynamique. Ce dernier est particulièrement sensible à la fois à la dimension du système et aux interactions, ce qui en fait une observable de choix pour sonder leurs effets conjoints.
Une autre motivation, et non des moindres, vient du fait que les fonctions de corrélation dynamiques à l'équilibre renseignent sur la réponse du système à d'éventuelles perturbations de nature externe. La théorie de la réponse linéaire permet en effet d'en déduire la dynamique de situations légèrement hors équilibre. Dans cette perspective, le facteur de structure dynamique gouverne la réponse d'un fluide à une barrière de potentiel de faible amplitude couplée localement à la densité. Plus précisément, la donnée du facteur de structure dynamique et de la forme précise de la barrière de potentiel, qui modélise un faisceau laser ou une impureté, permet d'obtenir la force de traînée, qui caractérise la viscosité de l'écoulement. J'en profite pour m'attarder quelque peu sur la notion de superfluidité, traditionnellement associée à un écoulement parfait en-dessous d'une vitesse critique.
Le chapitre s'organise de la manière suivante: dans un premier temps, je rappelle certains résultats expérimentaux historiques et leur interprétation théorique. Un pas décisif a notamment été franchi par Landau lorsqu'il a énoncé son critère de superfluidité, dans la filiation duquel s'inscrit le critère de la force de traînée, sur lequel se fonde mon étude.
Dans la veine du Chapitre III, je considère encore une fois un gaz de Bose unidimensionnel, décrit par le modèle de Lieb et Liniger. Moyennant l'hypothèse de faible barrière d'épaisseur nulle, je traite le régime de Tonks-Girardeau sans autre approximation. Dans le cas d'une interaction finie, je m'appuie sur le modèle de Tomonaga-Luttinger, tout en gardant en mémoire que son domaine de validité est limité à une zone restreinte de basse énergie ou de faible vitesse.
Afin de gagner en réalisme, je m'intéresse aussi aux effets thermiques et à une barrière de potentiel d'épaisseur non-nulle, ce qui me permet de mettre en évidence un régime supersonique quasi-superfluide, caractérisé par l'évanescence de la force de traînée. L'étape suivante pour raffiner l'analyse serait d'envisager un modèle au-delà du liquide de Luttinger. Afin de faire des prévisions quantitatives dans ce cadre, dans un premier temps il s'avère nécessaire d'évaluer le spectre d'excitation du modèle de Lieb et Liniger, que j'obtiens par Ansatz de Bethe. La comparaison de cette solution exacte et des prédictions du modèle de Tomonaga-Luttinger dans sa version standard met en lumière ses limites, et me permet de donner pour la première fois une borne supérieure quantitative à son domaine de validité.
IV.2 Conceptual problems raised by superfluidity, lack of universal criterion
Attaining a full understanding of the microscopic mechanisms behind superfluidity is among the major challenges of modern physics. An historical perspective shows that experiments constantly challenge theoretical understanding [START_REF] Leggett | Superfluidity[END_REF][START_REF] Balibar | The Discovery of Superfluidity[END_REF][START_REF] Griffin | New light on the intriguing history of superfluidity in liquid 4 He[END_REF][START_REF] Albert | Superfluidité et localisation quantique dans les condensats de Bose-Einstein unidimensionnels[END_REF] and that, although four Nobel Prizes have already been awarded for seminal contributions to this complicated topic (to Landau in 1962, Kapitza in 1978, Lee, Osheroff and Richardson in 1996 and to Abrikosov, Ginzburg and Leggett in 2003), interest in the latter shows no sign of exhaustion whatsoever.
IV.2.1 Experimental facts, properties of superfluids
Superfluids are one of the most appealing manifestations of quantum physics at the macroscopic scale. They seem to defy gravity and surface tension by their ability to flow up and out of a container, or through narrow slits and nanopores at relatively high velocity. Another famous property is the fountain effect [START_REF] Allen | New Phenomena Connected with Heat Flow in Helium II[END_REF]: when heat is applied to a superfluid on one side of a porous plug, pressure increases proportionally to the heat current so that the level of the free surface goes up, and a liquid jet can even occur if pressure is high enough. If the liquid were described by classical hydrodynamics, the vapor pressure would be higher on the warm side so that, in order to maintain hydrostatic equilibrium, the liquid level would have to go down. Superfluidity is also characterized by a sharp drop of viscosity and thermal resistivity at the transition temperature, as shown by the early historical experiments involving liquid 4 He. In this system, the superfluid transition is observed at a temperature T_λ ≃ 2.2 K, the lambda point, separating its phases called He I (above) and He II (below) [START_REF] Keesom | New measurements on the specific heat of liquid helium[END_REF][START_REF] Kapitza | Viscosity of Liquid Helium below the λ-Point[END_REF][START_REF] Allen | Flow of Liquid Helium II[END_REF].
These facts are reminiscent of the sudden drop of resistivity previously witnessed in superconductors, hinting at an analogy, or even a deep connection, between the two phenomena. Superconductivity is traditionally explained by the formation of Cooper pairs of electrons in a metal, as prescribed by the Bardeen-Cooper-Schrieffer (BCS) theory [START_REF] Bardeen | Microscopic Theory of Superconductivity[END_REF][START_REF] Bardeen | Theory of Superconductivity[END_REF]. For this picture to emerge in the context of superfluids, it took the unexpected observation of superfluidity in 3 He [START_REF] Osheroff | Evidence for a New Phase of Solid He 3[END_REF], at temperatures lower than 2.6 mK. This historical step bridged the superfluidity of helium and the phenomenon of superconductivity, as the underlying mechanism was identified as the formation of pairs of fermionic 3 He atoms [START_REF] Leggett | A theoretical description of the new phases of 3 He[END_REF].
The superfluidity of 4 He, however, seemed to be associated with the BEC of these bosonic atoms [START_REF] Tisza | The Theory of Liquid Helium[END_REF]. London was the first to relate superfluidity to Bose-Einstein condensation, through the heuristic observation that the experimental value of the superfluid critical temperature measured in 4 He is close to the theoretical condensation temperature of an ideal Bose gas at the same density (an intuition sometimes referred to as the 'London conjecture') [START_REF] London | The λ-Phenomenon of Liquid Helium and the Bose-Einstein Degeneracy[END_REF]. This picture has to be nuanced as helium is a strongly-interacting liquid, the ideal Bose gas is actually not superfluid, and the superfluid fraction n s /n is equal to one at T = 0, while only ten percent of the atoms are Bose condensed in 4 He. This notion of superfluid fraction (in analogy to the condensate fraction in BEC) stems from the Tisza-Landau two-fluid model [START_REF] Tisza | Transport Phenomena in Helium II[END_REF][START_REF] Landau | The theory of superfluidity of helium II[END_REF], which pictures quantum fluids as containing two interpenetrating components, a normal one (associated with the index n) and a fully superfluid one (indexed by s), such that the total density reads n = n n + n s . The normal part behaves like a Newtonian, classical fluid, while the superfluid component does not carry entropy and has zero viscosity. In particular, the two-fluid hydrodynamic second sound velocity is associated with the superfluid density. This collective mode is an entropy wave, with constant pressure, where superfluid and normal densities oscillate with opposite phases.
For decades, the two isotopes of helium have been the only known examples of quantum fluids, as helium is the only element that is naturally liquid at the very low temperatures where quantum effects arise. Much later, starting in the very last years of the 20th century, superfluidity has also been observed in ultracold gases, at temperatures of the order of a few dozen nK. Through their high degree of tunability, such systems provide a versatile tool to study superfluidity in simplified situations. A paradigmatic example is the weakly-interacting Bose gas [START_REF] Raman | Evidence for a Critical Velocity in a Bose-Einstein Condensed Gas[END_REF][START_REF] Onofrio | Observation of Superfluid Flow in a Bose-Einstein Condensed Gas[END_REF], which is far less complex than helium as its interactions have a simpler structure, are much weaker, and its excitation spectrum features phonons but no complicated roton excitation. Superfluidity has then been studied along the BEC-BCS crossover [START_REF] Miller | Critical Velocity for Superfluid Flow across the BEC-BCS Crossover[END_REF][START_REF] Weimer | Critical Velocity in the BEC-BCS Crossover[END_REF], and in a Bose-Fermi counterflow, where a Bose-Einstein condensate plays the role of an impurity in a degenerate Fermi fluid [START_REF] Delehaye | Critical Velocity and Dissipation of an Ultracold Bose-Fermi Counterflow[END_REF]. Ultracold atoms also allow one to study superfluidity on a lattice [START_REF] Greiner | Quantum phase transition from a superfluid to a Mott insulator in a gas of ultracold atoms[END_REF], where it is opposed to a Mott insulating phase. In the superfluid phase, the atoms can hop from one site to another and each of them spreads out over the whole lattice with a long-range phase coherence. More recently, polaritons in microcavities have provided a new kind of system to explore the very rich physics of non-equilibrium quantum fluids [START_REF] Amo | Superfluidity of polaritons in semiconductor microcavities[END_REF][START_REF] Boulier | Injection of Orbital Angular Momentum and Storage of Quantized Vortices in Polariton Superfluids[END_REF][START_REF] Gallemí | Interaction-enhanced flow of a polariton persistent current in a ring[END_REF], up to room temperature [START_REF] Lerario | Roomtemperature superfluidity in a polariton condensate[END_REF].
Even in view of this collection of experimental results, the theoretical characterization of superfluidity remains quite challenging. Frictionless flow is the historical criterion; the existence of quantized vortices, quantized circulation, persistent currents (i.e. metastability of superflow states) or the absence of a drag force are also commonly invoked, but to what extent these manifestations are equivalent or complementary to each other is far from being settled. This puzzling situation is even more problematic in cases that have not been considered at the beginning. For instance, superfluidity could exist in 1D, where BEC does not, and out-of-equilibrium gases complicate this picture, leading to scenarios where a few possible definitions are satisfied, and others not [365]. In our current understanding, superfluidity is rather an accumulation of phenomena, preventing, so far, a straightforward and universal description from emerging.
Equilibrium superfluidity can be probed experimentally through the Hess-Fairbank effect: when the walls of a toroidal container or a bucket are set into rotation adiabatically with small tangential velocity, a superfluid inside stays at rest while a normal fluid would follow the container. This leads to a nonclassical rotational inertia, that can be used to determine the superfluid fraction, providing an indirect validation of the two-fluid description [START_REF] Hess | Measurements of Angular Momentum in Superfluid Helium[END_REF]. A superfluid is also described by a macroscopic wave function ψ( r) [START_REF] London | The λ-Phenomenon of Liquid Helium and the Bose-Einstein Degeneracy[END_REF], as are Bose-Einstein condensates and superconductors, which implies phase coherence. The superfluid wave function can be expressed as ψ( r) = |ψ( r)|e iφ( r) in modulus-phase representation, and the superfluid velocity v s is characterized by the gradient of the phase φ through the relation
\vec{v}_s = \frac{\hbar}{m}\,\vec{\nabla}\phi(\vec{r}), (IV.1)
where ∇ is the nabla operator. A consequence of Eq. (IV.1) is that the flow is always irrotational (curl( v s ) = 0), a characteristic shared by Bose-Einstein condensates. The phase φ is single-valued, leading to the existence of quantized vortices (as long as the fluid is not confined to 1D), as first predicted in helium [START_REF] Onsager | Statistical hydrodynamics[END_REF] and experimentally observed in the same system, long before the ultracold gases experiments already mentioned in chapter II. Among all possible criteria for superfluidity, I will delve deeper into the so-called drag force criterion, which is one of the most recent. Due to its historical filiation, I will first introduce the most celebrated criterion for superfluidity: Landau's criterion.
IV.2.2 Landau's criterion for superfluidity
Why should superfluids flow without friction, while normal fluids experience viscosity?
The first relevant answer to this crucial question was provided by Landau, who proposed a mechanism to explain why dissipation occurs in a normal fluid, and under what condition it is prevented [START_REF] Landau | The theory of superfluidity of helium II[END_REF]. His phenomenological argument, of which I give a simplified account, is based on the following picture: in a narrow tube, fluid particles experience random scattering from the walls, that are rough at the atomic level. This mechanism transfers momentum from the fluid to the walls, leading to friction in a normal fluid.
Formally, in the reference frame moving with the fluid, let us denote by E 0 the energy of the fluid and by P 0 its momentum. If it starts moving with the walls, its motion must begin through a progressive excitation of internal motions, therefore by the appearance of elementary excitations. If p denotes the momentum of an elementary excitation and ε its energy, then ε(p) is the dispersion relation of the fluid. Thus, E 0 = ε(p) and P 0 = p.
Then, going back to the rest frame of the capillary, where the fluid flows with velocity v, the energy E of the fluid in this frame of reference is obtained by means of a Galilean transformation and reads
E = \epsilon + \vec{p}\cdot\vec{v} + \frac{Mv^2}{2}, (IV.2)
where M = N m is the total mass of the fluid and Mv²/2 its kinetic energy. The energy variation caused by dissipation through an elementary excitation is ε(p) + \vec{p}\cdot\vec{v}, and is necessarily negative. It is minimal when \vec{v} and \vec{p} are anti-parallel, imposing ε - pv ≤ 0, and as a consequence the flow should be dissipationless at velocities lower than
v_c = \min_p\left[\frac{\epsilon(p)}{p}\right]. (IV.3)
Equation (IV.3) links the microscopic observable ε to a macroscopic one, the critical velocity v_c. A direct consequence of this equation is that systems such that min_p[ε(p)/p] = 0 can not be superfluid. In particular, the minimum of ε(p)/p is solution to ∂_p(ε/p) = 0, hence
\left.\frac{\partial\epsilon}{\partial p}\right|_{v=v_c} = \frac{\epsilon}{p}. (IV.4)
Equation (IV.4) means that, at v_c, the group and phase velocities of the fluid coincide. Graphically, the tangent to the excitation spectrum coincides with the line between this point and the origin. In particular, for a system to be superfluid, its dispersion relation should not be tangent to the p-axis at the origin. In an ideal Bose gas, where ε(p) = p²/(2m), the rough walls can always impart momentum to the fluid, leading to viscous friction. More generally, gapless systems with zero slope at the origin in energy-momentum space are not superfluid. On the contrary, helium is superfluid according to Eq. (IV.3) in view of its dispersion relation, and so is the weakly-interacting Bose gas, as shown by Bogoliubov who derived the approximate spectrum [START_REF] Bogoliubov | On the theory of superfluidity[END_REF]
\epsilon_{\rm Bog}(p) = \sqrt{p^2c^2 + \left(\frac{p^2}{2m}\right)^2}. (IV.5)
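As a side illustration (not part of the original derivation), Landau's criterion (IV.3) can be checked numerically for the two dispersion relations just mentioned. The short Python sketch below minimizes ε(p)/p for the Bogoliubov spectrum (IV.5) and for the ideal-gas spectrum p²/2m; the units m = 1 and c = 1 are arbitrary choices made for the example only.

import numpy as np
from scipy.optimize import minimize_scalar

m, c = 1.0, 1.0  # illustrative units

def eps_bogoliubov(p):
    # Bogoliubov dispersion, Eq. (IV.5)
    return np.sqrt(p**2 * c**2 + (p**2 / (2.0 * m))**2)

def eps_ideal(p):
    # ideal Bose gas dispersion
    return p**2 / (2.0 * m)

def landau_vc(eps, p_max=50.0):
    # Landau critical velocity, Eq. (IV.3): minimum of eps(p)/p over p > 0
    res = minimize_scalar(lambda p: eps(p) / p, bounds=(1e-6, p_max), method="bounded")
    return res.fun

print("v_c (Bogoliubov):", landau_vc(eps_bogoliubov))  # tends to c, so the gas is superfluid
print("v_c (ideal gas): ", landau_vc(eps_ideal))       # tends to 0, no superfluidity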
Predicting the existence of a critical velocity is one of Landau's major contributions to the theory of superfluids. It has been observed in Bose gases [START_REF] Raman | Evidence for a Critical Velocity in a Bose-Einstein Condensed Gas[END_REF][START_REF] Onofrio | Observation of Superfluid Flow in a Bose-Einstein Condensed Gas[END_REF], where v_c is of the order of a few mm/s, showing that Landau's criterion is qualitatively correct. It has then been studied in 2D, both experimentally [START_REF] Desbuquois | Superfluid behaviour of a two-dimensional Bose gas[END_REF] and theoretically [START_REF] Singh | Superfluidity and relaxation dynamics of a laser-stirred two-dimensional Bose gas[END_REF], and along the BEC-BCS transition [START_REF] Miller | Critical Velocity for Superfluid Flow across the BEC-BCS Crossover[END_REF]. In the BCS regime, pair-breaking excitations are expected to limit v_c, while for a Bose-Fermi counterflow, Landau's criterion can be adapted and becomes [START_REF] Castin | La vitesse critique de Landau d'une particule dans un superfluide de fermions[END_REF]
v_c^{\rm BF} = \min_p\left[\frac{\epsilon_F(p) + \epsilon_B(p)}{p}\right]. (IV.6)
Upon closer inspection, the critical velocity predicted by Landau's criterion is overestimated compared to most experiments, sometimes by one order of magnitude. A first explanation is that nonlinear processes are neglected in Landau's approach, such as vortices in 2D and vortex rings in 3D [START_REF] Feynman | Chapter II Application of Quantum Mechanics to Liquid Helium[END_REF][START_REF] Stiessberger | Critical velocity of superfluid flow past large obstacles in Bose-Einstein condensates[END_REF]. In 1D, it would certainly yield too high a value as well, as it neglects solitons [START_REF] Hakim | Nonlinear Schrödinger flow past an obstacle in one dimension[END_REF][START_REF] Pavloff | Breakdown of superfluidity of an atom laser past an obstacle[END_REF]. It is also important to keep in mind that Landau's argument is purely kinematical and classical in its historical formulation. There is no guarantee that one can apply it to understand dynamical and quantum aspects of superfluidity. Another criticism is that Galilean invariance is a crucial assumption, so the criterion does not apply to inhomogeneous systems [START_REF] Fedichev | Critical velocity in cylindrical Bose-Einstein condensates[END_REF].
More generally, correlations, fluctuations and interactions should be addressed correctly to quantitatively understand the mechanisms behind superfluidity. Coming back to Eq. (IV.3), it is possible that even when the line of slope v intersects the spectrum, the transition probability to this state is strongly suppressed due to interactions or to the specific kind of external perturbing potential. These issues can be tackled using a more involved formalism, that I will use hereafter.
IV.3 Drag force as a generalized Landau criterion
In a seminal article [START_REF] Astrakharchik | Motion of a heavy impurity through a Bose-Einstein condensate[END_REF], Astrakharchik and Pitaevskii developed a quantitative approach to the problem of superfluidity of a generic fluid, whose basic idea relies on an analogy with classical physics, where an object moving in a fluid experiences viscosity. At the classical level, viscous friction is described phenomenologically by a force opposed to the direction of motion that depends on its velocity. To first approximation, this force scales linearly with velocity for a laminar flow, and quadratically in the turbulent regime. Its prefactor is usually considered as a phenomenological quantity that depends on the viscosity of the fluid and on the shape of the object. This drag force is not a fundamental ingredient of the theory as directly defined in its principles, but arises due to collective, extremely complicated phenomena. The quantum framework, however, is appropriate to describe the motion of a single impurity, immersed in a fluid and coupled to its density at the atomic scale. Prior to any calculation, based on classical fluid dynamics and Landau's criterion, one can expect the following behavior: if an impurity moves slowly enough inside a superfluid, then its motion does not lead to friction, and as a consequence, in its frame of reference the velocity of the flow remains constant.
In a setup with periodic boundary conditions (to avoid revivals due to reflections off the walls, which complicate the analysis), a persistent flow should be observed, which is one manifestation of superfluidity. According to Newton's laws of motion, if perchance they hold in this context, the drag force experienced by the impurity must be strictly zero. Above a critical velocity, however, superfluidity can not be sustained anymore. Then, it is expected that the impurity experiences a drag force from the fluid, and slows down.
Defining this drag force at the quantum statistical level was the first challenge to change this intuition into a quantitative theory, since the very notion of force is usually absent from the formalisms of quantum physics. A first possible definition, already proposed in [START_REF] Pavloff | Breakdown of superfluidity of an atom laser past an obstacle[END_REF], is
\vec{F} = -\int d^dr\,|\psi(\vec{r})|^2\,\vec{\nabla}U, (IV.7)
where ψ is the macroscopic wavefunction that describes a Bose-condensed fluid, and U the potential that models the perturbation due to the impurity. Equation (IV.7) can be seen as the semi-classical analog of the classical definition of a force in terms of the gradient of a potential. Actually, the drag force corresponds to the opposite of the force one should exert on the impurity to keep its velocity constant.
The impurity adds a perturbation term to the Hamiltonian, written as
H_{\rm pert} = \int d^dr\,|\psi(\vec{r})|^2\,U(\vec{r} - \vec{v}t), (IV.8)
where v is the constant velocity at which the impurity moves inside the fluid. In the case of a point-like impurity, the potential reads
U ( r) = g i δ( r), (IV.9)
where g i is the impurity-fluid interaction strength. This picture of a point-like polaron is quite realistic if one has in mind an experiment involving neutrons as impurities, for instance. In [START_REF] Astrakharchik | Motion of a heavy impurity through a Bose-Einstein condensate[END_REF], it is also assumed that the impurity is heavy, so that it does not add a significant kinetic energy term to the fluid-impurity Hamiltonian.
Still in [START_REF] Astrakharchik | Motion of a heavy impurity through a Bose-Einstein condensate[END_REF], this drag force formalism has been applied to the weakly-interacting Bose gas, as described by the Gross-Pitaevskii equation, and the norm of the drag force has been evaluated in dimension one (where the result was already known [START_REF] Pavloff | Breakdown of superfluidity of an atom laser past an obstacle[END_REF]), two and three, using the Born approximation that supposes sufficiently low values of g i . For compactness, I merge the results into a single expression, namely The dimensionless drag force profile is strongly dimension-dependent, except at a special point where all dimensionless drag forces are equal, irrespective of the dimension, when the Mach number is equal to the golden ratio, i.e. v/c d = (1+ √ 5)/2. More importantly, at the mean field level a subsonic flow is superfluid. In this sense, the drag force formalism can be seen as a quantitative extension of Landau's criterion [START_REF] Astrakharchik | Motion of a heavy impurity through a Bose-Einstein condensate[END_REF]. Note that, for an ideal Bose gas, the sound velocity vanishes and thus it is not superfluid, confirming that interactions are a crucial ingredient of superfluidity.
F_d(v) = \frac{s_{d-1}}{(2\pi)^{d-1}}\,\frac{m^d n_d g_i^2}{\hbar^{d+1}}\left(\frac{v^2 - c_d^2}{v}\right)^{d-1}\Theta(v - c_d). (IV.10)
The dimensionless drag force profile is strongly dimension-dependent, except at a special point where all dimensionless drag forces are equal, irrespective of the dimension, when the Mach number is equal to the golden ratio, i.e. v/c_d = (1 + √5)/2. More importantly, at the mean-field level a subsonic flow is superfluid. In this sense, the drag force formalism can be seen as a quantitative extension of Landau's criterion [START_REF] Astrakharchik | Motion of a heavy impurity through a Bose-Einstein condensate[END_REF]. Note that, for an ideal Bose gas, the sound velocity vanishes and thus it is not superfluid, confirming that interactions are a crucial ingredient of superfluidity.
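The golden-ratio coincidence can be visualized with a few lines of Python. The sketch below assumes the dimensionless drag force profile f_d(u) = [(u² - 1)/u]^{d-1} for Mach number u = v/c_d, which is the reduced form of Eq. (IV.10) as reconstructed above; it is an illustration of that statement, not an independent derivation.

import numpy as np

def f_dimensionless(u, d):
    # dimensionless drag force vs Mach number u = v/c_d in dimension d,
    # assuming the reduced form ((u**2 - 1)/u)**(d - 1) of Eq. (IV.10); zero below u = 1
    u = np.asarray(u, dtype=float)
    return np.where(u > 1.0, ((u**2 - 1.0) / u)**(d - 1), 0.0)

golden = (1.0 + np.sqrt(5.0)) / 2.0
for d in (1, 2, 3):
    print(f"d = {d}: f_d(golden ratio) = {float(f_dimensionless(golden, d)):.6f}")
# the three values coincide, since u**2 - 1 = u precisely when u = (1 + sqrt(5))/2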
We are now equipped with a qualitative criterion for superfluidity [START_REF] Pavloff | Breakdown of superfluidity of an atom laser past an obstacle[END_REF][START_REF] Astrakharchik | Motion of a heavy impurity through a Bose-Einstein condensate[END_REF]: if there exists a non-zero flow velocity such that the drag force is strictly zero, then the superfluid fraction is equal to one. As an addendum to this criterion [START_REF] Astrakharchik | Motion of a heavy impurity through a Bose-Einstein condensate[END_REF][START_REF] Lang | Dynamic structure factor and drag force in a one-dimensional Bose gas at finite temperature[END_REF]: the higher the drag force, the farther the flow is from being superfluid.
An advantage of the drag force criterion is that it accommodates perfectly well the historical definition of superfluidity as a flow without viscosity. A major drawback, already noticed by the authors of Ref. [START_REF] Astrakharchik | Motion of a heavy impurity through a Bose-Einstein condensate[END_REF] themselves, is the lack of an obvious way to define a superfluid fraction from the drag force that would coincide with the one predicted by the two-fluid approach.
There is also a second definition of the drag force [START_REF] Astrakharchik | Motion of a heavy impurity through a Bose-Einstein condensate[END_REF], that has been the most popular in the subsequent literature:
Ė = -F • v, (IV.11)
where Ė is the statistical average energy dissipation per unit time. This definition is in analogy with the classical mechanics formula that links energy dissipation per unit time, i.e. power, to the force responsible for energy transfer to the environment.
Within this approach, the energy variation per unit time due to the impurity is calculated first, and the drag force is deduced from the latter. Definition (IV.11) is quite convenient for experimentalists, in the sense that energy dissipation is related to the heating rate, which is a measurable quantity [START_REF] Desbuquois | Superfluid behaviour of a two-dimensional Bose gas[END_REF][START_REF] Weimer | Critical Velocity in the BEC-BCS Crossover[END_REF]. It is quite complicated to probe such tiny drag forces (of the order of a few nN) in a direct way, although a recent proposal to study superfluidity of light based on an optomechanical, cantilever beam device, seems quite promising [START_REF] Larré | Optomechanical signature of a frictionless flow of superfluid light[END_REF].
From the theoretical point of view, Eq. (IV.11) does not provide a simple means to evaluate the drag force analytically in full generality. To go further, a useful approximation was developed in [START_REF] Astrakharchik | Motion of a heavy impurity through a Bose-Einstein condensate[END_REF]. If a weak potential barrier or impurity is stirred along the fluid, putting it slightly out of equilibrium, then linear response theory holds, and the average energy dissipation per unit time is linked to the dynamical structure factor (see Appendix C.1 for more details on this observable)
S_d(\vec{q},\omega) = \int_{-\infty}^{+\infty}\!dt\int d^dr\; e^{i(\omega t - \vec{q}\cdot\vec{r})}\,\langle\delta n(\vec{r},t)\,\delta n(\vec{0},0)\rangle, (IV.12)
by the relation, valid in arbitrary dimension [START_REF] Lang | Dynamic structure factor and drag force in a one-dimensional Bose gas at finite temperature[END_REF]:
\dot{E} = -\frac{1}{2\pi V_d}\int_0^{+\infty}\!d\omega\int\frac{d^dq}{(2\pi)^d}\;S_d(\vec{q},\omega)\,|U_d(\vec{q},\omega)|^2\,\omega, (IV.13)
where
U_d(\vec{q},\omega) = \int_{-\infty}^{+\infty}\!dt\int d^dr\; e^{i(\omega t - \vec{q}\cdot\vec{r})}\,U_d(\vec{r},t) (IV.14)
is the Fourier transform of the potential barrier U_d(\vec{r},t), which defines the perturbation part of the Hamiltonian as
H_{\rm pert} = \int d^dr\; U_d(\vec{r},t)\,n_d(\vec{r}). (IV.15)
In 1D, equation (IV.13) was obtained in [START_REF] Astrakharchik | Motion of a heavy impurity through a Bose-Einstein condensate[END_REF], and recovered from Fermi's golden rule in [START_REF] Cherny | Theory of superfluidity and drag force in the one-dimensional Bose gas[END_REF]. The assumption of a weak fluid-barrier coupling is not well controlled in the derivation, but I will assume that it is fulfilled all the same. Note that two quantities are involved in the integrand of the right-hand side of Eq. (IV.13): the Fourier transform of the potential barrier, and the dynamical structure factor of the gas.
This quantity is worth studying for itself and has been measured by Bragg spectroscopy since the early days of ultracold gases experiments [START_REF] Stenger | Bragg Spectroscopy of a Bose-Einstein Condensate[END_REF][START_REF] Stamper-Kurn | Excitation of Phonons in a Bose-Einstein Condensate by Light Scattering[END_REF]. For this reason, I will devote the next paragraph to the dynamical structure factor, starting as usual by considering the Tonks-Girardeau regime to gain some insight, before turning to the more complicated issue of finite interaction strengths. From now on in this chapter, I will focus on the 1D Bose gas as described by the Lieb-Liniger model, postponing the issue of higher dimensions to chapter V.
IV.4 Dynamical structure factor and drag force for a Tonks-Girardeau gas
IV.4.1 Dynamical structure factor
As a first step towards the drag force, I evaluate the dynamical structure factor of a one-dimensional Bose gas,
S(q,\omega) = \int_{-\infty}^{+\infty}\!\int_{-\infty}^{+\infty} dx\,dt\; e^{i(\omega t - qx)}\,\langle\delta n(x,t)\,\delta n(0,0)\rangle, (IV.16)
in the Tonks-Girardeau regime where the Lieb-Liniger model is equivalent to a gas of noninteracting fermions for this observable, due to the Bose-Fermi mapping. Calculating fermionic density-density correlations using Wick's theorem yields after Fourier transform, as detailed in Appendix C.2, the well-known result [START_REF] Vignolo | Light scattering from a degenerate quasione-dimensional confined gas of noninteracting fermions[END_REF][START_REF] Brand | Dynamic structure factor of the one-dimensional Bose gas near the Tonks-Girardeau limit[END_REF]:
S^{TG}(q,\omega) = \frac{m}{\hbar|q|}\,\Theta[\omega_+(q) - \omega]\,\Theta[\omega - \omega_-(q)], (IV.17)
where
\omega_+(q) = \frac{\hbar}{2m}\,(q^2 + 2qk_F) (IV.18)
and
\omega_-(q) = \frac{\hbar}{2m}\,|q^2 - 2qk_F| (IV.19)
are the limiting dispersion relations. They represent the boundaries of the energy-momentum sector where particle-hole excitations can occur according to energy conservation. At zero temperature, the dynamical structure factor features jumps from a strictly zero to a finite value at these thresholds, but these jumps are smoothed out at finite temperature. A natural question is up to what energies and temperatures the phonon-like excitations, characterized by the nearly-linear spectrum at low energy, are well-defined when thermal effects come into play.
The dynamical structure factor can also be obtained from the fluctuation-dissipation theorem as
S(q,\omega) = \frac{2\hbar}{1 - e^{-\beta\hbar\omega}}\;{\rm Im}[\chi_{nn}(q,\omega)], (IV.20)
relating it to the imaginary part of the linear density-density response function χ nn . Lindhard's expression [START_REF] Ashcroft | Solid State Physics[END_REF] is valid for noninteracting fermions and, consequently, also for the Tonks-Girardeau gas:
\chi_{nn}(q,\omega) = \frac{1}{L}\sum_k \frac{n_F(k) - n_F(k+q)}{\hbar\omega + \epsilon(k) - \epsilon(k+q) + i0^+}, (IV.21)
where
n_F(k) = \frac{1}{e^{\beta[\epsilon(k) - \mu]} + 1} (IV.22)
is the Fermi-Dirac distribution and ε(k) = ℏ²k²/(2m) the free dispersion relation. Using the identity
\frac{1}{X + i0^+} = {\rm P.P.}\!\left(\frac{1}{X}\right) - i\pi\,\delta(X), (IV.23)
where P.P. is the principal part distribution, I deduce a practical expression of the finite-temperature dynamical structure factor in the thermodynamic limit:
S^{TG}_{T>0}(q,\omega) = \int_{-\infty}^{+\infty}\!dk\;\frac{n_F(k) - n_F(k+q)}{1 - e^{-\beta\hbar\omega}}\;\delta[\omega - \omega_q(k)], (IV.24)
where
\omega_q(k) = \frac{1}{\hbar}\,[\epsilon(k+q) - \epsilon(k)]. This is equivalent to
S^{TG}_{T>0}(q,\omega) = \int_{-\infty}^{+\infty}\!dk\; n_F(k)\,[1 - n_F(k+q)]\;\delta[\omega - \omega_q(k)]. (IV.25)
Either of them can be used to obtain, after a few algebraic manipulations, the final expression
S^{TG}_{T>0}(q,\omega) = \frac{m}{\hbar|q|}\;\frac{n_F\!\left(\frac{m\omega}{\hbar q} - \frac{q}{2}\right) - n_F\!\left(\frac{m\omega}{\hbar q} + \frac{q}{2}\right)}{1 - e^{-\beta\hbar\omega}}. (IV.26)
To complete the calculation and make quantitative predictions, it is first necessary to determine the temperature dependence of the chemical potential, that appears in the Fermi-Dirac distribution. It is obtained by numerical inversion of the equation fixing the density n_0,
\frac{1}{2\pi}\int_{-\infty}^{+\infty}\!dk\; n_F(k) = n_0. (IV.27)
Equation (IV.27) can not be solved analytically in full generality. At low temperature, Sommerfeld's expansion [START_REF] Ashcroft | Solid State Physics[END_REF] yields the approximate result [START_REF] Lang | Dynamic structure factor and drag force in a one-dimensional Bose gas at finite temperature[END_REF]:
\frac{\mu(T)}{\epsilon_F} \underset{T\ll T_F}{\simeq} 1 + \frac{\pi^2}{12}\left(\frac{T}{T_F}\right)^2, (IV.28)
where ε_F = ε(k_F) = k_B T_F is the Fermi energy, k_B is Boltzmann's constant, and T_F the Fermi temperature of the gas. Note that the first correction to the ground-state chemical potential is exactly opposite to the 3D case. At high temperature, the classical expansion yields
\frac{\mu(T)}{\epsilon_F} \underset{T\gg T_F}{\simeq} -\frac{T}{2T_F}\ln\left(\frac{T}{T_F}\right). (IV.29)
The chemical potential being negative at high temperature according to Eq. (IV.29), by continuity there must be a temperature T_0 at which it changes sign. The latter is evaluated analytically as [1]
\frac{T_0}{T_F} = \frac{4}{\pi}\,\frac{1}{[(\sqrt{2}-1)\,\zeta(1/2)]^2} \simeq 3.48. (IV.30)
Then, I use the numerical solution of Eq. (IV.27) to evaluate the dynamical structure factor of the Tonks-Girardeau gas at finite temperature from Eq. (IV.26). As shown in Fig. IV.4, the dynamical structure factor of the Tonks-Girardeau gas, S^{TG}(q,ω), is quite sensitive to temperature. At finite temperature, the range of allowed excitations spreads beyond the type I and type II spectra, since the dynamical structure factor takes into account thermally-activated excitations. The latter can even occur at ω < 0, meaning that energy can be emitted, but such a case has not yet been reported in ultracold atom experiments. The quasi-linear shape of the spectrum near the origin and the umklapp point (2k_F, 0) fades out at temperatures above roughly 0.2 T_F. When the temperature is of the order of or higher than T_F, this theoretical analysis is not quite relevant, since the gas is very likely no longer one-dimensional in experiments using current trapping techniques.
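As a practical note (added here as an illustration, not taken from the original text), the numerical inversion of Eq. (IV.27) is a one-dimensional root-finding problem. In reduced variables x = k/k_F, Eq. (IV.27) becomes ∫_0^∞ dx [e^{(x²-µ/ε_F)T_F/T} + 1]^{-1} = 1, which the following Python sketch solves for µ(T)/ε_F:

import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def density_integral(mu_red, t_red):
    # left-hand side of Eq. (IV.27) in reduced variables x = k/k_F:
    # int_0^inf dx / (exp((x**2 - mu_red)/t_red) + 1), which must equal 1;
    # 1/(exp(y) + 1) is written as (1 - tanh(y/2))/2 to avoid overflows
    integrand = lambda x: 0.5 * (1.0 - np.tanh((x**2 - mu_red) / (2.0 * t_red)))
    val, _ = quad(integrand, 0.0, np.inf, limit=200)
    return val

def mu_of_T(t_red):
    # reduced chemical potential mu(T)/eps_F from the inversion of Eq. (IV.27)
    return brentq(lambda mu: density_integral(mu, t_red) - 1.0, -60.0 * t_red - 5.0, 5.0)

for t in (0.05, 0.2, 1.0, 3.5, 10.0):
    print(f"T/T_F = {t:5.2f} -> mu/eps_F = {mu_of_T(t):+.4f}")
# mu starts at eps_F at low temperature, decreases, and changes sign near
# T_0/T_F ~ 3.5, consistently with Eqs. (IV.28)-(IV.30)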
In Fig. IV.5, for various finite temperatures, I represent sections of the dynamical structure factor at a momentum q = 0.1 k_F, near the origin. The divergence of the dynamical structure factor at T = 0 and q = 0, and the high values that it takes close to the origin, dramatically decrease once temperature is taken into account. An emission peak, whose position is symmetric, with respect to ω = 0, to that of the absorption peak already present at T = 0, but whose amplitude is lower, appears at finite temperature. The ratio of their heights is given by the detailed balance relation
S(q,-\omega) = e^{-\beta\hbar\omega}\,S(q,\omega). (IV.31)
Both peaks form quite well-defined phonon dispersions at very low temperature, but start overlapping if T ≳ 0.2 T_F. At higher temperatures, of the order of a few T_F, one can not distinguish them anymore and the dynamical structure factor becomes symmetric with respect to ω = 0.
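To make the detailed balance relation concrete, here is a short Python sketch (an illustration added here; the reduced units and the illustrative chemical potential are conventions of the example, not of the original text) that evaluates Eq. (IV.26) and checks Eq. (IV.31) numerically:

import numpy as np

def fermi(eps_red, mu_red, t_red):
    # Fermi-Dirac occupation as a function of the reduced energy eps/eps_F
    return 0.5 * (1.0 - np.tanh((eps_red - mu_red) / (2.0 * t_red)))

def s_tg(q, w, t_red, mu_red):
    # Eq. (IV.26) in reduced units: q in k_F, w in eps_F/hbar, result in m/(hbar k_F);
    # k_minus and k_plus are the wavenumbers k* and k* + q (in units of k_F) selected
    # by the energy-conservation delta of Eq. (IV.24)
    k_minus = w / (2.0 * q) - q / 2.0
    k_plus = w / (2.0 * q) + q / 2.0
    num = fermi(k_minus**2, mu_red, t_red) - fermi(k_plus**2, mu_red, t_red)
    return num / (abs(q) * (1.0 - np.exp(-w / t_red)))

t_red, mu_red = 0.1, 1.008   # illustrative values: T = 0.1 T_F, mu estimated from Eq. (IV.28)
q, w = 0.1, 0.15
print("S(q,-w)              :", s_tg(q, -w, t_red, mu_red))
print("exp(-b hbar w) S(q,w):", np.exp(-w / t_red) * s_tg(q, w, t_red, mu_red))
# both numbers coincide, as required by the detailed balance relation (IV.31)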
IV.4.2 Drag force due to a mobile, point-like impurity in the
Tonks-Girardeau regime
In 1D and at arbitrary temperature, the drag force reads
F_{T>0} = \frac{1}{2\pi\hbar}\int_0^{+\infty}\!dq\;|U(q)|^2\,q\;S_{T>0}(q,qv)\left(1 - e^{-\beta\hbar qv}\right). (IV.32)
The graphical interpretation of Eq. (IV.32) is that the drag force, that measures dissipation, is obtained by integration of the dynamical structure factor weighted by other quantities, along a line of slope v in energy-momentum space. At zero temperature, the lower bound of the dynamical structure factor coincides with the lower excitation spectrum, and the link with the graphical interpretation of Landau's criterion becomes obvious. However, new ingredients are taken into account in the drag force formalism: if the weight of excitations is zero, i.e. for a vanishing dynamical structure factor, excitations do not occur even if the integration line crosses the excitation spectrum. Moreover, the precise shape of the potential barrier is also taken into account here, and plays a major role as will be seen below.
In the Tonks-Girardeau regime, with a potential barrier U (x, t) = U b δ(x -vt), combining Eq. (IV.32) and Eq. (IV.17), a simple expression is obtained at T = 0 [START_REF] Astrakharchik | Motion of a heavy impurity through a Bose-Einstein condensate[END_REF][START_REF] Cherny | Theory of superfluidity and drag force in the one-dimensional Bose gas[END_REF], namely
F^{TG}(v) = \frac{2U_b^2 n_0 m}{\hbar^2}\left[\Theta(v - v_F) + \frac{v}{v_F}\,\Theta(v_F - v)\right]. (IV.33)
Thus, within linear response theory, the drag force is a linear function of the barrier velocity if v < v_F, and saturates when v > v_F. As I will show below, this saturation to a constant finite value is actually an artifact, due to the idealized Dirac-δ shape of the potential barrier. Equation (IV.33) shows that the drag force is non-zero if the velocity of the perturbing potential is finite, meaning that energy dissipation occurs as long as the barrier is driven along the fluid. Thus, according to the drag force criterion, the Tonks-Girardeau gas is not superfluid even at zero temperature.
Equation (IV.32) also allows one to discuss thermal effects on the drag force. At finite temperature, it reads [1]
\frac{F^{TG}_{T>0}}{F^{TG}(v_F)} = \frac{1}{2}\sqrt{\frac{T}{T_F}}\int_0^{\beta m v^2/2}\frac{d\epsilon}{\sqrt{\epsilon}\,\left(e^{\epsilon - \beta\mu(T)} + 1\right)}. (IV.34)
The integral can easily be evaluated numerically. As a main result, thermal effects cause a depletion of the drag force close to the Fermi velocity, while at low velocity the drag force profile remains linear. The intriguing fact that, at fixed velocity, the drag force decreases when temperature increases, might be due to the fact that I do not take any barrier renormalization into account.
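For concreteness, the numerical evaluation mentioned above can be sketched in a few lines of Python (added as an illustration; the substitution ε = u² and the use of the Sommerfeld estimate (IV.28) for µ(T) are choices of this example only):

import numpy as np
from scipy.integrate import quad

def drag_ratio(v_over_vf, t_red, mu_red):
    # Eq. (IV.34) with the substitution eps = u**2, which removes the 1/sqrt(eps) factor:
    # F_{T>0}/F(v_F) = sqrt(t) * int_0^{(v/v_F)/sqrt(t)} du / (exp(u**2 - mu_red/t) + 1)
    integrand = lambda u: 0.5 * (1.0 - np.tanh((u**2 - mu_red / t_red) / 2.0))
    val, _ = quad(integrand, 0.0, v_over_vf / np.sqrt(t_red))
    return np.sqrt(t_red) * val

for t_red in (0.1, 0.2):
    mu_red = 1.0 + np.pi**2 / 12.0 * t_red**2   # Sommerfeld estimate, Eq. (IV.28)
    vals = [drag_ratio(v, t_red, mu_red) for v in (0.2, 0.8, 1.0, 1.5)]
    print(f"T/T_F = {t_red}: F/F(v_F) at v/v_F = 0.2, 0.8, 1.0, 1.5 ->", np.round(vals, 3))
# the profile stays linear at low velocity and is depleted around v = v_F,
# the depletion growing with temperature, as described in the text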
IV.4.3 Effect of a finite barrier width on the drag force
In Ref. [START_REF] Lang | Dynamic structure factor and drag force in a one-dimensional Bose gas at finite temperature[END_REF], I have also investigated the effect of a finite barrier width on the drag force in the Tonks-Girardeau regime. My main theoretical motivation was that Eq. (IV.33) predicts a saturation of the drag force at high velocities, which does not seem realistic. Among the infinitely many possible barrier shapes, experimental considerations suggest considering a Gaussian barrier. This profile models a laser beam, often used as a stirrer in experiments. I have focused on the case of a blue-detuned, repulsive laser beam. Then, the perturbation part of the Hamiltonian reads
H_{\rm pert} = \int_0^L dx\;\sqrt{\frac{2}{\pi}}\,\frac{U_b}{w}\;e^{-\frac{2(x-vt)^2}{w^2}}\;\psi^\dagger(x)\,\psi(x), (IV.35)
where U b is the height of the barrier, and w its waist. Prefactors have been chosen to recover a δ-potential in the limit w → 0. The Fourier transform of the potential in Eq. (IV.35) reads
U(q) = U_b\,e^{-\frac{q^2w^2}{8}}, (IV.36)
and the drag force at T = 0 is readily obtained as
F_{w>0}(v) = \frac{U_b^2}{2\pi\hbar}\int_0^{+\infty}\!dq\;e^{-\frac{q^2w^2}{4}}\,q\;S(q,qv). (IV.37)
In the Tonks-Girardeau gas case, I have obtained an explicit expression at T = 0,
\frac{F^{TG}_{w>0}(v)}{F^{TG}(v_F)} = \frac{\sqrt{\pi}}{4}\,\frac{1}{wk_F}\left\{{\rm erf}\!\left[wk_F\left(1 + \frac{v}{v_F}\right)\right] - {\rm erf}\!\left[wk_F\left|1 - \frac{v}{v_F}\right|\right]\right\}, (IV.38)
where {\rm erf}(x) = \frac{2}{\sqrt{\pi}}\int_0^x du\, e^{-u^2} is the error function. The more general case where both waist and temperature are finite is obtained by inserting Eqs. (IV.26) and (IV.36) into Eq. (IV.32), and the integral is then evaluated numerically.
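Since Eq. (IV.38) is a closed-form expression, it is easy to explore numerically. The Python sketch below (an illustration; the chosen values of w k_F and v/v_F are arbitrary) shows the crossover from the delta-barrier behavior to the strong suppression of the drag force at supersonic velocities:

import numpy as np
from scipy.special import erf

def drag_ratio_gaussian(v_over_vf, w_kf):
    # Eq. (IV.38): zero-temperature drag force for a Gaussian barrier of waist w,
    # normalized by the delta-barrier value F^TG(v_F); w_kf = w*k_F
    a = erf(w_kf * (1.0 + v_over_vf))
    b = erf(w_kf * np.abs(1.0 - v_over_vf))
    return np.sqrt(np.pi) / (4.0 * w_kf) * (a - b)

velocities = np.array([0.5, 1.0, 2.0, 5.0, 10.0])
for w_kf in (0.1, 0.5, 2.0):
    print(f"w k_F = {w_kf}: F/F(v_F) =", np.round(drag_ratio_gaussian(velocities, w_kf), 3))
# for w k_F << 1 the delta-barrier law of Eq. (IV.33) is recovered, while for
# w k_F of order one the drag force collapses at supersonic velocities, which is
# the quasi-superfluid regime discussed below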
All these results are illustrated in Fig. IV.6. While for a delta potential the drag force saturates at supersonic flow velocities, for any finite barrier width the drag force vanishes at sufficiently large velocities. According to the drag force criterion, this means that the flow is close to being superfluid in this regime. This important result deserves to be put in perspective. The first theoretical consideration of the drag force due to a Gaussian laser beam dates back to Ref. [START_REF] Pavloff | Breakdown of superfluidity of an atom laser past an obstacle[END_REF], which focuses on the weakly-interacting regime of the Lieb-Liniger model, treated through the Gross-Pitaevskii equation. At the time I wrote [1], I was not aware of this paper. My contribution still seems pioneering in the strongly-interacting regime.
It can be inferred that a strong suppression of the drag force at supersonic barrier velocities is common to all interaction regimes. The range of velocities where this occurs corresponds to a 'quasi-superfluid' regime. More generally, a typical damping profile, sketched in Fig. IV.7, can reasonably be expected. Actually, it had already been observed in several experimental situations upon close inspection [START_REF] Engels | Stationary and Nonstationary Fluid Flow of a Bose-Einstein Condensate Through a Penetrable Barrier[END_REF][START_REF] Dries | Dissipative transport of a Bose-Einstein condensate[END_REF].
After our work [START_REF] Lang | Dynamic structure factor and drag force in a one-dimensional Bose gas at finite temperature[END_REF], several works have investigated similar points in more detail. The authors of Ref. [START_REF] Singh | Probing superfluidity of Bose-Einstein condensates via laser stirring[END_REF], using both numerical and analytical methods, have identified nonzero temperature, circular motion of the stirrer, and the density profile of the atomic cloud as additional key factors influencing the magnitude of the critical velocity in a weakly-interacting Bose gas. According to the terminology introduced above, a quasi-superfluid regime has been predicted in higher dimensions too. In [START_REF] Pinsker | Gaussian impurity moving through a Bose-Einstein superfluid[END_REF], the definition Eq. (IV.7) of the drag force has been used to consider the effect of a Gaussian barrier on a weakly-interacting Bose gas, using the Bogoliubov formalism in 2D and 3D. Within this approach, the critical velocity still coincides with the sound velocity, and after reaching a peak around twice the sound velocity, in 3D the drag force decreases monotonically. Note, however, that the predicted drag force profile is smooth even at zero temperature, which is not the case in the Tonks-Girardeau gas. This may be due to the fact that a few corrections to linear response are included as well within this approach.
In the next section, I shall investigate the whole region between the Tonks-Girardeau and Gross-Pitaevskii regimes, at intermediate interaction strengths between the bosons of the fluid.
IV.5 Dynamical structure factor and drag force for a 1D Bose gas at finite interaction strength
IV.5.1 State of the art
Evaluating the dynamical structure factor of the Lieb-Liniger model at finite interaction strength is challenging, and several approaches have been undertaken along the years.
As was the case for thermodynamic quantities, perturbation theory allows one to evaluate dynamical ones in the strongly-interacting regime as corrections to the Tonks-Girardeau regime. Such a perturbative approach has been developed to first order in 1/γ at T = 0 in [START_REF] Brand | Dynamic structure factor of the one-dimensional Bose gas near the Tonks-Girardeau limit[END_REF], and extended to finite temperature in [START_REF] Cherny | Polarizability and dynamic structure factor of the onedimensional Bose gas near the Tonks-Girardeau limit at finite temperatures[END_REF]. By qualitative comparison of the results obtained in these references, and in the Tonks-Girardeau regime studied above, I conclude that the dynamical structure factor at γ ≳ 10 still looks pretty much like in the TG regime. In particular, the low-temperature phonon-like tail starting from the origin at ω < 0 can be observed both in Ref. [START_REF] Cherny | Polarizability and dynamic structure factor of the onedimensional Bose gas near the Tonks-Girardeau limit at finite temperatures[END_REF] and in panel a) of Fig. IV.4. A notable difference from the limit γ = +∞ is that excitations are progressively suppressed close to the umklapp point (q = 2k F , ω = 0) when the interaction strength is decreased, and a crude extrapolation suggests that it tends towards a superfluid behavior. However, one should keep in mind that first-order corrections to the Tonks-Girardeau regime are expected to be reliable only as long as γ ≳ 10.
In this respect, the Tomonaga-Luttinger liquid formalism, whose use in this context was first suggested in Ref. [START_REF] Astrakharchik | Motion of a heavy impurity through a Bose-Einstein condensate[END_REF], is more versatile as it can be used at arbitrary interaction strengths. However, it is also expected to be accurate only inside a small, low-energy sector. This is not necessarily prohibitive for the study of superfluidity, since the latter is defined through the low-velocity behavior of the drag force, which is dominated by the low-energy sector of the dynamical structure factor, close to the umklapp point. However, the quasi-superfluid, supersonic regime is definitely out of reach with this method.
Finding the exact dynamical structure factor at arbitrary interaction strength and energies actually required the development of fairly involved algebraic Bethe Ansatz techniques. The numerical evaluation of form factors at finite N has been implemented in the Algebraic Bethe Ansatz-based Computation of Universal Structure factors (ABACUS) algorithm [START_REF] Caux | Correlation functions of integrable models: A description of the ABA-CUS algorithm[END_REF], first at zero [START_REF] Caux | Dynamical density-density correlations in the onedimensional Bose gas[END_REF], then even at finite temperature [START_REF] Panfil | Finite-temperature correlations in the Lieb-Liniger onedimensional Bose gas[END_REF]. This was a major breakthrough in the long-standing issue of dynamical correlation functions of integrable models, and is one of the most important theoretical achievements in this field in the early 2000's.
This exact solution tends to validate the main qualitative predictions of the Imambekov-Glazman (IG) liquid theory, developed in parallel as an extension of the standard Tomonaga-Luttinger liquid theory to a wider range of energies [START_REF] Pustilnik | Dynamic Response of One-Dimensional Interacting Fermions[END_REF][START_REF] Khodas | Fermi-Luttinger liquid: Spectral function of interacting one-dimensional fermions[END_REF][START_REF] Khodas | Dynamics of Excitations in a One-Dimensional Bose Liquid[END_REF][START_REF] Imambekov | Exact Exponents of Edge Singularities in Dynamic Correlation Functions of 1D Bose Gas[END_REF][START_REF] Imambekov | Universal Theory of Nonlinear Luttinger Liquids[END_REF]. The dynamical structure factor of a 1D Bose gas features power-law behaviors along the type I and type II excitation branches at T = 0, with a sharp response at the upper threshold in the case of repulsive interactions, and at the lower one if they are attractive [START_REF] Calabrese | Dynamics of the attractive 1D Bose gas: analytical treatment from integrability[END_REF].
Based on numerical data produced with the ABACUS code, a phenomenological expression has been proposed for the dynamical structure factor, that incorporates the TL and IG liquid predictions as special cases [START_REF] Cherny | Approximate expression for the dynamic structure factor in the Lieb-Liniger model[END_REF]. Later on, the dynamical structure factor of 4 He has also been obtained numerically, this time with Quantum Monte Carlo techniques, and has also shown beyond-Luttinger liquid behavior [START_REF] Bertaina | One-Dimensional Liquid 4 He: Dynamical Properties beyond Luttinger-Liquid Theory[END_REF].
Measurements of the dynamical structure factor of an array of 1D gases have shown remarkable agreement with the algebraic Bethe Ansatz predictions for the Lieb-Liniger model over a wide range of interaction strengths. They have definitely confirmed the need for a beyond-Luttinger liquid approach to the problem at high energies [START_REF] Fabbri | Dynamical structure factor of one-dimensional Bose gases: Experimental signatures of beyond-Luttinger-liquid physics[END_REF][START_REF] Meinert | Probing the Excitations of a Lieb-Liniger Gas from Weak to Strong Coupling[END_REF].
As far as the drag force is concerned, its first evaluation at arbitrary interaction strength was reported in [START_REF] Astrakharchik | Motion of a heavy impurity through a Bose-Einstein condensate[END_REF], and relied on the Tomonaga-Luttinger liquid framework. Then, the dynamical structure factor as predicted by the ABACUS algorithm, once combined with Eq. (IV.32) and numerically integrated, yielded the drag force due to a point-like impurity at arbitrary interaction strength [START_REF] Cherny | Decay of superfluid currents in the interacting one-dimensional Bose gas[END_REF][START_REF] Cherny | Dynamic and static density-density correlations in the one-dimensional Bose gas: Exact results and approximations[END_REF]. The conclusion of this study is that, in this configuration, the Lieb-Liniger model is never strictly superfluid in the thermodynamic limit according to the drag force criterion. I shall study the dynamical structure factor and drag force within the linear Tomonaga-Luttinger liquid theory despite its shortcomings, for the following reasons: first of all, it is currently the simplest approach that allows fully analytical, quantitative predictions at finite interaction strength. Moreover, its validity range is still not known quantitatively, and calls for additional studies. It is also the first step towards more accurate predictions, e.g. from the Imambekov-Glazman liquid theory, and towards generalizations of the Tomonaga-Luttinger framework to multi-component gases.
IV.5.2 Dynamical structure factor from the Tomonaga-Luttinger liquid theory
Starting from the real-space density-density correlations of a Tomonaga-Luttinger liquid at T = 0 and in the thermodynamic limit, Eq. (II.44), Fourier transform with respect to time and space yields the dominant terms of the dynamical structure factor of gapless 1D models. Since this quantity is symmetric with respect to q ↔ -q, I shall write the result for q > 0 [376, 1] (I refer to Appendix C.3 for a detailed derivation):
S^{TL}(q,\omega) \simeq K|q|\,\delta[\omega - \omega(q)] + B_1(K)\left[\omega^2 - (q - 2k_F)^2 v_s^2\right]^{K-1}\Theta[\omega - |q - 2k_F|v_s] = S_0^{TL}(q,\omega) + S_1^{TL}(q,\omega) (IV.39)
when read in the same order, where
B_1(K) = \frac{A_1(K)}{(2k_F v_s)^{2(K-1)}}\,\frac{1}{\Gamma(K)^2}\,\frac{1}{v_s} (IV.40)
is a non-universal coefficient. It is the single mode form factor of the dynamical structure factor, and is related to the phonic form factor of the density-density correlation function,
A 1 (K), already defined in Eq. (II.41).
In Eq. (IV.39), S_0 displays a sharp peak, in exact correspondence with the linear phonon-like dispersion ω(q) = qv_s. Its divergence and zero width are artifacts due to the spectrum linearization. If v ≤ v_s, it does not contribute to the drag force in this framework, and if v ≲ v_s, this remains true according to more accurate descriptions, so I will not devote much attention to S_0 anymore, but rather focus on the second contribution to the dynamical structure factor, denoted by S_1.
There are two linear limiting dispersion relations described by S 1 . They are symmetric with respect to q = 2k F , and form a triangular shape above the umklapp point (2k F , 0). Actually, these excitation spectra correspond to the linearization of ω -, so one can write
\omega_-^{TL} = |q - 2k_F|\,v_s, and
S_1^{TL}(q,\omega) = B_1(K)\left[\omega^2 - (\omega_-^{TL})^2\right]^{K-1}\Theta(\omega - \omega_-^{TL}). (IV.41)
The slopes of the limiting dispersions in S 0 and S 1 depend on the interaction strength via v s . Hence, measuring the excitation spectrum of a 1D Bose gas at low energy provides an indirect way to determine the sound velocity.
To make quantitative predictions, the first requirement is to evaluate v s , or equivalently K, as well as the form factor A 1 . Both have already been obtained in the Tonks-Girardeau regime in chapter II, readily allowing to make quantitative predictions in this case. If they had not been obtained yet, the Luttinger parameter K could be determined so as to reproduce the phonon-like dispersion relation at the origin, and A 1 so as to reproduce the exact dynamical structure factor at the umklapp point. Comparison between the exact and linearized spectra in the Tonks-Girardeau regime is made in Fig. IV.8, confirming that the Tomonaga-Luttinger liquid formalism is intrinsically limited to low energies.
In particular, within this formalism it is impossible to make quantitative predictions around the top of the type II excitation spectrum, where curvature effects are important. This is not the only problem, actually. In the Tonks-Girardeau regime, the exact dynamical structure factor scales like 1/q inside its definition domain, whereas the Tomonaga-Luttinger liquid prediction of Eq. (IV.39) is constant in the triangular domain of the umklapp region when K = 1. Both results coincide only along a vertical line starting from the umklapp point, and this line is finite since the TLL formalism utterly ignores the upper excitation spectrum.
Figure IV.8 - Excitation spectrum of the Tonks-Girardeau gas at T = 0, in the plane (q, ω), in units of (k_F, ω_F). The exact result (shaded gray area delimited by the blue curves) is superimposed on the result in the Tomonaga-Luttinger liquid framework for dimensionless parameters K = 1 and v_s/v_F = 1 (red). In the latter, the domain consists of a line starting from the origin, and the area included in the infinite triangle starting from the umklapp point (2k_F, 0). The upper energy limit of potential validity of the Tomonaga-Luttinger liquid theory is approximately given by the dashed line.
Although the TLL result is fairly disappointing at first when compared to the Tonks-Girardeau exact one, I recall that, contrary to the Bogoliubov formalism, it predicts the existence of excitations near the umklapp point, which is not obvious. Moreover, at this stage it is not excluded that the agreement with the exact dynamical structure factor of the Lieb-Liniger model may be better at finite interaction strength. More generally, it is interesting to evaluate the validity range of the standard Tomonaga-Luttinger liquid theory as precisely as possible, but this requires a comparison point. A possible generalization, where it is possible to compare the result with an exact prediction, concerns thermal effects, already investigated in the Tonks-Girardeau regime through Eq. (IV.26). In the Tomonaga-Luttinger liquid formalism, the dynamical structure factor at finite temperature is obtained by Fourier transform of Eq. (II.72).
On the one hand, I obtain (I refer to Appendix C.4 for details) [1]
S_{0,T>0}^{TL}(q,\omega) = \frac{K|q|}{1 - e^{-\beta\hbar\omega(q)}}\left\{\delta[\omega - \omega(q)] + e^{-\beta\hbar\omega(q)}\,\delta[\omega + \omega(q)]\right\}, (IV.42)
and on the other hand [START_REF] Lang | Dynamic structure factor and drag force in a one-dimensional Bose gas at finite temperature[END_REF],
S_{1,T>0}^{TL}(q,\omega) = C(K,T)\,\frac{1}{v_s}\,(L_T k_F)^{2(1-K)}\,e^{\frac{\beta\hbar\omega}{2}}\; B\!\left(\frac{K}{2} + i\frac{\beta\hbar}{4\pi}[\omega + (q-2k_F)v_s],\; \frac{K}{2} - i\frac{\beta\hbar}{4\pi}[\omega + (q-2k_F)v_s]\right) B\!\left(\frac{K}{2} + i\frac{\beta\hbar}{4\pi}[\omega - (q-2k_F)v_s],\; \frac{K}{2} - i\frac{\beta\hbar}{4\pi}[\omega - (q-2k_F)v_s]\right), (IV.43)
where C(K,T) is a dimensionless prefactor and B(x,y) = \frac{\Gamma(x)\Gamma(y)}{\Gamma(x+y)} is the Euler Beta function.
While the value of the prefactor C(K, T) in Eq. (IV.43) is fixed by the exact result at the umklapp, v_s(T) should be evaluated independently. This can be done at very low temperatures, by identification with the phonon modes, whose slope is v_s. One finds that below T ≃ 0.2 T_F, which is approximately the highest temperature where these phonons are well defined, v_s does not significantly vary with T.
Comparison of the approximation Eq. (IV.43) and the exact Tonks-Girardeau result is shown in Fig. IV.9. Their agreement is quite remarkable, and the validity range of the TLL framework is even increased compared to the T = 0 case. Surprising at first, this fact can be understood at the real-space level. Correlations decay exponentially at large distances according to Eq. (II.72), thus the neglected part is not as important. However, at slightly higher temperatures the agreement would break down abruptly. The conclusion is that the Tomonaga-Luttinger liquid framework is valid at very low temperatures only.
Coming back to T = 0, at finite interaction strength the Tomonaga-Luttinger liquid formalism makes predictions that have not been investigated quantitatively so far. Here, I try and fill this gap, to allow for a subsequent comparison with more powerful techniques. A first necessary condition is to evaluate v s (γ). From its thermodynamic definition comes
v_s(\gamma) = \frac{v_F}{\pi}\left[3e(\gamma) - 2\gamma\frac{de}{d\gamma}(\gamma) + \frac{1}{2}\gamma^2\frac{d^2e}{d\gamma^2}(\gamma)\right]^{1/2}. (IV.44)
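To illustrate how Eq. (IV.44) is used in practice, the following Python sketch differentiates an equation of state numerically; the strong-coupling approximation e(γ) ≈ (π²/3) γ²/(γ+2)², used here only as a stand-in, is an assumption of the example, the accurate expansions of chapter III being the ones actually used in this work:

import numpy as np

def e_strong_coupling(g):
    # illustrative approximate equation of state, accurate at large gamma only
    return (np.pi**2 / 3.0) * g**2 / (g + 2.0)**2

def sound_velocity_ratio(e, g, h=1e-4):
    # v_s(gamma)/v_F from Eq. (IV.44), with central finite differences for de/dgamma
    # and d2e/dgamma2 so that any equation of state e(gamma) can be plugged in
    de = (e(g + h) - e(g - h)) / (2.0 * h)
    d2e = (e(g + h) - 2.0 * e(g) + e(g - h)) / h**2
    return np.sqrt(3.0 * e(g) - 2.0 * g * de + 0.5 * g**2 * d2e) / np.pi

for g in (2.0, 5.0, 10.0, 100.0):
    print(f"gamma = {g:6.1f}: v_s/v_F ~ {sound_velocity_ratio(e_strong_coupling, g):.3f}")
# the ratio tends to 1 in the Tonks-Girardeau limit and decreases with the
# interaction strength, as in Fig. IV.10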
Analytical expansions of the sound velocity at large and small interaction strength can be found in the literature. The first-and second-order corrections to the Tonks-Girardeau regime in 1/γ are given in [START_REF] Cazalilla | Bosonizing one-dimensional cold atomic gases[END_REF], they are calculated to fourth order in [START_REF] Zvonarev | Correlations in 1D boson and fermion systems: exact results[END_REF] and up to eighth order in [START_REF] Ristivojevic | Excitation Spectrum of the Lieb-Liniger Model[END_REF]. In the weakly-interacting regime, expansions are found in [START_REF] Cazalilla | Bosonizing one-dimensional cold atomic gases[END_REF] and [START_REF] Wadati | Solutions of the Lieb-Liniger integral equation[END_REF]. Eqs. (III.42, III.57, III.58) for the dimensionless energy per particle obtained in chapter III considerably increase the accuracy compared to these works, after straightforward algebra.
Interestingly, it is not necessary to evaluate the ground-state energy per particle e(γ) to obtain v s (γ), but actually sufficient to know the density of pseudo-momenta g at the edge of the Fermi sea, i.e. at z = 1, due to the useful equality [START_REF] Korepin | Quantum Inverse Scattering Method and Correlation Functions[END_REF]
\frac{v_s(\gamma)}{v_F} = \frac{1}{\{2\pi g[1;\alpha(\gamma)]\}^2}. (IV.45)
Reciprocally, if v s is already known with high accuracy from Eq. (IV.44) applied to a reliable equation of state e(γ), then Eq. (IV.45) provides an excellent accuracy test for a proposed solution g to the Lieb equation (III.32), since it allows to check its value at the edge of the interval [-1, 1], where it is the most difficult to obtain with most known methods.
I have used both approaches to evaluate the sound velocity over a wide range of strong to intermediate interaction strengths with excellent accuracy, as illustrated in Fig. IV.10. In particular, the fact that v_s → 0 when γ → 0 implies that g(z;α) → +∞ when z → 1 and α → 0, hinting at the fact that polynomial expansion methods are not appropriate at very low interaction strength, as expected due to the vicinity of the singularity.
As far as the dynamical structure factor is concerned, there are two possible points of view at this stage. Either one is interested in qualitative properties, i.e. in its global shape as a function of γ, and one can divide the result by the unknown coefficient A 1 (K), or one is rather motivated by quantitative evaluations, e.g. for a comparison with experiments. Then, it becomes necessary to evaluate A 1 (K). The solution to this tough problem is provided in [START_REF] Shashi | Nonuniversal prefactors in the correlation functions of one-dimensional quantum liquids[END_REF][START_REF] Shashi | Exact prefactors in static and dynamic correlation functions of one-dimensional quantum integrable models: Applications to the Calogero-Sutherland, Lieb-Liniger, and XXZ models[END_REF]. The form factor is extracted numerically as the solution of a complicated set of coupled integral equations, whose analytical solution stays out of reach.
My philosophy in this thesis is to rely on analytical expressions as often as possible, so additional efforts are required. Instead of trying to solve the set of equations of Ref. [START_REF] Shashi | Exact prefactors in static and dynamic correlation functions of one-dimensional quantum integrable models: Applications to the Calogero-Sutherland, Lieb-Liniger, and XXZ models[END_REF], I have extracted data points from the figure provided in this very reference. It turns out that the latter is especially well approximated by a very simple fit function, of the form
A_1(K)\,\pi^{2(K-1)} = \frac{1}{2}\,e^{-\alpha(K-1)}, (IV.46)
where α ≈ 3.8 up to data extraction errors. This expression is approximately valid for K ∈ [1, 2]; for K ≳ 2 the data cannot be read off the graph because A_1 is too small there.
A few comments are in order: first, Eq. (IV.46) is certainly not the exact solution, in view of the extreme complexity of the equations it is derived from. This could be verified by evaluating A_1(K) at K < 1 (i.e. in the super Tonks-Girardeau regime), where a discrepancy with the extrapolated fit function is very likely to become obvious. However, Eq. (IV.46) may be equivalent, or at least close, to the exact solution at K \gtrsim 1, in view of the remarkable agreement with numerical data in this range.
If equation (IV.46) were exact, to infer the value of α, I could rely on the exact expansion close to K = 1 [START_REF] Cherny | Dynamic and static density-density correlations in the one-dimensional Bose gas: Exact results and approximations[END_REF],
\frac{A_1(K)}{\pi^{2(K-1)}} \underset{K\to 1}{=} \frac{1}{2}\left\{1-[1+4\ln(2)](K-1)\right\} + O[(K-1)^2], (IV.47)
and by identification of both Taylor expansions at first order in the variable K-1, deduce that \alpha = 1+4\ln(2) \simeq 3.77, which is actually quite close to the value obtained by fitting on data points. The agreement is not perfect at higher values of K, as can be seen in Fig. IV.11. Another clue, if need be, that Eq. (IV.46) is not exact is that it does not agree with the high-K expansion obtained in [START_REF] Astrakharchik | Motion of a heavy impurity through a Bose-Einstein condensate[END_REF].
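As a quick numerical illustration of the previous paragraph (the values of K are illustrative), one can tabulate the fit (IV.46), with α = 1+4 ln 2, against the first-order expansion (IV.47); the latter even changes sign around K ≈ 1.27, which makes clear why the exponential form is preferable over the whole interval [1, 2]:

import numpy as np

alpha = 1 + 4*np.log(2)                      # the value suggested by Eq. (IV.47)
for K in np.linspace(1.0, 2.0, 6):
    fit = 0.5*np.exp(-alpha*(K - 1))         # Eq. (IV.46)
    taylor = 0.5*(1 - alpha*(K - 1))         # Eq. (IV.47), first order
    print(f"K = {K:.1f}   fit = {fit:.4f}   first-order expansion = {taylor:+.4f}")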
All together, these results allow one to predict quantitatively the dynamical structure factor near the umklapp point in the Tomonaga-Luttinger liquid framework, represented in Fig. IV.12. The values chosen for the interaction strength are in exact correspondence with those of Ref. [START_REF] Caux | Dynamical density-density correlations in the onedimensional Bose gas[END_REF], allowing for a direct comparison with the exact result from the ABACUS. The agreement is excellent for most values, except at too high energies, because the Tomonaga-Luttinger liquid does not predict the upper threshold, and for γ = 1, where Eq. (IV.46) is likely to be used outside its range of validity.

Figure IV.11 – The fit of Eq. (IV.46) (solid, black) is quite accurate over a wider range than the first-order Taylor expansion, Eq. (IV.47) (dashed, blue), compared to the graphical data from Ref. [START_REF] Shashi | Exact prefactors in static and dynamic correlation functions of one-dimensional quantum integrable models: Applications to the Calogero-Sutherland, Lieb-Liniger, and XXZ models[END_REF] (red dots).
Figure IV.12 -Dynamical structure factor of the Lieb-Liniger gas as predicted by the Tomonaga-Luttinger liquid theory, Eq. (IV.39), with the expression of the form factor Eq. (IV.46), along the umklapp line and in units of the dynamical structure factor of the Tonks-Girardeau gas at the umklapp point, as a function of the dimensionless energy. The various curves correspond respectively to values of K that correspond to Lieb parameters γ = +∞ (dotted, black), γ = 100 (dashed, brown), γ = 20 (long-dashed, red), γ = 10 (thick, blue), γ = 5 (pink) and γ = 1 (thin, gray), to be compared to the corresponding figure in Ref. [START_REF] Caux | Dynamical density-density correlations in the onedimensional Bose gas[END_REF].
IV.5.3 Drag force from the Tomonaga-Luttinger liquid formalism
The dynamical structure factor gives access to the drag force through Eq. (IV.32). First, I have addressed the simplest case, T = 0 and w = 0, that yields [START_REF] Lang | Dynamic structure factor and drag force in a one-dimensional Bose gas at finite temperature[END_REF] (see Appendix C.5 for details)
F^{TL}(v) = \frac{U_b^2}{2\pi}\int_0^{+\infty} dq\, q\, S^{TL}(q, qv) = \frac{U_b^2}{2\pi}\,\frac{B_1(K)}{v_s^2}\,\frac{\sqrt{\pi}\,\Gamma(K)\,(2k_F v_s)^{2K}}{\Gamma(K+1/2)}\,\frac{\left(\frac{v}{v_s}\right)^{2K-1}}{\left[1-\left(\frac{v}{v_s}\right)^2\right]^{K+1}}, (IV.48)
in agreement with [START_REF] Astrakharchik | Motion of a heavy impurity through a Bose-Einstein condensate[END_REF] in the limit v/v_s \ll 1. At low velocities, the drag force scales as a power law v^{2K-1}, that depends on the interaction strength in a non-trivial way. A comparison with the Tonks-Girardeau result at K = 1 leads to the determination of the exact form factor,
B_1(K=1) = \frac{1}{2v_F}. (IV.49)
Then, I have generalized the expression of the drag force to a finite laser waist w. In the Tonks-Girardeau regime, I obtained the analytical, simple expression [1]
F^{TL}_{w>0,K=1}(v) = \frac{2U_b^2 n_0 m}{\hbar^2}\,\frac{1}{(2wk_F)^2}\left[e^{-w^2k_F^2(1-v/v_F)^2}-e^{-w^2k_F^2(1+v/v_F)^2}\right]. (IV.50)
At arbitrary interaction strength, the expression of the drag force in the case of a finite-width potential is given by [1]
\frac{F^{TL}_{w>0}(v)}{F^{TL}(v)} = \frac{1}{wk_F}\sum_{k=0}^{+\infty}\frac{(-1)^k}{k!}\left[wk_F\left(1+\frac{v}{v_s}\right)\right]^{2k+1}{}_2F_1\!\left(-1-2k,K;2K;-\frac{2v}{v_s-v}\right), (IV.51)
where {}_2F_1 is the hypergeometric function. I have verified numerically that for wk_F \lesssim 1, truncating the series to low orders is a very good approximation.
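In practice, the truncation can be checked with a short script like the following sketch (Python with SciPy; the values of K, w k_F and v/v_s are illustrative and do not correspond to a specific dataset):

from math import factorial
from scipy.special import hyp2f1

def drag_ratio(v_over_vs, w_kF, K, kmax=8):
    # Series of Eq. (IV.51), truncated at order kmax
    u = v_over_vs
    s = sum((-1)**k / factorial(k)
            * (w_kF*(1.0 + u))**(2*k + 1)
            * hyp2f1(-1 - 2*k, K, 2*K, -2.0*u/(1.0 - u))
            for k in range(kmax + 1))
    return s / w_kF

for kmax in (2, 4, 8, 16):
    print(kmax, drag_ratio(v_over_vs=0.2, w_kF=0.5, K=2.0, kmax=kmax))

For w k_F of order one or smaller, the successive truncations agree to many digits, consistently with the statement above.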
The effect of temperature on the drag force is obtained by performing the integration numerically. As a main result, I have shown that in the strongly-interacting regime K \gtrsim 1, the Tomonaga-Luttinger liquid theory reproduces quite well the exact dynamical structure factor of the Tonks-Girardeau gas around the umklapp point, and its drag force at low velocities, even for a finite-width potential barrier. This allows one to use the Tomonaga-Luttinger liquid theory to predict the generic low-velocity behavior of the drag force at large to intermediate interactions, as a complementary approach to the Bogoliubov treatment at weak interactions.
IV.6 Exact excitation spectra from integrability
To go beyond the standard Tomonaga-Luttinger liquid using analytical methods, three types of quantities have to be evaluated with the highest possible accuracy. They are the form factors, that give the weights of the different contributions to the dynamical structure factor, the edge exponents, that describe power laws at the thresholds, and the excitation spectra of the Lieb-Liniger model, that fix their locations. This section is more specifically devoted to the excitation spectra. Lieb studied them in [START_REF] Lieb | Exact Analysis of an Interacting Bose Gas. II. The Excitation Spectrum[END_REF], and much to his surprise, found that the excitation spectrum of his model was twofold. The Bogoliubov spectrum corresponds to the type-I spectrum for weak interactions, but the nature of the type-II spectrum in this regime was elucidated later on, when a new solution to the non-linear Schrödinger equation was found [START_REF] Kulish | Comparison of the Exact Quantum and Quasiclassical Results for the Nonlinear Schrödinger Equation[END_REF]. This spectrum is most probably linked to solitons, as suggested by a fair number of works [START_REF] Khodas | Photosolitonic effect[END_REF][START_REF] Karpiuk | Spontaneous Solitons in the Thermal Equilibrium of a Quasi-1D Bose Gas[END_REF][START_REF] Syrwid | Emergence of dark solitons in the course of measurements of particle positions[END_REF][START_REF] Karpiuk | Correspondence between dark solitons and the type II excitations of the Lieb-Liniger model[END_REF][START_REF] Sato | Quantum states of dark solitons in the 1D Bose gas[END_REF].
At arbitrary interaction strength, coordinate Bethe Ansatz yields the exact excitation spectrum, both at finite particle number and in the thermodynamic limit. It is technically harder to obtain than the energy, however. The set of Bethe Ansatz equations of chapter III for the finite-N many-body problem can not be solved analytically in full generality, but the solution is easily obtained numerically for a few bosons. To reach the thermodynamic limit with several-digit accuracy, the Tonks-Girardeau case suggests that N should be of the order of a hundred. Although the interplay between interactions and finite N may slow down the convergence at finite γ [START_REF] Caux | Dynamical density-density correlations in the onedimensional Bose gas[END_REF], a numerical treatment is still possible.
In [START_REF] Lang | Ground-state energy and excitation spectrum of the Lieb-Liniger model : accurate analytical results and conjectures about the exact solution[END_REF], I have addressed the problem directly in the thermodynamic limit, where it reduces to two equations [START_REF] Pustilnik | Low-energy excitations of a one-dimensional Bose gas with weak contact repulsion[END_REF][START_REF] Ristivojevic | Excitation Spectrum of the Lieb-Liniger Model[END_REF]:
p(k;\gamma) = 2\pi\hbar\, Q(\gamma)\int_{1}^{k/Q(\gamma)} dy\, g[y;\alpha(\gamma)] (IV.52) and \epsilon(k;\gamma) = \frac{\hbar^2 Q^2(\gamma)}{m}\int_{1}^{k/Q(\gamma)} dy\, f[y;\alpha(\gamma)], (IV.53)
where
Q(\gamma) = \frac{n_0}{\int_{-1}^{1} dy\, g[y;\alpha(\gamma)]} (IV.54)
is a non-negative quantity known as the Fermi rapidity. It represents the radius of the quasi-Fermi sphere, and equals k F in the Tonks-Girardeau regime. The function f that appears in Eq. (IV.53) satisfies the integral equation
f(z;\alpha) - \frac{1}{\pi}\int_{-1}^{1} dy\, \frac{\alpha}{\alpha^2+(y-z)^2}\, f(y;\alpha) = z, (IV.55)
referred to as the second Lieb equation in what follows.
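To make the procedure of the next paragraphs concrete, here is a minimal numerical sketch that solves both integral equations on a grid (again assuming the standard form of the first Lieb equation (III.32), with driving term 1/2π) and reconstructs the type I branch parametrically from Eqs. (IV.52)-(IV.54); units are such that n_0 = m = ℏ = 1, and the value of α is illustrative:

import numpy as np
from scipy.integrate import quad

def lieb_solutions(alpha, n=200):
    # Nystrom solution of both Lieb equations on [-1, 1]
    z, w = np.polynomial.legendre.leggauss(n)
    K = (alpha / (alpha**2 + (z[:, None] - z[None, :])**2)) / np.pi
    A = np.eye(n) - K * w
    g = np.linalg.solve(A, np.full(n, 1.0/(2.0*np.pi)))   # first Lieb equation (assumed form)
    f = np.linalg.solve(A, z)                              # second Lieb equation, Eq. (IV.55)
    # Nystrom interpolation, also used to evaluate the solutions at |x| > 1
    g_at = lambda x: 1.0/(2.0*np.pi) + np.sum(w*(alpha/(alpha**2 + (x - z)**2))/np.pi*g)
    f_at = lambda x: x + np.sum(w*(alpha/(alpha**2 + (x - z)**2))/np.pi*f)
    return w, g, g_at, f_at

def type_I_branch(alpha, kmax_over_Q=2.0, npts=20):
    # Parametric curve (p(k), eps(k)) for k/Q >= 1 (quasi-particle excitations)
    w, g, g_at, f_at = lieb_solutions(alpha)
    Q = 1.0 / np.sum(w*g)                                  # Fermi rapidity, Eq. (IV.54), n_0 = 1
    ys = np.linspace(1.0, kmax_over_Q, npts)
    p   = np.array([2.0*np.pi*Q*quad(g_at, 1.0, y)[0] for y in ys])   # Eq. (IV.52)
    eps = np.array([Q**2*quad(f_at, 1.0, y)[0] for y in ys])          # Eq. (IV.53)
    return p, eps

p, eps = type_I_branch(alpha=5.0)
print("p/p_F :", np.round(p/np.pi, 3)[:5])    # p_F = pi in these units
print("eps   :", np.round(eps, 3)[:5])

The same routine evaluated for k/Q ≤ 1 yields the type II branch, up to the sign and shift conventions discussed below.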
For a given interaction strength γ, the excitation spectrum is obtained in a parametric way as \epsilon(k;\gamma)[p(k;\gamma)], k \in [0,+\infty[. Why is it so, then, that Lieb predicted two excitation spectra, and not just one? The answer was rather clear at finite N from general considerations on particle-hole excitations. In the thermodynamic limit considered presently, the type I and type II spectra could be interpreted as a single parametric curve, but the type I part corresponds to |k|/Q \geq 1 and thus to quasi-particle excitations, while the type II dispersion is obtained for |k|/Q \leq 1. Thus, the latter is associated to processes taking place inside the quasi-Fermi sphere, which confirms that they correspond to quasi-hole excitations, in agreement with the finite-N picture.
(c) The maximal excitation energy associated to the type II curve lies at k = 0 and corresponds to p = p_F.
(d) If k \leq Q(\gamma), p(-k) = 2p_F - p(k) and \epsilon(-k) = \epsilon(k), hence \epsilon_{II}(p) = \epsilon_{II}(2p_F - p), generalizing to finite interaction strength the symmetry p \leftrightarrow 2p_F - p already brought to light in the Tonks-Girardeau regime.
(e) The type I curve \epsilon_I(p) repeats itself, starting from the umklapp point, shifted by 2p_F in p. Thus, what is usually considered as a continuation of the type II branch can also be thought of as a shifted replica of the type I branch.
(f) Close to the ground state, \epsilon_I(p) = -\epsilon_{II}(-p). This can be proven using the following sequence of equalities:
\epsilon_I(p) = \epsilon_I(p+2p_F) = -\epsilon_{II}(p+2p_F) = -\epsilon_{II}(2p_F-(-p)) = -\epsilon_{II}(-p).
These symmetry properties are useful in the analysis of the spectra, and provide stringent tests for numerical solutions. Before calculating the excitation spectra, let me make a few technical comments. The momentum p is relatively easy to obtain if k/Q ≤ 1, since the Lieb equation (III.32) has been solved with high accuracy in chapter III. Otherwise, if k/Q > 1, the Lieb equation is solved numerically at z > 1 from the solution at z ≤ 1. Thus, the type II spectrum between p = 0 and p = 2p F is a priori easier to obtain than the exact type I spectrum.
A new technical difficulty, whose solution is not readily provided by the evaluation of the ground-state energy, comes from the second Lieb equation (IV.55), which is another type of integral equation, whose exact solution at arbitrary interaction strength is also unknown. A possible tactic to solve Eq. (IV.55) is to adapt the orthogonal polynomial method used to solve the first Lieb equation (III.32), yielding an approximate solution for α > 2 [START_REF] Ristivojevic | Excitation Spectrum of the Lieb-Liniger Model[END_REF][START_REF] Lang | Ground-state energy and excitation spectrum of the Lieb-Liniger model : accurate analytical results and conjectures about the exact solution[END_REF]. In the weakly-interacting regime, no systematic method has been developed so far, but since f is well-behaved at low α, numerical solutions are rather easily obtained. Moreover, at α > 2 the strong-coupling expansion converges faster to the exact solution and generates far fewer terms than was the case for the density of pseudo-momenta g [START_REF] Lang | Ground-state energy and excitation spectrum of the Lieb-Liniger model : accurate analytical results and conjectures about the exact solution[END_REF]. I also noticed that Eq. (IV.55) appears in the alternative approach to the Lieb-Liniger model, based on a limit case of the sinh-Gordon model, where this equation must be solved to obtain the ground-state energy and correlation functions, cf. Appendix B [START_REF] Lang | Tan's contact of a harmonically trapped onedimensional Bose gas: strong-coupling expansion and conjectural approach at arbitrary interactions[END_REF]. It also appears in other contexts, such as the problem of two circular disks rotating slowly in a viscous fluid with equal angular velocities [START_REF] Cooke | A solution of Tranter's dual integral equations problem[END_REF], or of the radiation of water waves due to a thin rigid circular disk in three dimensions [START_REF] Farina | Water wave radiation by a heaving submerged horizontal disk very near the free surface[END_REF].
In the end, I have access to accurate analytical estimates of the type II branch, provided that α > 2. As far as the type I spectrum is concerned, an additional limitation comes from the fact that the approximate expressions for g(z;α) and f(z;α) are valid only if |z-y| \leq α. This adds the restriction |k|/Q(α) \leq α-1, which is not constraining as long as α \gg 1, but for α \simeq 2, the validity range is very narrow around p = 0.
To bypass this problem, one can use an iteration method to evaluate g and f. In practice, this method is analytically tractable at large interactions only, as it allows one to recover at best the first few terms of the exact 1/α expansion of \epsilon(k;\alpha) and p(k;\alpha) (to order 2 in [START_REF] Ristivojevic | Excitation Spectrum of the Lieb-Liniger Model[END_REF]). Another difficulty is that these approximate expressions are not of polynomial type, and it is then a huge challenge to substitute the parameter k and express \epsilon(p) explicitly, forcing one to resort to approximations at high and small momenta.
Both excitation spectra are shown in Fig. IV.15 for several values of the interaction strength, as obtained from the most appropriate method in each case. Note that the area below the type II spectrum, as well as the maximal excitation energy at p F , are both increasing functions of the Lieb parameter γ and vanish for a noninteracting Bose gas.
At small momenta, the type I spectrum can be expressed by its series expansion in p [START_REF] Matveev | Decay of Fermionic Quasiparticles in One-Dimensional Quantum Liquids[END_REF][415], that reads
\epsilon_I(p;\gamma) \underset{p\to 0}{=} v_s(\gamma)\,p + \frac{p^2}{2m^*(\gamma)} + \frac{\lambda^*(\gamma)}{6}\,p^3 + \dots (IV.56)
By comparison, in the Tonks-Girardeau regime, as follows from Eq. (II.15), the coefficients in Eq. (IV.56) are v s = v F , m * = m, λ * = 0, and all higher-order coefficients are null as well. At finite interaction strength, v s can be seen as a renormalized Fermi velocity, and m * is interpreted as an effective mass, whose general expression is [START_REF] Ristivojevic | Excitation Spectrum of the Lieb-Liniger Model[END_REF][START_REF] Matveev | Effective mass of elementary excitations in Galilean-invariant integrable models[END_REF]
\frac{m}{m^*} = \left(1-\gamma\frac{d}{d\gamma}\right)\frac{v_s}{v_F}, (IV.57)
shown in Fig. IV.16 (where the exact value in the Tonks-Girardeau regime is plotted in dashed blue as a comparison). Note that for a noninteracting Bose gas, m* = m, causing a discontinuity at γ = 0. This means that the ideal Bose gas is not adiabatically connected to the weakly-interacting regime in 1D.
As far as the type II spectrum is concerned, the properties (a)-(f) detailed above suggest another type of expansion, whose truncation to first order has been anticipated in [START_REF] Shamailov | Dark-soliton-like excitations in the Yang-Gaudin gas of attractively interacting fermions[END_REF]:
\frac{\epsilon_{II}(p;\gamma)}{\epsilon_F} = \sum_{n=1}^{+\infty}\epsilon_n(\gamma)\left[\frac{p}{p_F}\left(2-\frac{p}{p_F}\right)\right]^{n}, (IV.58)
where \{\epsilon_n(\gamma)\}_{n\geq1} are dimensionless functions. The property (f) enables me to write
\epsilon_{II}(p;\gamma) = v_s(\gamma)\,p - \frac{p^2}{2m^*(\gamma)} + \dots, (IV.59)
and equating both expressions to order p^2, I find in particular that
\epsilon_1(\gamma) = \frac{v_s(\gamma)}{v_F}. (IV.60)
A similar result was recently inferred from a Monte-Carlo simulation of one-dimensional 4 He in [START_REF] Bertaina | One-Dimensional Liquid 4 He: Dynamical Properties beyond Luttinger-Liquid Theory[END_REF], and proved by Bethe Ansatz applied to the hard-rods model in [START_REF] Motta | Dynamical structure factor of one-dimensional hard rods[END_REF]. Using the same approach to next order, I also obtained
\epsilon_2(\gamma) = \frac{1}{4}\left[\frac{v_s(\gamma)}{v_F} - \frac{m}{m^*(\gamma)}\right]. (IV.61)
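As a small worked example of the truncated expansion (the values of v_s/v_F and m/m* below are illustrative placeholders, to be taken from the Bethe Ansatz results above), the second-order approximation of the type II branch reads, in units of ε_F:

import numpy as np

def eps_II_approx(p_over_pF, vs_over_vF, m_over_mstar):
    # Eq. (IV.58) truncated at second order, with Eqs. (IV.60) and (IV.61)
    x = p_over_pF*(2.0 - p_over_pF)
    e1 = vs_over_vF
    e2 = 0.25*(vs_over_vF - m_over_mstar)
    return e1*x + e2*x**2

p = np.linspace(0.0, 2.0, 9)
print(eps_II_approx(p, vs_over_vF=0.8, m_over_mstar=0.7))

At p = p_F the truncation gives ε_1 + ε_2 for the maximum of the branch, which is the quantity compared to exact numerics in Fig. IV.18 below; in the Tonks-Girardeau limit (v_s = v_F, m* = m) the first-order term alone reproduces the exact result.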
It is now possible to compare the exact spectrum to the truncated series obtained from Eq. (IV.58), to investigate the validity range of various approximate expressions. I denote by \Delta p and \Delta\epsilon respectively the half-width in momentum around the umklapp point, and the maximum energy, such that the linearized spectrum \epsilon_{TL} = v_s|p-2p_F| is exact up to ten percent. These quantities, shown in Fig. IV.17, should be considered as upper bounds of validity for dynamical observables such as the dynamical structure factor. The Tomonaga-Luttinger liquid theory works better at large interaction strength.
Including the quadratic term in Eq. (IV.59) and neglecting higher-order ones is actually a complete change of paradigm, from massless bosonic to massive fermionic excitations at low energy, at the basis of the Imambekov-Glazman theory of beyond-Luttinger liquids. Systematic substitution of the variable k in Eqs. (IV.52) and (IV.53), and higher-order expansions, suggest that higher-order terms in expansion (IV.58) can be neglected in a wide range of strong to intermediate interaction strengths. Figure IV.18 shows the local maximum value \epsilon_{II}(p_F) of the Lieb-II excitation spectrum, as obtained from a numerical calculation as well as from the expansion (IV.58) truncated to orders one and two. I find that the result to order one is satisfying at large γ, but the second-order correction significantly improves the result at intermediate values of the Lieb parameter. Numerical calculations show that third- and higher-order corrections are negligible in a wide range of strong interactions.

Figure IV.18 – Maximum \epsilon_{II}(p_F) of the type II excitation spectrum, in units of \epsilon_F, as a function of the dimensionless interaction strength γ. The left panel shows the first-order approximation in Eq. (IV.58), taking \epsilon_{n\geq2} = 0 (dashed, black), compared to exact numerical data (blue dots). The right panel shows a zoom close to the origin and the second-order approximation (solid, black). Agreement with the exact result is significantly improved when using this correction.
IV.7 Summary of this chapter/résumé du chapitre
This chapter starts with an historical account of a few experiments that allowed major breakthroughs in the understanding of superfluidity. Discovered in 4 He, dramatic suppression of viscosity was then witnessed in 3 He, in ultracold atom experiments and in polaritons. The conceptual difficulties raised by the notion of superfluidity are such that no universal definition or characterization means has been found so far.
The most celebrated criterion for superfluidity is due to Landau, who predicted a complete absence of viscosity at zero temperature below a critical velocity, which coincides with the sound velocity at the mean-field level. In the lineage of this criterion, the concept of drag force due to an impurity or a laser beam stirred in the fluid allows one to study friction in a quantitative way. It generalizes Landau's arguments, taking into account transition probabilities to excited states and the precise shape of the potential barrier. Linear response theory allows one to express the drag force due to a weak barrier given the profile of the latter and another observable, the dynamical structure factor. The Tonks-Girardeau regime gives an opportunity to find the drag force in a strongly-interacting Bose gas in the linear response framework, without additional approximation.
In the thermodynamic limit, the energy-momentum profile of possible excitations shows that a low-energy region is forbidden in 1D, and the low-energy excitations are dominated by processes that occur close to the umklapp point. To obtain the dynamical structure factor at finite temperature, I first had to derive the temperature profile of the chemical potential. Then, thermal effects on the dynamical structure factor consist essentially of a broadening of the momentum-energy sector where excitations can occur, and above T \simeq 0.2\,T_F, well-defined phonon excitations disappear progressively.
As far as the drag force is concerned, at zero temperature, linear response theory predicts that it saturates at supersonic flow velocities, which seems quite unrealistic, and at finite temperature the drag force is lower than it would be at T = 0 close to the Fermi velocity, which is also disturbing. The first point is solved by taking into account the barrier width, assuming a Gaussian profile. Then, instead of saturating, at high velocities the drag force vanishes. In other words, I have predicted the existence of a quasi-superfluid, supersonic regime.
At finite interaction strengths, several approaches exist to evaluate the dynamical structure factor, ranging from perturbation theory to the Tomonaga-Luttinger liquid framework, its extension to higher energies through the Imambekov-Glazman liquid formalism, and numerical solution based on algebraic Bethe Ansatz methods. Although experiments have demonstrated the predictive power of the more advanced techniques, I have used the standard Tomonaga-Luttinger liquid formalism all the same. I have calculated the dynamical structure factor and generalized it to finite temperature, where a comparison to the exact Tonks-Girardeau result has allowed me to quantitatively show that the effective description is limited to low temperatures, of the order of T \simeq 0.2\,T_F.
At zero temperature and finite interaction strength, making quantitative predictions within the Tomonaga-Luttinger liquid framework requires the knowledge of the sound velocity. The latter is obtained exactly by Bethe Ansatz. It is also important to evaluate the first form factor. I have found an accurate fitting function that allows for a comparison with the exact result from algebraic Bethe Ansatz. Then, I have evaluated the drag force at low velocities, in the simplest case and also at finite barrier width and temperature, where I have compared the predictions with the exact result in the Tonks-Girardeau regime. This shows that the drag force evaluated in the Tomonaga-Luttinger liquid framework is correct up to v \simeq 0.2\,v_F.
To go beyond the standard Luttinger liquid treatment, it is important to accurately evaluate the excitation spectra. This is done exactly by Bethe Ansatz, through a procedure that I explain in detail. I have obtained several symmetry properties of the Lieb II excitation spectrum, found several approximations in terms of the sound velocity and effective mass, and evaluated their range of validity.
Ce chapitre débute par un bref rappel des principales expériences historiques qui ont mis en évidence la superfluidité et ont conduit à ses premières interprétations théoriques. Découverte grâce à l'isotope à quatre nucléons de l'hélium sous forme liquide, la spectaculaire disparition de la viscosité en-dessous d'une température critique a ensuite été observée pour l'isotope à trois nucléons du même élément, puis dans des systèmes d'atomes froids et plus récemment dans les polaritons. Les difficultés conceptuelles soulevées par la superfluidité sont telles qu'aucun critère universellement valide n'a pu être défini à ce jour pour la définir ou la caractériser.
Parmi les multiples critères proposés, celui dû à Landau, qui prévoit notamment l'existence d'une vitesse critique en-dessous de laquelle un écoulement devient superfluide, a sans doute eu le plus fort impact. Dans son étroite filiation, le concept de force de traînée dans le régime quantique, due à une impureté mobile ou à un faisceau laser qui parcourt le fluide, permet d'évaluer quantitativement l'effet de la viscosité. Le critère fondé sur l'absence de force de traînée dans le régime superfluide généralise celui de Landau, en prenant en compte les probabilités de transition vers des états excités et le profil spatial de la barrière de potentiel. Le formalisme de la réponse linéaire permet d'évaluer la force de traînée en fonction de la vitesse d'écoulement, si on se donne un profil de potentiel et le facteur de structure dynamique du fluide. Dans le régime de Tonks-Girardeau, où ce facteur de structure est connu exactement, la prise en compte de la largeur finie de la barrière conduit à une force de traînée qui s'annule à haute vitesse, ce qui prédit un régime quasi-superfluide dans le domaine supersonique.
Plusieurs stratégies sont possibles pour prendre en compte les interactions dans le cas général. Le formalisme des liquides de Tomonaga-Luttinger s'applique approximativement à basse énergie et faible vitesse, celui des liquides d'Imambekov-Glazman a un domaine de validité plus conséquent, et enfin des méthodes numériques pour résoudre l'Ansatz de Bethe algébrique donnent un résultat quasi-exact. Bien que les expériences mettent en évidence le pouvoir prédictif des méthodes les plus avancées, dans un premier temps je me contente d'utiliser le formalisme des liquides de Tomonaga-Luttinger, que j'ai généralisé à température non-nulle et comparé autant que possible au traitement exact du régime de Tonks-Girardeau, qu'il reproduit plutôt bien à basse énergie et très basse température.
À température nulle et intensité des interactions arbitraire, afin de faire des prédictions quantitatives dans le formalisme des liquides de Tomonaga-Luttinger, il faut au préalable évaluer la vitesse du son, obtenue par Ansatz de Bethe, et le premier facteur de forme. Ce dernier est très difficile à calculer, et requiert des techniques avancées fondées sur l'Ansatz de Bethe algébrique. Par ajustement d'une expression approchée sur les données numériques disponibles dans la littérature, j'ai pu obtenir une bonne approximation de cette quantité. Ceci m'a permis en particulier de comparer les prédictions du formalisme de Tomonaga-Luttinger au facteur de structure dynamique exact le long d'une ligne à la verticale du point d'umklapp, qui se traduit par un accord surprenant dans une large gamme d'intensité des interactions, à énergie suffisamment faible. J'ai ensuite évalué la force de traînée, et me suis appuyé une fois de plus sur la solution exacte dans le régime de Tonks-Girardeau pour montrer que la solution effective s'applique bien aux vitesses faibles.
Pour aller plus loin que le formalisme des liquides de Tomonaga-Luttinger, il est essentiel d'évaluer avec précision le spectre d'excitation du modèle de Lieb et Liniger. J'en ai étudié les propriétés de symétrie et ai trouvé un développement en série approprié pour l'exprimer dans le cas général, dont les premiers termes dépendent de la vitesse du son et de la masse effective, obtenues par Ansatz de Bethe. J'ai comparé diverses approximations au résultat exact et évalué leur domaine de validité.
IV.8 Outlook of this chapter
Although it has already led to a fair number of new results, the project associated to this chapter is not at an end. My analytical analysis remains at a basic level compared to my initial ambitions, as its accuracy does not compete yet with the numerical results from the ABACUS algorithm. I am only one step away, however, from making quantitative predictions based on the Imambekov-Glazman theory, as I only miss the edge exponents, that should be within reach in principle. My analytical estimates of the excitation spectrum of the Lieb-Liniger model are already fairly accurate. They could be further improved at small and intermediate interaction strength, by including the next order in the truncated series. In particular, the function \epsilon_3(\gamma) in Eq. (IV.58) depends on \lambda^*(\gamma) in Eq. (IV.56), which could be explicitly evaluated by straightforward algebra from results of Refs. [START_REF] Matveev | Decay of Fermionic Quasiparticles in One-Dimensional Quantum Liquids[END_REF][415], see Ref. [START_REF] Petković | The spectrum of elementary excitations in Galilean invariant integrable models[END_REF] for recent improvements in this direction.
Whatever level of sophistication should be employed to evaluate the dynamical structure factor, I already foresee a lot of exciting open problems related to the drag force, that could be answered at a basic level within current means. For instance, the anisotropic drag force due to spin-orbit coupling constitutes a new thread of development [420,[START_REF] Liao | Noncollinear drag force in Bose-Einstein condensates with Weyl spin-orbit coupling[END_REF].
In the context of Anderson localization, a random potential may break down superfluidity at high disorder. This issue was pioneered in [START_REF] Albert | Breakdown of the superfluidity of a matter wave in a random environment[END_REF], and it has been recently shown that a random potential with finite correlation length gives rise to the kind of drag force pattern that I have brought to light in this chapter [START_REF] Cherny | Landau instability and mobility edges of the interacting one-dimensional Bose gas in weak random potentials[END_REF]. Other types of barrier may also be considered, for instance a shallow lattice [START_REF] Cherny | Decay of superfluid currents in the interacting one-dimensional Bose gas[END_REF]. Those having an appropriate profile, whose Fourier transform has weight only in the energy-momentum area where the dynamical structure factor is zero, lead to exact superfluidity in the framework of linear response theory, according to the drag force criterion. However, even if linear response theory predicts superfluidity, to be certain that it is the case, the drag force should be accounted for at higher orders in perturbation theory, or even better, non-perturbatively. This was investigated in [START_REF] Cherny | Theory of superfluidity and drag force in the one-dimensional Bose gas[END_REF] for the Tonks-Girardeau gas.
It has been argued that when quantum fluctuations are properly taken into account, they impose a zero critical velocity as there always exists a Casimir type force [START_REF] Roberts | Casimir-Like Force Arising from Quantum Fluctuations in a Slowly Moving Dilute Bose-Einstein Condensate[END_REF][START_REF] Sykes | Drag Force on an Impurity below the Superfluid Critical Velocity in a Quasi-One-Dimensional Bose-Einstein Condensate[END_REF]. However, the latter is of a far lesser amplitude, and another key ingredient, neglected in this approach and mine, is finite mass impurity. It can be taken into account in the drag force formalism [START_REF] Cherny | Theory of superfluidity and drag force in the one-dimensional Bose gas[END_REF], or in the Bose polaron framework, where the full excitation spectrum of the gas and impurity is considered non-perturbatively [START_REF] Schecter | Critical Velocity of a Mobile Impurity in One-Dimensional Quantum Liquids[END_REF]. The situation is radically different then, as strict superfluidity seems possible if m i < +∞ [START_REF] Lychkovskiy | Perpetual motion of a mobile impurity in a one-dimensional quantum gas[END_REF]. More generally, even in the drag force context, at finite N a small gap appears at the umklapp point [START_REF] Cherny | Decay of superfluid currents in the interacting one-dimensional Bose gas[END_REF], allowing superfluidity at very low velocities [START_REF] Schenke | Probing superfluidity of a mesoscopic Tonks-Girardeau gas[END_REF]. It would also be interesting to relate the drag force to supercurrent decay, which has emerged as the standard observable to study superfluidity of mesoscopic systems. These works show that superfluidity is rather expected at the mesoscopic scale than the macroscopic one.
The drag force formalism, as it has been applied so far, is not fully satisfying, as it should be used in Newton's equations to predict the equation of motion of the impurity, rather than just checking whether the drag force is zero or not. The only example thereof that I know is [START_REF] Cherny | Decay of superfluid currents in the interacting one-dimensional Bose gas[END_REF], although studies of long-time velocity as a function of the initial one are flourishing in the literature based on the Bose polaron formalism [START_REF] Lychkovskiy | Perpetual motion of a mobile impurity in a one-dimensional quantum gas[END_REF][START_REF] Gamayun | Quench-controlled frictionless motion of an impurity in a quantum medium[END_REF]. Moreover, at the moment the drag force formalism does not take into account the inhomogeneities, that are essential [START_REF] Fedichev | Critical velocity in cylindrical Bose-Einstein condensates[END_REF]. They have been taken into account in the Tomonaga-Luttinger liquid framework in [430], but to fully understand the back action of the drag on the local density profile, it should be taken into account dynamically, so that numerical support is still needed to follow the center of mass position correctly [START_REF] Castelnovo | Driven impurity in an ultracold onedimensional Bose gas with intermediate interaction strength[END_REF][START_REF] Robinson | Motion of a Distinguishable Impurity in the Bose Gas: Arrested Expansion Without a Lattice and Impurity Snaking[END_REF]. To finish with, experimentally, there are some contexts where the drag force should still be measured, for instance in polaritons [START_REF] Berceanu | Drag in a resonantly driven polariton fluid[END_REF], or in a 1D gas.
To enlarge the scope of this study to other fields of physics, let me mention that the concept of superfluidity is also studied in astrophysics since neutron stars may have a superfluid behavior [START_REF]Superfluidity and the Moments of Inertia of Nuclei[END_REF][START_REF] Page | Rapid Cooling of the Neutron Star in Cassiopeia A Triggered by Neutron Superfluidity in Dense Matter[END_REF], and in cosmology [START_REF] Volovik | Superfluid analogies of cosmological phenomena[END_REF]. In a longer run perspective, as far as technological applications of superfluids beyond mere cooling are concerned, stability against thermal fluctuations or external perturbations is crucial. The key parameters, critical temperature and critical velocity, are typically highest in the strongly-correlated regime, where the interactions stabilizing the many-body state are peculiarly strong.
Chapter V
Dimensional crossovers in a gas of noninteracting spinless fermions
V.1 Introduction
In order to describe ultracold atom experiments with high accuracy, in addition to selecting a model for the interactions, several other aspects have to be taken into account, such as finite temperature, system size and number of particles, or inhomogeneities of the atomic cloud. I have illustrated such refinements on the example of the Lieb-Liniger model in the previous chapters. Taking several effects into account simultaneously is technically challenging, but it is not easy either to rank them by relative importance, to decide which of them could be neglected or should be incorporated first.
However, among all assumptions that could be questioned, there is one on which I have not insisted yet. Most of the low-dimensional gases created so far are not strictly one-dimensional, but involve a multi-component structure. For instance, when an array of wires is created, the gases confined in two different ones may interact through their phase or density. Even in a tightly-confined single gas, the conditions required to reach a strictly one-dimensional regime are not ideally fulfilled, and the gas is actually quasi-1D, with several modes in momentum-energy space. This slight nuance may be more important to take into account than any of the effects listed before. Although considering even a few modes is a theoretical challenge of its own, it did not deter pioneers from investigating the dimensional crossover to higher dimensions already mentioned in chapter II, whose technical difficulty lies at an even higher level, but which is accessible to experiments.
To have a chance of treating the dimensional crossover problem analytically and exactly, I have considered the most conceptually simple system, i.e. a noninteracting gas. In this case, dimensional crossover is obviously not realized by interactions, but rather by a transverse deconfinement, through a progressive release of trapping. This scenario is not the most commonly considered in the literature, so Ref. [START_REF] Lang | Dimensional crossover in a Fermi gas and a cross-dimensional Tomonaga-Luttinger model[END_REF], on which this chapter is based, is quite pioneering and original compared to my other works presented above. The simplifications made at the beginning enable me to study dynamical observables such as the dynamical structure factor and drag force.
The chapter is organized as follows: in a first time, I calculate the dynamical structure factor of a noninteracting Fermi gas in higher dimensions. Then, I develop a formalism that describes the multi-mode structure, and use it to recover the previous results by adding more modes up to observing a dimensional crossover. The same work is done on the drag force, then.
In a second time, I consider the effect of a harmonic trap in the longitudinal direction within the local-density approximation, and show that the trap increases the effective dimension of the system, allowing to simulate a gas in a box up to dimension d = 6.
To conclude the chapter, I develop a multimode Tomonaga-Luttinger model to describe the noninteracting Fermi gas, and apply it throughout the dimensional crossover from 1D to 2D.
Pour décrire avec précision une expérience d'atomes froids, choisir un modèle approprié est un étape obligée, mais encore faut-il prendre en compte divers aspects du problème, comme la température, la taille du système et le nombre d'atomes, ainsi que les inhomogénéités du profil de densité du gaz, autant de paramètres qui peuvent avoir leur importance. J'ai déjà illustré ces différents points dans les chapitres précédents, en m'appuyant sur le modèle de Lieb et Liniger. La prise en compte de plusieurs paramètres en même temps s'avère rapidement problématique pour des raisons techniques, mais il est a priori difficile de savoir lesquels négliger en toute sécurité, ou prendre en compte en priorité, sachant qu'ils peuvent avoir des effets inverses qui se compensent partiellement et n'apportent pas grand chose, de ce fait, en comparaison d'une analyse moins fine.
Toutefois, dans la liste établie plus haut, j'ai passé sous silence un aspect important de la modélisation, à savoir le choix d'affecter au système une dimension donnée. La plupart des gaz ultrafroids de basse dimension réalisés expérimentalement ne s'accomodent pas à la perfection d'une description unidimensionnelle, mais possèdent une structure multimode, qui leur donne le statut de gaz quasi-1D. Par exemple, un gaz dans un réseau optique peut être vu comme un ensemble de gaz unidimensionnels séparés spatialement, mais dans la pratique des couplages peuvent avoir lieu entre ces différents gaz. Même dans un gaz unique fortement confiné, les conditions théoriques pour le rendre strictement unidimensionnel ne sont pas toujours remplies, et le gaz est alors quasi-1D dans la pratique. Ces considérations de dimension peuvent s'avérer plus importantes que tous les autres effets réunis, de par leurs importantes conséquences en basse dimension. Bien que la prise en compte ne serait-ce que de deux ou trois modes puisse s'avérer très technique sur le plan théorique, cela n'a pas empêché certains pionniers d'ouvrir le champ de recherche associé au changement de dimension, extrêmement compliqué dans la théorie, mais réalisable en pratique. Afin de maximiser mes chances de réussir à traiter un exemple analytiquement et de manière exacte, j'ai considéré le modèle le plus simple possible, un gaz idéal. Dans ce cas, les effets dimensionnels ne sont bien évidemment pas dus aux interactions, mais au relâchement progressif d'une contrainte de confinement latéral. La situation s'avère suffisamment simple pour me permettre de considérer des observables à la structure riche, comme le facteur de structure dynamique et la force de traînée.
Le chapitre s'organise de la sorte: dans un premier temps, je calcule le facteur de structure dynamique d'un gaz de fermions en dimensions deux et trois, après quoi je développe le formalisme adéquat pour étudier des structure multimodes, et en guise d'illustration, je m'en sers pour retrouver ces résultats à travers le passage vers la dimension supérieure. J'adapte ensuite tout cela à la force de traînée.
Dans un second temps, j'envisage un confinement harmonique dans la direction longitudinale, que je traite dans le cadre de l'approximation de la densité locale. Je montre que ce piège a pour effet d'augmenter la dimension effective du système, ce qui permet de simuler jusqu'à six dimensions dans une boîte. Enfin, j'étends le formalisme des liquides de Tomonaga-Luttinger à des situations multi-modes, et l'applique jusqu'à la limite de dimension deux afin d'illustrer ses capacités prédictives.
V.2 Energy-momentum space dimensional crossover in a box trap
In this section, I consider N non-interacting spinless fermions of mass m in an anisotropic parallelepipedic box confinement at zero temperature. I assume that the length L_x of the box trap is much larger than its width L_y and height L_z, giving it the shape of a beam, and that the gas confined in the latter is uniform. Thanks to recent developments, this seemingly much-idealized situation can be approached experimentally, in an optical box trap [START_REF] Van Es | Box traps on an atom chip for one-dimensional quantum gases[END_REF][START_REF] Gaunt | Bose-Einstein Condensation of Atoms in a Uniform Potential[END_REF][START_REF] Mukherjee | Homogeneous Atomic Fermi Gases[END_REF][START_REF] Hueck | Two-Dimensional Homogeneous Fermi Gases[END_REF]. If at least one of the transverse sizes is small enough, i.e. such that the energy level spacing is larger than all characteristic energy scales of the problem (given by temperature, or chemical potential), then the gas is confined to 2D or even to 1D, since the occupation of higher transverse modes is suppressed. In the following, I study the behavior of the system as transverse sizes are gradually increased and transverse modes occupied. This yields a dimensional crossover from 1D to 2D, and eventually 3D, whose principle is sketched in Fig. V.1.
Figure V.1 – Consider a d-dimensional gas of noninteracting spinless fermions. In the transverse direction, the Fermi energy is low enough, so that only one mode is selected. When a transverse dimension of the box increases, new modes are available below the Fermi energy. In the limit of an infinite transverse direction, they form a continuum, and the system is (d+1)-dimensional.

An interesting observable in this context is the dynamical structure factor, already considered in the previous chapter. To begin with, I study the effect of space dimension by direct calculation. In arbitrary dimension d in a box trap, the dynamical structure factor can be calculated as
S_d(\vec{q},\omega) = V_d\int_{-\infty}^{+\infty} dt \int d^d r\, e^{i(\omega t-\vec{q}\cdot\vec{r})}\,\langle\delta n_d(\vec{r},t)\,\delta n_d(\vec{0},0)\rangle, (V.1)
where V_d is the volume of the system, \vec{q} and ω are the transferred momentum and energy in the Bragg spectroscopy process, \delta n_d(\vec{r},t) = n_d(\vec{r},t) - N/V_d is the local fluctuation of the density operator around its average value, and \langle\dots\rangle denotes the equilibrium quantum statistical average. Note that I have slightly modified the expression of the drag force compared to the previous chapter, so that its unit does not depend on d. Putting the prefactor V_d before would only have led to heavier notations.
If the gas is probed in the longitudinal direction, i.e. specializing to q = q e x , where e x is the unit vector along the x-axis, also denoted by x 1 , the first coordinate in dimension d, then
S_d(q\vec{e}_x,\omega) = V_d\int \frac{d^dk}{(2\pi)^{d-1}}\,\Theta\!\left(\epsilon_F-\sum_{i=1}^{d}\epsilon_{k_{x_i}}\right)\Theta\!\left(\sum_{i=1}^{d}\epsilon_{k_{x_i}+q\delta_{i,1}}-\epsilon_F\right)\delta\!\left[\omega-(\omega_{k_{x_1}+q}-\omega_{k_{x_1}})\right], (V.2)
which is more convenient than Eq. (V.1) to actually perform the calculations. From Eq. (V.2), I have computed the dynamical structure factor of a d-dimensional Fermi gas in the thermodynamic limit, for d = 1, 2, 3, and in particular, I recovered the known result in 1D, given by Eq. (IV.26), and 3D [START_REF] Nozières | The Theory of Quantum Liquids: Superfluid Bose Liquids[END_REF]441]. As far as the two-dimensional case is concerned, I have given details of calculations in Ref. [START_REF] Lang | Dimensional crossover in a Fermi gas and a cross-dimensional Tomonaga-Luttinger model[END_REF].
Then, looking for a general expression that would depend explicitly on d, I have realized that these results can be written in a compact form as
S_d(q\vec{e}_x,\omega) = V_d\, s_d\left(\frac{m}{2\pi q}\right)^d\Bigg\{\Theta(\omega_+-\omega)\,\Theta(\omega-\omega_-)\,(\omega_+-\omega)^{\frac{d-1}{2}}\left[\omega-\mathrm{sign}(q-2k_F)\,\omega_-\right]^{\frac{d-1}{2}} +\Theta(2k_F-q)\,\Theta(\omega_--\omega)\Big[\left[(\omega_-+\omega)(\omega_+-\omega)\right]^{\frac{d-1}{2}}-\left[(\omega_++\omega)(\omega_--\omega)\right]^{\frac{d-1}{2}}\Big]\Bigg\}, (V.3)
where
k_F = \left[\frac{N}{V_d}\frac{(2\pi)^d}{\Omega_d}\right]^{1/d} (V.4)
is the modulus of the d-dimensional Fermi wavevector of the gas,
\Omega_d = \frac{\pi^{d/2}}{\Gamma\!\left(\frac{d+2}{2}\right)} (V.5)
is the volume of the unit d-dimensional ball,
s_d = \frac{2\pi^{\frac{d+1}{2}}}{\Gamma\!\left(\frac{d+1}{2}\right)} (V.6)
is the area of the unit d-sphere, and keeping the same notation as in the previous chapter,
\omega_\pm = \left|\frac{q^2}{2m}\pm\frac{k_Fq}{m}\right|. (V.7)
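For later convenience, Eq. (V.3) is straightforward to implement; the following sketch (Python, with ℏ = 1 and illustrative values of q and ω) returns S_d along the longitudinal direction and makes the d = 1 peculiarity visible, since the second term then cancels identically:

import numpy as np
from math import gamma, pi

def S_d(q, w, d, kF=1.0, m=1.0, Vd=1.0):
    # Dynamical structure factor of the ideal Fermi gas, Eq. (V.3), probed along x
    wp = q**2/(2*m) + kF*q/m
    wm = abs(q**2/(2*m) - kF*q/m)
    sd = 2*pi**((d + 1)/2)/gamma((d + 1)/2)       # Eq. (V.6)
    pref = Vd*sd*(m/(2*pi*q))**d
    e = (d - 1)/2
    out = 0.0
    if wm < w < wp:
        out += (wp - w)**e * (w - np.sign(q - 2*kF)*wm)**e
    if q < 2*kF and w < wm:
        out += ((wm + w)*(wp - w))**e - ((wp + w)*(wm - w))**e
    return pref*out

for d in (1, 2, 3):
    print(d, S_d(q=1.0, w=0.6, d=d), S_d(q=1.0, w=0.2, d=d))

At q = k_F and an energy below ω_-, the 1D result vanishes while the 2D and 3D ones do not, which is the forbidden low-energy region discussed below.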
Equation (V.3) clearly shows that the case d = 1 is special, in the sense that the second term vanishes, then. The dynamically-forbidden, low-energy region seen in the previous chapter is one more specificity of a 1D gas. Another interesting aspect of Eq. (V.3) is that it explicitly depends on an integer parameter, that I have denoted by d for obvious reasons. Let us call it n, forget about physics for a while and look at (V.3) with the eyes of a pure mathematician. When thinking about a strategy to prove that the property P (n), that means '(V.3) holds if d = n', is true for any natural integer n, the first one that comes to mind is induction on this index. For a mathematician, this ought to be a reflex, but a physicist would argue that there is actually no need for such a proof, since P (n) is already proven for all physically-relevant dimensions, 1, 2 and 3.
An interesting issue has been raised, though: a tool would be needed to perform the induction step P (n) → P (n+1), and multimode structures are natural candidates to play this role. The very possibility of performing any of the peculiar induction steps is in itself an appreciable opportunity, as it yields an alternative to direct calculation and, as such, a way to cross-check long and tedious derivations. One can even hope that this step would yield new insights into dimensional crossovers, allowing to revisit dimensional-dependent phenomena.
As an illustration, I consider the dimensional crossover of the dynamical structure factor from 1D to 2D, obtained by populating higher transverse modes of the atomic waveguide in a quasi-1D (Q1D) structure. I write the two-dimensional fermionic field operator as
\psi(x,y) = \sum_{k_x}\sum_{k_y}\frac{e^{ik_xx}}{\sqrt{L_x}}\frac{e^{ik_yy}}{\sqrt{L_y}}\,\hat{a}_{k_xk_y}, (V.8)
where k_{x,y} = \frac{2\pi}{L_{x,y}}j_{x,y}, with j_{x,y} an integer, and \hat{a}_{k_xk_y} = \hat{a}_{\vec{k}} is the fermionic annihilation operator, such that \{\hat{a}_{\vec{k}},\hat{a}^{\dagger}_{\vec{k}'}\} = \delta_{\vec{k},\vec{k}'}, and \langle\hat{a}^{\dagger}_{\vec{k}}\hat{a}_{\vec{k}'}\rangle = \delta_{\vec{k},\vec{k}'}\,n_F(\epsilon_{\vec{k}}). Then, applying Wick's theorem, I find that
\langle\delta n(\vec{r},t)\,\delta n(\vec{0},0)\rangle = \frac{1}{(L_xL_y)^2}\sum_{\vec{k},\vec{k}'}e^{-i[(\vec{k}-\vec{k}')\cdot\vec{r}-(\omega_{k_x}+\omega_{k_y}-\omega_{k'_x}-\omega_{k'_y})t]}\, n_F(\epsilon_{k_x}+\epsilon_{k_y})\,[1-n_F(\epsilon_{k'_x}+\epsilon_{k'_y})]. (V.9)
Substituting Eq. (V.9) into (V.1), the dynamical structure factor reads
S_{Q1}(q\vec{e}_x,\omega) = \frac{L_x}{L_y}\int_{-\infty}^{+\infty}dt\int_{-L_x/2}^{L_x/2}dx\int_{-L_y/2}^{L_y/2}dy\, e^{i(\omega t-q_xx)}\,\frac{1}{(2\pi)^2}\int_{-\infty}^{+\infty}dk_x\int_{-\infty}^{+\infty}dk'_x\sum_{k_y,k'_y}e^{-i[(k_x-k'_x)x+(k_y-k'_y)y-(\omega_{k_x}+\omega_{k_y}-\omega_{k'_x}-\omega_{k'_y})t]}\, n_F(\epsilon_{k_x}+\epsilon_{k_y})\,[1-n_F(\epsilon_{k'_x}+\epsilon_{k'_y})]. (V.10)
A few additional algebraic manipulations and specialization to T = 0 yield
S_{Q1}(q\vec{e}_x,\omega) = \sum_{k_y}L_x\int_{-\infty}^{+\infty}dk_x\,\Theta[\epsilon_F-(\epsilon_{k_x}+\epsilon_{k_y})]\,\Theta[\epsilon_{k_x+q_x}+\epsilon_{k_y}-\epsilon_F]\,\delta[\omega-(\omega_{k_x+q_x}-\omega_{k_x})] = \sum_{j_y=-M}^{M}S_1\!\left(q\vec{e}_x,\omega;\tilde{k}_F[j_y/\bar{M}]\right), (V.11)
where S_1(q\vec{e}_x,\omega;\tilde{k}_F[j_y/\bar{M}]) is the 1D dynamical structure factor where the chemical potential has been replaced by \epsilon_F-\epsilon_{k_y}, or equivalently, where the Fermi wavevector k_{F,1} is replaced by
\tilde{k}_F[j_y/\bar{M}] = k_F\sqrt{1-\frac{j_y^2}{\bar{M}^2}}. (V.12)
This defines the number of transverse modes, 2M + 1, through
M = I\!\left(\frac{k_FL_y}{2\pi}\right) = I(\bar{M}), (V.13)
where I is the floor function. In the large-M limit, the Riemann sum in Eq. (V.11) becomes an integral, and then
\frac{k_FL_y}{\pi}\int_0^1 dx\, S_1\!\left(q\vec{e}_x,\omega;k_F\sqrt{1-x^2}\right) = S_2(q\vec{e}_x,\omega), (V.14)
providing the dimensional crossover from 1D to 2D. More generally, one can start from any dimension d and find, after relaxation of a transverse confinement,
S_{Qd}(q\vec{e}_x,\omega) = \sum_{j=-M}^{M}S_d\!\left(q\vec{e}_x,\omega;\tilde{k}_F[j/\bar{M}]\right) \xrightarrow[M\to+\infty]{} \frac{k_FL_{d+1}}{\pi}\int_0^1 dx\, S_d\!\left(q\vec{e}_x,\omega;\tilde{k}_F[x]\right) = S_{d+1}(q\vec{e}_x,\omega). (V.15)
If used repeatedly, Eq. (V.15) allows one to evaluate the dynamical structure factor up to any dimension, and is the key to the induction step.
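The induction step can be checked numerically with very little code. The sketch below (Python; L_y and the probing point are illustrative) sums 1D contributions according to Eqs. (V.11)-(V.13) and shows the convergence of the properly normalized quasi-1D result as the number of transverse modes grows:

import numpy as np

def S1(q, w, kF, m=1.0, Lx=1.0):
    # 1D dynamical structure factor (hbar = 1): Lx*m/q inside the particle-hole band
    wp = q**2/(2*m) + kF*q/m
    wm = abs(q**2/(2*m) - kF*q/m)
    return Lx*m/q if wm < w < wp else 0.0

def S_quasi1D(q, w, kF, Ly, m=1.0, Lx=1.0):
    # Multimode sum of Eq. (V.11), with the reduced Fermi wavevector of Eq. (V.12)
    Mbar = kF*Ly/(2*np.pi)
    M = int(np.floor(Mbar))
    return sum(S1(q, w, kF*np.sqrt(1.0 - (j/Mbar)**2), m, Lx)
               for j in range(-M, M + 1))

for Ly in (10.0, 40.0, 160.0, 640.0):
    # normalized so that the large-Ly limit is the integral appearing in Eq. (V.14)
    print(Ly, S_quasi1D(q=1.0, w=0.6, kF=1.0, Ly=Ly)/(1.0*Ly/np.pi))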
I illustrate numerically the dimensional crossover from 1D to 2D using Eq. (V.15). Figure V.2 shows the dynamical structure factor as a function of the frequency ω for two choices of wavevector q. Sections are made at fixed q, rather than ω, because such curves are accessible to experiments. In each panel, this observable is given for a 1D gas, a Q1D gas where M = 10 and 2D for a comparison. Notice that only a few modes are needed to recover higher-dimensional physics within a very good approximation, since in this example, the staircase shape taken by the dynamical structure factor of the Q1D gas mimics already quite well the 2D one.

I proceed, following the spirit of chapter IV where this analysis was done for a weakly-interacting Bose gas, by studying the effect of dimension on the drag force. I recall that if a weak potential barrier or impurity is stirred along the fluid, putting it slightly out of equilibrium, then in linear response theory the average energy dissipation per unit time is linked to the dynamical structure factor by the relation
\dot{E} = -\frac{1}{2\pi V_d}\int_0^{+\infty}d\omega\int\frac{d^dq}{(2\pi)^d}\,S_d(\vec{q},\omega)\,|U_d(\vec{q},\omega)|^2\,\omega, (V.16)
where U d ( q, ω) is the Fourier transform of the potential barrier U d ( r, t) defining the perturbation part of the Hamiltonian
H_{pert} = \int d^dr\, U_d(\vec{r},t)\,n_d(\vec{r}).
Figure V.2 – Dynamical structure factor of a noninteracting Fermi gas S(q, ω), in units of S(q = 2k_F, ω = ω_F), for dimensionless wavevectors q/k_F = 1 (upper panel) and q/k_F = 3 (lower panel), as a function of frequency ω in units of the Fermi frequency, in 1D (dashed, blue) and Q1D for 2M+1 = 21 modes (solid, red), compared to 2D (dotted, black). Few modes are needed for the Q1D system to display a similar behavior as the 2D one, such as the shark fin shape in the upper panel.

With a delta-potential barrier U_d(\vec{r},t) = U_d\,\delta(x-vt) in the direction x, covering the whole waveguide in the transverse directions, the drag force reads
F_d(v) = \frac{U_d^2}{2\pi V_d}\int_0^{+\infty} dq\, q\, S_d(q\vec{e}_x,qv). (V.17)
Denoting by v_{F,d} = \hbar k_F/m the Fermi velocity in dimension d, from Eqs. (V.17) and (V.3), I find that for v \leq v_{F,d},
F_1(v) = \frac{2U_1^2mn_1}{\hbar^2}\,\frac{v}{v_{F,1}}, (V.18)
F_2(v) = \frac{2U_2^2mn_2}{\hbar^2}\,\frac{2}{\pi}\left[\frac{v}{v_{F,2}}\sqrt{1-\left(\frac{v}{v_{F,2}}\right)^2}+\arcsin\!\left(\frac{v}{v_{F,2}}\right)\right] (V.19)
and
F_3(v) = \frac{2U_3^2mn_3}{\hbar^2}\,\frac{3}{2}\,\frac{v}{v_{F,3}}\left[1-\frac{1}{3}\left(\frac{v}{v_{F,3}}\right)^2\right]. (V.20)
If v > v_{F,d}, for the potential barrier considered, the drag force saturates and takes the universal value
F_d(v \geq v_{F,d}) = \frac{2U_d^2 m n_d}{\hbar^2}. (V.21)
From Equations (V.18), (V.19) and (V.20), it is difficult to guess a general formula, valid for any integer dimension d. This is in stark contrast to the dynamical structure factor, where a close inspection was sufficient to infer Eq. (V.3). In particular, dimension two looks fairly odd, as it involves an arcsin function. In the absence of any intuition, I carried out the calculation from Eqs. (V.3) and (V.17), and found that the general expression actually reads
F_d(u_d\leq1) = \frac{2U_d^2mn_d}{\hbar^2}\,\frac{2}{\sqrt{\pi}(d+1)}\,\frac{\Gamma\!\left(\frac{d+2}{2}\right)}{\Gamma\!\left(\frac{d+1}{2}\right)}\left[(1-u_d^2)^{\frac{d-1}{2}}(1+u_d)\,{}_2F_1\!\left(1,\frac{1-d}{2};\frac{d+3}{2};-\frac{1+u_d}{1-u_d}\right)-(u_d\to-u_d)\right], (V.22)
where {}_2F_1 is the hypergeometric function, and I used the notation u_d = v/v_{F,d}. Actually, this expression can be simplified in even and odd dimensions separately, but the final expression remains rather heavy all the same, see Ref. [START_REF] Lang | Dimensional crossover in a Fermi gas and a cross-dimensional Tomonaga-Luttinger model[END_REF] for details. According to the drag force criterion, the non-interacting Fermi gas is not superfluid, as expected since superfluidity is a collective phenomenon.
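The general formula can be checked against the low-dimensional results above; the following sketch (Python with SciPy, in units where U_d = m = n_d = ℏ = 1, and with an illustrative velocity below v_{F,d}) evaluates Eq. (V.22) and compares it with Eqs. (V.18)-(V.20):

from math import gamma, pi, sqrt, asin
from scipy.special import hyp2f1

def drag_force(u, d, U=1.0, m=1.0, n=1.0, hbar=1.0):
    # Eq. (V.22), valid for u = v/v_{F,d} <= 1
    pref = (2*U**2*m*n/hbar**2) * 2/(sqrt(pi)*(d + 1)) * gamma((d + 2)/2)/gamma((d + 1)/2)
    def half(s):
        return ((1 - u**2)**((d - 1)/2) * (1 + s*u)
                * hyp2f1(1, (1 - d)/2, (d + 3)/2, -(1 + s*u)/(1 - s*u)))
    return pref*(half(1) - half(-1))

u = 0.4
print("d = 1:", drag_force(u, 1), "   Eq. (V.18):", 2*u)
print("d = 2:", drag_force(u, 2), "   Eq. (V.19):", (4/pi)*(u*sqrt(1 - u**2) + asin(u)))
print("d = 3:", drag_force(u, 3), "   Eq. (V.20):", 3*u*(1 - u**2/3))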
V.3 Dimensional crossovers in a harmonic trap
After discussing the dimensional crossover in energy space in a box trap, I focus here on dimensional crossovers in the experimentally relevant case of a harmonically trapped gas. To begin with, I consider a 1D Fermi gas, longitudinally confined by a harmonic trap described by the potential V(x) = \frac{1}{2}m\omega_0^2x^2, where \omega_0 is the frequency of the trap. Assuming a slow spatial variation of the density profile of the gas along x, the local-density approximation accurately describes the density profile of the gas, as shown in chapter III.
Within the same approximation of a slowly-varying spatial confinement, for wavevectors q larger than the inverse size of the spatial confinement 1/R T F , where R T F is the Thomas-Fermi radius, the dynamical structure factor S 1,h.o. (q, ω) of the harmonically trapped gas is given by the spatial average
S_{1,h.o.}(q,\omega) = \frac{1}{2R_{TF}}\int_{-R_{TF}}^{R_{TF}}dx\,S_{1,hom}[q,\omega;n_1(x)] = \int_0^1dz\,S_{1,hom}\!\left(q,\omega;n_1\sqrt{1-z^2}\right) (V.23)
after the change of variable z = x/R_{TF}, where S_{1,hom}[q,\omega;n] is the dynamical structure factor of a 1D homogeneous gas, and the linear density n_1 = \frac{2N}{\pi R_{TF}}. In other words, the local-density approximation assumes that portions of the size of the confinement length scale a_{h.o.} = \sqrt{\hbar/(m\omega_0)} can be considered as homogeneous, and that their responses are independent from each other [442]. The validity of this approximation for the dynamical structure factor has been verified in [START_REF] Vignolo | One-dimensional non-interacting fermions in harmonic confinement: equilibrium and dynamical properties[END_REF], by comparison with the exact result.

Figure V.4 – Dynamical structure factor of a noninteracting Fermi gas, S(q, ω), in units of S(q = 2k_F, ω = ω_F), for dimensionless wavevectors q/k_F = 1 (upper panel) and q/k_F = 3 (lower panel), as a function of frequency ω in units of the Fermi frequency ω_F, in 2D (dashed, black), 4D (dotted, blue) and 6D (solid, red), as obtained from a harmonically trapped gas, in a quantum simulator perspective.
Interestingly, equation (V.23) has the same structure as Eq. (V.14), thus establishing the equivalence, for the dynamical structure factor, of a 1D harmonically trapped gas and a 2D gas in a box. More generally, a similar procedure yields the following property:
In reduced units, the dynamical structure factor of a harmonically trapped ideal gas in d dimensions as predicted by the LDA is the same as in a box trap in 2d dimensions. In Figure V.4, I illustrate the latter on the dynamical structure factor of an ideal Fermi gas in a box in dimensions d = 2, 4, 6, as can be simulated by a harmonically-confined gas in dimension d = 1, 2, 3 respectively. This correspondence between a 2dD box trap and a dD harmonic trap can be inferred directly from the Hamiltonian of the system: for a box trap there are d quadratic contributions stemming from the kinetic energy, whereas for a harmonic confinement there are 2d quadratic terms originating from both kinetic and potential energy. Since, in a semiclassical treatment, each term contributes in a similar manner, harmonic confinement leads to a doubling of the effective dimensionality of the system in this noninteracting case. This is expected not only for the dynamical structure factor, but is also witnessed in other quantities such as the density of states, the condensate fraction of a Bose gas below the critical temperature, and virial coefficients [START_REF] Bahauddin | Virial Coefficients from Unified Statistical Thermodynamics of Quantum Gases Trapped under Generic Power Law Potential in d Dimension and Equivalence of Quantum Gases[END_REF] for instance.
The relevance of the dimensional crossover as a tool to prove dimensional-dependent properties by induction should be revised in view of this correspondence. In its light, the induction tool suddenly becomes far more interesting, in particular in the context of a 3D harmonically-trapped gas, as it corresponds to a uniform gas in 6D, whose dynamical structure factor is difficult to evaluate by direct calculation.
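The doubling of the effective dimension can also be verified directly on Eq. (V.23): averaging the homogeneous 1D result over the local Fermi wavevector of the trapped cloud reproduces, point by point and up to a constant factor, the 2D box result of Eq. (V.3). A minimal sketch (Python, ℏ = 1, illustrative probing points) reads:

import numpy as np

def S1_trap(q, w, kF, m=1.0, nz=20000):
    # LDA average of Eq. (V.23); local Fermi wavevector kF*sqrt(1 - z^2), 1D result m/q per unit length
    z = (np.arange(nz) + 0.5)/nz
    kloc = kF*np.sqrt(1.0 - z**2)
    wp, wm = q**2/(2*m) + kloc*q/m, np.abs(q**2/(2*m) - kloc*q/m)
    return np.mean((m/q)*((w > wm) & (w < wp)))

def S2_box(q, w, kF, m=1.0):
    # 2D result of Eq. (V.3), per unit volume
    wp, wm = q**2/(2*m) + kF*q/m, abs(q**2/(2*m) - kF*q/m)
    pref = 4*np.pi*(m/(2*np.pi*q))**2
    out = 0.0
    if wm < w < wp:
        out += np.sqrt((wp - w)*(w - np.sign(q - 2*kF)*wm))
    if q < 2*kF and w < wm:
        out += np.sqrt((wm + w)*(wp - w)) - np.sqrt((wp + w)*(wm - w))
    return pref*out

for (q, w) in [(1.0, 0.3), (1.0, 0.9), (1.5, 0.5)]:
    print(q, w, S1_trap(q, w, 1.0)/S2_box(q, w, 1.0))

The printed ratio is the same constant (π/k_F in these units) at every point where the 2D response is nonzero, which is the announced equivalence in reduced units.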
Actually, harmonic trapping in a longitudinal dimension does not necessarily increase the effective dimension of the system. To illustrate this point, I analyze the dynamical structure factor of a harmonically-confined gas in the experimentally relevant case where only the central part of the cloud is probed, over a radius r < R T F . Assuming that r is larger than the characteristic variation length of the external confinement, and using again the local density approximation, Eq. (V.23) transforms into
S_{1,\mathrm{h.o.}}(q,\omega;r) = \int_0^{r/R_{TF}} \mathrm{d}x\, S_1\!\left[q,\omega;\, n_1\sqrt{1-x^2}\right]. \quad (V.24)
An explicit expression is obtained by evaluating the integral
S = \int_0^{r/R_{TF}} \mathrm{d}x\, \Theta\!\left[q^2+2q\sqrt{1-x^2}-\omega\right]\,\Theta\!\left[\omega-\left|q^2-2q\sqrt{1-x^2}\right|\right], \quad (V.25)
where ω and q are expressed in reduced units such that k F = 1 and ω F = 1. The final expression reads
S = \Theta(\omega_+-\omega)\Theta(\omega-\omega_-)\min\!\left[\frac{r}{R_{TF}},\sqrt{1-\left(\frac{\omega-q^2}{2q}\right)^2}\right] + \Theta(2-q)\Theta(\omega_--\omega)\left\{\min\!\left[\frac{r}{R_{TF}},\sqrt{1-\left(\frac{\omega-q^2}{2q}\right)^2}\right] - \min\!\left[\frac{r}{R_{TF}},\sqrt{1-\left(\frac{\omega+q^2}{2q}\right)^2}\right]\right\}. \quad (V.26)
Equation (V.26) displays another kind of crossover, between the dynamical structure factor of a 1D gas in a box and that of a 2D gas in a box. In order to obtain the 1D behavior, r/R_{TF} must be the minimal argument in Eq. (V.26) above, while a 2D behavior is obtained when it is the maximal one. Figure V.5 shows the reduced dynamical structure factor S(q = k_F, ω; r), as a function of energy, at varying size r of the probed region.
Figure V.5 - Reduced dynamical structure factor S(q = k_F, ω; r)/r in units of S_1(q = k_F, ω) in the plane (r, ω), where r is the probed length of the gas in units of the Thomas-Fermi radius R_{TF}, and ω the energy in units of ω_F. If r ≪ R_{TF}, the 1D box result is recovered, while r → R_{TF} yields the 2D box result. Excitations below the lower excitation branch ω_- appear progressively as the dimensionless ratio r/R_{TF} is increased.
In essence, to get close to 1D behavior in spite of the longitudinal trapping potential, one should take the smallest r compatible with the condition r ≫ 1/q, which ensures the validity of the LDA, while keeping r large enough to detect a sufficient signal.
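To make this crossover concrete, the following minimal Python sketch (my own illustration, in the reduced units ℏ = k_F = ω_F = 1 of Eq. (V.25); the function names and the discretization are not part of the original derivation) evaluates the integral (V.25) directly and compares it with the closed form (V.26):

```python
import numpy as np

def S_probed_numeric(q, w, r_over_RTF, n=200000):
    """Direct evaluation of Eq. (V.25): midpoint sum over the probed region,
    with local Fermi wavevector sqrt(1 - x^2) in reduced units."""
    x = (np.arange(n) + 0.5) * r_over_RTF / n
    kf_loc = np.sqrt(1.0 - np.minimum(x, 1.0)**2)
    inside = (w < q**2 + 2*q*kf_loc) & (w > np.abs(q**2 - 2*q*kf_loc))
    return inside.mean() * r_over_RTF

def S_probed_closed(q, w, r_over_RTF):
    """Closed form, Eq. (V.26)."""
    def clip_sqrt(u):
        return np.sqrt(max(1.0 - u**2, 0.0))
    w_plus, w_minus = q**2 + 2.0*q, abs(q**2 - 2.0*q)
    t1 = (w_plus > w > w_minus) * min(r_over_RTF, clip_sqrt((w - q**2)/(2*q)))
    t2 = (q < 2.0) * (w < w_minus) * (min(r_over_RTF, clip_sqrt((w - q**2)/(2*q)))
                                      - min(r_over_RTF, clip_sqrt((w + q**2)/(2*q))))
    return t1 + t2

q, w = 1.0, 0.5   # a point below the 1D lower branch, filled only as r grows
for r in (0.1, 0.5, 0.8, 1.0):
    print(r, S_probed_numeric(q, w, r), S_probed_closed(q, w, r))
```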
After this study of the effect of a trap, that was inspired by experiments, I investigate another point, motivated by theoretical issues, anticipating a possible generalization to interacting systems. In 1D, I have widely used the Tomonaga-Luttinger liquid approach to tackle the dynamical correlations, in chapters II and IV. Here, I proceed to consider the noninteracting Fermi gas as a testbed to develop a generalized Tomonaga-Luttinger liquid framework in higher dimensions.
V.4 Low-energy approach for fermions in a box trap, cross-dimensional Luttinger liquid
In chapter II, I have pointed out that the Tomonaga-Luttinger liquid approach breaks down when d > 1. Explanations of this fact often rely on fairly non-trivial arguments. However, I shall show that the dynamical structure factor provides a quite simple and pictorial illustration and explanation thereof. As can be seen in Fig. V.6 and in Eq. (V.3), in 2D and 3D, since excitations are possible at energies lower than ω_- and down to ω = 0 for any q < 2k_F, no linearization of the lower branch of the excitation spectrum is possible. This, in turn, can be interpreted as a dramatic manifestation of the standard Tomonaga-Luttinger liquid theory breakdown. Many attempts have been made to generalize the Tomonaga-Luttinger model to higher dimensions, as an alternative to Fermi liquids to describe interacting systems. An intermediate issue is whether or not the TLL applies to Q1D systems. As an answer to both questions at once, I have attempted to construct a Tomonaga-Luttinger model in higher dimension, defining a multimode Tomonaga-Luttinger model (M-TLM). Indeed, if d > 1, the emergence of contributions to the dynamical structure factor at energies lower than ω_- can be interpreted as contributions of transverse modes of a 1D gas.
As an appetizer, note that all these modes, taken separately, display a linear structure in their excitation spectrum at low energy, as illustrated in Fig.V.7. This means that each mode, taken separately, can be described by a Tomonaga-Luttinger liquid. Thus, applying Eq. (V.14) to the Tomonaga-Luttinger model, in Q1D the dynamical structure factor reads
S^{TL}_{Q1}(q,\omega) = \sum_{j=-M}^{M} S^{TL}_{1}\!\left(q,\omega;\,k_F[j/M]\right). \quad (V.27)
The question is up to what point the small errors for each mode, in the framework of the effective theory, amplify or cancel when adding more modes, especially in the limit M → +∞ that corresponds to the crossover to 2D. To address this question, I have carried out the procedure explicitly on the example of the 1D to 2D crossover and compared the prediction of the cross-dimensional Tomonaga-Luttinger liquid theory to the exact solution (the corresponding figure shows the dynamical structure factor of a Q1D gas with three modes, in the plane (q, ω) in units of (k_F, ω_F), as found in the Tomonaga-Luttinger formalism, compared to the exact solution). Combining Eq. (V.27) with the dynamical structure factor of a 1D gas, I find
S^{TL}_{Q1}(q,\omega) = \frac{L_x m}{4\pi}\,\frac{1}{M}\sum_{j=-M}^{M}\frac{1}{\sqrt{1-\frac{j^2}{M^2}}}\,\Theta\!\left[\omega-\left|q-2k_F\sqrt{1-\tfrac{j^2}{M^2}}\right|\,v_F\sqrt{1-\tfrac{j^2}{M^2}}\right] \;\underset{M\to+\infty}{\longrightarrow}\; \frac{L_x m}{2\pi}\int_0^1 \frac{\mathrm{d}x}{\sqrt{1-x^2}}\,\Theta\!\left[\omega-\left|q-2k_F\sqrt{1-x^2}\right|\,v_F\sqrt{1-x^2}\right] = S^{TL}_2(q,\omega). \quad (V.28)
Evaluating the integral yields
S^{TL}_2(q,\omega) = \frac{mL_x}{2\pi}\left[\Theta(q-2k_F)\,S_>(q,\omega) + \Theta(2k_F-q)\,S_<(q,\omega)\right] \quad (V.29)
with
S_>(q,\omega) = \Theta\!\left(\frac{q^2}{8m}-\omega\right)\Theta(\tilde{q}v_F-\omega)\arcsin\!\left[\frac{q}{4k_F}\left(1-\sqrt{1-\frac{8m\omega}{q^2}}\right)\right] + \Theta(4k_F-q)\,\Theta(\omega-\tilde{q}v_F)\,\Theta\!\left(\frac{q^2}{8m}-\omega\right)\left\{\arcsin\!\left[\frac{q}{4k_F}\left(1-\sqrt{1-\frac{8m\omega}{q^2}}\right)\right] + \arccos\!\left[\frac{q}{4k_F}\left(1+\sqrt{1-\frac{8m\omega}{q^2}}\right)\right]\right\} + \Theta\!\left(\omega-\frac{q^2}{8m}\right)\Theta(\omega-\tilde{q}v_F)\,\frac{\pi}{2} \quad (V.30)
and
S_<(q,\omega) = \Theta(|\tilde{q}|v_F-\omega)\arcsin\!\left[\frac{q}{4k_F}\left(1+\sqrt{1+\frac{8m\omega}{q^2}}\right)\right] + \Theta\!\left(\frac{q^2}{8m}-\omega\right)\left\{\arcsin\!\left[\frac{q}{4k_F}\left(1-\sqrt{1-\frac{8m\omega}{q^2}}\right)\right] - \arcsin\!\left[\frac{q}{4k_F}\left(1+\sqrt{1-\frac{8m\omega}{q^2}}\right)\right]\right\} + \Theta(\omega-|\tilde{q}|v_F)\,\frac{\pi}{2}, \quad (V.31)
where \tilde{q} = q-2k_F. A comparison of sections of the dynamical structure factor as predicted by the multimode Tomonaga-Luttinger model and by exact calculation, as a function of q at ω = 0.1 ω_F, is instructive. This low-energy value has been chosen in view of the known validity range of the TLL already studied in 1D. Around the umklapp point (q = 2k_F, ω = 0) lies a sector where the effective model is in rather good quantitative agreement with the exact result in 2D. Discrepancies between the Tomonaga-Luttinger model and the exact solution of the original one at low q are due to the fact that, for a given point, the TLM slightly overestimates the value of the dynamical structure factor at larger q and underestimates it at lower q, as can be seen in the 1D case. Combined with the fact that the curvature of the dispersion relation is neglected, and that the density of modes is lower at low q, this explains the anomalous cusp predicted by the M-TLM at low q. Note, however, that this result is by far closer to the 2D exact result than to the 1D one in the large-M case, showing that there is a true multimode effect. I find that the limiting prediction of the multimode Tomonaga-Luttinger model for a noninteracting gas is in quantitative agreement with the exact 2D result for ω ≪ ω_F and |q-2k_F| ≪ 2k_F. Similar conditions have to be met in 1D in order to ensure the validity of the Tomonaga-Luttinger model, therefore my heuristic construction is quite satisfactory from this point of view. It is not fully satisfying, however, in the sense that one needs to start from 1D, and the 2D model that would directly yield this result is unknown.
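The convergence of the multimode sum of Eq. (V.28) towards its 2D limit can also be checked numerically. The sketch below is my own illustration, with L_x = m = ℏ = k_F = v_F = 1; the j = ±M modes, which carry vanishing density and a formally divergent weight, are simply discarded, which is an assumption of the sketch rather than a statement of the thesis:

```python
import numpy as np

def S_TL_multimode(q, w, M):
    """Discrete multimode sum of Eq. (V.28), reduced units, prefactor L_x*m = 1."""
    j = np.arange(-M, M + 1)
    s = np.sqrt(1.0 - (j / M)**2)          # sqrt(1 - j^2/M^2): local k_F and v_F rescaling
    s = np.where(s > 0, s, np.nan)          # drop the empty j = +/-M modes
    weight = 1.0 / s
    allowed = w > np.abs(q - 2.0*s) * s     # TLL condition of each mode
    return np.nansum(weight * allowed) / (4.0*np.pi*M)

def S_TL_2D(q, w, n=100000):
    """Continuum (M -> infinity) limit of Eq. (V.28), by a midpoint Riemann sum."""
    x = (np.arange(n) + 0.5)/n
    s = np.sqrt(1.0 - x**2)
    allowed = w > np.abs(q - 2.0*s) * s
    return np.mean(allowed / s) / (2.0*np.pi)

q, w = 2.0, 0.1   # near the umklapp point
for M in (3, 10, 100, 1000):
    print(M, S_TL_multimode(q, w, M))
print("2D limit:", S_TL_2D(q, w))
```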
V.5 Summary of this chapter/résumé du chapitre
In this chapter, I have investigated the dynamical structure factor and drag force of a noninteracting Fermi gas as functions of the dimension of space. It turns out that dimension has a dramatic effect on the dynamical structure factor, whose strongest manifestation is a low-energy forbidden region in momentum-energy space, that becomes completely filled in higher dimensions. This effect on the dynamical structure factor allows to forecast, by adiabatic continuation, the transition from Luttinger to Fermi liquid behavior in interacting systems. In comparison, the effect on the drag force is not so huge, as it is dominated by excitations close to the umklapp point.
Then, I have investigated multi-mode systems obtained by releasing a transverse trapping and demonstrated the dimensional crossover allowed by this structure. Actually, the mathematical property hidden behind dimensional crossover is as simple as the crossover from Riemann sums to integrals. I have studied the effect of a longitudinal harmonic trap and shown, within the local-density approximation, that each degree of trapping is equivalent, for the noninteracting gas, to an additional effective dimension. This property allows to simulate up to six dimensions. The multimode structure allows to prove general results by induction on space dimension, which is quite useful in this context. I have also shown that dimension enhancement does not occur if the dynamical structure factor of a longitudinally trapped gas is probed only close to the center of the trap.
To finish with, I have turned to the issue of extensions of the TLL formalism to higher dimensions, in view of future applications to interacting systems. I have proposed a model of multimode Tomonaga-Luttinger liquid, whose dimensional crossover to 2D reproduces the exact result at sufficiently low energies with satisfying accuracy, close to the umklapp point.
Dans ce chapitre, j'ai étudié le facteur de structure dynamique et la force de traînée en fonction de la dimension du système, dans le cas d'un gaz de fermions sans interaction. Il s'avère que la dimension a un effet important sur le facteur de structure dynamique, qui s'annule dans une zone de basse énergie uniquement en dimension un. La possibilité d'observer des excitations dans cette dernière en dimension supérieure peut s'interpréter, par extrapolation adiabatique à un gaz présentant des interactions, comme une manifestation de la transition entre un liquide de Luttinger et un liquide de Fermi. Afin de mieux comprendre la transition dimensionnelle, j'ai étudié l'apparition d'une structure multimode par relaxation d'un degré de confinement transverse. La dimension du système augmente alors progressivement, et on peut observer comment la zone dynamiquement interdite en dimension un s'emplit progressivement du fait de l'augmentation du nombre de modes, jusqu'à être complètement comblée en dimension deux, qui correspond à un nombre infini de modes. J'ai aussi mis en évidence une autre manière d'augmenter la dimension d'un système, cette fois de manière effective, par confinement harmonique selon une direction longitudinale. Chaque degré de liberté entravé augmente la dimension effective d'une unité d'après l'approximation de la densité locale, ce qui permet de simuler un gaz de dimension six. Toutefois, cette augmentation de dimension n'a pas lieu si on sonde uniquement la région centrale du piège, où le gaz est relativement uniforme. Pour finir, je me suis appuyé sur ce même exemple d'un gaz sans interaction pour développer un formalisme de liquide de Tomonaga-Luttinger multimode, dont j'ai testé la validité jusqu'au passage à la dimension deux, où ses prédictions restent correctes à basse énergie au voisinage du point de rétrodiffusion.
V.6 Outlook of this chapter
A few issues dealt with in this chapter could be investigated further, as was done in chapter IV for a Bose gas, such as the effect of finite temperature on the dynamical structure factor of a Fermi gas in dimensions two and three. It is not obvious how dimensional crossovers would manifest themselves at finite temperature, and the issue deserves attention. The effect of the barrier width on the drag force profile is also unknown yet in higher dimensions, but I expect that a quasi-superfluid regime exists in this configuration too. The multimode structure leading to dimensional crossovers may be investigated for virtually any observable and for other sufficiently simple systems, offering a wide landscape of perspectives.
The most thriving issue, however, is by far the adaptation of the multicomponent approach to interacting systems. Some results, such as the d ↔ 2d correspondence in a harmonic trap, are likely not to be robust. The multimode Tomonaga-Luttinger liquid formalism, however, can definitely be adapted to multicomponent interacting system, by choosing the type of interactions considered. There are essentially two types of terms that emerge [START_REF] Cazalilla | Instabilities in Binary Mixtures of One-Dimensional Quantum Degenerate Gases[END_REF], leading respectively to density-coupled gases, or to couplings of a cosine type, that correspond to the Sine-Gordon model. They should be a low-dimensional description of multicomponent Lieb-Liniger or Yang-Gaudin type gases.
The simpler case is assuredly the density-coupled multicomponent gas, whose Hamiltonian,
H^{TL}_M = \sum_{i=0}^{M-1}\frac{v_i}{2\pi}\int_0^L\mathrm{d}x\left[K_i(\partial_x\Phi_i)^2 + \frac{1}{K_i}(\partial_x\theta_i)^2\right] + \frac{1}{2}\sum_{i=0}^{M-1}\sum_{j=0}^{M-1}(1-\delta_{ij})\,g_{ij}\int_0^L\mathrm{d}x\,\frac{\partial_x\theta_i}{\pi}\,\frac{\partial_x\theta_j}{\pi}, \quad (V.32)
is quadratic in the fields and can be diagonalized explicitly [Matveev]. This case will be the subject of a later publication. A few results concerning dynamical correlations are even already available for a few components [Iucci; Orignac], and a general formalism based on generalized hypergeometric series has been developed to deal with an arbitrary number of components [Orignac].
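A minimal numerical sketch of this diagonalization is given below. It is my own construction (with ℏ = 1), not taken from the references: the recipe, namely that the normal-mode velocities are obtained from the eigenvalues of the product of the θ- and Φ-sector stiffness matrices of Eq. (V.32), is an assumption spelled out in the comments. For two identical density-coupled components it yields the expected splitting u_± = v√(1 ± gK/(πv)).

```python
import numpy as np

def mode_velocities(v, K, g):
    """v, K: arrays of length M; g: symmetric MxM coupling matrix with zero diagonal.
    Build the stiffness matrices of the theta and Phi sectors of Eq. (V.32) and
    return the normal-mode velocities (sketch, hbar = 1)."""
    v, K = np.asarray(v, float), np.asarray(K, float)
    M_theta = np.diag(v / K) + np.asarray(g, float) / np.pi   # coefficients of (d_x theta_i)(d_x theta_j)
    M_phi = np.diag(v * K)                                    # coefficients of (d_x Phi_i)^2
    u2 = np.linalg.eigvals(M_theta @ M_phi)                   # squared velocities of the normal modes
    return np.sort(np.sqrt(u2.real))

# Two identical density-coupled components
v, K, g = 1.0, 1.3, 0.4
G = np.array([[0.0, g], [g, 0.0]])
print(mode_velocities([v, v], [K, K], G))
print(v*np.sqrt(1 - g*K/(np.pi*v)), v*np.sqrt(1 + g*K/(np.pi*v)))
```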
Next step would be the extension of the Imambekov-Glazman formalism to multimode gases, and the development of Bethe Ansatz techniques to the dynamics of multicomponent Lieb-Liniger and Yang-Gaudin models.
In view of the technical difficulty of the dimensional crossover problem even for a few modes, it might be that quantum simulation will be needed to solve it. However, if only a few modes are needed to recover higher-dimensional physics, as was the case for a noninteracting Fermi gas, then there is hope that some cases are within reach. Whether or not such solutions could help to better understand or solve higher-dimensional models is not obvious, nor is the way a model transforms along the dimensional crossover. As an example of this problematics, the Tonks-Girardeau gas is equivalent to a gas of noninteracting fermions for a few observables in 1D, but no such correspondence is known in higher dimension. Under what conditions an undifferentiated 1D gas would become a noninteracting Fermi gas or a unitary Bose gas in 2D is far from obvious.
Chapter VI
General conclusion/le mot de la fin
In conclusion, in this thesis, I have studied the effects of interactions, quantum and thermal fluctuations on a one-dimensional Bose gas.
In the introductory chapter II, I have recapped a few known hallmarks of one-dimensional quantum systems, such as collectivization of motion and excitations, that prevents the existence of well-defined quasi-particles and seals the breakdown of Fermi liquid theory. Fermionization of interacting bosons manifests itself through the appearance of a Fermi sea structure in quasi-momentum space, and in real space, through a fictitious Pauli principle that is not due to statistics but to interactions. For systems with spin, the charge and spin sectors of the Hilbert space decouple, and their excitations split in real space too, challenging the notion of elementary particle. All of these effects are consequences of the crossing topological constraint, that enhances the role of fluctuations. Another striking consequence of dimensional reduction is the Mermin-Wagner theorem, that states the impossibility of spontaneous symmetry breaking in many models. The latter do not undergo phase transitions but rather smooth crossovers, withdrawing interest from their phase diagrams. An alternative paradigm consists in characterizing systems through their correlation functions, either local or non-local, in real or in energy-momentum space, at or out of equilibrium. These correlation functions are probed on a daily basis in ultracold atom setups.
Low-dimensional quantum gases are obtained in experiments by strong confinement along transverse directions, allowed by trapping and cooling. They can be created in an optical lattice, leading to an ensemble of wires, or on a microchip that provides a single gas, both situations corresponding to open boundary conditions. Ring geometries, that realize periodic boundary conditions, are also available thanks to magnetic trapping, radio-frequency fields or a combination, using time-average adiabatic potentials.
A fair number of simple models describing these gases are integrable, meaning that their scattering matrix verifies the Yang-Baxter equation. This situation is more likely to appear in low dimension, and allows to obtain the exact ground-state energy, the excitation spectrum and even, at the price of huge efforts, correlation functions, using Bethe Ansatz techniques. Other theoretical tools allow for a quite detailed analytical study of low-dimensional models, such as the exact Bose-Fermi mapping, that states the formal equivalence, for many observables, between the Tonks-Girardeau gas of stronglyinteracting bosons, and a fictitious gas of noninteracting spinless fermions. Many models belong to the universality class of Tomonaga-Luttinger liquids, which is completely solved by bosonization, and yields the structure of the short-time, large-distance, and low-energy correlation functions of these models. These correlations are critical at T = 0 in the thermodynamic limit, as they decay algebraicly in space, which is one more hallmark of one-dimensional physics. At finite temperature, their decay becomes exponential. Conformal field theory can be used as an alternative formalism to obtain these correlation functions, and requires less calculation efforts. The validity range of both approaches is investigated by comparison with exact results in the Tonks-Girardeau regime, or from Bethe Ansatz when they are available.
In chapter III, I have studied one of the most famous models of 1D gases, where bosonic atoms interact through a local potential, a.k.a. the Lieb-Liniger model. I have recalled the Bethe Ansatz procedure to obtain its ground-state energy in closed form in the thermodynamic limit, as the solution of a set of coupled integral equations. Approximate solutions of these equations can be constructed systematically with arbitrary accuracy in the weakly-interacting and strongly-interacting regimes, by identification of the corresponding series expansions. In the weak-coupling regime, I have identified the general pattern of this series, and guessed the exact value of the third-order coefficient. In the strongly-interacting regime, I have pushed the expansion to an unprecedented order and inferred a partially-resummed structure. I have also developed a semi-analytical technique that works in all interaction regimes. In the end, these methods give access to the whole range of interaction strengths with excellent accuracy.
Then, I have turned to the more intricate issue of local correlation functions. The one-body local correlation is trivial, the second-and third-order ones can be expressed in terms of moments of the density of pseudo-momenta, already studied to obtain the energy. I have found new expressions in the weak-coupling regime and conjectured the global structure in the strongly-interacting regime, improving analytical estimates.
The one-body correlation function acquires a non-trivial structure at finite space separation. Tomonaga-Luttinger liquid theory predicts that it vanishes at infinity, meaning the absence of off-diagonal long-range order in 1D, but its decay is algebraic, which is a signature of quasi-long-range order. While the large-distance behavior is universal, at short distance it depends on the microscopic details of the model. Here, the coefficients of the short-distance series expansion can be obtained by Bethe Ansatz, through relations that I have called 'connections', that link them to local correlation functions and moments of the density of pseudo-momenta. I have derived the first few connections, and noticed that they correspond to well-known results, that are gathered and classified for a first time in a single formalism. I have also given new and shorter derivations of a few of them. Then, by Bethe Ansatz, I have evaluated the first few coefficients of the short-distance series expansion of the one-body correlation function explicitly, and found that the first that changes sign when the interaction strength varies is the forth one, at a value that I have evaluated with very high accuracy.
The Fourier transform of the one-body correlation function is the momentum distribution, whose large-momentum tail scales like an inverse quartic law. Its coefficient depends on the interaction strength and is known as Tan's contact, it contains much information on the microscopic details of the model. I have chosen this observable to illustrate a method to solve the Lieb-Liniger model in the case of an additional harmonic trap, that breaks its integrability. The technique relies on a combination of Bethe Ansatz and the local-density approximation, whose acronym is BALDA. Within the latter, I have found a procedure, valid in the strongly-interacting regime, to obtain Tan's contact to arbitrary order in the inverse coupling.
In chapter IV, I have considered correlation functions in energy-momentum space. To be more specific, I have focused on the dynamical structure factor, i.e. the absorption spectrum of the gas, probed by Bragg scattering. When an impurity or a laser beam is stirred along the fluid and couples locally to its density sufficiently weakly, then linear response theory applies and allows to evaluate energy dissipation, or equivalently the drag force, once the dynamical structure factor and the shape of the potential barrier are known. This allows to study superfluidity, as characterized by the absence of viscosity, i.e. of a drag force, below a critical velocity, as a generalization of Landau's criterion. After an introduction to experiments on superfluids, an exposition of Landau's criterion and of the drag force concept in the quantum regime, the dynamical structure factor of the Tonks-Girardeau gas is obtained by Bose-Fermi mapping. It features a low-energy region where excitations are forbidden, whose upper bound corresponds to the lower excitation spectrum of the model. The drag force due to an infinitely thin potential barrier is linear below the Fermi velocity, then it saturates to a finite value. At finite temperature, I have found that the dynamical structure factor spreads beyond the zero-temperature excitation spectra and acquires excitations at negative energy, that correspond to emission. When temperature increases too much, phonons are not welldefined anymore. I have also studied the effect of a finite barrier width on the drag force. It turns out that in this more realistic picture, the drag force is strongly suppressed at large velocities, putting into light the existence of a quasi-superfluid, supersonic regime.
Several techniques are available to study dynamical correlation functions at arbitrary interaction strength. I have focused on the Tomonaga-Luttinger formalism. By comparison of the effective theory with the exact Tonks-Girardeau predictions, I have studied the validity range of the effective theory and found that it is limited to low energy, low temperature, and low velocity for the drag force. At finite interaction strength, to make quantitative predictions, two parameters are needed: the Luttinger parameter and the first form factor. The former is obtained with high accuracy by coordinate Bethe Ansatz, whereas the form factor requires more advanced techniques. I have guessed an approximate expression that allows to reproduce with satisfying accuracy the exact dynamical structure factor at the vertical of the umklapp point, in a wide range of strong to intermediate couplings.
In view of more sophisticated treatments, for instance within the Imambekov-Glazman liquid formalism, I have obtained another key ingredient to evaluate the dynamical structure factor, i.e. the excitation spectrum of the Lieb-Liniger model. I have identified an exact series expansion of the Lieb-II type spectrum, and expressed the two first coefficients as functions of the sound velocity and the effective mass, found by Bethe Ansatz. Comparison with the exact solution shows that truncating the series to second order is a rather good approximation over a wide range of interactions strengths.
In chapter V, I have turned to the issue of the dimensional crossover from 1D to higher dimensions. There are several ways to address a dimensional crossover, for example coupling 1D gases, or using internal degrees of freedom to create a synthetic dimension, or releasing a transverse trapping. Here, I have focused on the last one, considering a gas of noninteracting fermions in a box trap of tunable size. In a first time, I have obtained the dynamical structure factor of the gas as a function of dimension. A general expression shows that the forbidden low-energy region in 1D is filled with excitations in any higher dimension, providing another example of dramatic dimensional effect. The crossover from 1D to 2D is especially interesting. I have followed it all along and observed the progressive appearance of transverse energy modes by increasing a transverse size of the box. These modes fill the low-energy region progressively, up to a point where no gap remains and dimension two is recovered. Then, I have done the same study for the drag force, on which the effect of dimension is far less spectacular.
Experiments often involve longitudinal harmonic trapping, that can be taken into account in the LDA framework. It turns out that each degree of confinement is equivalent, for the dynamical structure factor, to adding an effective space dimension. This effect is not observed, however, if only the central region of the trap is probed, where the gas is practically homogeneous.
To finish with, in view of future generalizations to interacting systems, I have developed a multimode Tomonaga-Luttinger liquid framework, and tested it along the dimensional crossover from 1D to 2D. Its predictions for a Fermi gas are accurate in the vicinity of the umklapp point of each single mode, and the global one in 2D.
As detailed at the end of each chapter, various research directions open up from my work. Explicit identification of the weak-or strong-coupling series expansion of the ground-state energy of the Lieb-Liniger model may lead to the possibility of a full resummation, that would yield the exact ground-state energy, an achievement whose importance would be comparable to the celebrated solution of the 2D Ising model by Onsager. In view of the weak-coupling expansion, that seems to involve the Riemann zeta function at odd arguments, it might be that this solution could help proving difficult theorems in analytic number theory.
A deeper study of the concept of connexion and their calculation to higher orders may suggest general formulae and solve the Lieb-Liniger model in a stronger sense, or at least, allow to investigate the high-momentum tail of the momentum distribution to next order, beyond Tan's contact, as well as higher-order local correlation functions. Bethe Ansatz, coupled to the local-density approximation, allows to study trapped gases in nonintegrable regimes, possibly in an exact manner, and could be tested in other cases than harmonic trapping. Now that the standard Tomonaga-Luttinger model has been pushed to its limits, next step would be to make quantitative predictions for dynamical observables from the Imambekov-Glazman formalism. The excitation spectrum can be evaluated with excellent accuracy, the form factor is known from algebraic Bethe Ansatz, and the edge exponents are at hand. This approach, or the ABACUS algorithm, could serve for a detailed study of the shape and width of the potential barrier on the drag force profile, in particular to investigate the required conditions to observe a quasi-superfluid supersonic regime. Dimensional crossover by confinement release could be investigated in other systems and on other observables, to gain insight in the crossover mechanism. In particular, the role that finite temperature could play is not obvious. Multimode Tomonaga-Luttinger liquids coupled through their densities or a cosine term are a first step towards an accurate dynamical description of multi-component models, that would seemingly require a generalized Imambekov-Glazman formalism or a Bethe-Ansatz based treatment.
Dans le Chapitre II, j'ai commencé par rappeler quelques traits spécifiques aux systèmes quantiques unidimensionnels. En guise de premier exemple, je rappelle que tout mouvement y est nécessairement collectif du fait des collisions, ce qui empêche l'existence d'excitation individuelle et de quasi-particule, et enlève tout sens au concept de liquide de Fermi, si utile en dimension supérieure. Les interactions dans les systèmes de bosons peuvent conduire à l'annulation de la fonction d'onde en cas de contact, comme le principe de Pauli l'imposerait pour des fermions libres, et dans l'espace des quasi-impulsions, on observe la structure caractéristique d'une mer de Fermi, ce qui diminue fortement la pertinence de la notion de statistique quantique. Enfin, pour les systèmes non polarisés en spin, on observe la séparation effective des excitations de charge et de spin, ce qui amène à revoir la notion de particule élémentaire. Tous ces effets sont en fait directement liées à la contrainte topologique imposée par la nature unidimensionnelle du système, les particules ne pouvant pas se croiser sans entrer en collision.
Une autre conséquence frappante de la dimension de dimension est la validité du théorème de Mermin-Wagner dans bon nombre de situations, ce qui empêche toute brisure spontanée de symétrie et conduit à des diagrammes de phase moins riches qu'en dimension supérieure. Une alternative possible est d'étudier les fonctions de corrélation pour soi, qu'elles soient locales ou non-locales, dans l'espace réel ou celui des énergies, à l'équilibre ou hors équilibre. Elles permettent en particulier de caractériser les gaz d'atomes ultrafroids de basse dimension. Ces derniers sont obtenus par un système à base de pièges fortement anisotropes. Sur un réseau optique, on peut créer un ensemble de gaz de géométrie filaire, mais on peut aussi obtenir un gaz unique, et même imposer les conditions aux limites périodiques chères aux théoriciens, en choisissant une géométrie annulaire.
Un certain nombre de modèles simples introduits en physique mathématique dans les années 1960 s'avèrent décrire ces gaz avec une bonne précision, et posséder la propriété d'intégrabilité, ce qui permet d'en faire une étude analytique exacte qui couvre l'énergie et la thermodynamique, le spectre d'excitations et même les fonctions de corrélations, grâce à l'Ansatz de Bethe. D'autres outils spécifiques aux modèles de basse dimension viennent enrichir la panoplie, par exemple la correspondance bosons-fermions, qui permet de traiter de façon exacte le gaz de Tonks-Girardeau, fortement corrélé, en remarquant qu'il se comporte souvent comme un gaz de fermions idéal. Un bon nombre de modèles appartient à la classe d'universalité des liquides de Tomonaga-Luttinger, qui est complètement résolu par bosonisation. En particulier, on connaît ses fonctions de corrélations, qui décrivent celles de modèles plus compliqués à longue distance, aux temps courts et à basse énergie. À température nulle, les corrélations spatiales décroissent comme des lois de puissance, tandis que leur effondrement est exponentiel à longue distance à température finie, comme le confirme le formalisme de la théorie conforme des champs, tout aussi valide pour étudier ce problème.
Dans le Chapitre III, j'ai étudié le modèle de Lieb et Liniger, qui décrit un gaz de bosons avec interactions de contact en dimension un. L'Ansatz de Bethe permet d'exprimer son énergie dans l'état fondamental en tant que solution d'un système d'équations intégrales couplées, dont des expressions approchées de précision arbitraire peuvent être obtenues dans les régimes de faible ou forte interaction. Dans le premier, j'ai réussi à identifier un coefficient auparavant inconnu du développement en série, et dans le régime de forte interaction, j'ai atteint des ordres élevés et réussi à sommer partiellement la série pour proposer une expression conjecturale plus simple. En parallèle, j'ai développé une autre méthode, applicable quelle que soit l'intensité des interactions. La combinaison de ces résultats me donne accès à l'énergie avec une précision diabolique.
Je me suis ensuite tourné vers les fonctions de corrélation locales. Celle à un corps est trivial, celles d'ordre deux et trois sont accessibles par Ansatz de Bethe moyennant quelques calculs supplémentaires. Il m'a notamment fallu évaluer les premiers moments de la densité de pseudo-impulsions, dont j'ai obtenu des estimations plus précises que celles connues jusqu'alors.
La fonction de corrélation à un corps acquiert une structure intéressante si on la considère d'un point de vue non-local. À longue distance, son comportement est celui d'un liquide de Tomonaga-Luttinger. Elle décroît de façon algébrique, ce qui est caractéristique d'un quasi-ordre à longue distance. À courte distance, le comportement dépend fortement des caractéristiques microscopiques du modèle. Dans le cas du modèle de Lieb et Liniger, j'ai obtenu les premiers coefficients du développement en série par Ansatz de Bethe, en établissant des liens entre ceux-ci, les fonctions de corrélation locales et les moments de la densité de pseudo-impulsions. J'ai baptisées ces relations 'connexions', et me suis rendu compte qu'elles correspondaient souvent à des relations bien connues mais qui n'avaient jamais été envisagées comme aussi étroitement liées les unes aux autres ni englobées dans un seul et unique cadre interprétatif. Parmi ces coefficients, le premier à avoir un comportement non-monotone en fonction de l'intensité des interactions est celui d'ordre quatre, qui change de signe pour une certaine valeur de l'intensité des interactions, que j'ai évaluée avec une grande précision.
La transformée de Fourier de la fonction de corrélation à un corps n'est autre que la distribution des impulsions, qui se comporte à grande impulsion comme l'inverse de la puissance quatrième de cette dernière. Son coefficient est le contact de Tan, qui dépend des interactions et renseigne sur les caractéristiques microscopiques du modèle. J'ai choisi cette observable pour illustrer un formalisme hybride combinant l'Ansatz de Bethe et l'approximation de la densité locale, afin d'étudier un gaz de Bose confiné longitudinalement par un piège harmonique, qui retire au modèle son intégrabilité. J'ai pu notamment développer une procédure donnant le contact de Tan à un ordre arbitraire dans l'inverse de l'intensité des interactions.
Dans le Chapitre IV, je me suis tourné vers les fonctions de corrélation dans l'espace des énergies, en particulier le facteur de structure dynamique du modèle de Lieb et Liniger, qui représente le spectre d'absorption de ce gaz. Quand une impureté ou un faisceau laser traverse le fluide et modifie localement sa densité, si l'effet est suffisamment faible, il peut être traité à travers le formalisme de la réponse linéaire. Ce dernier permet d'en déduire la force de traînée, liée à la dissipation d'énergie vers le milieu extérieur sous forme d'échauffement du gaz. J'ai ainsi pu étudier la superfluidité, caractérisée par l'absence de viscosité ou de force de traînée en-dessous d'une vitesse d'écoulement critique.
Après une introduction qui retrace le cheminement historique, un exposé des idées qui ont conduit au critère de Landau et au formalisme de la force de traînée dans le régime quantique, j'ai obtenu le facteur de structure du gaz de Tonks-Girardeau en m'appuyant sur la correspondance bosons-fermions. Une certaine région de basse énergie située endessous du spectre d'excitation inférieur est interdite à toute excitation. La force de traînée due à une barrière de potentielle infiniment fine est d'abord une fonction linéaire de la vitesse, avant de saturer à partir de la vitesse de Fermi. Rien de nouveau sous le soleil, ces résultats étant déjà connus. En revanche, j'ai ensuite étudié l'effet de la température sur le facteur de structure dynamique. Cette dernière a pour principal effet d'augmenter la taille de la zone où les excitations peuvent avoir lieu, jusqu'aux énergies négatives qui correspondent à une émission. Les phonons sont de plus en plus mal définis à mesure que la température augmente, et finissent par être noyés dans le continuum. Quant à la force de traînée, si on ne suppose plus une barrière de potentiel infiniment fine mais qu'on prend en compte son épaisseur, la saturation disparaît et la force de traînée finit pratiquement par disparaître dans le régime de grande vitesse, signe d'un comportement quasi-superfluide.
Plusieurs techniques ont déjà été éprouvées pour traiter les interactions quelconques. Je me suis concentré sur le formalisme des liquides de Tomonaga-Luttinger, que j'ai poussé dans ses ultimes retranchements. Par comparaison avec les résultats exacts dans le régime de Tonks-Girardeau, à température nulle et finie, j'ai pu en étudier le domaine de validité, qui est restreint aux basses énergies, basses températures, et faibles vitesses en ce qui concerne la force de traînée. À interaction finie, toute prédiction quantitative requiert la connaissance du paramètre de Luttinger, que j'ai obtenu par Ansatz de Bethe, et du premier facteur de forme, accessible uniquement par des outils avancés. J'en ai obtenu une expression effective qui s'est avérée assez efficace pour reproduire le résultat exact à la verticale du point de rétrodiffusion, à basse énergie et dans un large domaine d'interactions. Un traitement plus sophistiqué requiert en particulier la connaissance précise des spectres d'excitation, ce qui est désormais chose faite. J'ai obtenu un développement en série du spectre de type II, dont les deux premiers ordres s'expriment en fonction de la vitesse du son et de la masse effective, évalués avec une précision arbitraire par Ansatz de Bethe, et qui suffisent à reproduire la solution numérique exacte avec une bonne précision. on augmente progressivement la taille d'un côté de la boîte. Ces modes emplissent petit à petit tout l'espace disponible en-dessous du spectre d'excitation inférieur, jusqu'à le combler entièrement lorsqu'on atteint la dimension deux. J'ai fait la même étude pour la force de traînée, qui se révèle moins instructive d'un point de vue physique.
Les expériences mettent souvent en jeu un piège harmonique longitudinal, qui peut être étudié théoriquement dans le cadre de l'approximation de la densité locale. Il s'avère alors que chaque degré de confinement ajouté est équivalent, pour le facteur de structure dynamique, à une augmentation de la dimension de l'espace d'une unité. Ce n'est pas le cas, en revanche, si on se contente de sonder la région centrale du piège, où le gaz est relativement homogène.
Pour finir, en vue de généralisations à des systèmes en interaction, j'ai développé un formalisme de liquide de Tomonaga-Luttinger multimode, que j'ai testé le long de la transition d'une à deux dimensions. Ses prédictions s'avèrent correctes pour chaque mode au voisinage de son point de rétrodiffusion, et à la limite de dimension deux, au voisinage de ce point au niveau global.
Then, Eq. (A.1) reads
n(x) = |\partial_x\zeta(x)|\sum_i \delta[\zeta(x)-i\pi] = \partial_x\zeta(x)\sum_i\delta[\zeta(x)-i\pi]. \quad (A.5)
One transforms this expression by applying Poisson's formula, namely
\sum_{m=-\infty}^{+\infty} g(m) = \sum_{m=-\infty}^{+\infty}\int_{-\infty}^{+\infty}\mathrm{d}z\, g(z)\,e^{2im\pi z}, \quad (A.6)
to the function defined as g(z) = \delta[\zeta(x)-\pi z], yielding
n(x) = \frac{\partial_x\zeta(x)}{\pi}\sum_{m=-\infty}^{+\infty}e^{2im\zeta(x)}. \quad (A.7)
I finally define \theta(x) = \zeta(x) - k_Fx
, and rewrite the density-density correlator as
\langle n(x,t)n(x',t')\rangle = \frac{1}{\pi^2}\left\langle[k_F+\partial_x\theta(x,t)][k_F+\partial_x\theta(x',t')]\sum_{m=-\infty}^{+\infty}\sum_{m'=-\infty}^{+\infty}e^{2im[\theta(x,t)+k_Fx]}e^{2im'[\theta(x',t')+k_Fx']}\right\rangle. \quad (A.8)
Due to Galilean invariance of the system, only relative coordinates are important in the thermodynamic limit. Therefore, I set x' and t' to zero in the following. To compute the correlations using Eq. (A.8), I split the summations over m and m' into several parts. First, I compute the leading term, obtained for m = m' = 0. Keeping only the latter is called the 'harmonic approximation'. This term corresponds to \frac{1}{\pi^2}\langle\partial_x\zeta(x,t)\partial_x\zeta(0,0)\rangle. Using the diagonal form of the Hamiltonian Eq. (II.34), the field expansion over the bosonic basis Eq. (II.32), and the equation of motion or the Baker-Campbell-Hausdorff formula for exponentials of operators yields
\partial_x\zeta(x,t) = \pi n_0 + \frac{1}{2}\sum_{q\neq0}\left|\frac{2\pi K}{qL}\right|^{1/2} iq\left(e^{i[qx-\omega(q)t]}b_q - e^{-i[qx-\omega(q)t]}b_q^{\dagger}\right), \quad (A.9)
with b q b † q = δ q,q and b † q b q = 0 since q, q = 0 and T = 0. Also, b † q b † q = b q b q = 0, thus
\langle\partial_x\zeta(x,t)\partial_x\zeta(0,0)\rangle - (\pi n_0)^2 = \frac{\pi K}{2}\frac{1}{L}\sum_{q\neq0}|q|\,e^{iq[x-\mathrm{sign}(q)v_st]}, \quad (A.10)
and after a few lines of algebra,
\frac{1}{\pi^2}\langle\partial_x\zeta(x,t)\partial_x\zeta(0,0)\rangle = n_0^2\left\{1 - \frac{K}{4k_F^2}\left[\frac{1}{(x-v_st+i\epsilon)^2} + \frac{1}{(x+v_st-i\epsilon)^2}\right]\right\}, \quad (A.11)
where a short-distance regulator i\epsilon has been added.
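The regularization step from Eq. (A.10) to Eq. (A.11) can be checked numerically; the snippet below is a quick consistency check of mine, verifying the elementary regularized integral behind it (the symbol eps plays the role of the short-distance regulator):

```python
import numpy as np
from scipy.integrate import quad

# Check: integral over q > 0 of q * exp(-eps*q) * exp(i*q*a) equals 1/(eps - i*a)^2,
# the building block turning the mode sum (A.10) into the closed form (A.11).
def lhs(a, eps):
    re, _ = quad(lambda q: q*np.exp(-eps*q)*np.cos(q*a), 0, np.inf)
    im, _ = quad(lambda q: q*np.exp(-eps*q)*np.sin(q*a), 0, np.inf)
    return re + 1j*im

a, eps = 0.7, 0.2
print(lhs(a, eps), 1.0/(eps - 1j*a)**2)
```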
Another type of contribution is given by n_0^2\sum_{(m,m')\neq(0,0)}\langle e^{2im\zeta(x,t)}e^{2im'\zeta(x',t')}\rangle. I introduce a generating function:
G_{m,m'}(x,t;x',t') = \langle e^{2im\zeta(x,t)}e^{2im'\zeta(x',t')}\rangle, \quad (A.12)
and use the identity, valid for two operators A and B both commuting with their commutator:
e A+B = e A e B e 1 2 [A,B] , (A.13) hence G m,m (x, t; 0, 0) = e 2i[mζ(x,t)+m ζ(0,0)] e 1 2 [2imζ(x,t),2im ζ(0,0)] . (A.14) Using Eq. (II.43), [ζ(x, t), ζ(0, 0)] = 1 4 q,q =0 2πK qL 1/2 2πK q L 1/2 e i[qx-ω(q)t] b q , b † q + e -i[qx-ω(q)t] b † q , b q = i q =0 πK qL sin [qx -ω(q)t] . (A.15) since [b q , b † q ] = δ q,q . Also, mζ(x, t) + m ζ(0, 0) = mk F x + 1 2 q =0 2πK qL 1/2 (me i[qx-ω(q)t] + m )b q + h.c. , (A.16)
where h.c. means 'hermitian conjugate'. Thus, setting \alpha_{m,m'} = me^{i[qx-\omega(q)t]}+m' for concision,
G_{m,m'}(x,t;0,0) = e^{2imk_Fx + i\sum_{q\neq0}\left|\frac{2\pi K}{qL}\right|^{1/2}(\alpha_{m,m'}b_q+\mathrm{h.c.})}\; e^{-2imm'\frac{1}{L}\sum_{q\neq0}\left|\frac{\pi K}{q}\right|\sin[qx-\omega(q)t]}. \quad (A.17)
We are interested in its statistical average. I use the identity
\langle e^{A}\rangle = e^{\frac{1}{2}\langle A^{2}\rangle}, \quad (A.18)
valid for any operator A that is linear in the bosonic creation and annihilation operators, to show that
e i q =0 | 2πK qL | 1/2 (α m,m bq+h.c.) = e -q,q =0 | πK qL | 1/2 πK q L 1/2 |α m,m | 2 ( bqb q † + b † q b q ) = e -q =0 πK |q|L (m 2 +m 2 +2mm cos[qx-ω(q)t]) , (A.19) since |α m,m | 2 = (me i[qx-ω(q)t] +m )(me -i[qx-ω(q)t] +m ) = m 2 +m 2 +2mm cos[qx-ω(q)t]. (A.20) Thus, G m,m (x, t; 0, 0) = e 2imk F x e -q =0 πK |q|L (m 2 +m 2 +2mm e i[qx-ω(q)t] ) . (A.21)
To go further, I have to evaluate
\frac{1}{L}\sum_{q\neq0}\frac{1}{|q|}\left(m^2+m'^2+2mm'\,e^{iq[x-v_s\mathrm{sign}(q)t]}\right).
First, note that if this series diverges, then G_{m,m'}(x,t;0,0) = 0. Rewriting
m^2+m'^2+2mm'\,e^{iq[x-v_s\mathrm{sign}(q)t]} = (m+m')^2 - 2mm'\left(1-e^{iq[x-v_s\mathrm{sign}(q)t]}\right), \quad (A.22)
one sees that m' = -m is a necessary condition in the thermodynamic limit to result in a non-vanishing contribution.
In the thermodynamic limit, introducing a regularizing cut-off e^{-\epsilon q} and using the property
- 1 2 e -( +y+ivst-ix)q - 1 2 e -( +y+ivst+ix)q = 1 2 ln ( + iv s t) 2 + x 2 2 = 1 2 ln (x -v s t + i )(x + v s t -i ) 2 . (A.24)
In the end,
G_{m,m'}(x,t;0,0) = e^{2imk_Fx}\,\delta_{m',-m}\left[\frac{(x-v_st+i\epsilon)(x+v_st-i\epsilon)}{\epsilon^2}\right]^{-Km^2}, \quad (A.25)
yielding the other contributions to Eq. (II.41) after replacing the various powers of the regularizing term by the non-universal form factors. In particular, I obtain
A_m = 2\,(\epsilon k_F)^{2Km^2} \quad (A.26)
in terms of the small-distance cut-off.
A.2 Density correlations of a Tomonaga-Luttinger liquid at finite temperature by bosonization
This appendix provides an alternative approach to CFT to derive Eq. (II.72), based on the Tomonaga-Luttinger liquid formalism. Calculations are far longer, but provide an independent way to check the result and are more elementary in the mathematical sense. I split the calculations into two parts.
A.2.1 First contribution to the density correlation
The beginning of the derivation is essentially the same as in the zero temperature case, already treated in Appendix A.1. Using the same notations, I start back from Eq. (A.9), up to which the derivation is the same. Now, since T > 0 the mean values in the bosonic basis are re-evaluated as \langle b_q^{\dagger}b_{q'}\rangle = \delta_{q,q'}\,n_B(q), and \langle b_qb_{q'}^{\dagger}\rangle = \delta_{q,q'}[1+n_B(q)], where
n_B(q) = \frac{1}{e^{\beta\hbar\omega(q)}-1} \quad (A.27)
is the Bose-Einstein distribution of the bosonic modes. Thus,
\left[\langle\partial_x\zeta(x,t)\partial_x\zeta(0,0)\rangle - (\pi n_0)^2\right]_{T>0} = \frac{1}{4}\sum_{q\neq0}\frac{2\pi K}{|q|L}\,q^2\left\{e^{i(qx-\omega(q)t)}[1+n_B(q)] + e^{-i(qx-\omega(q)t)}n_B(q)\right\} = \left[\langle\partial_x\zeta(x,t)\partial_x\zeta(0,0)\rangle - (\pi n_0)^2\right]_{T=0} + \frac{1}{4}\sum_{q\neq0}\frac{2\pi K|q|}{L}\left\{e^{i[qx-\omega(q)t]} + e^{-i[qx-\omega(q)t]}\right\}n_B(q), \quad (A.28)
where I have isolated the result at T = 0, already evaluated in Appendix A.1, and a purely thermal part.
To evaluate this thermal part, I make a few algebraic transformations, take the thermodynamic limit and use the change of variable \beta\hbar v_sq \to q to obtain
L q =0 |q| e i[qx-ω(q)t] + e -i[qx-ω(q)t] n B (q) = +∞ 0 dq q e iq(x-vst) + e -iq(x-vst) + e iq(x+vst) + e -iq(x+vst) n B (q) = 1 L 2 T +∞ 0 dq q e q -1 e iq (x-vst) L T + e iq (x+vst) L T + e -iq (x-vst) L T + e -iq (
\sum_{n=1}^{+\infty}\left[\frac{1}{(-ib+n)^2} + \frac{1}{(ib+n)^2}\right] = 2\sum_{n=1}^{+\infty}\frac{n^2-b^2}{(n^2+b^2)^2}. \quad (A.31)
Then, I use the property [449]
\frac{1}{\sin^2(\pi x)} = \frac{1}{\pi^2x^2} + \frac{2}{\pi^2}\sum_{k=1}^{+\infty}\frac{x^2+k^2}{(x^2-k^2)^2}, \quad (A.32)
combined with \sin(ix) = i\sinh(x), yielding
2\sum_{k=1}^{+\infty}\frac{k^2-x^2}{(k^2+x^2)^2} = \pi^2\left[\frac{1}{\pi^2x^2} - \frac{1}{\sinh^2(\pi x)}\right], \quad (A.33)
put back the prefactors and add the known result at T = 0 to obtain
\frac{\langle n(x,t)n(0,0)\rangle_{m=0}}{n_0^2} = 1 - \frac{K}{4k_F^2}\frac{\pi^2}{L_T^2}\left[\frac{1}{\sinh^2\!\left(\frac{\pi(x+v_st)}{L_T}\right)} + \frac{1}{\sinh^2\!\left(\frac{\pi(x-v_st)}{L_T}\right)}\right]. \quad (A.34)
A.2.2 Second contribution to the density correlation function
This time again, the derivation is at first strictly similar to the one at zero temperature.
The point where they start to differ is Eq. (A.19), so I come back to this exact point and find
e i q =0 | 2πK qL | 1/2 (α mm bq+h.c.) = e -q,q =0 | πK qL | 1/2 πK q L 1/2 |α mm | 2 ( bqb † q + b † q b q ) = e -q,q =0 | πK qL | 1/2 πK q L 1/2 |α mm | 2 δ q,q [1+2n B (q)] , (A.35)
thus the generating function at finite temperature reads
G mm (x, t; 0, 0) T >0 = e 2imk F x e -2mm i 1 L q =0 | πK q | sin[qx-ω(q)t] e -q =0 | πK qL |(m 2 +m 2 +2mm cos[qx-ω(q)t])[1+2n B (q)]
= e 2imk F x e -q =0 | πK qL |{m 2 +m 2 +2mm e i[qx-ω(q)t] +2(m 2 +m 2 +2mm cos[qx-ω(q)t])n B (q)} = e 2imk F x e -q =0 | πK qL |{(m+m ) 2 +2mm (e i[qx-ω(q)t] -1)+[(m+m ) 2 +2mm (cos[qx-ω(q)t]-1)]n B (q)} (A.36)
and the condition m' = -m in the thermodynamic limit yields
G mm (x, t; 0, 0) T >0 = δ m,-m e 2imk F x e -m 2 K q =0 dq |q| {1-e i[qx-ω(q)t] +2(1-cos[qx-ω(q)t])n B (q)} . (A.37) It involves the integral q =0 dq |q| 1 -e i[qx-ω(q)t] + 2[1 -cos(qx-ω(q)t)]n B (q) = 2 +∞ 0 e -q dq q 1 -e -iqvst cos(qx) + 2[1-cos(qx) cos(qv s t)]n B (q) , (A .38)
where I have restored the regulator neglected up to here to simplify the notations. This integral has been evaluated in imaginary time, defined through τ = it, in Ref. [START_REF] Giamarchi | Quantum Physics in One Dimension[END_REF]. The result should be
F_1(r) = \int_0^{+\infty}\frac{e^{-\epsilon q}\,\mathrm{d}q}{q}\left\{1 - e^{-qv_s\tau}\cos(qx) + 2[1-\cos(qx)\cosh(qv_s\tau)]\,n_B(q)\right\} \underset{\epsilon\ll x,\,v_s\tau}{\simeq} \frac{1}{2}\ln\left\{\frac{L_T^2}{\pi^2\epsilon^2}\left[\sinh^2\!\left(\frac{\pi x}{L_T}\right) + \sin^2\!\left(\frac{\pi v_s\tau}{L_T}\right)\right]\right\}. \quad (A.39)
I propose my own derivation of the latter, that relies on the following steps:
+∞ 0 dq q e -q 1 -e -iqvst cos(qx) + 2[1 -cos(qx) cos(qv s t)]n B (q) = +∞ 0 dq e -q +∞ 0 dy e -yq 1-e -iqvst e iqx +e -iqx 2 +2 1- e iqx +e -iqx 2 e iqvst +e -iqvst 2 1 e L T q -1 = 1 2 +∞ 0 dy +∞ 0 dq e -( +y)q
2-e iq(x-vst) -e -iq(x+vst) + +∞ n=0 e -L T q(n+1) 4e iq(x+vst) +e iq(x-vst) +e -iq(x+vst) +e -iq(x-vst)
= 1 2 A→+∞ 0 dy 2 +y - 1 +y-i(x -v s t) - 1 +y+i(x + v s t) + +∞ n=0 4 +y+ L T (n + 1) - 1 +y+L T (n + 1)-i(x + v s t) - 1 +y+L T (n + 1)+i(x -v s t) - 1 +y+L T (n + 1)+i(x + v s t) - 1 +y+L T (n + 1)-i(x -v s t) = 1 2 ln (x+v s t-i )(x-v s t+i ) 2 +∞ n=0 1+ (x + v s t) 2 [ +L T (n+1)] 2 +∞ n =0 1+ (x -v s t) 2 [ +L T (n +1)] 2 .(A.40)
Then, in the limit \epsilon \ll L_T, x, v_st, and using the infinite product expansion of sinh,
\sinh(x) = x\prod_{n=1}^{+\infty}\left(1+\frac{x^2}{n^2\pi^2}\right), \quad (A.41)
as well as sinh 2 (x) -sinh 2 (y) = sinh(x + y) sinh(x -y) and sin(ix) = i sinh(x), I recover Eq. (A.39) as expected, and Eq. (II.72) follows.
Actually, it is even possible to evaluate the integral F 1 (r) exactly, providing another derivation of Eq. (A.39). To do so, I start from
F_1(r) = \frac{1}{2}\ln\left\{\frac{(x+v_st-i\epsilon)(x-v_st+i\epsilon)}{\epsilon^2}\prod_{n=1}^{+\infty}\left[1+\frac{(x+v_st)^2}{(\epsilon+nL_T)^2}\right]\prod_{n'=1}^{+\infty}\left[1+\frac{(x-v_st)^2}{(\epsilon+n'L_T)^2}\right]\right\} \quad (A.42)
and rewrite
\prod_{n=1}^{+\infty}\left[1+\frac{(x\pm v_st)^2}{(\epsilon+nL_T)^2}\right] = \frac{\epsilon^2}{\epsilon^2+(x\pm v_st)^2}\prod_{n=0}^{+\infty}\left[1+\frac{\left(\frac{x\pm v_st}{L_T}\right)^2}{\left(\frac{\epsilon}{L_T}+n\right)^2}\right]. \quad (A.43)
Then, I use the property [449]
\left|\frac{\Gamma(x)}{\Gamma(x-iy)}\right|^2 = \prod_{k=0}^{+\infty}\left[1+\frac{y^2}{(x+k)^2}\right], \quad x\neq 0,-1,-2,\ldots \quad (A.44)
to obtain
F_1(r) = \frac{1}{2}\ln\left\{\frac{x^2-(v_st-i\epsilon)^2}{\epsilon^2}\,\frac{1}{1+\frac{(x+v_st)^2}{\epsilon^2}}\,\frac{1}{1+\frac{(x-v_st)^2}{\epsilon^2}}\left|\frac{\Gamma\!\left(\frac{\epsilon}{L_T}\right)}{\Gamma\!\left(\frac{\epsilon}{L_T}-i\frac{x+v_st}{L_T}\right)}\right|^2\left|\frac{\Gamma\!\left(\frac{\epsilon}{L_T}\right)}{\Gamma\!\left(\frac{\epsilon}{L_T}-i\frac{x-v_st}{L_T}\right)}\right|^2\right\}. \quad (A.45)
To check consistency with Eq. (A.39), I take the large thermal length limit, using the small-argument behavior of the Gamma function, \Gamma(x)\simeq 1/x, and the identity |\Gamma(iy)|^2 = \pi/[y\sinh(\pi y)].
A.3 Density correlations of a Tomonaga-Luttinger at finite size and temperature by bosonization
In this appendix, I provide elements of derivation of Eq. (II.73). To do so, I generalize Eq. (A.11) to finite size and temperature.
\langle\partial_x\zeta(x,t)\partial_x\zeta(0,0)\rangle_{L<+\infty,\,T>0} - (\pi n_0)^2 = \frac{1}{4}\sum_{q\neq0}\frac{2\pi K}{|q|L}\,q^2\left\{e^{i[qx-\omega(q)t]}[1+n_B(q)] + e^{-i[qx-\omega(q)t]}n_B(q)\right\} = \langle\partial_x\zeta(x,t)\partial_x\zeta(0,0)\rangle_{L<+\infty} - (\pi n_0)^2 + \frac{1}{4}\sum_{q\neq0}\frac{2\pi K}{|q|L}\,q^2\left\{e^{i[qx-\omega(q)t]} + e^{-i[qx-\omega(q)t]}\right\}n_B(q). \quad (A.46)
The result at finite size and zero temperature is easily evaluated and yields Eq. (II.71). After a few lines of algebra, I find that the second part reads
1 4 q =0 2πK |q|L q 2 {e i[qx-ω(q)t] + e -i[qx-ω(q)t] }n B (q) = πK 2L [F (x -v s t) + F (x + v s t)]. (A.47) Function F reads F (u) = q>0 q(e iqu + e -iqu ) +∞ n=1 e -L T qn = - +∞ n=1 1 L T ∂ ∂n +∞ m=1 e i 2π L mu + e -i 2π L mu e -L T 2π L mn = - π 2L +∞ n=1 1 sinh 2 π L (iu -L T n) + 1 sinh 2 π L (iu + L T n) = - π L +∞ n=1 cos 2 πu L sinh 2 πL T L n -sin 2 πu L cosh 2 πL T L n cos 2 πu L sinh 2 πL T L n + sin 2 πu L cosh 2 πL T L n 2 = - π L +∞ n=1 2[cos 2πu L cosh 2πL T L n -1] cosh 2πL T L n -cos 2πu L 2 = - π 2L - 1 sin 2 πu L +
The second contribution is obtained by adapting calculation tricks used to evaluate the autocorrelation function of the wavefunction in [Eggert; 451]. To obtain the generating function, it is no more possible to transform the sums into integrals, so I shall evaluate
\sum_{q\neq0}\frac{1}{|q|}\left[\left(e^{i[qx-\omega(q)t]}-1\right)\left(1+n_B(q)\right) + \left(e^{-i[qx-\omega(q)t]}-1\right)n_B(q)\right] = F_1(q,u) + F_1(q,-v), \quad (A.49)
where
F_1(q,u) = \sum_{q>0}\frac{1}{q}\left[\left(e^{iqu}-1\right)\left(1+\frac{1}{e^{L_Tq}-1}\right) + \left(e^{-iqu}-1\right)\frac{1}{e^{L_Tq}-1}\right].

In this appendix, I illustrate an exact mapping from the Lieb-Liniger model of delta-interacting bosons in 1D, discussed in the main text, onto a system of classical physics. Historically, both have benefited from each other, and limit cases can be understood in different ways according to the context.
Capacitors are emblematic systems in electrostatics lectures. On the example of the parallel plate ideal capacitor, one can introduce various concepts such as symmetries of fields or Gauss' law, and compute the capacitance in a few lines from basic principles, under the assumption that the plates are infinite. To go beyond this approximation, geometry must be taken into account to include edge effects, as was realized by Clausius, Maxwell and Kirchhoff in pioneering works [452,453,[START_REF] Kirchhoff | Zur Theorie des Condensators[END_REF]. This problem has a huge historical significance in the history of science, since it stimulated the foundation of conformal analysis by Maxwell. The fact that none of these giants managed to solve the problem in full generality, nor anyone else a century later, hints at its tremendous technical difficulty.
Actually, the exact capacitance of a circular coaxial plate capacitor with a free-space gap as dielectric, as a function of the aspect ratio of the cavity α = d/R, where d is the distance between the plates and R their radius, reads [455]
C(\alpha,\lambda) = 2\epsilon_0R\int_{-1}^{1}\mathrm{d}z\,g(z;\alpha,\lambda), \quad (B.1)
where \epsilon_0 is the vacuum permittivity, λ = ±1 in the case of equal (respectively opposite) disc charge or potential, and g is the solution of the Love equation [Love]
g(z;\alpha,\lambda) = 1 + \frac{\lambda\alpha}{\pi}\int_{-1}^{1}\mathrm{d}y\,\frac{g(y;\alpha,\lambda)}{\alpha^2+(z-y)^2}. \quad (B.2)
In what follows, I shall only consider the case of equally charged discs. At small α, i.e. at small gap, using the semi-circular law Eq. (III.37), one finds
C(\alpha) \underset{\alpha\ll1}{\simeq} \frac{\pi\epsilon_0R}{\alpha} = \frac{\epsilon_0A}{d}, \quad (B.3)
where A is the area of a plate, as directly found in the contact approximation. On the other hand, if the plates are separated from each other and carried up to infinity (this case would correspond to the Tonks-Girardeau regime in the Lieb-Liniger model), one finds g(z; +∞) = 1 and thus
C(\alpha\to+\infty) = 4\epsilon_0R. \quad (B.4)
This result can be understood as follows: at infinite gap, the two plates do not feel each other anymore, and can be considered as two one-plate capacitors in series. The capacitance of one plate is 8\epsilon_0R from an electrostatic treatment, and the additivity of inverse capacitances in series then yields the awaited result.
At intermediate distances, one qualitatively expects that the capacitance is larger than the value found in the contact approximation, due to the effect of the fringing electric field outside the cavity delimited by the two plates. The contact approximation shall thus yield a lower bound for any value of α, which is in agreement with the property (v) of the main text.
The main results and conjectures in the small-gap regime beyond the contact approximation [Hutson; Ignatowski; 459; 460; Atkinson] are summarized and all encompassed in the most general asymptotic form given in [Soibelmann]. It follows from the positivity of the Lorentzian kernel and from the linearity of the integral that, for repulsive plates, g(z; α, +1) > 1, yielding a global lower bound in agreement with the physical discussion above. One also finds that g(z; α, -1) < 1. Approximate solutions are then obtained by iterating the Love equation and truncating at a given order, the capacitance being correspondingly expanded as C(\alpha,\lambda) = \sum_{n=0}^{+\infty}\lambda^n C_n^I(\alpha). One easily finds the first orders, but higher orders are cumbersome to evaluate exactly, which is a strong limitation of this method in view of an analytical treatment. Among alternative ways to tackle the problem, I mention Fourier series expansions [Carlson; Norgren], and those based on orthogonal polynomials [Milovanović], that allowed to find the exact expansion of the capacitance to order 9 in 1/α in [Rao] for identical plates, anticipating [Ristivojevic] in the case of the Lieb-Liniger model.
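The two limiting behaviors (B.3) and (B.4) are easily recovered by solving the Love equation numerically. The following sketch is a simple Nyström-type discretization of mine (not a method used in the text), with ε_0 = 1 and lengths in units of R, so that it returns the reduced capacitance C/(ε_0R) of Eq. (B.1):

```python
import numpy as np

def love_capacitance(alpha, lam=1.0, n=2000):
    """Solve the Love equation (B.2) on [-1, 1] by trapezoidal Nystrom discretization
    and return C/(eps0*R) from Eq. (B.1)."""
    z = np.linspace(-1.0, 1.0, n)
    h = 2.0/(n - 1)
    w = np.full(n, h); w[0] = w[-1] = h/2.0                    # trapezoidal weights
    kernel = (lam*alpha/np.pi) / (alpha**2 + (z[:, None] - z[None, :])**2)
    g = np.linalg.solve(np.eye(n) - kernel*w[None, :], np.ones(n))
    return 2.0*np.sum(w*g)

for alpha in (0.05, 0.5, 5.0, 50.0):
    print(alpha, love_capacitance(alpha),
          "contact approx:", np.pi/alpha, "isolated-discs limit: 4")
```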
B.2 Ristivojevic's method of orthogonal polynomials
In this appendix, I detail Ristivojevic's method, that allows to systematically find approximate solutions to Eq. (III.32) in the strongly-interacting regime [Ristivojevic; Lang]. First, let me recall a few qualitative features of the density of pseudo-momenta, g(z; α). At fixed α, g as a function of z is positive, bounded, unique and even. Moreover, it is analytic provided α > 0.
Since g is an analytic function of z on the compact interval [-1, 1], at fixed α it can be written as g(z;\alpha) = \sum_{n\geq0}a_n(\alpha)\,Q_n(z), where \{a_n\}_{n\geq0} are unknown analytic functions and \{Q_n\}_{n\geq0} are polynomials of degree n.
To solve the set of equations (III.32), (III.33) and (III.34), one only needs the values of g for z ∈ [-1, 1]. Thus, a convenient basis for the Q n 's is provided by the Legendre polynomials, defined as
P_n(X) = \frac{(-1)^n}{2^n\,n!}\left(\frac{\mathrm{d}}{\mathrm{d}X}\right)^n\left[(1-X^2)^n\right], \quad (B.12)
that form a complete orthogonal set in this range. Furthermore, the Legendre polynomial P_n is of degree n and consists in sums of monomials of the same parity as n, so that, since g is an even function of z,
g(z;\alpha) = \sum_{n=0}^{+\infty}a_{2n}(\alpha)\,P_{2n}(z). \quad (B.13)
Truncating this expansion at order M, inserting it into Eq. (III.32) and identifying the coefficients order by order leads to a linear system of the form A\,(a_0,a_2,\ldots,a_{2M})^{T} = b, where A is a (M+1)×(M+1) square matrix, inverted to find the set of coefficients \{a_{2n}(\alpha)\}_{0\leq n\leq M}. Actually, one only needs to compute (A^{-1})_{i1}, ∀i ∈ \{1,\ldots,M+1\}, and combine with Eq. (B.13) to obtain the final result at order M. For full consistency with higher orders, one shall expand the result in 1/α and truncate it at order 2M+2, and to order 2M in z.
B.3 A general method to solve the Lieb equation
In this appendix I explain a method that I have developed to solve the Lieb equation (III.32). Contrary to Ristivojevic's method presented in the previous appendix, it works at arbitrary coupling, as it does not rely on a strong-coupling expansion of the kernel. However, it also starts with a series expansion of the density of pseudo-momenta, g(z; α) = Σ_{n=0}^{+∞} c_{2n}(α) z^{2n}, injected in the integral equation (III.32) to transform the latter into an infinite set of algebraic equations for the coefficients c_{2n}. To do so, the series expansion of the integral
I_n(α) = ∫_{-1}^{1} dy y^{2n}/[α² + (y - z)²] (B.26)
must be known explicitly. I_n(α) is expressed in terms of hypergeometric functions in [Fabrikant]. After a series of algebraic transformations detailed in Ref. [Lang], this integral reads
I_n(α) = -(1/α) Σ_{k=0}^{⌊(2n-1)/2⌋} (2n choose 2k+1) (-1)^k α^{2k+1} z^{2(n-k)-1} argth[2z/(1 + z² + α²)] + …,
the remaining terms, which involve arctangents, being given in full in Ref. [Lang]. To finish with, the series is truncated at an order M in a self-consistent way, yielding a closed set of linear equations whose unknowns are the coefficients c_{2i;M}(α). The solution yields approximate expressions for a truncated polynomial expansion of g in z. Since the method is not perturbative in α, each coefficient converges faster to its exact value than with the orthogonal polynomial method. I have calculated these coefficients up to order M = 25.
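The following sketch illustrates the spirit of this method with a simplified variant: the even polynomial expansion of g is kept, but instead of matching Taylor coefficients in z through the closed form of I_n(α), the truncated equation is imposed at M + 1 collocation points, with the moment integrals evaluated by quadrature. The normalization of the Lieb equation (inhomogeneous term 1/2π) and all numerical parameters are assumptions made for illustration.

```python
import numpy as np

def lieb_polynomial_coeffs(alpha, M=8, n_quad=4000):
    """Approximate g(z; alpha) = sum_{n=0}^{M} c_{2n} z^{2n} by imposing the Lieb equation
       g(z) = 1/(2*pi) + (1/pi) * int_{-1}^{1} dy alpha*g(y)/(alpha^2 + (y-z)^2)
    at M+1 collocation points (a collocation variant of the expansion described above)."""
    z = np.cos(np.pi * (np.arange(M + 1) + 0.5) / (2 * (M + 1)))   # distinct points in (0, 1)
    y = np.linspace(-1.0, 1.0, n_quad)
    dy = y[1] - y[0]
    w = np.full(n_quad, dy); w[0] = w[-1] = dy / 2.0               # trapezoidal weights
    A = np.empty((M + 1, M + 1))
    for j, zj in enumerate(z):
        kern = alpha / (alpha**2 + (y - zj)**2)
        for n in range(M + 1):
            # coefficient multiplying c_{2n}: z_j^{2n} - (1/pi) * int dy y^{2n} * kernel
            A[j, n] = zj**(2 * n) - np.sum(w * y**(2 * n) * kern) / np.pi
    b = np.full(M + 1, 1.0 / (2.0 * np.pi))
    return np.linalg.solve(A, b)

if __name__ == "__main__":
    c = lieb_polynomial_coeffs(alpha=2.0, M=6)
    print("c_0, c_2, c_4, ... =", np.round(c, 6))
```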
B.4 Other approaches to local correlation functions
In this appendix, I analyze other approaches to obtain the local correlation functions of a δ-interacting 1D Bose gas. They rely on mappings from special limits of other models onto the Lieb-Liniger model. The original models are the sinh-Gordon quantum field theory in a non-relativistic limit, the XXZ spin chain in a continuum limit, and q-bosons when q → 1.
In the case of the sinh-Gordon model, a special non-relativistic, weak-coupling limit, that keeps the product of the speed of light and coupling constant unchanged, maps its S-matrix onto the Lieb-Liniger one, leading to an exact equivalence [START_REF] Kormos | Expectation Values in the Lieb-Liniger Bose Gas[END_REF]. In par-ticular, their correlation functions are formally identical [START_REF] Kormos | One-Dimensional Lieb-Liniger Bose gas as nonrelativistic limit of the sinh-Gordon model[END_REF], with a subtle difference: they refer to the ground state for the Lieb-Liniger model and to the vacuum in the case of the sinh-Gordon model, where the form factors are computed within the Le Clair-Mussardo formalism [START_REF] Leclair | Finite temperature correlation functions in integrable QFT[END_REF]. Expressions obtained perturbatively in previous works have been resummed non-perturbatively in [START_REF] Kormos | Exact Three-Body Local Correlations for Excited States of the 1D Bose Gas[END_REF], yielding the exact second-and third-order local correlation functions in terms of solutions of integral equations.
A major strength of this approach is that the expressions thereby obtained are also valid at finite temperature and in out-of-equilibrium situations. Another one, though it is rather a matter of taste, is that no derivative is involved, contrary to the direct Bethe Ansatz approach through moments of the density of quasi-momenta, making numerical methods less cumbersome, and more accurate. On the other hand, within this method one needs to solve several integral equations, whose number increases with the order of the correlation function.
The LeClair-Mussardo formalism has also been extended to the non-relativistic limit, that can thus be addressed directly in this formalism, without invoking the original sinh-Gordon model anymore [START_REF] Pozsgay | Mean values of local operators in highly excited Bethe states[END_REF]. In the same reference, it has also been shown that the LeClair-Mussardo formalism can be derived from algebraic Bethe Ansatz.
Then, additional results have been obtained, based on the exact mapping from a special continuum limit of the XXZ spin chain onto the Lieb-Liniger model [470,[START_REF] Seel | A note on the spin-1 2 XXZ chain concerning its relation to the Bose gas[END_REF]. In [472], multiple integral formulae for local correlation functions have been obtained, that encompass the previous results from the sinh-Gordon model approach for g 2 and g 3 , and provide the only known expression for g 4 to date. This formalism also yields higher-order correlations as well, but no systematic method to construct them has been provided yet.
B.5 Density profile of a trapped Tonks-Girardeau gas:
LDA and exact result
The aim of this appendix is to show the equivalence, in the thermodynamic limit, between the density profile predicted by the local-density approximation, and the exact Tonks-Girardeau result obtained by Bose-Fermi mapping. Writing σ(x; N) = Σ_{j=0}^{N-1} φ_j²(x), a first derivation based on physical arguments proceeds as follows [Mehta]: φ_j is the normalized oscillator function, thus φ_j²(x) dx represents the probability that an oscillator in the j-th state is in [x, x+dx]. When N is large, σ represents the density of particles at
x. Since I consider fermions (using the Bose-Fermi mapping) at T = 0, there is at most one particle per state and all states are filled up to the Fermi energy. I recall that the differential equation satisfied by φ N -1 is
φ''_{N-1}(x) + (2N - 1 - x²) φ_{N-1}(x) = 0. (B.35)
The latter also reads φ''_{N-1}(x) + p_F²(x) φ_{N-1}(x) = 0, where p_F(x) = √(2N - 1 - x²) plays the role of a local (dimensionless) Fermi momentum.
An incident plane-wave initial state |k_i⟩ is scattered to a final state which, in the framework of the Born approximation for scattering, is considered as a plane-wave state |k_f⟩. The matrix element of the interaction operator between the two states is ⟨k_f|A(r)|k_i⟩ = ∫ e^{-ik_f·r} A(r) e^{ik_i·r} dr = A(-q), where q = k_i - k_f. At lowest perturbation order, the probability per unit time of the process (|k_i⟩, |i⟩) → (|k_f⟩, |f⟩) to occur is given by the Fermi golden rule, which reads
W_{(k_f,f),(k_i,i)} = (2π/ħ) |⟨f|A(-q)|i⟩|² δ[ħω - (ε_f - ε_i)]. (C.1)
The total probability per unit time of the process |k i → |k f is obtained by weighting W (k f ,f ),(k i ,i) by the occupation probability p i of the initial state of the system at equilibrium, and by summing over all initial and final states:
W_{k_i→k_f} = (2π/ħ) Σ_{i,f} p_i |⟨f|A(-q)|i⟩|² δ[ħω - (ε_f - ε_i)]. (C.2)
The dynamical structure factor is defined as S(q, ω) = ħ² W_{k_i→k_f}, where A(r) = δn(r). This yields the Lehmann representation of the dynamical structure factor, used for instance in the ABACUS code.
Introducing the Fourier representation of the delta function, S(q, ω) can be expressed as an autocorrelation function. Since δn(r) is hermitian, ⟨f|δn(-q)|i⟩* = ⟨i|δn(q)|f⟩, so that
S(q, ω) = ∫_{-∞}^{+∞} dt e^{iωt} Σ_{i,f} p_i ⟨i|e^{iε_i t/ħ} δn(q) e^{-iε_f t/ħ}|f⟩ ⟨f|δn(-q)|i⟩. (C.3)
C.2 Dynamical structure factor of the Tonks-Girardeau gas
In this appendix, I shall derive Eq. (IV.17). This derivation is quite elementary, but it is a good occasion to make a few comments. According to Eq. (IV.16), the dynamical structure factor is the Fourier transform of the autocorrelation function of density fluctuations. For the homogeneous gas considered here, the mean value of the density fluctuations is null, thus
⟨δn(x, t) δn(x', t')⟩ = ⟨n(x, t) n(x', t')⟩ - n_0². (C.4)
I have evaluated the time-dependent density-density correlation of the Tonks-Girardeau gas in chapter II. The strategy based on the brute-force integration of the latter to obtain the dynamical structure factor is quite tedious, I am not even aware of any work where this calculation has been done, nor did I manage to perform it.
In comparison, making a few transformations before doing the integration considerably simplifies the problem. According to the Bose-Fermi mapping, the model can be mapped onto a fictitious one-dimensional gas of noninteracting spinless fermions. The fermionic field is expressed in second quantization in terms of fermionic annihilation operators c_k as
ψ(x) = (1/√L) Σ_k e^{ikx} c_k, (C.5)
whose time-dependence in the Heisenberg picture is obtained as ψ(x, t) = e^{iHt/ħ} ψ(x) e^{-iHt/ħ}, (C.6)
where the Hamiltonian is
H = Σ_k (ħ²k²/2m) c†_k c_k = Σ_k ħω_k c†_k c_k, (C.7)
and computed using the equation of motion or the Baker-Campbell-Hausdorff formula, that yields
ψ(x, t) = (1/√L) Σ_k e^{i(kx - ω_k t)} c_k. (C.8)
Then, since n(x, t) = ψ†(x, t)ψ(x, t), Wick's theorem, which applies to noninteracting fermions, yields
⟨n(x, t) n(0, 0)⟩ = ⟨ψ†(x, t)ψ(0, 0)⟩ ⟨ψ(x, t)ψ†(0, 0)⟩ + ⟨ψ†(x, t)ψ(x, t)⟩ ⟨ψ†(0, 0)ψ(0, 0)⟩ - ⟨ψ†(x, t)ψ†(0, 0)⟩ ⟨ψ(0, 0)ψ(x, t)⟩, (C.9)
and the last term is null. Then, using the expansion of the fermionic field over fermionic operators, the property ⟨c†_k c_{k'}⟩ = n_F(k) δ_{k,k'}, fermionic commutation relations, Θ(-x) = 1 - Θ(x) and specialization to T = 0 yield, in the thermodynamic limit where sums become integrals,
S_TG(q, ω) = ∫ dk Θ(k_F - |k|) Θ(|k + q| - k_F) δ[ω - (ω_{k+q} - ω_k)]. (C.11)
In this equation, the delta function is the formal description of energy conservation during the scattering process, which is elastic. The Heaviside distributions mean that the scattering process takes a particle out of the Fermi sea, creating a particle-hole pair.
Actually, ω_{k+q} - ω_k = (ħ/2m)(q² + 2qk), (C.12)
and one can split the problem into two cases.
If q ≥ 2k_F, then k ∈ [-k_F, k_F] and the envelopes are (ħ/2m)(q² - 2k_F q) = (ħ/2m)|q² - 2k_F q| = ω_-, and (ħ/2m)(q² + 2k_F q) = ω_+.
If 0 ≤ q ≤ 2k_F, k ∈ [k_F - q, k_F], and the envelopes are (ħ/2m)[q² + 2q(k_F - q)] = (ħ/2m)(2qk_F - q²) = (ħ/2m)|q² - 2k_F q| = ω_- and (ħ/2m)(q² + 2qk_F) = ω_+. I evaluate the dynamical structure factor in the case q ≥ 2k_F:
S_TG(q, ω) = ∫_{-k_F}^{k_F} dk δ[ω - (ħ/2m)(q² + 2qk)], (C.13)
and using the property of the Dirac distribution,
δ[f(k)] = Σ_{k_0 | f(k_0)=0} [1/|f'(k_0)|] δ(k - k_0), (C.14)
by identification, here f'(k) = -ħq/m and k_0 = (1/2q)(2mω/ħ - q²).
It implies that (1/2q)(2mω/ħ - q²) ∈ [-k_F, k_F], i.e. ω ∈ [ω_-, ω_+]. Hence S_TG(q, ω) = m/(ħ|q|) if and only if ω ∈ [ω_-, ω_+], and S_TG(q, ω) = 0 otherwise. The same conclusion holds if q ∈ [0, 2k_F], ending the derivation.
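The closed form above is easy to check numerically. The sketch below evaluates S_TG(q, ω) = m/(ħ|q|) on its support and compares its frequency integral with a brute-force sampling of the delta function in Eq. (C.13) over the Fermi sea, in units where ħ = m = k_F = 1; the sample size and binning are arbitrary choices.

```python
import numpy as np

# units: hbar = m = k_F = 1, so omega_pm = (q^2 +/- 2q)/2
def S_TG(q, w):
    w_minus = abs(q**2 - 2*q) / 2.0
    w_plus = (q**2 + 2*q) / 2.0
    return np.where((w >= w_minus) & (w <= w_plus), 1.0 / q, 0.0)

def S_TG_sampled(q, w_edges, n_samples=2_000_000):
    """Histogram of omega_{k+q} - omega_k over the Fermi sea, cf. Eq. (C.13)."""
    k = np.random.uniform(-1.0, 1.0, n_samples)
    allowed = np.abs(k + q) > 1.0                  # the final state must lie outside the Fermi sea
    dw = ((k + q)**2 - k**2) / 2.0
    hist, edges = np.histogram(dw[allowed], bins=w_edges)
    # each sample carries measure (2 k_F)/n_samples; divide by the bin width to get a density
    return hist * 2.0 / n_samples / np.diff(edges)

if __name__ == "__main__":
    q = 0.8
    w = np.linspace(0.0, 3.0, 301)
    w_mid = 0.5 * (w[:-1] + w[1:])
    exact, sampled = S_TG(q, w_mid), S_TG_sampled(q, w)
    dw_bin = np.diff(w)
    print("frequency integral: exact =", round(float(np.sum(exact * dw_bin)), 4),
          " sampled =", round(float(np.sum(sampled * dw_bin)), 4))
```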
C.3 Dynamical structure factor of a Tomonaga-Luttinger liquid in the thermodynamic limit
In this appendix, I give a quite detailed derivation of the dynamical structure factor of a 1D Bose gas, Eq. (IV.39), obtained from the Tomonaga-Luttinger liquid formalism. My main motivation is that, although the result is well known, details of calculations are scarce in the literature, and some technical aspects are relatively tricky. For clarity, I shall split the derivation into two parts, and evaluate separately the two main contributions, S 0 and S 1 .
C.3.1 First contribution: the phonon-like spectrum at the origin in energy-momentum space
First, I focus on the term denoted by S_0 in Eq. (IV.39), which corresponds to the linearized spectrum at low momentum and energy. According to Eq. (II.44), this contribution stems from the terms that diverge on the 'light-cone', whose form factor is known. Their Fourier transform is quite easy; the only subtlety is the need for an additional infinitesimal imaginary part iε to ensure convergence of the integral. The Fourier transform to evaluate is
F.T.[1/(x - v_s t + iε)²] = ∫_{-∞}^{+∞} dt e^{iωt} ∫_{-∞}^{+∞} dx e^{-iqx} 1/(x - v_s t + iε)². (C.17)
Using the light-cone coordinate u = x - v_s t, the time integral yields 2πδ(ω - qv_s), while integration by parts and a classical application of the residue theorem give
∫_{-∞}^{+∞} du e^{-iqu}/(u + iε)² = -2πq Θ(q).
In the end,
F.T.[1/(x ± v_s t + iε)²] = -4π² |q| δ(ω - |q|v_s) Θ(∓q), (C.18)
whence I conclude that
S_0(q, ω) = -(K/4π²)(-4π²|q|) δ(ω - ω(q)) = K|q| δ[ω - ω(q)], (C.19)
with ω(q) = |q|v_s. This result agrees with [Cazalilla],
Im[χ_nn(q, ω)] = [v_s K q²/2ω(q)] {δ[ω - ω(q)] - δ[ω + ω(q)]}, (C.20)
according to the fluctuation-dissipation theorem at T = 0.
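The residue-theorem result used above, ∫ du e^{-iqu}/(u + iε)² = -2πq Θ(q), can also be verified directly by quadrature at small but finite ε, as in the following sketch (the cutoff L, the value of ε and the grid are illustrative choices).

```python
import numpy as np

def ft_double_pole(q, eps=1e-3, L=2000.0, n=2_000_001):
    """Numerically evaluate int_{-L}^{L} du exp(-i*q*u)/(u + i*eps)**2 and compare it
    with the residue-theorem result -2*pi*q*Theta(q) (valid when |q|*eps << 1)."""
    u = np.linspace(-L, L, n)
    du = u[1] - u[0]
    f = np.exp(-1j * q * u) / (u + 1j * eps) ** 2
    return np.sum(f) * du          # simple Riemann sum; the integrand decays as 1/u^2

if __name__ == "__main__":
    for q in (-1.0, 0.5, 2.0):
        val = ft_double_pole(q)
        print(f"q = {q:+.1f}:  numerical = {val:.4f},  -2*pi*q*Theta(q) = {-2*np.pi*q*(q > 0):.4f}")
```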
C.3.2 Second contribution to the dynamical structure factor: the umklapp region
The most interesting contribution to the dynamical structure factor is denoted by S 1 in Eq. (IV.39), and corresponds to the Fourier transform of the main contribution to the density-density correlation in Eq. (II.44), with non-trivial form factor. I shall detail the derivation of an arbitrary contribution, and specialize in the end to this main term.
The Fourier transform that should be evaluated here is
I(Km²) = ∫_{-∞}^{+∞} dt e^{iωt} ∫_{-∞}^{+∞} dx e^{-iqx} cos(2m k_F x) [(x - v_s t + iε)(x + v_s t - iε)]^{-Km²}, (C.21)
so that, after a change to the light-cone coordinates u = x - v_s t and v = x + v_s t, a rescaling and transformation of the cosine into complex exponentials,
I(Km²) = (2^{-2Km²}/v_s) ∫_{-∞}^{+∞} du ∫_{-∞}^{+∞} dv (v - iε/2)^{-Km²} (u + iε/2)^{-Km²} × [ e^{iv(ω/v_s - q + 2mk_F)} e^{-iu(ω/v_s + q - 2mk_F)} + e^{iv(ω/v_s - q - 2mk_F)} e^{-iu(ω/v_s + q + 2mk_F)} ]. (C.22)
This in turn can be expressed in terms of
J(q; α) = ∫_{-∞}^{+∞} dx e^{ixq} (x - iε/2)^{-α}, (C.23)
where α > 0. Once again, complex analysis is a natural framework to evaluate this integral. The difficulty lies in the fact that, since α is not necessarily an integer, the power law represents the exponential of a logarithm, which is multiply defined in the complex plane. To circumvent this problem, a branch cut is introduced and the integral is evaluated along the closed contour shown in the left panel of Fig. C.1. Assuming q > 0, the contribution of the large arc (brown) vanishes,
∫_{brown} dz e^{iqz} (z - iε/2)^{-α} →_{R→+∞} 0,
while the first contribution to the right-hand side coincides with the integral J.
To finish the calculation, I still need to evaluate the integral over the green contour. To do so, I use the Hänkel-contour representation of the Gamma function (cf. Fig. C.1), which eventually yields Eq. (C.27).
To obtain the dynamical structure factor at finite temperature, I use the same trick as before, and split it artificially into the zero-temperature result and a purely thermal term, which is more convenient to evaluate. Hence,
S^T(q, ω) = S_{T>0}(q, ω) - S_{T=0}(q, ω) = F.T.{ (K/4π²) ∫_{q'≠0} dq' |q'| [e^{i(q'x - ω(q')t)} + e^{-i(q'x - ω(q')t)}] n_B(q') } (C.28)
in the thermodynamic limit, and after straightforward algebra I find that
S_0^T(q, ω) = (K/L_T) {δ[ω + ω(q)] + δ[ω - ω(q)]} βħω(q)/(e^{βħω(q)} - 1). (C.29)
Eventually,
S_{0,T>0}(q, ω) = S_0^T(q, ω) + S_{0,T=0}(q, ω)
= [K|q|/(e^{βħω(q)} - 1)] {δ[ω + ω(q)] + δ[ω - ω(q)]} + K|q| δ[ω - ω(q)]
= [K|q|/(1 - e^{-βħω(q)})] {δ[ω - ω(q)] + e^{-βħω} δ[ω + ω(q)]}. (C.30)
Another tricky point concerns the branch cut. Prefactors written as (-1)^{-K} are an abuse of notation, and should be interpreted either as e^{iKπ} or as e^{-iKπ}, to obtain the intermediate expression
S_{1,T>0}(q, ω) = (1/2v_s) (L_T k_F)² L_T^{-2K} (2π)^{2(K-2)} [Γ(1-K)]² × { Γ(K/2 + ix_+)/Γ(1 - K/2 + ix_+) + e^{-iKπ} Γ(K/2 - ix_+)/Γ(1 - K/2 - ix_+) } × { Γ(K/2 - ix_-)/Γ(1 - K/2 - ix_-) + e^{iKπ} Γ(K/2 + ix_-)/Γ(1 - K/2 + ix_-) }, (C.34)
where x_± = (βħ/4π)[ω ± (q - 2k_F)v_s].
Using the property Γ(z)Γ(1 - z) = π/sin(πz), after straightforward algebra I finally obtain
S_{1,T>0}(q, ω) = (1/2v_s) (L_T/2π)^{2(1-K)} n_0² e^{βħω/2} B(K/2 + i(βħ/4π)[ω + (q - 2k_F)v_s], K/2 - i(βħ/4π)[ω + (q - 2k_F)v_s]) × B(K/2 + i(βħ/4π)[ω - (q - 2k_F)v_s], K/2 - i(βħ/4π)[ω - (q - 2k_F)v_s]), (C.35)
yielding Eq. (IV.43), and the expression of the coefficient C(K, T) in terms of the small-distance cut-off.
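Since Eq. (C.35) only involves Beta functions of complex conjugate arguments, it is straightforward to evaluate numerically. The sketch below does so with all dimensionful prefactors set to illustrative values (the overall constant is left as a free parameter), and checks the identity B(1/2 + ix, 1/2 - ix) = π/cosh(πx) that underlies the K = 1 special case discussed below.

```python
import numpy as np
from scipy.special import gamma

def beta_c(a, b):
    """Euler Beta function for complex arguments."""
    return gamma(a) * gamma(b) / gamma(a + b)

def S1_finite_T(q, w, K, T, vs=1.0, kF=1.0, hbar=1.0, kB=1.0, C=1.0):
    """Thermal umklapp contribution, up to the overall constant C:
       S1 ~ exp(beta*hbar*w/2) * B(K/2 + i x_+, K/2 - i x_+) * B(K/2 + i x_-, K/2 - i x_-),
    with x_pm = beta*hbar*[w +/- (q - 2 kF) vs]/(4 pi). All prefactors here are illustrative."""
    beta = 1.0 / (kB * T)
    xp = beta * hbar * (w + (q - 2*kF)*vs) / (4*np.pi)
    xm = beta * hbar * (w - (q - 2*kF)*vs) / (4*np.pi)
    val = np.exp(beta*hbar*w/2) * beta_c(K/2 + 1j*xp, K/2 - 1j*xp) * beta_c(K/2 + 1j*xm, K/2 - 1j*xm)
    return C * val.real            # the combination is real by construction

if __name__ == "__main__":
    x = 0.7                        # consistency check at K = 1
    print(abs(beta_c(0.5 + 1j*x, 0.5 - 1j*x) - np.pi/np.cosh(np.pi*x)))   # ~ machine precision
    print(S1_finite_T(q=2.05, w=0.1, K=1.3, T=0.1))
```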
After this rather long calculation, it is worth checking its consistency with the T = 0 case. To do so, I first come back to the property of the Beta function, B(x, y) = Γ(x)Γ(y)/Γ(x + y), as well as the properties of the Gamma function, Γ(z*) = [Γ(z)]* and |Γ(x + iy)| ≃ √(2π) |y|^{x-1/2} e^{-π|y|/2} as y → +∞ [449], which allow one to recover the zero-temperature result, Eq. (IV.39), as expected.
The special case K = 1 is also worth studying on its own as an additional indirect check of Eq. (IV.43). Assuming q > 0,
S_{1,K=1,T>0}(q, ω) = (k_F/L_T)² ∫_{-∞}^{+∞} dt ∫_{-∞}^{+∞} dx e^{i(ωt - qx)} e^{2ik_F x} / [sinh(π(x - v_s t)/L_T) sinh(π(x + v_s t)/L_T)],
which can be integrated in closed form and agrees with the K = 1 limit of Eq. (C.35).
C.5 Drag force due to a delta-barrier in the Tomonaga-
Luttinger liquid framework
This appendix gives two derivations of the second line of Eq. (IV.48), starting from its first line and Eq. (IV.39), whose combination yields
F = [U_b²/(2πħ)] ∫_0^{+∞} dq q S(q, qv)
= [U_b² B_1(K)/(2πħ)] ∫_0^{+∞} dq q [(qv)² - v_s²(q - 2k_F)²]^{K-1} Θ(qv - v_s|q - 2k_F|)
= [U_b² B_1(K)/(2πħ)] (v_s² - v²)^{K-1} ∫_{q_-}^{q_+} dq q (q - q_-)^{K-1}(q_+ - q)^{K-1}, (C.44)
dq q (q-q -) K-1 (q + -q) K-1 , (C. [START_REF] Novoselov | Electric field effect in atomically thin carbon films[END_REF] where I have defined
q_± = 2k_F v_s/(v_s ∓ v). (C.45)
Then, changing variables as q' = (q - q_-)/(q_+ - q_-), up to a global coefficient it boils down to
F ∝ ∫_0^1 dq' q'^{K-1} (1 - q')^{K-1} [1 + ((q_+ - q_-)/q_-) q'], (C.46)
with (q_+ - q_-)/q_- = 2v/(v_s - v). This integral can be evaluated using the Euler integral representation of the hypergeometric function, Eq. (C.47), valid for |z| < 1 and thus for v/v_s < 1/3 here, where ₂F₁(a, b; c; z) = Σ_{n=0}^{+∞} [(a)_n(b)_n/(c)_n] z^n/n! and (q)_n = q(q + 1) . . . (q + n - 1) is the Pochhammer symbol. I also use the properties of the Beta and Gamma functions, B(X, Y) = Γ(X)Γ(Y)/Γ(X + Y) and the duplication formula Γ(2z) = 2^{2z-1} Γ(z) Γ(z + 1/2)/√π.
Another derivation starts back from the last line of Eq. (C.44), which is split into two parts as
∫_{q_-}^{q_+} dq q (q - q_-)^{K-1}(q_+ - q)^{K-1} = ∫_{q_-}^{q_+} dq (q - q_-)^K (q_+ - q)^{K-1} + q_- ∫_{q_-}^{q_+} dq (q - q_-)^{K-1}(q_+ - q)^{K-1},
F(v) / [(v_s² - v²)^{K-1} U_b² B_1(K)/(2πħ)]
= (q_+ - q_-)^{2K} B(K + 1, K) + q_- (q_+ - q_-)^{2K-1} B(K, K)
= [(2k_F)^{2K}(2vv_s)^{2K}/(v_s² - v²)^{2K}] Γ(K + 1)Γ(K)/Γ(2K + 1) + [2k_F v_s/(v_s + v)] [(2k_F)^{2K-1}(2vv_s)^{2K-1}/(v_s² - v²)^{2K-1}] Γ(K)²/Γ(2K), (C.51)
and the property Γ(K + 1) = KΓ(K) combined with algebraic manipulations yields Eq. (IV.48).
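As a consistency check of this splitting, the sketch below compares a direct numerical quadrature of the last line of Eq. (C.44) with the Beta-function closed form of Eq. (C.51), for a few illustrative values of K and v (units v_s = k_F = 1 are an assumption).

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import beta

def drag_integral(K, v, vs=1.0, kF=1.0):
    """Compare int_{q-}^{q+} dq q (q - q-)^(K-1) (q+ - q)^(K-1)
    with its closed form in terms of Beta functions, cf. Eq. (C.51)."""
    qm = 2 * kF * vs / (vs + v)
    qp = 2 * kF * vs / (vs - v)
    num, _ = quad(lambda q: q * (q - qm)**(K - 1) * (qp - q)**(K - 1), qm, qp)
    closed = (qp - qm)**(2*K) * beta(K + 1, K) + qm * (qp - qm)**(2*K - 1) * beta(K, K)
    return num, closed

if __name__ == "__main__":
    for K, v in [(1.0, 0.3), (1.5, 0.2), (2.3, 0.6)]:
        num, closed = drag_integral(K, v)
        print(f"K={K}, v={v}: numerical = {num:.6f}, closed form = {closed:.6f}")
```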
In this equation, a_⊥ = √(ħ/mω_⊥) represents the size of the ground state of the transverse Hamiltonian and C = -ζ(1/2) ≈ 1.46, where ζ is the Riemann zeta function.
Figure II.1 - In-situ experimental pictures of ring-trapped atomic gases obtained with magnetic, optical and combined trapping techniques [103, 109, 113].
Figure II.2 - The six configurations allowed by the ice rule in the 6-vertex model.
Figure II.3 - A space-time picture of a two-body elastic scattering in 1D.
Figure II.4 - The two equivalent ways of decomposing a N = 3 → N = 3 scattering event into three two-body scattering events, as expressed by the Yang-Baxter equation (II.9).
Figure II.6 - Illustration of the complex-coordinate conformal transformations relating zero-temperature correlations in the thermodynamic limit (plane) to finite-size and/or finite-temperature correlations (cylinder and torus geometries).
Figure III.1 - Strong-coupling expansion g_m(z; α, 9) of the density of pseudo-momenta at α = 2.5, 2.3 and 2, compared to the numerically exact solutions.
e(γ) = [e(γ) - γe'(γ)] + γe'(γ) = e_kin(γ) + e_int(γ). (III.61)
As illustrated in Fig. III.3, the dimensionless kinetic energy per particle, e_kin, is a monotonic function of γ, while the dimensionless interaction energy e_int (positive since interactions are repulsive) reaches a global maximum at an intermediate interaction strength.
Figure III.4 - Density of pseudo-momenta ρ as a function of k/k_F for several interaction strengths, comparing analytical methods and a Monte-Carlo solution of the Lieb equation (III.32).
Figure III.5 - Dimensionless local correlation functions g_2 and g_3 as functions of the interaction strength γ.
Figure III.6 - Dimensionless fourth moment e_4 of the distribution of quasi-momenta as a function of γ, compared to the conjectures (III.75) and (III.76).
Figure III.7 - Short- and large-distance asymptotics of the one-body correlation function g_1^TG in the Tonks-Girardeau regime, Eqs. (III.85) and (III.83).
Figure III.8 - Dimensionless coefficient c_4 as a function of γ; a sign inversion occurs at γ ≈ 3.8.
Figure III.9 - Dimensionless coefficients c_2, c_3 and c_4 as functions of γ, as predicted from connections combined with conjectures for the moments of the density of pseudo-momenta.
Figure III.10 - Dimensionless Tan's contact C(γ) and its Tonks-Girardeau limit C_TG = 4π²/3.
Figure III.11 - Exact density profile of a harmonically trapped Tonks-Girardeau gas compared to the LDA prediction, for N = 1, 10 and 100 particles.
… (ħ²/2m) n³ e(γ). (III.117) The chemical potential is fixed by imposing the normalization condition N = ∫ dx n(x). (III.118) Note that in the homogeneous case, Eq. (III.114) would yield …
… b_5 = -3ζ(3). (III.…)
Figure IV.1 - Dimensionless drag force f in dimension d (cf. Eq. (IV.10)).
… c_d), (IV.10) where n_d = N/L^d is the d-dimensional density, s_{d-1} the area of the unit sphere, and c_d is the sound velocity in the mean-field approach, which plays the role of a critical velocity due to the Heaviside Θ function, as illustrated in Fig. IV.1.
Figure IV.2 - Dynamical structure factor of a Tonks-Girardeau gas at zero temperature in energy-momentum space; excitations are confined to the region bounded by the excitation spectra, and the spectrum is quasi-linear at the origin and around the umklapp point (q = 2k_F, ω = 0).
… ε_k = ħ²k²/2m is the dispersion relation of noninteracting spinless fermions, µ the chemical potential, and the infinitesimal imaginary part i0⁺ ensures causality. From Eqs. (IV.20) and (IV.21), using the property 1/(X + i0⁺) = P.P.(1/X) - iπδ(X), (IV.23)
Figure IV.3 - Dimensionless chemical potential of a noninteracting one-dimensional Fermi gas as a function of temperature: it first increases, reaches a maximum and then decreases, contrary to the 3D case.
…48, (IV.30) where ζ is the Riemann zeta function. These results are illustrated in Fig. IV.3.
Figure IV.4 - Dynamical structure factor S_TG(q, ω) of the Tonks-Girardeau gas in the thermodynamic limit at T/T_F = 0.1, 0.5, 1 and 4.
Figure IV.5 - Sections of the dynamical structure factor of the Tonks-Girardeau gas at q = 0.1 k_F for various temperatures.
Figure IV.6 - Drag force at finite barrier waist and temperature in the Tonks-Girardeau gas as a function of the barrier velocity.
Figure IV.7 - Picture taken from Ref. [Albert], showing the three regimes (superfluid, dissipative, quasi-superfluid) in the drag force profile of a generic quantum fluid.
Figure IV.8 - Definition domain of the dynamical structure factor of the Tonks-Girardeau gas at T = 0, compared to the Tomonaga-Luttinger liquid result for K = 1 and v_s/v_F = 1.
Figure IV.9 - Dynamical structure factor of the Tonks-Girardeau gas at T = 0.1 T_F near the umklapp point: exact result (Bose-Fermi mapping) versus Tomonaga-Luttinger liquid prediction.
Figure IV.10 - Dimensionless sound velocity v_s/v_F as a function of the Lieb parameter γ, from the numerical solution of the Lieb equation, values from the literature [190, 390], and the analytical result of Eqs. (III.57, III.58, IV.44).
Figure IV.11 - Reduced form factor of a Tomonaga-Luttinger liquid describing the Lieb-Liniger model as a function of K, with the fit function Eq. (IV.46) and the first-order Taylor expansion Eq. (IV.47).
quantitative comparison with the exact result, Eq. (IV.38). I have used Eqs. (IV.38), (IV.48) and (IV.50) to plot the curves in Fig. IV.13, showing that the Tomonaga-Luttinger liquid model predictions in the Tonks-Girardeau regime are valid for velocities v v F , as expected since the dynamical structure factor is well approximated by the TLL model at low energy only. The exact drag force is all the better approached as the potential is wide. It is always linear near the origin, but its slope depends on the barrier waist w.
Figure IV.13 - Drag force as a function of velocity for a Tonks-Girardeau gas and a Tomonaga-Luttinger liquid (K = 1) at T = 0, for barrier waists wk_F = 0 and 0.5.
Figure IV.14 - Drag force as a function of velocity for a Tonks-Girardeau gas and a Tomonaga-Luttinger liquid at w = 0, for T = 0 and T = 0.1 T_F.
Using basic algebraic manipulations on Eqs. (IV.52) and (IV.53), I have obtained a few general properties: (a) The ground state (p = 0, = 0) trivially corresponds to k = Q(γ), confirming that Q represents the edge of the Fermi surface. (b) The quasi-momentum k = -Q(γ) corresponds to the umklapp point (p = 2p F , = 0), always reached by the type II spectrum in the thermodynamic limit, regardless of the value of γ.
Figure IV.15 - Type I and type II excitation spectra of the Lieb-Liniger model for several interaction strengths, from the noninteracting Bose gas to the Tonks-Girardeau regime.
Figure IV.16 - Dimensionless inverse renormalized mass m/m* as a function of γ, compared to its exact Tonks-Girardeau value.
Figure IV.17 - Upper bounds for the validity range of the Tomonaga-Luttinger liquid framework around the umklapp point, in momentum and energy, as functions of γ.
Figure IV.18 - Maximum of the type II spectrum, ε_II(p_F; γ), in units of the Fermi energy.
Figure V.1 - Illustration of the dimensional crossover concept in energy-momentum space for a gas of noninteracting spinless fermions in a box with a variable transverse size.
… in arbitrary dimension. Using the general notation v_{F,d} = ħk_{F,d}/m, … (V.21)
Then, I have recovered these results by applying the cross-dimensional approach from dimension d to dimension (d + 1), which validates this technique once more. The drag force profiles are plotted simultaneously in Fig. V.3. The effect of dimension on the drag force is less impressive than on the dynamical structure factor.
Figure V.3 - Dimensionless drag force due to an infinitely thin potential barrier as a function of the flow velocity in dimensions 1, 2 and 3; all saturate at supersonic velocities.
Figure V.5 shows the section of the dynamical structure factor S_{1,h.o.}(q = k_F, ω; r), as a function of energy, at varying size r of the probed region, at q = k_F. In essence, to get
Figure V.6 - Definition domain of the dynamical structure factor of a Fermi gas in the plane (q, ω): the region accessible in any integer dimension, the region specific to d > 1, and the linearized 1D Tomonaga-Luttinger spectrum.
Figure V.7 - Lower boundary of the definition domain of the dynamical structure factor of a Q1D gas with three modes: Tomonaga-Luttinger prediction versus exact solution.
Figure V.8 - Section of the dynamical structure factor S(q, ω = 0.1 ω_F) as a function of q: exact 2D result versus the M-TLM prediction, with a zoom on the backscattering region near q = 2k_F.
( +y)q
e
L T = β v s plays the role of a thermal length. I define K(b) = ∞ 0 dy y e iby e y -1 , (A.30) and find that K(b) + K(-b) = +∞ 0 dy y(e iby + e -iby )e -y iby e -y e -yn ib -(n + 1) + e -iby e -y e -yn -ib -(n + 1)
x→0 1/x and |Γ(iy)| 2 = π/[y sinh(πy)] [449].
term of Eq. (II.73).
(A. 50 )F 1 (= T 1 + T 2 .( 1 - 2 1
50112121 Denoting a = e -i 2π L v and b = e 2π L T L n , k ) n 1 n (a n -1) + (a -n -1)b -n -k ) n -(b -k ) n ] + -1 b -k ) n -(b -k ) n ] -k ) n -(b -k ) n + (a -1 b -k ) n -(b -k ) n ] ln(1 -z), |z| < 1, (A.52)and doing a bit of algebra, I find-T 2 = +∞ k=1 ln(1 -ab -k ) -ln(1 -b -k ) + ln(1 -a -1 b -k ) -ln(1 -b -k ) = ln +∞ k=1 ab -k )(1 -a -1 b -k ) (1 -b -k ) (πz) sinh 2 (kπλ) = θ 1 (πz, e -πλ ) sin(πz)θ 1 (0, e -πλ ) (A.54)allows to conclude after straightforward algebraic manipulations. Exact mapping from the Lieb-Liniger model onto the circular plate capacitor
g(z; α, λ) = 1 + (λ/π) ∫_{-1}^{1} dy α g(y; α, λ)/[α² + (y - z)²], -1 ≤ z ≤ 1. (B.2)
This equation turns out to become the Lieb equation (III.32) when λ = 1, as first noticed by Gaudin, and maps onto the super Tonks-Girardeau regime when λ = -1. However, the relevant physical quantities are different in the two problems, and are obtained at different steps of the resolution.
At small α, the capacitance is expected to take the form C(ε) = Σ_{i,j} c_{ij} ε^i log^j(ε), (B.5) where traditionally in this community 2ε = α, and C = C/(4πε_0 R) represents the geometrical capacitance. It is known that c_12 = 0 [Soibelmann], but higher-order terms are not explicitly known. Rigorous bounds on C have also been derived [459].
At large α, i.e. for distant plates, many different techniques have been considered. Historically, Love used the iterated kernel method. Injecting the right-hand side of Eq. (B.2) into itself and iterating, the solution is expressed as a Neumann series [Love],
g(z; α, λ) = 1 + Σ_{n≥1} λⁿ ∫_{-1}^{1} dy K_n(y - z; α),
where K_1(x; α) = (1/π) α/(α² + x²) is the kernel of Love's equation, which is a Cauchy law (or a Lorentzian), and K_{n+1}(y - z; α) = ∫_{-1}^{1} dx K_1(y - x; α) K_n(x - z; α) is the (n + 1)-th order iterated kernel.
In Fig. B.1, I show several approximations of the geometric capacitance as a function of the aspect ratio α. In particular, based on an analytical asymptotic expansion, I have proposed a simple approximation in the large gap regime, namely C(α) ≃ … (B.10)
Figure B.1 - Geometrical dimensionless capacitance C of the parallel plate capacitor as a function of its aspect ratio α: infinite-gap and contact approximations compared to Eqs. (B.5) and (B.10) and to the numerical solution of Eqs. (B.1) and (B.2).
+∞ n=0 a 1 - 1 1 - 1
n=01111 2n (α)P 2n (z). (B.[START_REF] Witten | Three Lectures On Topological Phases Of Matter[END_REF] Under the assumption that α > 2, since (y, z) ∈ [-1, 1] 2 , the Lorentzian kernel in Eq.(III.32) can be expanded as: -1) j z 2k-j .(B.14) Thus, the combination of Eqs. (III.32), (B.13) and (B.14) yields dy y j P 2n (y). (B.16) Due to the parity, F j 2n = 0 if and only if j is even. An additional condition is that j ≥ n [449]. Taking it into account and renaming mute parameters (k ↔ n) yields +∞ n=0 a 2n (α)P 2n (z)orthogonality and normalization of Legendre polynomials, dz P 2i (z)P 2j (z) = δ i,j 2 4j +1 , (B.18) allow to go further. Doing 1 -1 dz P 2m (z)× Eq. (B.17) yields: m,0 . (B.20) Then, from equation 7.231.1 of [449] and after a few lines of algebra, F 2l 2m = 2 2m+1 (2l)!(l + m)! (2l + 2m + 1)!(l -m)! . (B.21) Inserting Eq. (B.21) into Eq. (B.20) yields after a few simplifications:2a 2m (α) 4m + 1 -n+m α 2(n+m)+1 C m,n,j,k a 2k (α) = 1 π δ m,0 (B.22)whereC m,n,j,k = 2 2k+1 (j + k)! (2j + 2k + 1)!(j -k)! 2 2m+1 (n + 2m -j)!(2n + 2m)! (2n + 4m -2j + 1)!(n -j)! . (B.23)To obtain a finite set of equations, I cut off the series in Eq. (B.13) at an integer value M ≥ 0. The infinite set of equations (B.22) truncated at order M can then be recast into a matrix form:
k α 2k z 2(m-k) , (B.[START_REF] Kollath | Spin-Charge Separation in Cold Fermi Gases: A Real Time Analysis[END_REF] then recast into the formI n (α) = +∞ i=0 d 2i,n (α)z 2i (B.28) using properties of the Taylor expansions of the functions involved. This tranforms the Lieb equation into +∞ n=0 c 2n (α) z 2n -
d
2n,i (α)c 2i;M (α) = δ n,
φ_j(x) = (2^j j! √π)^{-1/2} e^{x²/2} (-d/dx)^j e^{-x²}, (B.32)
I shall prove that σ(x; N) ∼_{N→+∞} (1/π)√(2N - x²). (B.33)
I recall the orthogonality and normalization condition: ∫_{-∞}^{+∞} φ_j(x) φ_k(x) dx = δ_{j,k}. (B.34)
Combining these relations with the local momentum p_F(x) = √(2N - 1 - x²) and summing over the N lowest states yields the result. A more formal derivation relies on the Christoffel-Darboux formula,
Σ_{j=0}^{N-1} φ_j(x) φ_j(y) = √(N/2) [φ_N(x) φ_{N-1}(y) - φ_N(y) φ_{N-1}(x)]/(x - y),
whose confluent (x → y) limit reads
σ(x; N) = Σ_{j=0}^{N-1} φ_j²(x) = N φ_N²(x) - [N(N + 1)]^{1/2} φ_{N-1}(x) φ_{N+1}(x), (B.39)
and the asymptotics of φ_{N-1}, φ_N and φ_{N+1}.
…by a fluid in equilibrium, the radiation is scattered by the density fluctuations of the fluid, and therefore the operator A(r) is proportional to the local density fluctuation δn(r) = n(r) - ⟨n⟩.
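A quick numerical illustration of Eq. (B.33): the sketch below evaluates σ(x; N) through the stable three-term recurrence for the normalized oscillator functions and compares it with the semicircle profile √(2N - x²)/π; the value of N and the sample points are arbitrary choices.

```python
import numpy as np

def density_profile(x, N):
    """sigma(x; N) = sum_{j=0}^{N-1} phi_j(x)^2, using the stable recurrence
       phi_{j+1} = sqrt(2/(j+1)) x phi_j - sqrt(j/(j+1)) phi_{j-1}."""
    phi_prev = np.pi**(-0.25) * np.exp(-x**2 / 2.0)    # phi_0
    sigma = phi_prev**2
    phi = np.sqrt(2.0) * x * phi_prev                   # phi_1
    for j in range(1, N):
        sigma += phi**2
        phi, phi_prev = (np.sqrt(2.0/(j+1))*x*phi - np.sqrt(j/(j+1))*phi_prev, phi)
    return sigma

if __name__ == "__main__":
    N = 100
    x = np.linspace(-0.9*np.sqrt(2*N), 0.9*np.sqrt(2*N), 7)
    exact = density_profile(x, N)
    lda = np.sqrt(2*N - x**2) / np.pi
    for xi, e, l in zip(x, exact, lda):
        print(f"x = {xi:+7.3f}: sum phi_j^2 = {e:.4f}, sqrt(2N - x^2)/pi = {l:.4f}")
```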
e iωt δn(q, t)δn(-q, 0) = V dr e -iqr +∞ -∞ dt e iωt δn(r, t)δn(0,
e
† k c k = n F (k)δ k,k , fermionic commutation relations, Θ(-x) = 1-Θ(x) and specialization to T = 0 yield S T G (q, ω) = -i[(k-k )x-(ω k -ω k )t] Θ(k F -|k|)Θ(|k |-k F ) F -|k|)Θ(|k |-k F )2πδ(k -k-q)2πδ[ω-(ω k -ω k )].(C.10)Taking the thermodynamic limit, where the sums become integrals,S T G (q, ω) = dk Θ(k F -|k|)Θ(|k+q|-k F )δ[ω -(ω k+q -ω k )]. (C.[START_REF] Francesco | Conformal Field theory[END_REF]
e i(ω-qvs)t +∞ -∞ du e -iqu (u + i ) 2 = 2πδ(ω -qv s ) +∞ -∞ du e -iqu (u + i ) 2 . (C.15)Then, integration by parts and a classical application of the residue theorem on two circular contours in the upper and lower half of the complex plane yield
e -iqx cos(2mk F x) [(x-v s t+i )(x+v s t-i )] -Km 2 . (C.21)A natural change of coordinates is given by the light-cone ones, u = x-v s t and v = x+v s t. The Jacobian of the transformation is
Figure C. 1 -
1 Figure C.1 -The left panel represents the integration contour used in Eq. (C.24), where the black line coincide with the x-axis, the branch-cut with the y-axis, and the pole (red) is at i /2. It contains the Hänkel contour defined in the right panel, rotated by π/2 and followed backward.
2 - α = 2 H 2 Γ 26 )C. 4 C. 4 . 1
2α2226441 t) -z e -t , (C.[START_REF] Piroli | Exact dynamics following an interaction quench in a one-dimensional anyonic gas[END_REF] where Γ(z) is the analytic continuation of the Euler Gamma function in the complex plane and H is the Hänkel contour, sketched in the right panel of Fig. C.1. A rotation of -π/2 maps the green contour of the left panel onto the Hänkel one. Then, it has to be followed backward, and I will then denote it by BH. This transformation yields, after a few algebraic manipulations,J(q; α) = -i BH dz e qz -iz -i iq α-1 (-i) -α e -q dz e -z (-z) -α = i α 2πq α-1 e -q Still according to [449], +∞ -∞ dx (β -ix) -ν e -ipx = 2πp ν-1 e -βp Γ(ν) Θ(p), R(ν) > 0, R(β) > 0, (C.27) that yields the same result even more directly. It is then only a straightforward matter of algebra and combinations to finish the derivation. Dynamical structure factor of a Tomonaga-Luttinger liquid in the thermodynamic limit at finite temperature In this appendix, I give a quite detailed derivation of Eqs. (IV.42) and (IV.43). To gain clarity, I split this derivation into several parts, that correspond to the main intermediate results. First contribution to the dynamical structure factor of a Tomonaga-Luttinger liquid at finite temperature
C. 4 . 2 2 Γ
422 Second contribution to the dynamical structure factor at finite temperature This part of the calculation is, by far, the most difficult. Straightforward algebraic transformations and rescalings show that the integral one needs to evaluate in this situation is L(a) = +∞ -∞ du e -iau sinh -K (u). (C.31) I have done it stepwise, using the property Γyz + xi 2y Γ(1+z) = (2i) z+1 y Γ 1 + yz -xi 2y +∞ 0 dt e -tx sin z (ty), (C.32) valid if R(yi) > 0 and R(x-zyi) > 0, misprinted in [449] so that I first needed to correct it. A few more algebraic manipulations, such as sinh(x) = i sin(-ix) and splitting the integral, yield the more practical property +∞ 0 du e -iau sinh -K (u) = 2 K-1 Γ K+ia
2 s K- 1 e
21 [449] |Γ(x+iy)| y→+∞ √ 2π|y| x-1 2 e -π 2 |y| to obtain: S 1,T →0 (q, ω) = B 1 (K) ω 2 -(q-2k F ) 2 v -π| β 4π [ω+(q-2k F )vs]| e -π| β 4π [ω-(q-2k F )vs]| , (C.36)whereB 1 (K) = 2(n 0 ) 2K (2n 0 v s ) 2(K-1) π 2 Γ(K) 2
Eq. (A.26) I obtain Eq. (IV.40). In Eq. (C.36), the exponentials vanish except if ω ≥ |q -2k F |v s . Then they equal one, and I recover Eq. (IV.39) as expected.
S 1 ,
1 sinh π L T (x-v s t) sinh π L T (x+v s t) .(C.38)so that, after a few algebraic transformations, it boils down to evaluating the integral 1 -e -x = πcotan(πµ), 0 < (µ) < 1, (C.40)and complex integration, yieldingG(a) = P.P. K=1,T >0 (q, ω) = (k F ) 2 2v s e β ω 2 cosh L T 4vs [ω+(q-2k F )v s ] cosh L T 4vs [ω-(q-2k F )v s ], (C.[START_REF] Bednorz | Possible high Tc superconductivity in the Ba-La-Cu-O system[END_REF] consistent with the case T = 0, and with the general case at K = 1.
1 0
1 [449] B(b, c -b) 2 F 1 (a, b; c; z) = dx x b-1 (1 -x) c-b-1 (1 -zx) -a , |z| < 1, (c) > (b), (C.47) (thus for v/v s < 1/3) where 2 F 1 is the hypergeometric function defined as 2 F 1 (a, b; c; z) = +∞ n=0 (a)n(b)n (c)n z n
. (IV.48) after putting back all prefactors.
1 . (C.49) Then, the Beta function naturally appears through the property [449] b a dx (x -a) µ-1 (b -x) ν-1 = (b -a) µ+ν-1 B(µ, ν), b > a, (µ) > 0, (ν) > 0, (C.50) whence
), keeping only the lowest orders in |x ± v F t|, I finally recover Eq. (II.45). Had I not previously used the Galilean invariance argument, the condition v s = v F would have been imposed at this stage for consistency.
natural question is to what extent this expression captures the exact dynamics of the
Tonks-Girardeau gas. Since Eq. (II.28) involves special functions, it is not obvious whether
it is consistent with Eq. (II.45). However, considering a specific point far from the mass
shell, i.e. such that k F |x ± v F t| 1, using the expansions S(z) z 1 1 2 -1 √ 2πz cos(z 2 ) and
C(z) z 1 1 2 + 1 √ 2πz sin(z 2
3 in a 3 should be irrational. One can then think of ζ(3), where ζ is the Riemann zeta function. Indeed, the previous coefficient, a 2 , can be written as (ζ(2) -1) 1 π 2 , and ζ(3) is irrational [274]. My conjecture Eq. (III.44) has inspired Prolhac, who guessed
a 4 = a 3 3π = 1 8 ζ(3) - 1 6 1 π 4 (III.45)
and even
a 5 = - 45 1024 ζ(5) + 15 256 ζ(3) - 1 32 1 π 5 , (III.46)
In this chapter, I characterize a strongly-correlated ultracold Bose gas confined to a one-dimensional ring, through the analysis of its static equilibrium correlation functions. Such a gas is well described by the Lieb-Liniger model, which corresponds to contact interactions and periodic boundary conditions. This model is arguably the simplest among low-dimensional quantum field theories, and one of the most studied. It is at once integrable, equivalent to the Tonks-Girardeau gas in its strongly-interacting regime, a member of the universality class of Tomonaga-Luttinger liquids at low energy, where it can be described by a conformal field theory of unit central charge. These properties, which bring into play all of the analytical methods presented in the previous chapter, allow for an in-depth theoretical study. The present chapter is organized as follows: first, I present the model
I then add a degree of complexity to the problem and turn to non-local correlation functions. I focus on the one-body correlation function, whose short- and long-distance asymptotic behaviors are known to high orders in the Tonks-Girardeau regime. In the general case, at finite interaction strength, the theory of Tomonaga-Luttinger liquids yields the long-distance asymptotics. At short distance, I construct the expansion by relying on the Bethe Ansatz and on identities that I have called connections. The Fourier transform of the one-body correlation function, called the momentum distribution, is measured by ballistic expansion in ultracold-atom experiments. For once, both its short- and large-momentum expansions can be computed exactly, and the leading term at high momentum turns out to be universal, since it decays as the inverse fourth power of the momentum. Its numerical prefactor, however, is not universal and depends on the interaction strength. It is Tan's contact, on which I rely to illustrate an extension of the Bethe Ansatz to an inhomogeneous gas in a longitudinal harmonic trap. Although this system is not integrable, I solve it through a judicious combination with the local-density approximation.
Throughout the discussion, I leave aside technical details, side issues and alternative approaches. Some of them are nevertheless addressed in a series of appendices attached to this chapter.
In the Tonks-Girardeau regime, I was able to evaluate the dynamical structure factor, and then the drag force within linear response. In the thermodynamic limit, the region of energy-momentum space in which excitations can take place forms a continuum. The latter is separated from zero energy by a region devoid of excitations, except at the origin and at the so-called umklapp point, whose vicinity dominates the dissipation processes. To obtain the dynamical structure factor at finite temperature, I first had to compute the temperature dependence of the chemical potential. The effect of temperature on the dynamical structure factor essentially consists in a broadening of the domain of allowed excitations, down to negative energies that correspond to emission towards the environment, and in a reduction of the probability of excitations at low wavevector. One also observes that phonons become less and less well defined as temperature increases, to the point of no longer being identifiable. As far as the drag force is concerned, at zero temperature the linear response formalism predicts a profile that is linear in velocity as long as the latter is lower than the Fermi velocity, which is also the sound velocity, followed by a saturation in the supersonic regime, which seems unrealistic. This problem is solved by taking into account the finite width of the potential barrier, modeled as a Gaussian. Then, at large velocity the drag force collapses, which leads to a quasi-superfluid regime included
In Chapter V, I turned to the progressive change of dimension, from a one-dimensional gas towards higher dimensions. This smooth transition can be realized in various ways, for instance by coupling a large number of one-dimensional gases to one another, by using internal degrees of freedom, or by releasing a transverse confinement. I focused on the latter case, illustrating it through the example of an ideal Fermi gas in a box of variable size. First, I obtained the dynamical structure factor as a function of dimension. Its general expression shows that the region forbidden in energy space in one dimension becomes accessible as soon as the dimension increases. The transition from one to two dimensions is particularly interesting from this point of view. I followed it from beginning to end through the study of the multi-mode structure that appears when
Acknowledgments
This manuscript being an advanced milestone of a long adventure, there are so many people I would like to thank for their direct or indirect contribution to my own modest achievements that I will not even try and list them all. May they feel sure that in my heart at least, I forget none of them.
First of all, I am grateful to the hundreds of science teachers who contributed to my education, and in particular to Serge Gibert, Thierry Schwartz, Jean-Pierre Demange and Jean-Dominique Mosser, who managed to trigger my sense of rigour, and to Alice Sinatra for her precious help in crucial steps of my orientation. Thanks to those who have devoted their precious time to help my comrades and I get firm enough basis in physics and chemistry to pass the selective 'agrégation' exam, most peculiarly Jean Cviklinski, Jean-Baptiste Desmoulins, Pierre Villain and Isabelle Ledoux.
I would like to thank my internship advisors for their patience and enthusiasm. Nicolas Leroy, who gave me a first flavor of the fascinating topic of gravitational waves, Emilia Witkowska, who gave me the opportunity to discover the wonders of Warsaw and cold atoms, for her infinite kindness, Anna Minguzzi and Frank Hekking who taught me the basics of low-dimensional physics and helped me make my first steps into the thrilling topic of integrable models. Thank to their network, I have had the opportunity to closely collaborate with Patrizia Vignolo, Vanja Dunjko and Maxim Olshanii. I would like to thank the latter for his interest in my ideas and his enthusiasm. Our encounter remains one of the best memories of my life as a researcher. I also appreciated to take part to the Superring project and discuss with the members of Hélène Perrin's team.
Appendix A
Complements to chapter II
A.1 Density correlations of a Tomonaga-Luttinger in the thermodynamic limit at zero temperature The aim of this section is to derive Eq. (II.41) in formalism of Tomonaga-Luttinger liquids. Elements of this derivation can be found in various references, see e.g. [START_REF] Giamarchi | Quantum Physics in One Dimension[END_REF].
As a first step, one needs to construct a convenient representation for the density in the continuum. In first quantization, the granular density operator reads
where {x i } i=1,...,N label the positions of the point-like particles. This expression is not practical to handle, and needs coarse-graining in the thermodynamic limit. To simplify it, one defines a function ζ such that ζ(x i ) = iπ at the position of the i th particle and zero otherwise, and applies the property of the δ composed with a function f :
Appendix C
Complements to chapter IV C.1 Around the notion of dynamical structure factor
This appendix, mostly based on [START_REF] Pottier | Nonequilibrium Statistical Physics, Linear Irreversible Processes[END_REF], gives more details about the notion of dynamical structure factor.
A common method to carry out measurements on a physical system is to submit it to an external force and to observe the way it reacts. For the result of such an experiment to adequately reflect the properties of the system, the perturbation due to the applied force must be sufficiently weak. In this framework, three distinct types of measurement can be carried out: actual response measurements, susceptibility measurements that consist in determining the response of a system to a harmonic force, and relaxation measurements in which, after having removed a force that had been applied for a very long time, one studies the return to equilibrium. The results of these three types of measurements are respectively expressed in terms of response functions, generalized susceptibilities and relaxation functions. In the linear range, these quantities depend solely on the properties of the unperturbed system, and each of them is related to the others.
The object of linear response theory is to allow, for any specific problem, to determine response functions, generalized susceptibilities, and relaxation functions. In the linear range, all these quantities can be expressed through equilibrium correlation funtions of the relevant dynamical variables of the unperturbed system. The corresponding expressions constitute the Kubo formulas.
Let us consider an inelastic scattering process in the course of which, under the effect of an interaction with radiation, a system at equilibrium undergoes a transition from an initial state |i to a final state |f . The corresponding energy varies from i to f , whereas the radiation energy varies from E i to E f . Total energy conservation implies that E i + i = E f + f . The energy lost by the radiation is denoted by ω = E i -E f = fi , so that absorption corresponds to ω > 0 and emission to ω < 0. I associate an operator A(r) to the system-radiation interaction. For instance, in the case of scattering of light |
This dissertation is organized as follows.
In Chapter 2, an introduction to brain cancer, radiation techniques and the effects of radiation on biological tissues will be presented. As a part of radiation treatment planning, the problem of manual segmentation of the organs at risk and the need to automate this step will be introduced.
In Chapter 3, the relevant literature on the state-of-the-art segmentation methods for brain structures on MRI is presented. Particularly the development of atlas-based methods, statistical models of shape and texture, deformable models and machine learning techniques are reviewed. To reduce the number of meaningless papers, only works handling with brain structures that are included in the RTP are considered.
In Chapter 4, the main contributions of this work are disclosed. First section brings to the reader a theoretical introduction of machine learning basis terms. Concepts such as classification or data representation are briefly explained. Next, historical context, advantages and explanation of deep learning, and particularly the technique employed in this thesis, are afforded. Afterwards, the proposed features to segment brain structures in this dissertation are detailed. Last two sections of this chapter presents the methodological processes performed in this thesis to conduct both training and classification.
In Chapter 5, we detail the materials employed throughout this thesis. Image modalities and their characteristics, as well as volume contours used as reference are introduced. Afterwards, strategies and metrics used to evaluate the performance of the proposed classification system are presented.
In Chapter 6, the experimental set-up and the corresponding results are shown. A comparison with other works is also conducted in this chapter.
In Chapter 7, conclusions on the methods presented in this thesis, as well as guidelines for future work, are discussed.
Finally, Chapters 8 and 9 present the scientific dissemination produced by this work and a French summary of the thesis, respectively.

"The most difficult thing is the decision to act, the rest is merely tenacity."
Amelia Earhart
This chapter provides an overview of the state of the art in the field of segmentation of brain structures. Methods referenced in this chapter are applied in various fields and are not restricted to radiotherapy. However, despite the large number of techniques proposed to segment different regions of the brain, only those approaches focusing on the critical structures detailed in 2.3.1 are included.
Vers la segmentation automatique des organes à risque dans le contexte de la prise en charge des tumeurs cérébrales par l'application des technologies de classification de deep learning

Résumé : Les tumeurs cérébrales sont une cause majeure de décès et d'invalidité dans le monde, ce qui représente 14,1 millions de nouveaux cas de cancer et 8,2 millions de décès en 2012. La radiothérapie et la radiochirurgie sont parmi l'arsenal de techniques disponibles pour les traiter. Ces deux techniques s'appuient sur une irradiation importante nécessitant une définition précise de la tumeur et des tissus sains environnants. Dans la pratique, cette délinéation est principalement réalisée manuellement par des experts avec éventuellement un faible support informatique d'aide à la segmentation. Il en découle que le processus est fastidieux et particulièrement chronophage avec une variabilité inter ou intra observateur significative. Une part importante du temps médical s'avère donc nécessaire à la segmentation de ces images médicales. L'automatisation du processus doit permettre d'obtenir des ensembles de contours plus rapidement, reproductibles et acceptés par la majorité des oncologues en vue d'améliorer la qualité du traitement. En outre, toute méthode permettant de réduire la part médicale nécessaire à la délinéation contribue à optimiser la prise en charge globale par une utilisation plus rationnelle et efficace des compétences de l'oncologue.
De nos jours, les techniques de segmentation automatique sont rarement utilisées en routine clinique. Le cas échéant, elles s'appuient sur des étapes préalables de recalages d'images. Ces techniques sont basées sur l'exploitation d'informations anatomiques annotées en amont par des experts sur un "patient type". Ces données annotées sont communément appelées "Atlas" et sont déformées afin de se conformer à la morphologie du patient en vue de l'extraction des contours par appariement des zones d'intérêt. La qualité des contours obtenus dépend directement de la qualité de l'algorithme de recalage. Néanmoins, ces techniques de recalage intègrent des modèles de régularisation du champ de déformations dont les paramètres restent complexes à régler et la qualité difficile à évaluer. L'intégration d'outils d'assistance à la délinéation reste donc aujourd'hui un enjeu important pour l'amélioration de la pratique clinique.
L'objectif principal de cette thèse est de fournir aux spécialistes médicaux (radiothérapeute, neurochirurgien, radiologue) des outils automatiques pour segmenter les organes à risque des patients bénéficiant d'une prise en charge de tumeurs cérébrales par radiochirurgie ou radiothérapie.
Pour réaliser cet objectif, les principales contributions de cette thèse sont présentées sur deux axes principaux. Tout d'abord, nous considérons l'utilisation de l'un des derniers sujets d'actualité dans l'intelligence artificielle pour résoudre le problème de la segmentation, à savoir le "deep learning". Cet ensemble de techniques présente des avantages par rapport aux méthodes d'apprentissage statistiques classiques (Machine Learning en anglais). Le deuxième axe est dédié à l'étude des caractéristiques d'images utilisées pour la segmentation (principalement les textures et informations contextuelles des images IRM). Ces caractéristiques, absentes des méthodes classiques d'apprentissage statistique pour la segmentation des organes à risque, conduisent à des améliorations significatives des performances de segmentation. Nous proposons donc l'inclusion de ces fonctionnalités dans un algorithme de réseau de neurone profond (deep learning en anglais) pour segmenter les organes à risque du cerveau.
Nous démontrons dans ce travail la possibilité d'utiliser un tel système de classification basé sur des techniques de "deep learning" pour ce problème particulier. Finalement, la méthodologie développée conduit à des performances accrues tant sur le plan de la précision que de l'efficacité.
Mots clés : Segmentation des organes à risque, radiochirurgie, radiothérapie, réseau de neurones profond.
Towards automatic segmentation of the organs at risk in brain cancer context via a deep learning classification scheme

Brain cancer is a leading cause of death and disability worldwide; cancer as a whole accounted for 14.1 million new cases and 8.2 million deaths in 2012 alone. Radiotherapy and radiosurgery are among the arsenal of available techniques to treat it. Because both techniques involve the delivery of a very high dose of radiation, the tumor as well as the surrounding healthy tissues must be precisely delineated. In practice, delineation is performed manually by experts, with very little machine assistance. Thus, it is a highly time-consuming process with significant variation between the labels produced by different experts. Radiation oncologists, radiology technologists, and other medical specialists therefore spend a substantial portion of their time on medical image segmentation. If by automating this process it is possible to achieve a more repeatable set of contours that can be agreed upon by the majority of oncologists, this would improve the quality of treatment. Additionally, any method that can reduce the time taken to perform this step will increase patient throughput and make more effective use of the skills of the oncologist.
Nowadays, automatic segmentation techniques are rarely employed in clinical routine. When they are, they typically rely on registration approaches. In these techniques, anatomical information is exploited by means of images already annotated by experts, referred to as atlases, which are deformed and matched to the patient under examination. The quality of the deformed contours directly depends on the quality of the deformation. Nevertheless, registration techniques encompass regularization models of the deformation field, whose parameters are complex to adjust and whose quality is difficult to evaluate. The integration of tools that assist in the segmentation task is therefore highly desirable in clinical practice.
The main objective of this thesis is therefore to provide radio-oncology specialists with automatic tools to delineate the organs at risk of patients undergoing brain radiotherapy or stereotactic radiosurgery. To achieve this goal, the main contributions of this thesis are presented along two major axes. First, we consider the use of one of the latest hot topics in artificial intelligence to tackle the segmentation problem, i.e. deep learning. This set of techniques presents some advantages with respect to classical machine learning methods, which will be exploited throughout this thesis. The second axis is dedicated to the study of proposed image features, mainly associated with texture and contextual information of MR images. These features, which are not present in classical machine learning based methods to segment brain structures, led to improvements in segmentation performance. We therefore propose the inclusion of these features into a deep network.
We demonstrate in this work the feasibility of using such a deep learning based classification scheme for this particular problem. We show that the proposed method leads to high performance, both in accuracy and efficiency. We also show that the automatic segmentations provided by our method lie within the variability of the experts. Results demonstrate that our method not only outperforms a state-of-the-art classifier, but also provides results that would be usable in radiation treatment planning.
To my wife Silvia, and my sons Eithan and Noah

"Shoot for the moon, even if you fail, you'll land among the stars."
I would like to thank all the people who made my stay in Lille one of the most cherished experiences of my life. Special mention goes to my enthusiastic advisor Dr. Maximilien Vermandel, whose advice and support have been invaluable. My PhD has been an amazing experience, and your advice on research as well as on my current and future career has been priceless. Similarly, profound gratitude goes to Laurent Massoptier, who supervised me throughout his time at the company where I worked. I am indebted to him for his faith in my work and also for his support. I would also like to thank my committee members, Dr. Pierre Jannin, Prof. Su Ruan, Dr. Nacim Betrouni, Dr. Albert Lisbona and Prof. Christian Duriez, for serving as jury members of my PhD dissertation. Special mention goes to Dr. Pierre Jannin, who has served as president of the PhD monitoring committee since the beginning and whose guidance helped to improve this work.
I would especially like to thank the experts who participated in this thesis by manually contouring everything we needed. In particular, I would like to express my gratitude to Prof. Nicolas Reyns for collecting data for my Ph.D. thesis.
I would like to express my appreciation to all those persons, companies and departments who have offered me their time during the course of this research. First, I would like to give my sincere thanks to AQUILAB, especially to David Gibon and Philippe Bourel, for giving me the opportunity to work in their company, where I have grown both professionally and personally. In addition, there are two people that I need to mention especially, Dr. Romain Viard and Dr. Hortense A. Kirisli. Their friendship and unselfish help enabled me to improve as a researcher. I owe them my sincere gratitude for their generous and timely help. Last, I would like to thank all the partners of the FP-7 European project SUMMER, particularly the Department of Radiation Oncology at the University Medical Center in Freiburg, Germany, and the Center for Medical Physics and Biomedical Engineering at the Medical University of Vienna in Vienna, Austria.
Since they are very important in my life, I would like to thank my parents, my sister and my family-in-law for their love and support. Special mention goes to my mother, whose support, patience and efforts kept me on the right path.
Finally, I thank with all my love my wife Silvia and my sons Eithan and Noah. Silvia's support, her encouragement, patience and unwavering love were undeniably the bedrock upon which the past seventeen years of my life have been built. I cannot imagine a more special soul-mate; I cannot imagine a better mother for my children. They give me the strength I need every day, and it is to them that this dissertation is dedicated.
The work presented in this dissertation has received funding from the European Union's Seventh Framework Programme for research, technological development and demonstration under grant agreement no PITN-GA-2011-290148.
Chapter 1
Overview
Context
Cancer is a leading cause of death and disability worldwide, accounting for 14.1 million new cancer cases and 8.2 million deaths in 2012 [1]. Cancer represents a group of common, non-communicable, chronic and potentially lethal diseases affecting most families in developed countries, and a growing contributor to premature death within the population of these countries [2]. Meanwhile, the annual incidence of cancer keeps rising, with an estimated 26 million new cases yearly by 2030 and a death toll close to 17 million people [3]. In particular, brain tumors are the second most common cause of cancer death in men aged 20 to 39 and the fifth most common cause of cancer among women aged 20 to 39 [4].
Among the available techniques to treat brain tumors, radiotherapy and radiosurgery have become part of the management of patients with brain tumors, complementing surgery or chemotherapy. During treatment, high-intensity radiation beams are delivered across the tissues to destroy the cancerous cells. However, when delivering radiation through the human body, side effects on normal tissues may occur. To limit the risk of severe toxicity to critical brain structures, i.e. the organs at risk (OARs), volume measurements and the localization of these structures are required. Among the available image modalities, magnetic resonance imaging (MRI) is extensively used to segment most of the OARs, which is done mostly manually nowadays. However, manual delineation of brain structures is prohibitively time-consuming and hardly reproducible in clinical routine [5,6], leading to substantial inconsistency in the segmentation.
Medical imaging is increasingly evolving towards higher resolution and throughput. The exponential growth of the amount of data in medical imaging and the usage of multiple imaging modalities have significantly increased the need for computer-assisted tools in clinical practice. Among them, automatic segmentation of brain structures has become a very important field of medical image processing research. A variety of techniques has been presented during the last decade to segment brain structures. In particular, structures involved in neurological diseases, such as Alzheimer's or Parkinson's disease, have held the attention of researchers. However, critical structures involved in the radiation treatment planning are rarely included in the evaluations. Even in the cases where they are analyzed, limited success has been reported. Nevertheless, the closely related fields of computer vision and machine learning offer a rich set of useful techniques for the medical imaging domain in general, and for segmentation in particular.
In this thesis, deep learning techniques are proposed as an alternative for the segmentation of the OARs, to address the problems of classical segmentation methods. Specifically, an unsupervised deep learning technique known as Stacked Denoising Auto-Encoders (SDAE) is proposed and evaluated. The application of SDAE to the segmentation of OARs in brain cancer makes it possible to (i) yield more accurate classification in more complex environments, (ii) achieve faster classification without sacrificing classification accuracy and (iii) avoid expensive registration stages.
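To make the idea behind SDAE more concrete, the following is a minimal sketch, not the implementation used in this thesis, of a single denoising auto-encoder layer and its unsupervised pre-training. The layer sizes, the corruption level and the stand-in patch data are hypothetical placeholders.

```python
# Minimal sketch of one denoising auto-encoder (DAE) layer, the building block of an SDAE.
import torch
import torch.nn as nn

class DenoisingAutoEncoder(nn.Module):
    def __init__(self, n_in, n_hidden, corruption=0.3):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_in, n_hidden), nn.Sigmoid())
        self.decoder = nn.Sequential(nn.Linear(n_hidden, n_in), nn.Sigmoid())
        self.corruption = corruption

    def forward(self, x):
        # Masking noise: randomly zero a fraction of the inputs, then reconstruct the clean input.
        mask = (torch.rand_like(x) > self.corruption).float()
        return self.decoder(self.encoder(x * mask))

# Stand-in for flattened, intensity-normalized MRI patches (hypothetical data).
patch_loader = torch.utils.data.DataLoader(torch.rand(1000, 27), batch_size=64)

dae = DenoisingAutoEncoder(n_in=27, n_hidden=100)   # e.g. flattened 3x3x3 voxel patches
optimizer = torch.optim.Adam(dae.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for x in patch_loader:                 # unsupervised pre-training on unlabeled patches
    optimizer.zero_grad()
    loss = loss_fn(dae(x), x)          # reconstruction target is the uncorrupted input
    loss.backward()
    optimizer.step()

# Stacking: the encoder output of this layer feeds the next DAE; after layer-wise
# pre-training, a softmax output layer is appended and the whole network is
# fine-tuned with the labeled voxels.
```

In this kind of scheme, the denoising objective forces each layer to learn features that are robust to small perturbations of the input, which is what motivates using it ahead of a supervised classifier.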
Contributions
The main contributions of this thesis can be summarized as follows:
• An unsupervised deep learning technique known as Stacked Denoising Auto-Encoders is proposed to segment the OARs in radiotherapy and radiosurgery, as an alternative to the conventional methods used to segment brain structures, i.e. atlas-based approaches.
• New features to include in the classification scheme are proposed, improving on the performance of previous works that used traditional features in machine learning classification schemes. Some of the proposed features have already been employed in neuroimaging; however, their use has been limited to applications other than the segmentation of the OARs.
• Some OARs that had not previously been segmented by published methods are included in the list of OARs considered.
• The proposed deep learning classification scheme is compared to a well-known state-of-the-art machine learning classifier, support vector machines.
• Apart from the previous technical validation of the presented approach, its performance is evaluated in clinical routine. Four observers contributed to this thesis by manually contouring all the OARs involved in the radiation treatment planning (RTP). Results provided by the automatic method were compared to the manual ones (see the overlap-metric sketch after this list).
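As an illustration of how such automatic-versus-manual comparisons are commonly quantified (a hedged sketch; the evaluation strategies and metrics actually used are detailed in Chapter 5), the Dice similarity coefficient between two binary masks on the same voxel grid can be computed as follows. The mask contents below are made up.

```python
# Minimal sketch: overlap between an automatic and a manual contour on the same voxel grid.
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks (1.0 = perfect overlap)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Example with hypothetical masks:
automatic = np.zeros((64, 64, 64), dtype=bool); automatic[20:40, 20:40, 20:40] = True
manual    = np.zeros((64, 64, 64), dtype=bool); manual[22:42, 20:40, 20:40] = True
print(dice(automatic, manual))   # approximately 0.9
```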
Chapter 2
Introduction
"What we do in life, echoes in eternity."
Brain Cancer
Cancer is a leading cause of death and disability worldwide, accounting for 14.1 million new cancer cases and 8.2 million deaths in 2012 [1]. Cancer represents a group of common, non-communicable, chronic and potentially lethal diseases affecting most families in developed countries, and a growing contributor to premature death within the population of these countries [2]. Meanwhile, the annual incidence of cancer keeps rising, with an estimated 26 million new cases yearly by 2030 and a death toll close to 17 million people [3]. In particular, brain tumors are the second most common cause of cancer death in men aged 20 to 39 and the fifth most common cause of cancer among women aged 20 to 39 [4].
A brain tumor is any mass caused by abnormal or uncontrolled growth of cells that arise within or adjacent to the brain. In general, these tumors are categorized according to several factors, including location, type of cells involved, and growth rate. Slowly growing tumors that lack the capacity to spread to distant sites and that originate in the brain itself are called primary brain tumors. On the other hand, rapidly growing tumors that can infiltrate surrounding tissues and spread to distant sites, i.e. metastasize, are called secondary brain tumors. While primary brain tumors can be benign or malignant, secondary brain tumors are always malignant. However, both types are potentially disabling and life-threatening. Because the space inside the skull is limited, their growth increases intracranial pressure, and may cause edema, reduced blood flow, and displacement, with consequent degeneration of healthy tissue that controls vital functions [7,8]. Additionally, metastatic or secondary brain tumors are the most common types of brain tumors, and occur in 10-15 % of people with cancer. Brain tumors are inherently difficult to treat given that the unique features of the brain can complicate the use of conventional diagnostic and treatment methods.
Brain Tumor Treatment
One of the consequences of a tumor growing into or pressing on a specific region of the brain is that it may stop that brain area from working the way it should. Consequently, independently of the nature of the tumor, both benign and malignant brain tumors cause signs and symptoms and require treatment.
Available Treatments
A variety of therapies are used to treat brain tumors. Treatment options mainly include surgery, radiotherapy, chemotherapy, and/or steroids. The selection of suitable treatments depends on a number of factors, which may include the type, location, size or grade of the tumor, as well as the patient's age and general health. Surgery is used to excise tumors, or parts of tumors, from specific locations directly using a knife. Chemotherapy uses chemical substances to treat cancer indirectly, since these drugs typically target all rapidly dividing cells, which include cancer cells. Radiation therapy (RT) uses radiation to kill tumor cells by permanently damaging their deoxyribonucleic acid (DNA).
Radiation Therapy
The term radiation therapy, or radiotherapy (RT), describes the medical application of ionizing radiation to control malignant cells by damaging their DNA [START_REF] Ward | DNA damage produced by ionizing radiation in mammalian cells: identities, mechanisms of formation, and reparability. Progress in nucleic acid research and molecular biology[END_REF]. Essential genetic instructions for the development and functioning of a cell are contained in the DNA. Cells are naturally programmed to correct damaged DNA up to a certain degree. Nevertheless, if the deterioration is substantial, the cell dies. It has been demonstrated, however, that healthy cells recover better than cancerous cells when they are exposed to degradation [START_REF] Joiner | Basic Clinical Radiobiology Fourth Edition[END_REF]. This radiobiological difference between healthy and cancerous cells is exploited by radiation therapy. An example of a brain tumor patient having been treated with RT is shown in figure 2.2. The three primary techniques for delivering radiation include: i) external or conventional radiotherapy, ii) internal radiotherapy or brachytherapy, and iii) stereotactic radiosurgery (SRS), sometimes referred to as gamma-knife. Each of them have been evaluated in the treatment of patients with brain tumors and may be utilized in different circumstances. While external radiotherapy is the conventional treatment for brain tumors, SRS has also become a standard procedure. Recently, SRS has been used in the treatment of many types of brain tumors, such as acoustic neuromas, meningiomas or trigeminal neuralgia, for example. Furthermore, it has been proven to be effective in the treatment of brain metastases. Since this work aims at improving the segmentation procedure in RT and SRS treatment planning, only these two techniques will be explained in the following section.
Conventional Radiotherapy
RT involves directing radiation beams from outside the body into the tumor. It implicates careful and accurate use of high intensity radiation beams to destroy the cancerous cells. Machines called linear accelerators (LINAC) produce these high energy radiation beams which penetrate the tissues and deliver the radiation dose deep in the body where the tumor is located. These modern machines and other state-of-the-art techniques have enabled radiation oncologists to enhance the ability to deliver radiation directly to the tumor whilst substantially reducing the side effects.
RT is typically delivered as an outpatient procedure for approximately over a six to eight week period, five days a week. Nevertheless, treatment schedule may vary across patients. The total procedure for each session typically takes between 10 and 20 minutes. This dose fractionation enables normal tissue to recover between two fractions reducing damage to normal tissues. RT begins with a planning session during which the radiation oncologist places marks on the body of the patient and takes measurements in order to align the radiation beam in the precise position for each treatment. During treatment, the patient lies on a table and the radiation is delivered from multiple directions to minimize the dose received by healthy tissues. A conventional RT and CyberKnife SRS treatment plan are shown in Figure 2.3 (Image courtesy of [START_REF] Floyd | Hypofractionated radiotherapy and stereotactic boost with concurrent and adjuvant temozolamide for glioblastoma in good performance status elderly patients-early results of a phase II trial[END_REF]).
Stereotactic Radiosurgery
Stereotactic techniques have been developed with the aim to deliver more localized irradiation and minimize the long-term consequences of treatment. They represent a refinement of conventional RT with further improvement in immobilization, imaging and treatment delivery. Basically, SRS is a single fraction RT procedure at high dose. For instance, while a dose of 2 Gy is delivered for a standard RT fraction, 12 to 90 Gy are delivered in a SRS fraction. Thus, the entire procedure occurs in one day, including immobilization, scanning, planning and the procedure itself.
When a patient undergoes SRS, the radiation dose delivered in one session is commonly lower than the total dose that would be given by following conventional RT. Nevertheless, with SRS the tumor receives a very high radiation dose at once. Since more radiation is delivered to surrounding healthy tissues when treatment is split into a few or several sessions instead of one, decreasing the number of sessions is important. Otherwise, it might result in more side effects, some of which may be permanent. Another consequence of splitting the treatment is that a reduced amount of radiation delivered to the tumor with each RT session, rather than a very large dose in a single session, may result in less tumor control and poorer outcomes than by employing SRS.
Even though RT and SRS are reported to have identical outcomes for particular indications [START_REF] Combs | Stereotactic radiosurgery (SRS)[END_REF] and regardless of similarities between their concepts, the intent of both approaches is fundamentally different. On the one hand, conventional RT relies on a different sensitivity of the target and the surrounding normal tissue to the total accumulated radiation dose [START_REF] Barnett | Stereotactic radiosurgery-an organized neurosurgery-sanctioned definition[END_REF]. On the other hand, SRS aims at destroying target tissue while preserving adjacent normal tissue. In other words, SRS offers the possibility of normal tissue protection by improved precision of beam application, while conventional RT is limited to the maximum dose that can be safely applied because of normal tissue constraints. Instead of many doses of radiation therapy to treat a targeted region, SRS usually consists of a single treatment of a very high dose of radiation in a very focused location. Due to this, not only higher total radiation doses but also higher single doses can be used, which results in increased biologically effective doses compared with conventional RT.
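To illustrate why the dose per fraction matters biologically, here is a hedged aside using the standard linear-quadratic model, which is not developed in this text: the biologically effective dose of a schedule with n fractions of d Gy is BED = n·d·(1 + d/(α/β)). The α/β value below is a generic tumor-like value chosen only for illustration.

```python
# Hedged illustration with the standard linear-quadratic model (not from this thesis).
def bed(n_fractions, dose_per_fraction_gy, alpha_beta_gy=10.0):
    """Biologically effective dose: n * d * (1 + d / (alpha/beta))."""
    return n_fractions * dose_per_fraction_gy * (1.0 + dose_per_fraction_gy / alpha_beta_gy)

print(bed(10, 2.0))   # 20 Gy in ten 2 Gy fractions   -> BED = 24.0 Gy
print(bed(1, 20.0))   # the same 20 Gy in one fraction -> BED = 60.0 Gy
```

For the same physical dose, delivering it in a single large fraction yields a much higher biologically effective dose, which is consistent with the point made above about SRS.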
Stereotactic radiosurgery is a well-described management option for most metastases, meningiomas, schwannomas, pituitary adenomas, arteriovenous malformations, and trigeminal neuralgia, among others [START_REF] Combs | Stereotactic radiosurgery (SRS)[END_REF][START_REF] Salles | Radiosurgery from the brain to the spine: 20 years experience[END_REF]. The popularity and acceptance of SRS procedures has led to the development of several SRS systems. Stereotactic boosts can be carried out in several modalities, such as Gamma Knife (Elekta AB, Stockholm, Sweden), and various LINAC-based systems such as CyberKnife (Accuray Inc., Sunnyvale, CA) or Novalis (BrainLAB, Feldkirchen, Germany).
2.2.2.2.1 Gamma Knife. The Gamma Knife (GK) is an instrument that was developed by surgeons in Sweden nearly five decades ago. A GK typically contains 201 beams of highly focused gamma rays that are directed so that they intersect at the precise location of the cancer. The patient is placed on a couch and then a specialized helmet (Fig. 2.5) is attached to the head frame. Holes in the helmet allow the beams to match the calculated shape of the tumor.
The most frequent use of the Gamma Knife has been for small, benign tumors, particularly acoustic neuromas, meningiomas, and pituitary tumors. In addition, the GK is also employed to treat solitary metastases and small malignant tumors with well-defined borders.
Linear accelerators (LINAC).
Although a linear accelerator (LINAC) is mostly employed for conventional RT treatments, some SRS systems have adopted its use to treat brain cancer patients. A LINAC customizes high-energy x-ray beams to conform to a defined tumor shape. The high-energy x-rays are delivered to the region where the tumor is present. The patient is positioned on a sliding bed around which the linear accelerator circles. The linear accelerator directs arcs of x-ray beams at the tumor. The pattern of the arc is computer-matched to the tumor's shape. This reduces the dose delivered to surrounding normal tissue. The LINAC can also perform SRS on larger tumors during multiple sessions, which is referred to as fractionated stereotactic radiotherapy.
Radiation Treatment Flowchart
Radiation treatment planning (RTP) is often organized in two phases: the planning and the delivery. Images are first acquired, the regions of interest are identified and the ballistic problem is solved for the acquired data. The planned treatment is then delivered to the patient.
Imaging
The CT image gives an estimation of the electronic density of the anatomy, which is still required to compute the dose distribution in the patient body. Since this image modality is affected by a lack of contrast between soft tissues, other images have sometimes to be acquired. Depending on the cancer type, other images such as positron emission tomography (PET) or magnetic resonance imaging (MRI) can be recommended. A detailed justification of the importance of MRI in brain cancer is explained in Section 2.4.
Delineation
Acquired images are used to determine the position of the target volumes (TVs) as well as the position of some specific organs. This task is usually performed by the physician. To determine the position of the TVs, the physician defines the borders of regions of interest on the image that corresponds to the gross tumor volumes (GTVs). This operation is known as delineation. It is generally performed by drawing contours on two dimensional (2D) slices extracted from the 3D CT. The delineated region of interest, is made up of several 2D shapes from different slices of the image. As there are assumptions of microscopic spread of cancerous cells around the tumors, margins are added around the GTV. The new volume, called clinical target volume (CTV), takes into account cancerous cells that may not be seen on the image. A third volume, the planning target volume (PTV), is created as an extension of the CTV and takes into account the uncertainties in planning and treatment delivery. It is a geometric volume designed to ensure that the prescribed radiation dose is correctly delivered to the CTV. Critical organs have to be delineated to ensure that they do not receive a higher-than-safe dose. There exist different specifications for each of the organs. In some cases, as for the PTV, an extra margin is added around the organ to take into account the uncertainties. Depending on the localization of the tumor, the delineation stage can take up to 2 hours.
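As a hedged illustration of the margin step described above (a sketch, not the clinical planning software), the CTV and PTV can be approximated on the planning grid by isotropically dilating the binary GTV mask. The margin values below are placeholders.

```python
# Minimal sketch: GTV -> CTV -> PTV by isotropic 3D dilation on the voxel grid.
import numpy as np
from scipy.ndimage import binary_dilation, generate_binary_structure

def expand(mask, margin_mm, voxel_mm):
    """Grow a binary mask by roughly margin_mm (diamond-shaped approximation on the grid)."""
    steps = max(1, int(round(margin_mm / voxel_mm)))
    structure = generate_binary_structure(3, 1)   # 6-connected neighbourhood
    return binary_dilation(mask, structure=structure, iterations=steps)

gtv = np.zeros((100, 100, 100), dtype=bool)
gtv[45:55, 45:55, 45:55] = True                   # stand-in for a delineated GTV
ctv = expand(gtv, margin_mm=5.0, voxel_mm=1.0)    # margin for microscopic spread (example value)
ptv = expand(ctv, margin_mm=3.0, voxel_mm=1.0)    # margin for planning/delivery uncertainties
```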
Dose prescription
During this stage the physician evaluates the tumor propagation in the patient body by using staging system such as "tumor-nodes-metastasis" (TNM) and makes the appropriate prescription. The prescription includes, among others, the number of fractions and the dose the tumour has to receive. Those prescriptions must follow the recommendations made by the International Commission on Radiation Units and Measurements (ICRU) (reports ICRU 50, ICRU 62 and ICRU 83).
Dose distribution computation
The delineated images and the prescriptions are then given to the physicist who computes the dose distribution. The physicist tries to find the best trade-off between maximizing the dose on the PTV and preserving the critical healthy structures.
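A common textbook way to formalize this trade-off, given here only as a sketch for intuition and not as the planning system referred to in the text, is a weighted quadratic objective that penalizes PTV under- and over-dosage together with any OAR dose above its tolerance. All inputs below are hypothetical.

```python
# Hedged sketch of a weighted planning objective over a 3D dose grid (all inputs hypothetical).
import numpy as np

def plan_objective(dose, ptv_mask, oar_masks, d_prescribed, d_max, w_ptv=1.0, w_oar=10.0):
    cost = w_ptv * np.mean((dose[ptv_mask] - d_prescribed) ** 2)   # PTV deviation term
    for name, mask in oar_masks.items():
        overdose = np.maximum(0.0, dose[mask] - d_max[name])       # only doses above tolerance
        cost += w_oar * np.mean(overdose ** 2)
    return cost
```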
Treatment Delivery
According to the treatment modality selected, treatment delivery will either be fractionated over several weeks, with one daily session excluding weekends, or delivered in a single session. Regardless of the treatment technique used, during each of these sessions the patient receives a fraction of the planned dose.

[Figure: delineated organs at risk, including the left optic nerve (5), chiasma (6), brainstem (7) and spinal cord (8).]
Effects of radiation on biological tissues
A major goal of RT is to deprive cancer cells of their multiplication potential and eventually kill them. However, radiation will also damage healthy cells. Hence, the main goal of a radiation therapy treatment becomes to deliver a sufficient dose to the tumor, while ensuring that the healthy tissue around the tumor is spared as much as possible. Particularly in treatments that include SRS, where the radiation dose is considerably higher, setup or localization errors might result in severe overdosing of the adjacent normal healthy tissue. This overexposure to radiation may lead to progressive and irreversible complications in the brain, which often occur months or years after treatment. The critical structures to be preserved are referred to as Organs at Risk (OARs).
To deliver the correct radiation dose, the radiation oncologist or neurosurgeon must consider not only the effects of treatment on the tumor but also the consequences on normal tissues. These two objectives cannot be fully achieved simultaneously, because both the probability of undesirable effects of radiotherapy on normal tissues and the probability of tumor control increase with the delivered dose (Figure 2.8). The two sigmoid curves respectively refer to the tumor control probability (TCP, grey curve) and to the normal tissue complication probability (NTCP, red curve). In clinical applications, the effectiveness of radiotherapy is measured by the therapeutic ratio (TCP/NTCP) which ideally should be as high as possible. Typical values in a good radiotherapy treatment are higher than 0.5 for the TCP, and lower than 0.05 for the NTCP.
Figure 2.8: The principle of therapeutic ratio. Grey curve represents the TCP, and red curve the probability of complications. The total clinical dose is usually delivered in 2Gy fractions in EBRT.
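For intuition, the two curves of Figure 2.8 can be mimicked with simple logistic dose-response functions. This is a hedged sketch only; the D50 and slope values below are made up and are not the models used clinically or in this work.

```python
# Hedged sketch of sigmoid TCP/NTCP curves and the therapeutic window they define.
import numpy as np

def logistic_response(dose, d50, gamma):
    # Probability rises from 0 to 1 around d50; gamma sets the normalized slope at d50.
    return 1.0 / (1.0 + np.exp(-4.0 * gamma * (dose - d50) / d50))

dose = np.linspace(0.0, 90.0, 200)
tcp  = logistic_response(dose, d50=55.0, gamma=2.0)   # tumor control probability (grey curve)
ntcp = logistic_response(dose, d50=75.0, gamma=3.0)   # normal tissue complication probability (red curve)
therapeutic_window = (tcp > 0.5) & (ntcp < 0.05)      # doses meeting the typical targets quoted above
```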
Organs at Risk
During radiotherapy treatment planning, the normal tissues and critical organs within the radiation beam and in the vicinity of the tumor receive a high radiation dose, which may sometimes be equal to the tumor dose and cause normal tissue injury.
The focus of this section is therefore on providing a background on the anatomy that underlies the images we are attempting to segment. Understanding the role of each of these organs is crucial to comprehend how an overdose may damage their primary functions, leading to a decrease in the patient's quality of life.
Brainstem
The brainstem, or brain stem, is one of the most basic regions of the human brain. Despite this, it is one of the most vital regions for our body's survival. It represents one of the three major parts of the brain, which controls many important body functions. In the anatomy of humans it is the posterior part of the brain, adjoining and structurally continuous with the spinal cord. It is usually described as including the medulla oblongata (myelencephalon), pons (part of metencephalon), and midbrain (mesencephalon). Though small, this is an extremely important part of the brain as the nerve connections of the motor and sensory systems from the main part of the brain to the rest of the body pass through the brainstem. This includes the corticospinal tract (motor), the posterior column-medial lemniscus pathway (fine touch, vibration sensation, and proprioception), and the spinothalamic tract (pain, temperature, itch, and crude touch). The brainstem also plays an important role in the regulation of cardiac and respiratory function. It also regulates the central nervous system, and is pivotal in maintaining consciousness and regulating the sleep cycle.
Eyes
Eyes are the organs of vision. They detect light and convert it into electrochemical impulses in neurons. The different parts of the eye allow the body to take in light and perceive objects around us in the proper color, detail and depth. This allows people to make more informed decisions about their environment. If a portion of the eye becomes damaged, one may not be able to see effectively, or may lose vision altogether.
Optic nerves join about half way between the eye and brain, and then split up again. The join is called the optic chiasm. At the join, signals from the 'nose' side of each eye's visual world swap sides and continue traveling along the opposite side from where they started. The two optic nerves then join on to the brain. The brain is split into two halves, right and left. This means all the signals from the visual world on the right hand side are now traveling in the left side of the brain. It also means that all the signals from the visual world on the left hand side are now traveling in the right half of the brain.
The information then travels to the many different specialized 'vision' areas of the brain. The main region of the brain that processes vision is at the back of the head; it is called the occipital lobe. The complete path that signals travel, from the retina to the optic nerve, then the optic chiasm and finally the occipital lobe, is called the visual pathway. There are two visual pathways, one on the right side of the brain and another on the left. All parts of both visual pathways need to be present and working for us to see normally.
Optic Nerves
The optic nerves are located in the back of the eyes. However, although the optic nerve is part of the eye, it is considered to be in the central nervous system. The optic nerve is the nerve that carries the neural impulses created by the retina to the brain, where this information is interpreted. At a structure in the brain called the optic chiasm, each optic nerve splits, and half of its fibers cross over to the other side. The crossing over of optic nerve fibers at the optic chiasm allows the visual cortex to receive the same hemispheric visual field from both eyes. Superimposing and processing these monocular visual signals allow the visual cortex to generate binocular and stereoscopic vision.
Any damage or disorder on the optic nerves will always impact vision in some way and might affect either one or both eyes.
Optic Chiasm
The optic chiasm is located in the forebrain directly in front of the hypothalamus. Crucial to sight, left and right optic nerves intersect at the chiasm. One-half of each nerve's axons enter the opposite tract at this location, making it a partial decussation.
We have seen that the optic nerves send electrical signals from each eye to meet in the brain at the optic chiasma. Here, the left visual signal from one eye is combined with the other eye and the same goes for the right visual signal. Now the signals split again. The right visual heads for the left brain and the left visual makes its way to the right side of the brain. This way, visual messages from both eyes will reach both halves of the visual cortex. The brain then merges the image into one image which you are looking out at the world with. This partial crossing of the nerve fibers at the optic chiasm (or chiasma) is the reason why we humans have stereoscopic sight and a sense of depth perception.
Pituitary Gland
The pituitary gland is a pea-sized structure located at the base of the brain, just below the hypothalamus and attached to it by nerve fibers. It is part of the endocrine system and produces hormones which control other glands as well as various bodily functions. The pituitary is divided into three sections known as the anterior, intermediate and posterior lobes, each of which produces specific hormones. The anterior lobe is mainly involved in development of the body, sexual maturation and reproduction. Hormones produced by the anterior lobe regulate growth and stimulate the adrenal and thyroid glands as well as the
ovaries and testes. It also generates prolactin, which enables new mothers to produce milk. The intermediate lobe of the pituitary gland releases a hormone which stimulates the melanocytes, cells which control pigmentation through the production of melanin. The posterior lobe produces antidiuretic hormone, which reclaims water from the kidneys and conserves it in the bloodstream to prevent dehydration. Oxytocin is also produced by the posterior lobe, aiding in uterine contraction during childbirth and stimulating the production of milk.
Hippocampus
The hippocampus is a small region of the brain that belongs to the limbic system and is primarily associated with memory and spatial navigation. The hippocampus is located in the brain's medial temporal lobe, underneath the cortical surface. Its structure is divided into two halves which lie in the left and right sides of the brain. The hippocampus is responsible for long-term, or "declarative" memory, and spatial navigation. Long term memory is like a compilation of data in our conscious memory and all of our gathered knowledge and experiences. The hippocampus is involved in the storage of all of this data.
In some neurological disorders, such as Alzheimer's disease, the hippocampus is one of the first regions of the brain to become damaged and this leads to the memory loss and disorientation associated with the condition. Individuals with hippocampal damage develop amnesia and may be unable to form new memories of the time or location of an event, for instance.
Dose limits
For the OARs typically involved in RTP some of the tolerance limits are presented in table 2.1.
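To make the use of such limits concrete (a hedged sketch, not part of the planning software described in this work), a dose-volume constraint such as "no more than 8 Gy to the hottest 0.2 cc of the optic nerve" can be checked from a dose grid and an organ mask as follows. All inputs are made up.

```python
# Minimal sketch: dose received by the hottest `volume_cc` of a masked organ (e.g. D_0.2cc).
import numpy as np

def dose_to_hottest_volume(dose_gy, organ_mask, voxel_volume_cc, volume_cc=0.2):
    organ_doses = np.sort(dose_gy[organ_mask])[::-1]               # organ doses, hottest first
    n_voxels = max(1, int(round(volume_cc / voxel_volume_cc)))     # voxels forming the hot volume
    return organ_doses[:n_voxels].min()

# Example with made-up data: 2 mm isotropic voxels -> 0.008 cc per voxel.
dose_grid = np.random.uniform(0.0, 10.0, size=(50, 50, 50))
optic_nerve = np.zeros_like(dose_grid, dtype=bool); optic_nerve[20:25, 20:22, 20:40] = True
d_02cc = dose_to_hottest_volume(dose_grid, optic_nerve, voxel_volume_cc=0.008)
constraint_met = d_02cc < 8.0   # Gy
```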
The role of Structural MRI in brain tumor radiation treatment
During the last decades, medical imaging, which was initially used for basic visualization and inspection of anatomical structures, has evolved to become an essential tool for diagnosis, treatment and follow-up of patient diseases.
Optic nerve: volume 0.2 cc / dose limit = 8 Gy [START_REF] Timmerman | An Overview of Hypofractionation and Introduction to This Issue of Seminars in Radiation Oncology[END_REF][START_REF] Romanelli | Radiosurgery for hypothalamic hamartomas[END_REF][START_REF] Lee | Radiation therapy and CyberKnife radiosurgery in the management of craniopharyngiomas[END_REF][START_REF] Stafford | A study on the radiation tolerance of the optic nerves and chiasm after stereotactic radiosurgery[END_REF]
Table 2.1: Dose limits for the OARs in both radiotherapy and radio-surgery.

Particularly, in oncology, image evolution has improved the understanding of the complexities of cancer biology, cancer diagnosis, staging, and prognosis. Advanced medical imaging techniques are thus used for tumor resection surgery (i.e. pre-operative planning, intra-operative, post-operative), and for
subsequent radiotherapy treatment planning (RTP). There exists a wide range of medical imaging modalities that allow neuroscientists to see inside a living human brain. Early imaging methods, invasive and sometimes dangerous, have been abandoned in recent times in favor of non-invasive, high-resolution modalities, such as computed tomography (CT), and especially structural magnetic resonance imaging (MRI). Besides outlining the normal brain structures in great detail, MRI has a higher sensitivity for detecting the presence of, or changes within, a tumor. It is therefore well suited for anatomic visualization of the human body, in particular the deep structures and tissues of the brain. For this reason, and because MRI does not rely on ionizing radiation, MRI has gradually supplanted CT as the mainstay of clinical neuro-oncology imaging, becoming the preferred modality for the diagnosis, follow-up and treatment planning of brain lesions [START_REF] Sheehan | Controversies in Stereotactic Radiosurgery: Best Evidence Recommendations[END_REF].
An additional advantage of MRI is the ability to directly acquire images in planes other than the axial one, unlike CT. The higher contrast resolution of MRI over CT offers better clarity and easier diagnosis and demarcation of soft tissues or lesions in most situations. We can therefore say that structural magnetic resonance imaging plays a central and crucial role in brain tumor radiation treatment (RT) assessment.
The typical MR scan for a patient with a brain tumor includes T1/T2weighted, fluid-attenuated inversion recovery (FLAIR), and post-contrast T1weighted images (Figure 2.10). T1-weighted images are most useful for depicting anatomic detail and show cerebrospinal fluid and most tumors as low signal intensity, whereas areas of fat and subacute hemorrhage appear as high signal intensity. T2-weighted images are more sensitive for lesion detection and show cerebrospinal fluid and most lesions as high signal intensity, whereas areas of hemorrhage or chronic hemosiderin deposits may appear as low signal. FLAIR images are T2-weighted with low signal cerebrospinal fluid, are highly sensitive for pathology detection, and display most lesions, including tumors and edema, with higher signal intensity than T2 images. However, the tumor focus in FLAIR or T2 images is not well separated from surrounding edema, gliosis, or ischemic changes. T1-weighted images after contrast enhancement generally provide better localization of the tumor nidus and improved diagnostic information relating to tumor grade, blood-brain barrier breakdown, hemorrhage, edema, and necrosis. Contrast-enhanced T1-weighted images also show small focal lesions better, such as metastases, tumor recurrence, and ependymal or leptomeningeal tumor spread. The T1-weighted enhancement of a contrast agent is attributed to blood-brain barrier leakage associated with angiogenesis and capillary damage in regions of active tumor growth and radiation injury [START_REF] Rees | Advances in magnetic resonance imaging of brain tumours[END_REF]. The fact that most cranial contouring is performed on the MRI means that an excellent registration between the CT and MRI scans is essential in order to have confidence in the position of the contours during dose calculation. In general the skull provides a good reference point which prevents too much deformation of the cranium, allowing good results to be achieved using rigid registration techniques. However, because of the long acquisition times of MRI scans, the patient couch is typically designed with greater comfort in mind than the RT treatment couch, and this can mean there is some deformation in the neck area, which can make an overall good fit hard to achieve, instead the oncologist must choose which region to prioritize in the fitting.
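The rigid CT-MR alignment mentioned above is typically computed with standard registration toolkits. As a hedged illustration only (not the clinical software used in this context), a mutual-information-driven rigid registration could look as follows with SimpleITK; the file names and parameter values are placeholders.

```python
# Hedged sketch of rigid CT-MR registration with SimpleITK; inputs are hypothetical files.
import SimpleITK as sitk

fixed = sitk.ReadImage("ct.nii.gz", sitk.sitkFloat32)    # planning CT (reference frame)
moving = sitk.ReadImage("mr.nii.gz", sitk.sitkFloat32)   # MR volume to be aligned

# Initialize with a rigid (6 degrees of freedom) transform centered on the two volumes.
initial = sitk.CenteredTransformInitializer(
    fixed, moving, sitk.Euler3DTransform(),
    sitk.CenteredTransformInitializerFilter.GEOMETRY)

reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)  # suited to multi-modal CT/MR
reg.SetMetricSamplingStrategy(reg.RANDOM)
reg.SetMetricSamplingPercentage(0.1)
reg.SetInterpolator(sitk.sitkLinear)
reg.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=200)
reg.SetOptimizerScalesFromPhysicalShift()
reg.SetInitialTransform(initial, inPlace=False)

transform = reg.Execute(fixed, moving)
# Resample the MR into the CT frame so that contours drawn on MR align with the dose grid.
mr_in_ct = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0, moving.GetPixelID())
```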
Need of automatization of the OARs segmentation process
Because RT and SRS involve the delivery of a very high dose of radiation, both tumor and surrounding tissue must be precisely delineated. Particularly for the OARs, their volume measurements and localizations are required to constrain the risk of severe toxicity. These segmentations are therefore crucial inputs for the RTP, in order to compute the parameters for the accelerators, and to verify the dose constraints.
As previously discussed, among the available image modalities MRI is extensively used to segment most of the OARs. The delineation task, performed manually by experts or with very little machine assistance [START_REF] Whitfield | Automated delineation of radiotherapy volumes: are we going in the right direction? The British journal of radiology[END_REF], is highly time consuming, and there exists significant variation between the labels produced by different experts [START_REF] Yamamoto | Differences in target outline delineation from CT scans of brain tumours using different methods and different observers[END_REF][START_REF] Mazzara | Brain tumor target volume determination for radiation treatment planning through automated MRI segmentation[END_REF]. For some OARs with clearly defined boundaries these variations are likely to be on the order of only a few voxels, but for many organs with reduced contrast a difference of 2 cm or more between contour lines is not uncommon, creating large variations in the volumes contoured by different oncologists. Radiation oncologists, radiology technologists, and other medical specialists therefore spend a substantial portion of their time on medical image segmentation. Furthermore, recent investigations have shown that the effects of inter-observer variability in delineating OARs have a significant dosimetric impact [START_REF] Nelms | Variations in the contouring of organs at risk: test case from a patient with oropharyngeal cancer[END_REF].
Consequently, the role of delineating contours on a patient's MRI scan is a highly skilled one, which must be carefully supervised by the physician in charge of treatment. The mean time typically spent to analyze and delineate the OARs on a brain MRI dataset has been evaluated at 86 min [5], engaging valuable human resources.
If by automating this process it is possible to achieve a more repeatable set of contours that can be agreed upon by the majority of oncologists, this would improve the quality of treatment. Additionally, any method that can reduce the time taken to perform this step will increase patient throughput and make more effective use of the skills of the oncologist.
For any automatic segmentation process, the expectation is that contours are either directly approved by the oncologist or require only minor corrections for a few features. Indeed, if the physician spends more time making modifications than it would have taken them to contour by hand, then the purpose of the segmentation algorithm is lost.
To overcome these major issues, various computer-aided systems to (semi-)automatically segment anatomical structures in medical images have been developed and published in recent years. However, (semi-)automatic segmentation of brain structures still remains challenging, with no general and unique solution. For all the aforementioned reasons, and as the number of patients to be treated increases, OARs cannot always be accurately segmented, which may lead to suboptimal plans [START_REF] D'haese | Automatic segmentation of brain structures for radiation therapy planning[END_REF]. This makes the introduction of an automated OAR segmentation tool in clinical routine highly desirable.
Chapter 3
Segmentation methods for brain structures: State of the art
Introduction
Image segmentation is the problem of partitioning an image in a semantically purposeful way. Subdividing the image into meaningful regions allows a more compact and tractable representation of the image to be obtained. Pixels are grouped according to a predefined criterion. This criterion can be based on many factors, such as intensity, color or texture similarities, pixel continuity, and other higher-level knowledge about the object model. For many applications, segmentation reduces to finding an object in a given image. This involves partitioning the image into only two classes of regions: the object and the background (Fig. 3.1). Thus, image segmentation is often an essential step in further image analysis, object representation, visualization and many other image processing tasks.
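As a hedged illustration of this two-class case (a sketch for intuition, not a method proposed in this work), a global threshold chosen by Otsu's criterion already partitions a grayscale image into object and background. The synthetic image below is a placeholder.

```python
# Minimal sketch: object/background partitioning of a grayscale image by global thresholding.
import numpy as np
from skimage.filters import threshold_otsu

# Synthetic two-class image: a bright region on a dark background, plus noise.
image = np.random.normal(loc=(np.indices((64, 64)).sum(0) > 64).astype(float), scale=0.2)

t = threshold_otsu(image)      # intensity threshold separating the two classes
foreground = image > t         # boolean mask of the "object"
background = ~foreground
```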
Medical imaging segmentation
Since image segmentation plays a central role in retrieving meaningful information from images, the effective extraction of all the information and features contained in multidimensional images is of increasing importance in this field. The medical field provides an interesting source of images. Image segmentation plays, therefore, an important role in numerous medical applications [START_REF] Pham | Current methods in medical image segmentation 1[END_REF].
However, medical image segmentation distinguishes itself from conventional image segmentation tasks and generally remains challenging. First, many medical imaging modalities generate very noisy and blurred images due to their intrinsic imaging mechanisms. Particularly, in radiation oncology, radiologists tend to reduce acquisition times on CT and MRI for better patient acceptance. Second, medical images may be relatively poorly sampled. Many voxels may contain more than one tissue type, which is known as the Partial Volume Effect (PVE) (see Figure 3.2). When this occurs, the intensity of a given voxel depends not only on the tissue properties, but also on the proportions of each tissue type present in the voxel. As a consequence, loss of contrast between two adjacent tissues is likely to occur, making the delineation more difficult. In addition to these effects, it might also happen that some tissues or organs of interest share similar intensity levels with nearby regions, leading to a lack of strong edge or ridge information along the boundaries of the object. In these cases, structures of interest are very difficult to separate from their surroundings. If the object to be segmented has a complex shape, this lack of contrast along the boundaries makes the segmentation even harder. Last, besides the image information, higher-level knowledge of anatomy and pathology is critical for medical image segmentation. Medical images usually have a complex appearance due to the complexity of anatomic structures. Medical expertise is therefore required to understand and interpret the image so that the segmentation algorithms can meet the clinicians' needs. Despite these drawbacks, recent developments in medical imaging acquisition techniques, such as CT and MRI, have increased the resolution of images, which has greatly assisted clinical diagnosis. Nevertheless, these advances have not only significantly improved the resolution and information captured in the diverse image modalities, but have also led to an increase in the amount of data to be analyzed. Additionally, data complexity has also increased. This increase in complexity has forced medical technicians to process a large number of images with much more detail.
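To make the partial volume effect mentioned above more concrete, a common illustrative model (not specific to this thesis) is a linear mixture: the observed voxel intensity is the fraction-weighted sum of the pure tissue intensities, plus noise,

$$ I(v) \;=\; \sum_{k=1}^{K} f_k(v)\, \mu_k \;+\; \varepsilon(v), \qquad \sum_{k=1}^{K} f_k(v) = 1, \quad f_k(v) \ge 0, $$

where f_k(v) is the fraction of tissue k in voxel v, μ_k its mean intensity, and ε(v) acquisition noise. Whenever several f_k(v) are non-zero, the voxel intensity falls between the pure tissue values and the contrast along the boundary is lost.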
Segmentation in neuroimaging
Initial approaches of brain segmentation on MRI focused on the classification of the brain into three main classes: white matter (WM), grey matter (GM) and cerebrospinal fluid (CSF) [START_REF] Xuan | Segmentation of magnetic resonance brain image: integrating region growing and edge detection[END_REF]. During the last two decades, the segmentation of the whole brain into the primary cerebrum tissues (i.e. CSF, GM, and WM) has been one of the core challenges of the neuroimaging community, leading to many publications. Nevertheless, it is still an active area of research [START_REF] Balafar | Review of brain MRI image segmentation methods[END_REF][START_REF] Senthilkumaran | Brain image segmentation[END_REF]. More recent methods include tumors and adjacent regions, such as necrotic areas [START_REF] Lee | Segmenting brain tumors with conditional random fields and support vector machines[END_REF]. Those methods are only based on signal intensity. However, segmentation of subcortical structures (i.e. OARs) can hardly be achieved based solely on signal intensity, due to the weak visible boundaries and similar intensity values between different subcortical structures. Consequently, additional information, such as prior shape, appearance and expected location, is therefore required to perform the segmentation.
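As a hedged illustration of such purely intensity-driven classification (a sketch for intuition; not the method developed in this thesis), a three-component Gaussian mixture fitted to the brain voxel intensities of a T1-weighted volume can already separate CSF, GM and WM reasonably well, while subcortical OARs remain indistinguishable. The input volume below is a random placeholder.

```python
# Minimal sketch: intensity-only CSF/GM/WM classification with a Gaussian mixture model.
import numpy as np
from sklearn.mixture import GaussianMixture

t1 = np.random.rand(64, 64, 64)                    # stand-in for a skull-stripped T1-weighted volume
voxels = t1[t1 > 0].reshape(-1, 1)                 # brain voxels as an (N, 1) feature matrix

gmm = GaussianMixture(n_components=3, random_state=0).fit(voxels)
rank = np.argsort(np.argsort(gmm.means_.ravel()))  # component index -> intensity rank (0 = darkest)

labels = np.zeros(t1.shape, dtype=np.uint8)
labels[t1 > 0] = rank[gmm.predict(voxels)] + 1     # 1 = CSF, 2 = GM, 3 = WM on T1-weighted images
```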
Due to the crucial role of the hippocampus (HC) in learning and memory processes [START_REF] Norman | How hippocampus and cortex contribute to recognition memory: revisiting the complementary learning systems model[END_REF] and its role as biomarker for the diagnosis of neural diseases, such as Parkinson, dementia or Alzheimer [START_REF] Laakso | Hippocampal volumes in Alzheimer's disease, Parkinson's disease with and without dementia, and in vascular dementia An MRI study[END_REF], many methods have been published to (semi-) automatically segment the HC on MRI [START_REF] Ghanei | Segmentation of the hippocampus from brain MRI using deformable contours[END_REF][START_REF] Shen | Measuring size and shape of the hippocampus in MR images using a deformable shape model[END_REF][START_REF] Hult | Grey-level morphology combined with an artificial neural networks approach for multimodal segmentation of the Hippocampus[END_REF][START_REF] Morra | Automatic subcortical segmentation using a contextual model[END_REF][START_REF] Artaechevarria | Combination strategies in multi-atlas image segmentation: Application to brain MR data[END_REF][START_REF] Collins | Towards accurate, automatic segmentation of the hippocampus and amygdala from MRI by augmenting ANIMAL with a template library and label fusion[END_REF][START_REF] Coupé | Nonlocal patch-based label fusion for hippocampus segmentation[END_REF][START_REF] Morra | Comparison of AdaBoost and support vector machines for detecting Alzheimer's disease through automated hippocampal segmentation[END_REF][START_REF] Hu | Appearance-based modeling for segmentation of hippocampus and amygdala using multi-contrast MR imaging[END_REF][START_REF] Khan | Optimal weights for local multi-atlas fusion using supervised learning and dynamic information (SuperDyn): Validation on hippocampus segmentation[END_REF][START_REF] Kim | Segmenting hippocampus from 7.0 Tesla MR images by combining multiple atlases and auto-context models[END_REF][START_REF] Zhao | Segmentation of hippocampus in MRI images based on the improved level set[END_REF][START_REF] Cardoso | STEPS: Similarity and Truth Estimation for Propagated Segmentations and its application to hippocampal segmentation and brain parcelation[END_REF][START_REF] Kwak | Fully-automated approach to hippocampus segmentation using a graph-cuts algorithm combined with atlas-based segmentation and morphological opening[END_REF][START_REF] Wang | Multi-atlas segmentation with joint label fusion. Pattern Analysis and Machine Intelli-Bibliography gence[END_REF][START_REF] Zarpalas | Hippocampus segmentation through gradient based reliability maps for local blending of ACM energy terms[END_REF]. Among presented methods to segment the HC, atlas-based, statistical and deformable models have been typically employed.
Segmentation approaches for brain structures other than the HC have also been investigated. For instance, segmentation of the corpus callosum has been approached with parametric [START_REF] Mcintosh | Medial-based deformable models in nonconvex shape-spaces for medical image segmentation[END_REF] and geometric [START_REF] Leventon | Statistical shape influence in geodesic active contours[END_REF] deformable models. An active shape model method was employed in [START_REF] Olveres | Midbrain volume segmentation using active shape models and LBPs[END_REF] to segment the midbrain on MR images. Other researchers have instead focused on sets of different subcortical and cerebellar brain structures, proposing several approaches: active shape and appearance models [START_REF] Cootes | A unified framework for atlas matching using active appearance models[END_REF][START_REF] Duchesne | Appearance-based segmentation of medial temporal lobe structures[END_REF][START_REF] Bailleul | Segmentation of anatomical structures from 3D brain MRI using automatically-built statistical shape models[END_REF][START_REF] Babalola | 3D brain segmentation using active appearance models and local regressors[END_REF][START_REF] Tu | Brain anatomical structure segmentation by hybrid discriminative/generative models[END_REF][START_REF] Hu | Nonlocal regularization for active appearance model: Application to medial temporal lobe segmentation[END_REF], atlas-based methods [START_REF] Heckemann | Automatic anatomical brain MRI segmentation combining label propagation and decision fusion[END_REF][START_REF] Wu | Optimum template selection for atlas-based segmentation[END_REF][START_REF] Aljabar | Multi-atlas based segmentation of brain images: atlas selection and its effect on accuracy[END_REF][START_REF] Lötjönen | Fast and robust multi-atlas segmentation of brain magnetic resonance images[END_REF][START_REF] Asman | Non-local statistical label fusion for multi-atlas segmentation[END_REF], deformable models [START_REF] Székely | Segmentation of 3D objects from MRI volume data using constrained elastic deformations of flexible Fourier surface models[END_REF][START_REF] Tsai | Mutual information in coupled multi-shape model for medical image segmentation[END_REF][START_REF] Yang | 3D image segmentation of deformable objects with joint shape-intensity prior models using level sets[END_REF] or machine learning approaches [START_REF] Magnotta | Measurement of Brain Structures with Artificial Neural Networks: Two-and Three-dimensional Applications 1[END_REF][START_REF] Pierson | Manual and semiautomated measurement of cerebellar subregions on MR im-Bibliography ages[END_REF][START_REF] Golland | Detection and analysis of statistical differences in anatomical shape[END_REF][START_REF] Powell | Registration and machine learning-based automated segmentation of subcortical and cerebellar brain structures[END_REF].
Nevertheless, the number of publications focusing on the segmentation of structures involved in RTP is comparatively low. In addition, although good performance has often been reported for some of these structures, the proposed methods have mostly been evaluated on control subjects and on patients with mental disorders such as schizophrenia or Alzheimer's disease. In the brain cancer context, however, tumors may deform neighbouring structures and appear together with edema that changes the intensity properties of the nearby region, making the segmentation more challenging.
A small number of approaches have nevertheless already attempted to segment some OARs and brain structures in patients undergoing radiotherapy [5,6,[START_REF] D'haese | Automatic segmentation of brain structures for radiation therapy planning[END_REF][START_REF] Gensheimer | Automatic delineation of the optic nerves and chiasm on CT images[END_REF][START_REF] Isambert | Evaluation of an atlas-based automatic segmentation software for the delineation of brain organs at risk in a radiation therapy clinical context[END_REF][START_REF] Noble | An atlas-navigated optimal medial axis and deformable model algorithm (NOMAD) for the segmentation of the optic nerves and chiasm in MR and CT images[END_REF][START_REF] Conson | Automated delineation of brain structures in patients undergoing radiotherapy for primary brain tumors: From atlas to dose-volume histograms[END_REF]. While results were often satisfactory for large structures, the automatic segmentation of small structures was in most cases not accurate enough to be usable in RTP. An atlas-based approach to segment the brainstem was validated in the brain cancer context in [5]. In the work of [START_REF] Isambert | Evaluation of an atlas-based automatic segmentation software for the delineation of brain organs at risk in a radiation therapy clinical context[END_REF], whilst the segmentation of large structures was largely suitable for RTP, optic chiasm and pituitary gland segmentations were totally unsuccessful. In another attempt to evaluate an automatic approach in a clinical environment, [6] also reported unsatisfactory results for small OARs such as the chiasm. Despite the insufficient results reported for small OARs, these works demonstrated that the introduction of automatic segmentation methods can be useful in a clinical context.
The objective of this chapter is to provide the reader with a summary of the current state of the art regarding approaches to segment subcortical brain structures. As reported in the previous section, a large number of techniques have been proposed over the years to segment specific subcortical structures in MRI. Here, we mainly focus on minimally user-interactive methods, automatic or semi-automatic, that are not tailored to one or a few specific structures but are applicable to subcortical brain structures in general. The methods presented in this chapter can be divided into four main categories: atlas-based methods, statistical models, deformable models and machine learning methods.
Atlas-based segmentation methods
Atlas-based methods illustrate well the transition of brain MRI segmentation from manual expert delineation to fully automatic procedures. Atlas-based segmentation can be divided into the following main steps: atlas construction, registration between the atlases and the target image, and, optionally, atlas selection and label fusion (Figure 3.3).
Atlas build-up
The first attempts at atlas construction for the human brain were based on a single subject: a single atlas image is used to perform the segmentation [START_REF] Wu | Optimum template selection for atlas-based segmentation[END_REF].
This atlas, referred to as a topological, single-subject or deterministic atlas, is usually an image selected from a database to be representative of the dataset to be segmented, in terms of size, shape and intensity for instance. In particular, for disease follow-up, where brain structures must be segmented in longitudinal studies (i.e. at different time points along the treatment), a single-atlas method that propagates the structures segmented at one time point to another is generally sufficient. However, in applications where no prior image of the patient can be used as atlas, single-atlas segmentation of anatomical structures presenting wide inter-individual variability becomes challenging and might lead to poor results.
To overcome the limitations encountered with the single-atlas method, multiple atlases can be used [5,[START_REF] Artaechevarria | Combination strategies in multi-atlas image segmentation: Application to brain MR data[END_REF][START_REF] Collins | Towards accurate, automatic segmentation of the hippocampus and amygdala from MRI by augmenting ANIMAL with a template library and label fusion[END_REF][START_REF] Coupé | Nonlocal patch-based label fusion for hippocampus segmentation[END_REF][START_REF] Khan | Optimal weights for local multi-atlas fusion using supervised learning and dynamic information (SuperDyn): Validation on hippocampus segmentation[END_REF][START_REF] Kim | Segmenting hippocampus from 7.0 Tesla MR images by combining multiple atlases and auto-context models[END_REF][START_REF] Wang | Multi-atlas segmentation with joint label fusion. Pattern Analysis and Machine Intelli-Bibliography gence[END_REF][START_REF] Heckemann | Automatic anatomical brain MRI segmentation combining label propagation and decision fusion[END_REF][START_REF] Aljabar | Multi-atlas based segmentation of brain images: atlas selection and its effect on accuracy[END_REF][START_REF] Lötjönen | Fast and robust multi-atlas segmentation of brain magnetic resonance images[END_REF][START_REF] Asman | Non-local statistical label fusion for multi-atlas segmentation[END_REF][START_REF] Panda | Robust optic nerve segmentation on clinically acquired CT[END_REF]. In this approach, multiple atlas images representative of the image to be segmented are selected from a database. Each atlas image is then registered to optimally fit the target image, and its labeled image is deformed using the deformation resulting from the registration. At this stage, multiple labeled images are fitted to the target image. Finally, the propagated labeled images are fused, providing the final segmentation. Besides the registration method used, the performance of multi-atlas segmentation depends on: 1) the atlas building, 2) the atlas selection and 3) the label fusion method (see the corresponding sections below). The major drawback of multi-atlas segmentation remains its computational cost, which increases with the number of atlases selected.
A limitation of multi-atlas segmentation is that individual differences occurring in only a minority of the atlases may be averaged out. Segmentation results might therefore be biased, particularly for MRI scans presenting pathologies. Probabilistic atlases are used to address this issue: this third category of atlases estimates a probabilistic model of the input images, built either from a probabilistic atlas or from a combination of topological atlases. For a more detailed explanation, see the work of Cabezas et al. [START_REF] Cabezas | A review of atlasbased segmentation for magnetic resonance brain images[END_REF].
Image Registration
Image registration is a prerequisite for atlas-based segmentation. The registration process is used to spatially align an atlas A and the target image T. For our segmentation purpose, the registration necessarily relies on non-rigid approaches to tackle inter-individual spatial variation. Various image registration methods exist and have been applied to many medical application domains. We refer the reader to the publications of Hill et al. [START_REF] Hill | Medical image registration[END_REF] and Zitova and Flusser [START_REF] Zitova | Image registration methods: a survey[END_REF] for an overview of image registration methods regardless of the application area, and to the publication of Toga and Thompson [START_REF] Toga | The role of image registration in brain mapping[END_REF] for a review of registration approaches specifically used in brain imaging. The main contributions, advantages and drawbacks of existing image registration methods are addressed in these reviews.
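As a concrete illustration of this step, the following minimal sketch, written with the SimpleITK library, registers one atlas to a target image with an affine stage followed by a B-spline free-form deformation and then propagates the atlas labels with nearest-neighbour interpolation. File names, the B-spline grid size and the optimizer settings are illustrative assumptions rather than values prescribed by the methods reviewed here.

```python
import SimpleITK as sitk

# Placeholder file names; images are assumed to be preprocessed MRI volumes.
target = sitk.ReadImage("target_mri.nii.gz", sitk.sitkFloat32)
atlas = sitk.ReadImage("atlas_mri.nii.gz", sitk.sitkFloat32)
atlas_labels = sitk.ReadImage("atlas_labels.nii.gz", sitk.sitkUInt8)

# 1) Global affine registration driven by Mattes mutual information.
initial = sitk.CenteredTransformInitializer(
    target, atlas, sitk.AffineTransform(3),
    sitk.CenteredTransformInitializerFilter.GEOMETRY)
reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=32)
reg.SetInterpolator(sitk.sitkLinear)
reg.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=200)
reg.SetOptimizerScalesFromPhysicalShift()
reg.SetInitialTransform(initial, inPlace=False)
affine = reg.Execute(target, atlas)

# Resample the atlas image and labels into the target space with the affine transform.
atlas_aff = sitk.Resample(atlas, target, affine, sitk.sitkLinear, 0.0)
labels_aff = sitk.Resample(atlas_labels, target, affine,
                           sitk.sitkNearestNeighbor, 0, sitk.sitkUInt8)

# 2) Non-rigid refinement with a B-spline free-form deformation (illustrative grid size).
bspline_init = sitk.BSplineTransformInitializer(target, [8, 8, 8])
reg_nr = sitk.ImageRegistrationMethod()
reg_nr.SetMetricAsMattesMutualInformation(numberOfHistogramBins=32)
reg_nr.SetInterpolator(sitk.sitkLinear)
reg_nr.SetOptimizerAsLBFGSB()
reg_nr.SetInitialTransform(bspline_init, inPlace=True)
bspline = reg_nr.Execute(target, atlas_aff)

# 3) Label propagation: warp the atlas labels with nearest-neighbour interpolation.
propagated = sitk.Resample(labels_aff, target, bspline,
                           sitk.sitkNearestNeighbor, 0, sitk.sitkUInt8)
sitk.WriteImage(propagated, "propagated_labels.nii.gz")
```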
Atlas selection
Normal individual variations in human brain structures present a significant challenge for atlas selection. Some studies have demonstrated that, although using more than one topological atlas improves segmentation accuracy, it is not necessary to use all the cases in a dataset for a given query image [49, 54, 66-68, 86, 87]. Among the existing solutions to choose the best matching cases, the use of meta-information is the simplest. In this solution, also called population-specific atlases, an average atlas is built for several population groups sharing similar features, such as gender or age. Although it represents the simplest solution, meta-information has proved to be a powerful similarity criterion when used in multi-atlas segmentation [START_REF] Aljabar | Multi-atlas based segmentation of brain images: atlas selection and its effect on accuracy[END_REF]. However, this information may not always be available, requiring the use of similarity metrics to compare the atlas and the target image.
Initially, the majority of published works used a single image randomly selected from the atlas dataset, often without even mentioning the selection criterion. The optimal selection of a single template from the entire dataset during atlas-based segmentation, and its influence on segmentation accuracy, was investigated in [START_REF] Rohlfing | Evaluation of atlas selection strategies for atlas-based image segmentation with application to confocal microscopy images of bee brains[END_REF]. Han et al. [START_REF] Han | Atlas-based auto-segmentation of head and neck CT images[END_REF] compared the selection of a single atlas against the propagation and fusion of their entire atlas database. In their work, the single atlas was selected as the one with the highest Mutual Information (MI) similarity with the target image after a global affine registration. The multi-atlas strategy significantly improved the accuracy over the single-atlas strategy, especially in regions showing higher dissimilarities between images. In addition to MI, the sum of squared differences (SSD) or cross-correlation (CC) is often used as a similarity metric to select the atlas closest to the target image.
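To make the selection criterion concrete, the following minimal NumPy sketch, assuming the atlases have already been affinely registered to the target, estimates mutual information from a joint histogram and ranks the atlases accordingly; the function names and the number of selected atlases are illustrative.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information between two spatially aligned image arrays."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)          # marginal of a
    py = pxy.sum(axis=0, keepdims=True)          # marginal of b
    nz = pxy > 0                                  # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

def select_atlases(target, registered_atlases, n_select=10):
    """Rank registered atlas images by MI with the target and keep the top n."""
    scores = [mutual_information(target, atlas) for atlas in registered_atlases]
    order = np.argsort(scores)[::-1]              # highest similarity first
    return list(order[:n_select]), [scores[i] for i in order[:n_select]]
```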
Aljabar et al. [START_REF] Aljabar | Multi-atlas based segmentation of brain images: atlas selection and its effect on accuracy[END_REF] showed that, when segmenting subcortical brain structures, selecting the atlases yields better overlap than using random sets of atlases. In their work, a dataset of 275 atlases was used. As in [START_REF] Han | Atlas-based auto-segmentation of head and neck CT images[END_REF], MI similarity was used to rank the atlases of the dataset, and the n top-ranked atlases were then selected and propagated to the target image using a non-rigid registration. The mean DSC obtained by selecting the top-ranked atlases (0.854) was higher than the DSC obtained by randomly selecting the atlases (0.811). This difference represents nearly 4% of improvement, demonstrating that selecting, prior to multi-atlas segmentation, a limited number of atlases appropriate for the target image is preferable to fusing an arbitrarily large number of atlases.
Including atlases that are highly dissimilar to the target image in the label propagation step may not only fail to make the segmentation more accurate, but may actually contribute to a poorer result. Consequently, the proper selection of the atlases to include in the label propagation is a key step of the segmentation process.
Label fusion
Once suitable atlases have been selected from the atlas dataset and their labels propagated to the target image, the information from the transferred labels has to be combined to provide the final segmentation [44-46, 49, 50, 52, 54, 65, 67, 69, 81, 86, 88, 89]. This step is commonly referred to as label fusion or classifier fusion.
The label fusion techniques known as best atlas and majority voting represent the simplest strategies to combine the propagated labels. In the best atlas technique, after the registration step, the labels from the atlas most similar to the target image are propagated to yield the final segmentation. In the majority voting method, the votes for each propagated label are counted at every voxel and the label receiving the most votes is chosen to produce the final segmentation [START_REF] Collins | Towards accurate, automatic segmentation of the hippocampus and amygdala from MRI by augmenting ANIMAL with a template library and label fusion[END_REF][START_REF] Heckemann | Automatic anatomical brain MRI segmentation combining label propagation and decision fusion[END_REF][START_REF] Aljabar | Multi-atlas based segmentation of brain images: atlas selection and its effect on accuracy[END_REF]. Since majority voting assigns equal weights to all atlases, it makes the strong assumption that different atlases produce equally accurate segmentations of the target image.
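The following minimal NumPy sketch illustrates voxel-wise majority voting over a set of propagated label volumes; it assumes all volumes share the same grid and integer label convention. The best atlas strategy corresponds to skipping the vote entirely and keeping the labels of the single most similar atlas.

```python
import numpy as np

def majority_voting(propagated_labels):
    """Fuse a list of propagated label volumes (integer arrays of equal shape)
    by assigning to each voxel the label that receives the most votes."""
    stack = np.stack(propagated_labels, axis=0)          # (n_atlases, x, y, z)
    n_labels = int(stack.max()) + 1
    # Count the votes per label at every voxel and keep the arg-max.
    votes = np.stack([(stack == l).sum(axis=0) for l in range(n_labels)], axis=0)
    return np.argmax(votes, axis=0).astype(np.uint8)
```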
To improve label fusion performance, recent work focuses on estimating segmentation quality from local appearance similarity and assigning weights to the propagated labels accordingly. The final segmentation is thus obtained by increasing the contribution of the atlases that are more similar to the target scan [44-46, 49, 50, 54, 66, 86]. Among weighted voting strategies, those that derive weights from the local similarity between atlas and target [START_REF] Artaechevarria | Combination strategies in multi-atlas image segmentation: Application to brain MR data[END_REF][START_REF] Coupé | Nonlocal patch-based label fusion for hippocampus segmentation[END_REF][START_REF] Khan | Optimal weights for local multi-atlas fusion using supervised learning and dynamic information (SuperDyn): Validation on hippocampus segmentation[END_REF][START_REF] Kim | Segmenting hippocampus from 7.0 Tesla MR images by combining multiple atlases and auto-context models[END_REF], and thus allow the weights to vary spatially, have proved to be a better solution in practice. Hence, each atlas contributes to the final solution according to how similar it is to the target. However, the weights are computed independently for each atlas, and the fact that different atlases may produce similar label errors is not taken into account. This can lead to labeling inaccuracies caused by replication or redundancy in the atlas dataset. To address this limitation, a solution for the label fusion problem was proposed in [START_REF] Wang | Multi-atlas segmentation with joint label fusion. Pattern Analysis and Machine Intelli-Bibliography gence[END_REF], where weighted voting was formulated as the minimization of the total expected labeling error and the pairwise dependency between atlases was explicitly modeled as the joint probability of two atlases making a segmentation error at a voxel.
Hence, the dependencies among the atlases were taken into consideration, and the expected label error was reduced in the combined solution.
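A locally weighted variant of the previous sketch could look as follows: each atlas votes with a weight derived from the patch-wise squared intensity difference to the target, so that the weights vary spatially. The patch size and the decay parameter beta are illustrative assumptions, not values prescribed by the works cited above.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def locally_weighted_fusion(target, warped_intensities, warped_labels,
                            patch_size=5, beta=1.0, n_labels=None):
    """Weighted voting where each atlas votes at every voxel with a weight
    derived from the local (patch-wise) squared intensity difference to the target."""
    if n_labels is None:
        n_labels = int(max(l.max() for l in warped_labels)) + 1
    accumulator = np.zeros((n_labels,) + target.shape)
    for intensity, labels in zip(warped_intensities, warped_labels):
        local_ssd = uniform_filter((target - intensity) ** 2, size=patch_size)
        weight = np.exp(-beta * local_ssd)        # higher similarity -> larger weight
        for l in range(n_labels):
            accumulator[l] += weight * (labels == l)
    return np.argmax(accumulator, axis=0).astype(np.uint8)
```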
Another remarkable example of producing consensus segmentations, especially in the context of medical image processing, is the algorithm named Simultaneous Truth and Performance Level Estimation (STAPLE) [START_REF] Warfield | Simultaneous truth and performance level estimation (STAPLE): an algorithm for the validation of image segmentation[END_REF]. Instead of using an image similarity metric to derive the classifier performance, STAPLE estimates the classifier performance parameters by comparing each classifier to a consensus, iteratively, following the Expectation-Maximization (EM) algorithm. In order to model misregistrations as part of the rater performance, a reformulation of STAPLE with a spatially varying rater performance model was introduced [START_REF] Commowick | Estimating a reference standard segmentation with spatially varying performance parameters: Local MAP STAPLE[END_REF]. More recently, Cardoso et al. [START_REF] Cardoso | STEPS: Similarity and Truth Estimation for Propagated Segmentations and its application to hippocampal segmentation and brain parcelation[END_REF] extended the classical STAPLE approach by incorporating a spatial image similarity term into the STAPLE framework, enabling the characterization of both image similarity and human rater performance in a unified manner; the resulting method is called Similarity and Truth Estimation for Propagated Segmentations (STEPS). Finally, a reformulation of the STAPLE framework from a non-local perspective, called Non-local Spatial STAPLE [START_REF] Asman | Non-local statistical label fusion for multi-atlas segmentation[END_REF], was used as a label fusion algorithm in [START_REF] Panda | Robust optic nerve segmentation on clinically acquired CT[END_REF].
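For illustration, the sketch below implements a simplified binary STAPLE for a single structure: an EM loop alternating between the estimation of a consensus probability map (E-step) and of per-rater sensitivity and specificity (M-step). It deliberately omits the spatially varying and non-local extensions discussed above, and the initial values and iteration count are illustrative.

```python
import numpy as np

def binary_staple(segmentations, n_iter=30, prior=None):
    """Simplified binary STAPLE: estimate a consensus probability map W and
    per-rater sensitivity p and specificity q with expectation-maximization.
    `segmentations` is a list of binary arrays of identical shape."""
    D = np.stack([s.astype(float).ravel() for s in segmentations], axis=0)  # (raters, voxels)
    n_raters, _ = D.shape
    if prior is None:
        prior = D.mean()                         # global prior probability of foreground
    p = np.full(n_raters, 0.9)                   # initial sensitivities
    q = np.full(n_raters, 0.9)                   # initial specificities
    for _ in range(n_iter):
        # E-step: posterior probability that the true label is foreground.
        a = prior * np.prod(np.where(D == 1, p[:, None], 1 - p[:, None]), axis=0)
        b = (1 - prior) * np.prod(np.where(D == 0, q[:, None], 1 - q[:, None]), axis=0)
        W = a / np.maximum(a + b, 1e-12)
        # M-step: update the rater performance parameters.
        p = (D * W).sum(axis=1) / np.maximum(W.sum(), 1e-12)
        q = ((1 - D) * (1 - W)).sum(axis=1) / np.maximum((1 - W).sum(), 1e-12)
    return W.reshape(segmentations[0].shape), p, q
```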
Joint segmentation-registration
It is important to note that most atlas-based methods presented perform registration and segmentation sequentially. Nevertheless, there exist approaches that exploit complementary aspects of both problems to segment either several tissues [START_REF] Yezzi | A variational framework for joint segmentation and registration[END_REF][START_REF] Paragios | Knowledge-based registration & segmentation of the left ventricle: a level set approach[END_REF][START_REF] Wyatt | MAP MRF joint segmentation and registration of medical images[END_REF][START_REF] Wang | Simultaneous registration and segmentation of anatomical structures from brain MRI[END_REF][START_REF] Pohl | A Bayesian model for joint segmentation and registration[END_REF][START_REF] Wu | Joint segmentation and registration for infant brain images[END_REF] or tumors [START_REF] Gooya | Joint segmentation and deformable registration of brain scans guided by a tumor growth model[END_REF][START_REF] Parisot | Joint tumor segmentation and dense deformable registration of brain MR images[END_REF]. The idea of joining registration and segmentation has been utilized by boundary localization techniques using level set representation [START_REF] Leventon | Statistical shape influence in geodesic active contours[END_REF]. These methods relate both problems to each other by extending the definition of the shape to include its pose.
In the work of Yezzi et al. [START_REF] Yezzi | A variational framework for joint segmentation and registration[END_REF], a variational principle for achieving simultaneous registration and segmentation was presented; however, the registration step was limited to rigid motions. Another variational principle, in a level-set based formulation, was presented by Paragios et al. [START_REF] Paragios | Knowledge-based registration & segmentation of the left ventricle: a level set approach[END_REF] to jointly segment and register cardiac MRI data. A shape model based on a level set representation was constructed and used in an energy term forcing the evolving interface to rigidly align with the prior shape, while the segmentation energy was separately included as a boundary- and region-based energy model. Again, the proposed formulation was limited to rigid motion. Departing from these earlier methods, Wang et al. [START_REF] Wang | Simultaneous registration and segmentation of anatomical structures from brain MRI[END_REF] proposed a unified variational principle where segmentation and non-rigid registration were achieved simultaneously. Unlike previous approaches, their algorithm could accommodate image pairs presenting a high variation in intensity distributions; among other applications, 3D hippocampal segmentation was presented. Wu et al. [START_REF] Wu | Joint segmentation and registration for infant brain images[END_REF] also benefited from joint segmentation and registration to address the segmentation of infant brains at different ages. In their work, tissue probability maps were estimated separately, using only training data of the respective age, and were then employed as an initialization to guide the level set segmentation. Some of these works have shown the improvement brought by coupling segmentation and registration with respect to using them in isolation. Nevertheless, the use of this technique to segment the structures of interest for our particular problem remains minimal, with very few published works [START_REF] Wang | Simultaneous registration and segmentation of anatomical structures from brain MRI[END_REF].
Strengths and Weaknesses
Nearly all atlas-based techniques require some sort of image registration at the initial stages, which means that the success of the atlas propagation depends highly on the registration step. Regarding their creation, atlases are relatively simple to build: any segmented image can serve as an atlas.
The use of a single atlas to propagate segmented structures within a single patient (i.e. at different time points along the treatment of a given patient) is generally sufficient. However, in inter-patient situations, where anatomy varies widely between humans, the use of only one atlas might lead to unsatisfactory results; using more than one atlas improves segmentation quality in these situations. By increasing the number of atlases in the database, the method becomes more representative of the population and more robust when processing target images that may deviate from it. When working with multiple atlases, however, the key point is to determine which atlases to use, i.e. those that are not too different from the target image. To achieve this, similarity metrics are computed after the registration step so as to choose the closest atlas among all the others in the database. Alternatively to selecting the single closest atlas, several atlases can be propagated, leading to multiple candidate segmentations that have to be merged at the end of the process. The merging of candidates is performed by label fusion methods, with the risk that these methods generate organs with disconnected pieces, which is often hardly plausible from an anatomical point of view.
From a clinical perspective, recent clinical evaluations of the final segmentations still reveal the need for manual editing or correction of the automatic contours [START_REF] Daisne | Atlas-based automatic segmentation of head and neck organs at risk and nodal target volumes: a clinical validation[END_REF]. Additionally, the definition of an appropriate atlas or set of atlases remains an open question. Furthermore, no consensus exists on the inclusion/exclusion rules for a given patient in a database, or on the number of patients to be included [START_REF] Aljabar | Multi-atlas based segmentation of brain images: atlas selection and its effect on accuracy[END_REF][START_REF] Rohlfing | Evaluation of atlas selection strategies for atlas-based image segmentation with application to confocal microscopy images of bee brains[END_REF]. Because of all these constraints, atlas-based segmentation techniques still suffer from a slow adoption by physicians in clinical routine.
One of the main limitations of atlas-based methods is that the contours included in the atlases carry prior knowledge about the organs pictured in the image which is not fully exploited: to perform the segmentation, these contours are merely deformed. As a consequence, most of the information conveyed by the contours, such as shape or appearance, remains implicit and is likely underexploited. Statistical models are an alternative that addresses this issue by making a more explicit use of such prior information to assist the image segmentation. Unlike with atlases, the images are not registered; instead, the shapes and, sometimes, the appearance of the organ are learned in order to be found in a target image.
Statistical models
Statistical models (SM) have become widely used in the field of computer vision and medical image segmentation over the past decade [START_REF] Hu | Appearance-based modeling for segmentation of hippocampus and amygdala using multi-contrast MR imaging[END_REF][START_REF] Olveres | Midbrain volume segmentation using active shape models and LBPs[END_REF][START_REF] Cootes | A unified framework for atlas matching using active appearance models[END_REF][START_REF] Duchesne | Appearance-based segmentation of medial temporal lobe structures[END_REF][START_REF] Bailleul | Segmentation of anatomical structures from 3D brain MRI using automatically-built statistical shape models[END_REF][START_REF] Babalola | 3D brain segmentation using active appearance models and local regressors[END_REF][START_REF] Tu | Brain anatomical structure segmentation by hybrid discriminative/generative models[END_REF][START_REF] Hu | Nonlocal regularization for active appearance model: Application to medial temporal lobe segmentation[END_REF][START_REF] Cootes | Training models of shape from sets of examples[END_REF][START_REF] Cootes | Active shape modelstheir training and application[END_REF][START_REF] Brejl | Object localization and border detection criteria design in edge-based image segmentation: automated learning from examples[END_REF][START_REF] Cootes | Active appearance models[END_REF][START_REF] Van Ginneken | Interactive shape models[END_REF][START_REF] Pitiot | Expert knowledge-guided segmentation system for brain MRI[END_REF][START_REF] Zhao | A novel 3D partitioned active shape model for segmentation of brain MR images[END_REF][START_REF] Koikkalainen | Methods of artificial enlargement of the training set for statistical shape models[END_REF][START_REF] Rao | Hierarchical statistical shape analysis and prediction of sub-cortical brain structures[END_REF][START_REF] Heimann | Statistical shape models for 3D medical image segmentation: A review[END_REF][START_REF] Babalola | Using parts and geometry models to initialise Active Appearance Models for automated segmentation of 3D medical images[END_REF][START_REF] Patenaude | A Bayesian model of shape and appearance for subcortical brain segmentation[END_REF][START_REF] Bagci | Hierarchical scale-based multiobject recognition of 3-D anatomical structures[END_REF][START_REF] Bernard | Improvements on the Feasibility of Active Shape Model-based Subthalamic Nucleus Segmentation[END_REF][START_REF] Adiva | Comparison of Active Contour and Active Shape Approaches for Corpus Callosum Segmentation[END_REF]. Basically, SMs use a priori shape information to learn the variation from a suitably annotated training set, and constrain the search space to only plausible instances defined by the trained model. The basic procedure of SM of shape and/or texture is as follows: 1) the vertices (control points) of a structure are modeled as a multivariate Gaussian distribution; 2) shape and texture are then parameterized in terms of the mean and eigenvectors of both the vertex coordinates and texture appearance; 3) new instances are constrained to a subspace of allowable shapes and textures, which are defined by the eigenvectors and their modes of variation. Consequently, if the dimensionality of the shape representation exceeds the size of the training data, the only permissible shapes and textures are linear combinations of the original training data. 
Training

Modelling the shape

The most widely used shape representation is the point distribution model (PDM) [START_REF] Cootes | Training models of shape from sets of examples[END_REF], which has been extensively adopted in SSMs for surface representation. This method regularly distributes a set of points across the surface, usually relying on high-curvature regions of the boundaries (Figure 3.4; images courtesy of [START_REF] Duta | Segmentation and interpretation of mr brain images. an improved active shape model[END_REF]). However, these points do not need to be placed at salient feature points as per the common definition of an anatomical landmark, which is why they have also been referred to as semi-landmarks. Among other shape representation models recently used in medical image segmentation [START_REF] Heimann | Statistical shape models for 3D medical image segmentation: A review[END_REF], we can identify medial models or skeletons, meshes, vibration modes of spherical meshes or wavelets, for example. Aligning the training shape samples in a common coordinate frame is the first step in creating the shape model. Once the samples are co-registered, a reduced number of modes that best describe the observed variation are extracted, usually by applying Principal Component Analysis (PCA) to the set of vectors describing the shapes [START_REF] Cootes | Active shape modelstheir training and application[END_REF]. PCA picks out the main axes of the cloud and models only the first few, which account for the majority of the variation. Thus, any new instance of the shape can be modeled as the mean shape of the object plus a combination of its modes of variation [START_REF] Cootes | Training models of shape from sets of examples[END_REF].
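The construction of such a shape model can be sketched as follows with NumPy, assuming the training shapes have already been aligned (e.g. by Procrustes analysis) and flattened into coordinate vectors; the retained-variance threshold is an illustrative choice.

```python
import numpy as np

def build_shape_model(aligned_shapes, variance_kept=0.98):
    """Build a point distribution model from aligned training shapes.
    `aligned_shapes` is an (n_samples, n_points * dim) array of landmark coordinates."""
    mean_shape = aligned_shapes.mean(axis=0)
    X = aligned_shapes - mean_shape
    # PCA via SVD of the centred data matrix.
    _, s, vt = np.linalg.svd(X, full_matrices=False)
    eigenvalues = (s ** 2) / (aligned_shapes.shape[0] - 1)
    explained = np.cumsum(eigenvalues) / eigenvalues.sum()
    n_modes = int(np.searchsorted(explained, variance_kept)) + 1
    return mean_shape, vt[:n_modes].T, eigenvalues[:n_modes]   # mean, modes P, variances

def synthesize_shape(mean_shape, modes, eigenvalues, b):
    """Generate a plausible shape x = x_mean + P b, clamping b to +/- 3 standard deviations."""
    limit = 3.0 * np.sqrt(eigenvalues)
    b = np.clip(b, -limit, limit)
    return mean_shape + modes @ b
```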
Modelling the appearance
As an extension of statistical models of shape, the texture variability observed in the training set can be included in the model, leading to appearance models (AMs) [START_REF] Cootes | Active appearance models[END_REF]. In this approach, in addition to the shape, the intensity variation seen in the training set is also modeled. As in the SSM, the observed variability is parameterized in terms of its mean and eigenvectors. Once the shape has been modeled (see the previous section), the statistical model of the gray-level appearance has to be built. For this purpose, sample images are warped to the mean shape, and the intensity information of the shape-normalized image is sampled over the region covered by the mean shape. Different techniques to sample the intensity in the warped image can be found in the literature [START_REF] Heimann | Statistical shape models for 3D medical image segmentation: A review[END_REF].
Segmentation Phase. Search algorithm
Once the SM has been created, a strategy must be defined to search for new instances of the model in the input images. This step essentially consists in finding the parameters of the statistical model that best define a new object. Active shape models (ASM) and active appearance models (AAM) are the most frequently employed constrained search approaches and are described below.
Active Shape Model
Originally introduced by Cootes et al. [START_REF] Cootes | Training models of shape from sets of examples[END_REF][START_REF] Cootes | Active shape modelstheir training and application[END_REF], ASM is a successful technique to find shapes with known prior variability in input images. It has been widely used for segmentation in medical imaging [START_REF] Heimann | Statistical shape models for 3D medical image segmentation: A review[END_REF], including the segmentation of subcortical brain structures [58, 61, 63, 101, 103-105, 107, 112, 113]. It relies on a statistical shape model (SSM) to constrain the detected organ boundary to plausible shapes (i.e. shapes similar to those in the training data set). Given a coarse object initialization, an instance of the model can be fitted to the input image by selecting a set of shape parameters defined in the training phase (see the previous section).
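The shape-constraint step applied at each ASM iteration can be sketched as follows, complementing the model-building sketch above; the candidate landmark positions are assumed to come from the gray-level profile search around the current boundary, which is omitted here for brevity.

```python
import numpy as np

def constrain_to_model(candidate_shape, mean_shape, modes, eigenvalues, k=3.0):
    """One ASM update of the shape parameters: project the candidate landmark
    configuration (produced by the image-driven search) onto the trained subspace
    and limit each mode to +/- k standard deviations, so that only plausible
    shapes are produced."""
    b = modes.T @ (candidate_shape - mean_shape)     # shape parameters
    limit = k * np.sqrt(eigenvalues)
    b = np.clip(b, -limit, limit)
    return mean_shape + modes @ b, b
```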
The original ASM method [START_REF] Cootes | Active shape modelstheir training and application[END_REF] was improved in [START_REF] Van Ginneken | Interactive shape models[END_REF] by using an adaptive gray-level appearance model based on local image features around the border of the object, so that landmark points could be moved to better locations during the optimization process. To allow some relaxation in the shape instances fitted by the model, ASM can be combined with other methods, as in [START_REF] Pitiot | Expert knowledge-guided segmentation system for brain MRI[END_REF], where a framework involving deformable templates constrained by statistical models and other expert prior knowledge was employed to segment four brain structures: corpus callosum, ventricles, hippocampus and caudate nuclei. Most ASMs in the literature assume that the organs to segment lie on strong edges, which may lead to a final shape far from the actual structure. Instead, [START_REF] Olveres | Midbrain volume segmentation using active shape models and LBPs[END_REF] presented a method combining ASM with Local Binary Patterns (LBP) as local appearance features to segment the midbrain, improving segmentation performance with respect to the standard ASM algorithm.
A major limitation of ASM is the size of the training set (especially in 3D), due to the lack of representative data and the time needed for the model construction process. Hence, 3D ASMs tend to be restrictive with regard to the range of allowable shapes, over-constraining the deformation. Zhao et al. [START_REF] Zhao | A novel 3D partitioned active shape model for segmentation of brain MR images[END_REF] overcame this limitation by using a partitioned representation of the ASM: given a PDM, the mean mesh was partitioned into a group of small tiles, over which PCA was applied to create the statistical model. Other techniques focus on artificially enlarging the training set. Koikkalainen et al. [START_REF] Koikkalainen | Methods of artificial enlargement of the training set for statistical shape models[END_REF] concluded that the two best enlargement techniques were the non-rigid movement technique and the technique combining PCA with a finite element model.
Active Appearance Model
The active appearance model (AAM) is an extension of the ASM that, apart from the shape, models both the appearance and the relationship between the shape and the appearance of the object [START_REF] Cootes | Active appearance models[END_REF]. Since the purpose of this review is to give an overview of the use of these methods in medical image segmentation (especially of subcortical structures on MRI), and not to detail the mathematical foundations of each method, we refer the reader to the detailed description of the algorithm in [START_REF] Cootes | Active appearance models[END_REF].
Initially, Cootes et al. [START_REF] Cootes | A unified framework for atlas matching using active appearance models[END_REF] demonstrated the application of 2D AAMs to finding structures in brain MR images. Nevertheless, AAMs are not suitable for 3D images in their primary form because the underlying shape representation (i.e. the PDM) becomes impractical in 3D. Some approaches extended them to higher dimensions by using non-linear registration algorithms for the automatic creation of a 3D AAM. Duchesne et al. [START_REF] Duchesne | Appearance-based segmentation of medial temporal lobe structures[END_REF] segmented medial temporal lobe structures by including non-linear registration vector fields in a 3D warp distribution model.
However, a number of considerations have to be taken into account when adapting a generic AAM approach to a specific task. Babalola et al. [START_REF] Babalola | Using parts and geometry models to initialise Active Appearance Models for automated segmentation of 3D medical images[END_REF] built AAMs of some subcortical structures using groupwise registration to establish correspondences, i.e. to initialize the composite model within the new image. To build the AAMs, the intensities along vectors normal to the surface of the structures were sampled, which is known as a profile AAM. In [START_REF] Babalola | 3D brain segmentation using active appearance models and local regressors[END_REF], a global AAM was used to find an approximate position of all the structures in the brain; once this coarse localization was found, the shape and location of each structure were refined using a set of AAMs individually trained for each structure. Although the probability of object occupancy could be derived from the training set, the authors demonstrated that using simple regressors at each voxel, based on the pattern of nearby grey-level intensities, provided better results.
Initialization
Most methods that aim to locate an SSM in a new input image use a local search optimization process. They therefore need to be initialized near the structure of interest, so that the model boundaries fall in the close vicinity of the object boundaries in the image. A straightforward solution to the initialization problem is human interaction. In some cases, it is sufficient to roughly align the mean shape with the input data, whereas in other cases a small number of points is preferred to guide the segmentation process [START_REF] Van Ginneken | Interactive shape models[END_REF]. Alternatively, more robust automatic techniques can be used to initialize the model in the image [START_REF] Babalola | Using parts and geometry models to initialise Active Appearance Models for automated segmentation of 3D medical images[END_REF][START_REF] Patenaude | A Bayesian model of shape and appearance for subcortical brain segmentation[END_REF][START_REF] Bagci | Hierarchical scale-based multiobject recognition of 3-D anatomical structures[END_REF]. Nevertheless, automatic methods can be slow, especially when working with 3D images.
Strengths and Weaknesses
Unlike atlas-based segmentation methods, statistical models require a learned model: the mean shapes, textures and modes of variation that define this model are learned from the training set. If the number of samples used to build the model is insufficient, there is a significant risk of overfitting the shape and/or the appearance. Overfitting arises when the learned model is too specific to the training set and is unable to acceptably fit unseen instances: it performs well on the training samples but quite poorly on new examples. Additionally, if noise along the shapes is learned into the model, robustness when segmenting target images will also be affected.
When using the ASM, the intensity model and the shape model are applied alternately during the optimization process: first, candidate target points are searched in the neighborhood of each landmark point; second, a new ASM shape is fitted through these points. This procedure is repeated iteratively until convergence. The fact that the shape model may be misled if the gray-level appearance model does not select a proper landmark makes ASM methods sensitive to local optima.
Because target points are searched in a locally constrained vicinity of the current estimate of each landmark location, a sufficiently accurate initialization must be provided for the model to converge to the proper shape. Therefore, for both ASM and AAM, the search for the shape and/or appearance requires an initialization, which can be provided either by direct human interaction or by automatic techniques that may turn out to be too slow. If the initial position is too distant from the searched object, in terms of translation, rotation or scale, this can lead to poor object identification.
Deformable models
The term "deformable model" (DM) was pioneered by Terzopoulos et al. [START_REF] Terzopoulos | Deformable models[END_REF] to refer to curves or surfaces, defined in the image domain, and which are deformed under the influence of internal and external forces. Internal forces are related with the curve features and try to keep the model smooth during the deformation process. In the other hand, external forces are the responsible of attracting the model toward features of the structure of interest, and are related with the image features of the adjacent regions to the curve. Hence, DM tackles the segmentation problem by considering an object boundary as a single, connected structure, and exploiting a priori knowledge of object shape and inherent smoothness [START_REF] Terzopoulos | Deformable models[END_REF]. Although DM were originally developed to provide solutions for computer vision applications to natural scenes and computer graphics problems, their applicability in medical image segmentation has already been proven [START_REF] He | A comparative study of deformable contour methods on medical image segmentation[END_REF]. An example of using deformable models to segment the corpus callosum is shown in Figure 3.5 (Images courtesy of [START_REF] Staib | Boundary finding with parametrically deformable models[END_REF]).
According to the type of shape representation used to define the model, DM methods can be categorized into parametric or explicit deformable models [START_REF] Mcintosh | Medial-based deformable models in nonconvex shape-spaces for medical image segmentation[END_REF][START_REF] Székely | Segmentation of 3D objects from MRI volume data using constrained elastic deformations of flexible Fourier surface models[END_REF][START_REF] Kass | Snakes: Active contour models[END_REF][START_REF] Mcinerney | T-snakes: Topology adaptive snakes[END_REF][START_REF] Lee | A 2-D Automatic Segmentation Scheme for Brainstem and Cerebellum Regions in Brain MR Imaging[END_REF] and geometric or implicit deformable models [START_REF] Ghanei | Segmentation of the hippocampus from brain MRI using deformable contours[END_REF][START_REF] Zhao | Segmentation of hippocampus in MRI images based on the improved level set[END_REF][START_REF] Leventon | Statistical shape influence in geodesic active contours[END_REF][START_REF] Tsai | Mutual information in coupled multi-shape model for medical image segmentation[END_REF][START_REF] Yang | 3D image segmentation of deformable objects with joint shape-intensity prior models using level sets[END_REF][START_REF] Osher | Fronts propagating with curvature-dependent speed: algorithms based on Hamilton-Jacobi formulations[END_REF][START_REF] Wang | Boundary finding with correspondence using statistical shape models[END_REF][START_REF] Duncan | Geometric strategies for neuroanatomic analysis from MRI[END_REF][START_REF] Bekes | Geometrical modelbased segmentation of the organs of sight on CT images[END_REF][START_REF] Lee | Segmentation of interest region in medical volume images using geometric deformable model[END_REF].
Parametric deformable models
The first parametric model used in image segmentation was originally introduced by Kass et al. [START_REF] Kass | Snakes: Active contour models[END_REF] under the name of "snakes". It was proposed as an interactive method in which, because of two main limitations, initial contours must be placed within the vicinity of object boundaries. First, the energy of the contour depends on its spatial positioning and changes along the shape; this sensitivity to the initial location obliges the contour to be placed close to the object boundary and leads to failure in case of improper initialization. Second, the presence of noise may cause the contour to be attracted by a local minimum and get stuck in a location that does not correspond to the true boundary. Different approaches have been proposed to overcome these limitations [START_REF] He | A comparative study of deformable contour methods on medical image segmentation[END_REF][START_REF] Mcinerney | T-snakes: Topology adaptive snakes[END_REF]. The method presented in [START_REF] Mcinerney | T-snakes: Topology adaptive snakes[END_REF] provides mechanisms to enable the contour topology to change during the deformation process, and [START_REF] He | A comparative study of deformable contour methods on medical image segmentation[END_REF] presents an extensive study of DMs and of the different types of external forces.
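As a minimal illustration, the classical snake can be run on a 2D slice with scikit-image, starting from a circular contour placed near the structure of interest; the initialization and the energy weights below are illustrative assumptions rather than recommended settings.

```python
import numpy as np
from skimage.filters import gaussian
from skimage.segmentation import active_contour

def snake_segmentation(slice_2d, center, radius, alpha=0.015, beta=10.0, gamma=0.001):
    """Fit a classical snake on a 2D slice, starting from a circular contour placed
    near the structure of interest (illustrating the initialization requirement
    discussed above).  Returns the final contour coordinates in (row, col) order."""
    s = np.linspace(0, 2 * np.pi, 200)
    init = np.column_stack([center[0] + radius * np.sin(s),
                            center[1] + radius * np.cos(s)])   # initial (row, col) points
    smoothed = gaussian(slice_2d, sigma=2, preserve_range=True)  # denoise before fitting
    return active_contour(smoothed, init, alpha=alpha, beta=beta, gamma=gamma)
```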
Regarding the segmentation of subcortical structures, parametric DMs have recently been employed in combination with other approaches [START_REF] Mcintosh | Medial-based deformable models in nonconvex shape-spaces for medical image segmentation[END_REF][START_REF] Székely | Segmentation of 3D objects from MRI volume data using constrained elastic deformations of flexible Fourier surface models[END_REF][START_REF] Lee | A 2-D Automatic Segmentation Scheme for Brainstem and Cerebellum Regions in Brain MR Imaging[END_REF]. An AdaBoost-based algorithm was used in [START_REF] Lee | A 2-D Automatic Segmentation Scheme for Brainstem and Cerebellum Regions in Brain MR Imaging[END_REF] to detect brainstem and cerebellum candidate areas, followed by an active contour model providing the final boundaries. An extension of natural snakes was proposed in [START_REF] Székely | Segmentation of 3D objects from MRI volume data using constrained elastic deformations of flexible Fourier surface models[END_REF], where desirable properties of physical models were combined with Fourier parameterizations of the shape representation and its variability to segment the corpus callosum. In [START_REF] Mcintosh | Medial-based deformable models in nonconvex shape-spaces for medical image segmentation[END_REF], the application of genetic algorithms to DMs was explored for corpus callosum segmentation; genetic algorithms were proposed to reduce typical deformable model weaknesses pertaining to model initialization, pose estimation and local minima, through the simultaneous evolution of a large number of models.
Geometric deformable models
One of the main drawbacks of parametric DMs is the difficulty of naturally handling topological changes such as the splitting and merging of contours, which severely restricts the topological adaptability of the model. To introduce topological flexibility, geometric DMs are implemented implicitly using the level set algorithm developed by Osher and Sethian [START_REF] Osher | Fronts propagating with curvature-dependent speed: algorithms based on Hamilton-Jacobi formulations[END_REF]. These models are formulated as evolving contours or surfaces, usually called fronts, defined as the level set of a higher-dimensional function over the image domain.
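As an illustration, recent scikit-image versions provide a morphological approximation of geodesic active contours that can serve as a compact stand-in for a level-set evolution; the helper below assumes a 2D slice and the parameters are illustrative only.

```python
from skimage.segmentation import (morphological_geodesic_active_contour,
                                  inverse_gaussian_gradient, disk_level_set)

def geodesic_segmentation(slice_2d, center, radius, iterations=200):
    """Level-set style segmentation of a 2D slice: the front evolves on an
    edge-stopping map and is free to change topology during the evolution."""
    gimage = inverse_gaussian_gradient(slice_2d.astype(float), alpha=100, sigma=2)
    init_ls = disk_level_set(slice_2d.shape, center=center, radius=radius)
    # The second positional argument is the number of evolution iterations.
    return morphological_geodesic_active_contour(gimage, iterations,
                                                 init_level_set=init_ls,
                                                 smoothing=1, balloon=1,
                                                 threshold='auto')
```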
Generally, gray-level based methods face difficult challenges such as poor image contrast, noise, and diffuse or even missing boundaries, especially for certain subcortical structures. In most of these situations, algorithms based on prior models can alleviate these issues. The method proposed in [START_REF] Wang | Boundary finding with correspondence using statistical shape models[END_REF] used a systematic approach to determine the boundary of an object, as well as the correspondence of boundary points to a model, by constructing a statistical model of shape variation. Ghanei et al. [START_REF] Ghanei | Segmentation of the hippocampus from brain MRI using deformable contours[END_REF] used a deformable contour technique to customize a balloon model to the subject's hippocampus; in order to avoid local minima due to mismatches between the model edge and multiple edges in the image, their technique incorporates statistical information about the range of allowable shapes for a given structure. Geodesic active contours were extended in [START_REF] Leventon | Statistical shape influence in geodesic active contours[END_REF] by incorporating shape information into the evolution process: PCA applied to level set functions of the object boundaries was employed to form a statistical shape model from the training set, the segmenting curves evolved according to image gradients, and a maximum a posteriori (MAP) estimation recovered the shape and pose.
The use of level set methods to formulate the segmentation problem has been reported to increase the capture range of DMs and to constrain the deformation through the incorporation of prior shape information. Because of these advantages, geometric DMs have been extensively used for the segmentation of subcortical brain structures [START_REF] Ghanei | Segmentation of the hippocampus from brain MRI using deformable contours[END_REF][START_REF] Leventon | Statistical shape influence in geodesic active contours[END_REF][START_REF] Tsai | Mutual information in coupled multi-shape model for medical image segmentation[END_REF][START_REF] Yang | 3D image segmentation of deformable objects with joint shape-intensity prior models using level sets[END_REF][START_REF] Wang | Boundary finding with correspondence using statistical shape models[END_REF][START_REF] Duncan | Geometric strategies for neuroanatomic analysis from MRI[END_REF][START_REF] Bekes | Geometrical modelbased segmentation of the organs of sight on CT images[END_REF][START_REF] Lee | Segmentation of interest region in medical volume images using geometric deformable model[END_REF].
In some situations, texture information is also required to constrain the deformation of the contours. As a consequence, statistical models of both shape and texture are used in addition to purely shape-prior based segmentation methods [START_REF] Cootes | A unified framework for atlas matching using active appearance models[END_REF][START_REF] Cootes | Active appearance models[END_REF]. The modeled structure can then be located by finding the parameters that minimize the difference between the synthesized model image and the target image, in conjunction with the statistical model of the shape based on landmark points and texture.
Strengths and Weaknesses
Contrary to statistical models, deformable models require no training or prior knowledge. These models can evolve to fit the desired shape, showing more flexibility than other methods. Nevertheless, defining suitable stopping criteria might be difficult and depends on the characteristics of the problem.
Parametric deformable models have been successfully employed in a broad range of applications and problems. An important property of this kind of representation is its capability to represent boundaries at sub-grid resolution, which is essential for the segmentation of thin structures. However, they present two main limitations. First, if the variation in size and shape between the initial model and the target object is substantial, the model must be reparameterized dynamically to faithfully recover the boundary of the object. The second limitation concerns the difficulty of dealing with topological changes, such as the splitting or merging of model parts, a capability that is useful to recover multiple objects or an object with unknown topology. Geometric models, however, provide an elegant solution to these main limitations of parametric models. Because they are based on curve evolution theory and the level set method, curves and surfaces evolve independently of the parameterization. Evolving curves and surfaces can therefore be represented implicitly as a level set of a higher-dimensional function, resulting in automatic handling of topological transitions.
Although topological adaptation can be useful in many applications, it can sometimes lead to undesirable results. When applied to noisy images with significant boundary gaps, geometric deformable models may generate shapes whose topology is inconsistent with the actual object. In these situations, ensuring a correct topology is often a necessary condition for subsequent applications, and parametric deformable models are better suited because of their strict control over topology. Additionally, in practice, the design of parametric deformable models is more straightforward thanks to their discrete representation, as opposed to the continuous curves or surfaces of geometric deformable models. A disadvantage shared by both geometric and parametric models is that their robustness is limited to specific types of images: suitable images must provide sufficient edge or region-based information for an explicit modeling, in a deterministic or probabilistic manner, under parametric assumptions. As a consequence, traditional deformable models generally fail to segment images with significant intensity inhomogeneity and/or poor contrast.
Machine learning methods
Machine Learning (ML) techniques have been used extensively in the MRI analysis domain almost since its inception. Artificial Neural Networks (ANN) and Support Vector Machines (SVM) are among the most popular learning methods, used not only for the segmentation of brain anatomical structures [START_REF] Hult | Grey-level morphology combined with an artificial neural networks approach for multimodal segmentation of the Hippocampus[END_REF][START_REF] Morra | Automatic subcortical segmentation using a contextual model[END_REF][START_REF] Morra | Comparison of AdaBoost and support vector machines for detecting Alzheimer's disease through automated hippocampal segmentation[END_REF][START_REF] Magnotta | Measurement of Brain Structures with Artificial Neural Networks: Two-and Three-dimensional Applications 1[END_REF][START_REF] Pierson | Manual and semiautomated measurement of cerebellar subregions on MR im-Bibliography ages[END_REF][START_REF] Golland | Detection and analysis of statistical differences in anatomical shape[END_REF][START_REF] Powell | Registration and machine learning-based automated segmentation of subcortical and cerebellar brain structures[END_REF][START_REF] Spinks | Manual and automated measurement of the whole thalamus and mediodorsal nucleus using magnetic resonance imaging[END_REF][START_REF] Akselrod-Ballin | Atlas guided identification of brain structures by combining 3D segmentation and SVM classification[END_REF][START_REF] Moghaddam | Automatic segmentation of brain structures using geometric moment invariants and artificial neural networks[END_REF], but also for tumor classification [START_REF] Zhou | Extraction of brain tumor from MR images using one-class support vector machine[END_REF][START_REF] Bauer | Fully automatic segmentation of brain tumor images using support vector machine classification in combination with hierarchical conditional random field regularization[END_REF][START_REF] Gasmi | Automated segmentation of brain tumor using optimal texture features and support vector machine classifier[END_REF] or automatic diagnosis [START_REF] Glotsos | Automated diagnosis of brain tumours astrocytomas using probabilistic neural network clustering and support vector machines[END_REF]. Although to a lesser extent, brain structures other than WM, GM and CSF have also benefited from other machine learning approaches, such as k-Nearest Neighbors (KNN) [START_REF] Anbeek | Automatic segmentation of eight tissue classes in neonatal brain MRI[END_REF][START_REF] Murino | Evaluation of supervised methods for the classification of major tissues and subcortical structures in multispectral brain magnetic resonance images[END_REF][START_REF] Larobina | Self-Trained Supervised Segmentation of Subcortical Brain Structures Using Multispectral Magnetic Resonance Images[END_REF]. Such supervised learning based segmentation methods first extract image features carrying information often richer than intensity alone, and then construct a classification model from these features using supervised learning algorithms. We first review typical features used in supervised classification schemes (Section 3.7.1); next, in Section 3.7.2, some of the most common machine learning techniques employed to segment brain structures are presented.
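As a minimal sketch of such a pipeline, the snippet below trains a voxel-wise SVM with scikit-learn from a precomputed feature matrix (built, for instance, from the intensity, spatial and probability features described in the next sections); the classifier choice and hyper-parameters are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def train_voxel_classifier(features, labels):
    """Train a voxel-wise classifier: `features` is an (n_voxels, n_features) array
    and `labels` holds the structure index of every training voxel.  In practice,
    training voxels are usually subsampled to keep the problem tractable."""
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
    clf.fit(features, labels)
    return clf

def classify_volume(clf, features, volume_shape):
    """Apply the trained classifier to the feature matrix of a new volume."""
    return clf.predict(features).reshape(volume_shape).astype(np.uint8)
```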
Features used in segmentation
Among all the information that can be extracted to segment brain structures in medical images, intensity-based, probability-based and spatial information are the most commonly employed features. They also represent the simplest features in terms of complexity.
Intensity Features
Intensity features exploit the intensity of a voxel and the appearance of its vicinity. Researchers have extracted neighborhood information in several ways. In its simplest representation, square patches around a given pixel are used in 2D, with typical patch sizes ranging from 3 to 9 pixels (Figure 3.6). To capture the texture appearance of a voxel and its neighbors, cubic patches of different sizes, usually of size 3, 5 or 7, are extracted in 3D (Figure 3.7). Extracting such cubic patches yields subsets of 27, 125 and 343 intensity values, respectively. Such numbers of voxels, however, sometimes become very expensive and impractical, especially for large structures, where a larger number of instances is required in training. To offer a "cheaper" solution that still captures information as far away as these cubic patches, some works have proposed instead to use crosses orthogonal to the voxel under examination. In this way, capturing the texture appearance within a radius of 2 from the voxel v, for example, leads to a total of 12 voxels, instead of 125 in the case of the cubic patch of size 5, while having the same scope. As an alternative to square and cubic intensity patches and crosses, the gradient direction has been used to capture relevant information about texture appearance. Here, intensity values along the gradient descent are used to characterize the voxel v and its surroundings. Taking intensity values along the maximum gradient direction, from a few voxels inside to a few voxels outside, has a distinct advantage over using neighbor intensity values based on a rectilinear coordinate system. Image intensity has been widely used to segment objects in medical images. Indeed, it represents the fundamental feature utilized by the algorithms pioneering the use of ANN in the area of tissue classification [START_REF] Cagnoni | Neural network segmentation of magnetic resonance spin echo images of the brain[END_REF][START_REF] Clarke | MRI segmentation: methods and applications[END_REF]. Nevertheless, image intensity information alone is not sufficient to distinguish different brain structures, since most of them share similar intensity patterns in MRI. To address this problem, learning based segmentation methods often extract more discriminative features from MRI. In addition to the image intensity values, which we will denote as IIV onwards, of voxels and their neighborhood, probabilistic and spatial information is often used.
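To make this concrete, the following minimal Python/NumPy sketch illustrates how a cubic intensity patch and an orthogonal cross could be extracted around a voxel. The volume, voxel position and radii used here are arbitrary assumptions for illustration and do not correspond to any of the cited works.

```python
import numpy as np

def cubic_patch(volume, v, radius):
    """Extract a cubic intensity patch of side (2*radius + 1) centred on voxel v."""
    x, y, z = v
    return volume[x - radius:x + radius + 1,
                  y - radius:y + radius + 1,
                  z - radius:z + radius + 1].ravel()

def orthogonal_cross(volume, v, radius):
    """Intensities along the three axes through voxel v, excluding the centre
    (e.g. radius 2 -> 12 values instead of the 125 of a cubic patch of size 5)."""
    x, y, z = v
    values = []
    for offset in range(-radius, radius + 1):
        if offset == 0:
            continue
        values.append(volume[x + offset, y, z])
        values.append(volume[x, y + offset, z])
        values.append(volume[x, y, z + offset])
    return np.array(values)

# Example on a synthetic (already normalised) volume
volume = np.random.rand(64, 64, 64)
features = np.concatenate([cubic_patch(volume, (32, 32, 32), 1),      # 27 values
                           orthogonal_cross(volume, (32, 32, 32), 2)])  # 12 values
```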
Probability-based Features
Probability-based features are spatial probabilistic distribution maps for the different structures. They encode the likelihood that a voxel belongs to a given structure: the higher the value of a structure at a given location, the more likely the voxel at that location is to belong to that structure. Probability maps generated for machine learning based systems can be seen as a sort of probabilistic atlas, but with more relaxed registration constraints. The labeled patients in the training set are employed to build the map of probabilities. For the probabilities on the map to be meaningful, the labels must be referred to the same reference system; to do so, an alignment of both the MRI images and the labels is required. Once all the patients have been aligned, the labels are accumulated in a common volume, creating the probability map (Figure 3.8).
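As a simple illustration, and assuming the binary label masks of the training subjects have already been registered to a common reference space, a probability map could be built as in the following Python sketch; the variable names are purely illustrative.

```python
import numpy as np

def build_probability_map(label_volumes):
    """Average the aligned binary label masks of the training subjects.

    label_volumes: list of 3D arrays (1 inside the structure, 0 outside),
    all already registered to a common reference space. The returned volume
    holds, at each voxel, the fraction of training subjects for which that
    voxel belongs to the structure.
    """
    stacked = np.stack(label_volumes, axis=0).astype(np.float32)
    return stacked.mean(axis=0)

# prob_map = build_probability_map(aligned_labels)
# p = prob_map[x, y, z]   # probability feature for voxel (x, y, z)
```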
Spatial-based Features
Apart from image intensity and probability information, spatial knowledge about the voxel under examination can be employed. Although Cartesian coordinates (x, y, z) are frequently exploited, spherical coordinates (r, θ, ϕ) have also been used to capture the spatial information [START_REF] Kim | Multi-structure segmentation of multi-modal brain images using artificial neural networks[END_REF]. Spatial information can aid classification in several ways. First, the number of possible anatomical classes, such as the brainstem or the optic chiasm, at a given global position in the brain, as specified by an atlas coordinate, is often relatively small. Second, neuroanatomical structures occur in a characteristic spatial pattern relative to one another; for instance, the amygdala is anterior and superior to the hippocampus. Third, many tissue classes, such as gray or white matter, have spatially heterogeneous MRI intensity properties that vary in a spatially predictable fashion.
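For illustration, the following sketch computes both Cartesian and spherical coordinates of a voxel with respect to an arbitrary reference point; the choice of reference point and the feature layout are assumptions made only for the example.

```python
import numpy as np

def spatial_features(v, origin):
    """Cartesian (x, y, z) and spherical (r, theta, phi) coordinates of voxel v,
    expressed with respect to a reference point (e.g. the volume centre)."""
    x, y, z = np.asarray(v, dtype=np.float64) - np.asarray(origin, dtype=np.float64)
    r = np.sqrt(x**2 + y**2 + z**2)
    theta = np.arccos(z / r) if r > 0 else 0.0   # polar angle
    phi = np.arctan2(y, x)                       # azimuthal angle
    return np.array([x, y, z, r, theta, phi])

# e.g. spatial_features((40, 52, 33), origin=(32, 32, 32))
```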
Learning Methods
The goal of many learning algorithms is to search a family of functions in order to identify the member of that family which minimizes a training criterion. The choice of this family of functions, as well as how its members are parameterized, is of vital importance. Even though there is no universally optimal choice of parametrization of a family of functions (also called architecture), some architectures may be appropriate, or not, for a broad class of learning tasks and data distributions. Different architectures have different peculiarities that can be suitable or not, depending on the learning task at hand. One of these characteristics, which has attracted a lot of interest in the research community in recent years, is the depth of the architecture. Depth corresponds to the number of hidden and output layers in the case of multilayer neural networks, which will be introduced later. Typical shallow neural networks are built of one to three hidden layers. In the case of support vector machines, for instance, the depth is considered to be equal to two [START_REF] Bengio | Scaling learning algorithms towards AI[END_REF]. Architectures composed of very few layers are known as shallow architectures. Multilayer neural networks and support vector machines are among the most widely employed shallow architectures for classification. In addition, a method that represents the simplest form of machine learning has also been employed to segment brain structures: KNN. Even though other methods in this category have been employed to segment either tumors or the brain into its primary classes, their contribution to the segmentation of critical brain structures has been marginal; they are therefore not considered in this review.
K-Nearest neighbors
K-Nearest neighbors (KNN) classification is based on the assignment of samples, i.e. image voxels, to a class, i.e. tissue type, by searching for samples in a learning set with approximately the same features. The learning set, generated from the labeled voxels, is placed in the feature space according to the feature values of its samples. A new image voxel is classified by inserting it in the feature space and inspecting the K learning samples that are closest to it according to a distance measure d. The tissue label is then assigned to the target voxel based on a voting strategy among the tissues assigned to the K training voxels [START_REF] Webb | Statistical pattern recognition[END_REF]. A common way to do this is to assign the most frequent class among the K neighbors to this voxel.
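A minimal Python/NumPy sketch of this voting scheme is given below; the feature arrays, the Euclidean distance and the value of K are illustrative assumptions and do not reproduce any particular implementation from the cited works.

```python
import numpy as np

def knn_classify(train_features, train_labels, query, k=5):
    """Assign to `query` the most frequent label among its k nearest
    training samples (Euclidean distance, majority voting)."""
    distances = np.linalg.norm(train_features - query, axis=1)
    nearest = np.argsort(distances)[:k]          # indices of the k closest samples
    votes = train_labels[nearest]                # their (non-negative integer) labels
    return np.bincount(votes).argmax()           # majority vote

# train_features: (n_samples, n_features) array built from labeled voxels
# train_labels:   (n_samples,) array of integer tissue labels
# label = knn_classify(train_features, train_labels, query_voxel_features, k=9)
```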
Although KNN is very simple and easy to understand, it has been successfully employed for the segmentation of brain structures on MRI [START_REF] Anbeek | Automatic segmentation of eight tissue classes in neonatal brain MRI[END_REF][START_REF] Murino | Evaluation of supervised methods for the classification of major tissues and subcortical structures in multispectral brain magnetic resonance images[END_REF][START_REF] Larobina | Self-Trained Supervised Segmentation of Subcortical Brain Structures Using Multispectral Magnetic Resonance Images[END_REF]. Anbeek et al. [START_REF] Anbeek | Automatic segmentation of eight tissue classes in neonatal brain MRI[END_REF] proposed an automatic approach based on KNN and multi-parametric MRI for the probabilistic segmentation of eight tissue classes in neonatal brains. The evaluated structures included the brainstem and the cerebellum. Intensity values from the different MRI modalities, T1- and T2-weighted (T1w and T2w, respectively), were employed as features. In addition to intensity values, spatial information for each voxel was also used; thus, each voxel was described by intensity and spatial features. Based on these features, each voxel was assigned to one of the eight tissue classes using a KNN-based classifier. Another attempt to segment brain structures by employing multi-parametric MRI in a KNN-based classifier was presented in [START_REF] Murino | Evaluation of supervised methods for the classification of major tissues and subcortical structures in multispectral brain magnetic resonance images[END_REF]. In addition to T1w and T2w sequences, Proton Density weighted (PDw) images were used to generate the voxel intensity information. As in [START_REF] Anbeek | Automatic segmentation of eight tissue classes in neonatal brain MRI[END_REF], the authors included spatial information in the feature array by employing the x, y and z coordinates of the voxel under examination. More recently, Larobina et al. [START_REF] Larobina | Self-Trained Supervised Segmentation of Subcortical Brain Structures Using Multispectral Magnetic Resonance Images[END_REF] investigated the feasibility of KNN to segment four subcortical brain structures: caudate, thalamus, pallidum, and putamen. As in previous works, a combination of intensity- and spatial-based information was employed to classify voxels. In their work, multispectral MRI from two studies were used.
While the first group was composed of T1w, T2w and PDw images, the second group contained T1w, T2w and FLAIR images. Additionally, they proposed the use of atlas-guided training as an effective way to automatically define a representative and reliable training dataset, giving supervised methods the chance to successfully segment brain MRI images without the need for user interaction.
One of the main advantages of a KNN-based classifier is that it is a very simple classifier that works well on basic recognition problems. Due to the nature of its mathematical background, training is performed relatively fast. Nevertheless, it does not learn anything from the training data and simply uses the training data themselves for classification. To predict the label of a new instance, the KNN algorithm finds the K closest neighbors to the new instance in the training data, and the predicted class label is then set as the most common label among these K closest neighboring points. The main disadvantage of this approach is that the algorithm must compute the distance to, and sort, all the training data at each prediction, which can be slow when there is a large number of training examples; the computational cost is therefore high, since the distance from each query instance to all training samples must be computed. Another disadvantage of not learning anything from the training data is that the resulting model may not generalize well and may not be robust to noisy data. Further, changing K may affect the resulting predicted class label. In addition, if the available training set is small, there exists a high risk of overfitting. Finally, prediction accuracy can quickly degrade when the number of attributes grows.
Artificial neural networks
An artificial neural network (ANN) is an information processing system containing a large number of interconnected individual processing components, i.e. neurons. Motivated by the way the human brain processes input information, the neurons of a network work together in a distributed manner to learn from the input knowledge, process this information and generate a meaningful response. Each neuron n inside the network processes its input through its own weights w n, a bias value b n, and a transfer function applied to the weighted sum of the inputs plus the bias. Depending on the transfer function selected and on the way the neurons are connected, distinct neural networks can be constructed.
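The following minimal sketch illustrates this computation for a single neuron and a tiny feed-forward network; the sigmoid transfer function and the layer sizes are arbitrary choices made only for illustration.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def neuron_output(x, w, b, transfer=sigmoid):
    """Single artificial neuron: weighted sum of the inputs plus a bias,
    passed through a transfer (activation) function."""
    return transfer(np.dot(w, x) + b)

# A tiny two-layer feed-forward pass (sizes chosen arbitrarily)
x = np.random.rand(6)                          # input feature vector
W1, b1 = np.random.randn(4, 6), np.zeros(4)    # hidden layer parameters
W2, b2 = np.random.randn(2, 4), np.zeros(2)    # output layer parameters
hidden = sigmoid(W1 @ x + b1)
output = sigmoid(W2 @ hidden + b2)
```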
Because of their efficacy in solving optimization problems, ANN have been integrated in segmentation algorithms to delineate subcortical structures [START_REF] Hult | Grey-level morphology combined with an artificial neural networks approach for multimodal segmentation of the Hippocampus[END_REF][START_REF] Magnotta | Measurement of Brain Structures with Artificial Neural Networks: Two-and Three-dimensional Applications 1[END_REF][START_REF] Pierson | Manual and semiautomated measurement of cerebellar subregions on MR im-Bibliography ages[END_REF][START_REF] Powell | Registration and machine learning-based automated segmentation of subcortical and cerebellar brain structures[END_REF][START_REF] Spinks | Manual and automated measurement of the whole thalamus and mediodorsal nucleus using magnetic resonance imaging[END_REF][START_REF] Moghaddam | Automatic segmentation of brain structures using geometric moment invariants and artificial neural networks[END_REF]. In the method proposed in [START_REF] Hult | Grey-level morphology combined with an artificial neural networks approach for multimodal segmentation of the Hippocampus[END_REF], grey-level dilated and eroded versions of the MR T1- and T2-weighted images were used to minimize leaking from the HC to surrounding tissue combined with possible foreground tissue. An ANN was applied to a manually selected bounding box, and its result was used as an initial segmentation and then as input to the grey-level morphology-based algorithm. Magnotta et al. [START_REF] Magnotta | Measurement of Brain Structures with Artificial Neural Networks: Two-and Three-dimensional Applications 1[END_REF] used a three-layer ANN to segment the caudate, the putamen and the whole brain. The ANN was trained using a standard back-propagation algorithm, and a piecewise linear registration was used to define an atlas space in which a probability map was generated and used as an input feature of the ANN. This approach was later employed by [START_REF] Spinks | Manual and automated measurement of the whole thalamus and mediodorsal nucleus using magnetic resonance imaging[END_REF] and extended by [START_REF] Pierson | Manual and semiautomated measurement of cerebellar subregions on MR im-Bibliography ages[END_REF] through the incorporation of a landmark registration to segment the cerebellar regions. Based on the success of applying ANN approaches to segment cerebellar regions by incorporating a higher dimensional transformation, Powell et al. [START_REF] Powell | Registration and machine learning-based automated segmentation of subcortical and cerebellar brain structures[END_REF] extended the initial algorithm of [START_REF] Magnotta | Measurement of Brain Structures with Artificial Neural Networks: Two-and Three-dimensional Applications 1[END_REF] to use a high dimensional intensity-based transform. Further, they compared the use of ANN with SVM, as well as with more classical approaches such as single-atlas segmentation and probability-based segmentation. In [START_REF] Moghaddam | Automatic segmentation of brain structures using geometric moment invariants and artificial neural networks[END_REF], a two-stage method to segment brain structures was presented, where geometric moment invariants (GMI) were used to improve the differentiation between brain regions. In the first stage, the GMI were used along with voxel intensity values as input features, and a signed distance function of the desired structure as the output of the network.
To represent the brain structures, the GMI were employed at 8 different scales, using one ANN for each scale. In the second stage, the network was employed as a classifier rather than as a function approximator. Some limitations must be taken into account when ANN are employed. Their performance strongly depends on the training set, and good results are achieved only for those structures for which a suitable training set can be developed. This may limit their value for inherently difficult structures that human beings have difficulty delineating reliably, such as the thalamus [START_REF] Magnotta | Measurement of Brain Structures with Artificial Neural Networks: Two-and Three-dimensional Applications 1[END_REF]. As a consequence, ANN must be well designed, and different types of ANN may require the development of specific training data sets, depending on the structure-identification task.
Support vector machine
Another widely employed ML system, which also represents a state-of-the-art classifier, is the Support Vector Machine (SVM). It was originally proposed by Vapnik [START_REF] Cortes | Support-vector networks[END_REF] and [START_REF] Vapnik | Statistical learning theory[END_REF] for binary classification. In contrast with other machine learning approaches such as artificial neural networks, which aim at reducing the empirical risk, SVM implements structural risk minimization (SRM), which minimizes the upper bound of the generalization error.
Support vector machines (SVM), often called kernel-based methods, have been extensively studied and applied to several pattern classification and function approximation problems. Basically, the main idea behind SVM is to find the largest-margin hyperplane that separates two classes. The minimal distance from the separating hyperplane to the closest training example is called the margin. Thus, the optimal hyperplane is the one providing the maximal margin, which represents the largest separation between the classes; it is the hyperplane such that the distance to the closest point of each of the two groups is largest. The training samples that lie on the margin are referred to as support vectors, and conceptually they are the most difficult data points to classify. Therefore, the support vectors define the location of the separating hyperplane, being located at the boundary of their respective classes. By employing kernel transformations to map the objects from their original space into a higher dimensional feature space [START_REF] Burges | A tutorial on support vector machines for pattern recognition[END_REF], SVM can separate objects which are not linearly separable (Figure 3.10). The SVM is a non-probabilistic supervised binary classifier that learns a model representing the instances as points in space, mapped in such a way that instances of different classes are separated by a hyperplane in a high dimensional space. If the dataset is not linearly separable in that space, the hyperplane will fail to classify properly; this can be solved by mapping the dataset instances into a higher dimensional space using a kernel function, thus making the separation of the dataset easier. Support vector machines represent one of the latest and most successful statistical pattern classifiers and have received a lot of attention from the machine learning and pattern recognition community. Although SVM approaches have mainly been employed for brain tumor recognition [START_REF] Zhou | Extraction of brain tumor from MR images using one-class support vector machine[END_REF][START_REF] Bauer | Fully automatic segmentation of brain tumor images using support vector machine classification in combination with hierarchical conditional random field regularization[END_REF][START_REF] Gasmi | Automated segmentation of brain tumor using optimal texture features and support vector machine classifier[END_REF] in the field of medical image classification, recent works have also used them for tissue classification [START_REF] Akselrod-Ballin | Atlas guided identification of brain structures by combining 3D segmentation and SVM classification[END_REF] and for the segmentation of anatomical human brain structures [START_REF] Morra | Automatic subcortical segmentation using a contextual model[END_REF][START_REF] Morra | Comparison of AdaBoost and support vector machines for detecting Alzheimer's disease through automated hippocampal segmentation[END_REF][START_REF] Golland | Detection and analysis of statistical differences in anatomical shape[END_REF][START_REF] Powell | Registration and machine learning-based automated segmentation of subcortical and cerebellar brain structures[END_REF].
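As an illustration of this idea, the following sketch trains a kernel SVM on synthetic voxel features using scikit-learn; the data, the RBF kernel and the hyper-parameter values are assumptions made for the example and do not correspond to any of the cited pipelines.

```python
import numpy as np
from sklearn.svm import SVC

# X: (n_voxels, n_features) feature matrix, y: binary labels (structure / background)
X = np.random.rand(200, 10)
y = np.random.randint(0, 2, size=200)

# The radial basis function kernel implicitly maps the features into a
# higher-dimensional space in which a maximum-margin hyperplane is sought;
# C controls the trade-off between margin width and misclassified samples.
clf = SVC(kernel="rbf", C=1.0, gamma="scale")
clf.fit(X, y)

predicted = clf.predict(np.random.rand(5, 10))
```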
The growing interest in SVM for classification problems lies in its good generalization ability and its capability to successfully classify non-linearly separable data. First, SVM attempts to maximize the separation margin around the hyperplane between classes, so the generalization performance does not drop significantly even when the training data are limited. Second, by employing kernel transformations to map the objects from their original space into a higher dimensional feature space [START_REF] Burges | A tutorial on support vector machines for pattern recognition[END_REF], SVM can separate objects which are not linearly separable. Moreover, it can accurately combine many features to find the optimal hyperplane. Hence, SVM globally and explicitly maximizes the margin while minimizing the number of wrongly classified examples, using any desired linear or non-linear hypersurface.
Powell et al. [START_REF] Powell | Registration and machine learning-based automated segmentation of subcortical and cerebellar brain structures[END_REF] compared the performance of ANN and SVM when segmenting subcortical (caudate, putamen, thalamus and hippocampus) and cerebellar brain structures. In their study, the same input vector was used for both machine learning approaches; it was composed of the following features: probability information, spherical coordinates, area iris values, and signal intensity along the image gradient. Although the results obtained were very similar, the ANN-based segmentation approach slightly outperformed SVM. However, they employed a reduced number of brains for testing (only 5 brains) and 25 manually selected features, which means that generalization to other datasets was not guaranteed. PCA was used in [START_REF] Golland | Detection and analysis of statistical differences in anatomical shape[END_REF] to reduce the size of the input training pool, followed by an SVM classification to identify statistical differences in the hippocampus. In this work, in addition to the input features used in [START_REF] Powell | Registration and machine learning-based automated segmentation of subcortical and cerebellar brain structures[END_REF], a geodesic image transform map was added to the input vector of the SVM. However, the selection of proper discriminative features is not a trivial task, which has already been explored in the SVM domain. To overcome this problem, the AdaBoost algorithm was combined with an SVM formulation [START_REF] Morra | Comparison of AdaBoost and support vector machines for detecting Alzheimer's disease through automated hippocampal segmentation[END_REF]. AdaBoost was used in a first stage to select the features that most accurately span the classification problem; then, SVM fused the selected features together to create the final classification. Furthermore, they compared four automated methods for hippocampal segmentation using different machine learning algorithms: hierarchical AdaBoost, SVM with manual feature selection, hierarchical SVM with automated feature selection (Ada-SVM), and a publicly available brain segmentation package (FreeSurfer). In this study, they evaluated the benefits of combining the AdaBoost and SVM approaches sequentially.
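A pipeline in the spirit of the PCA-plus-SVM scheme mentioned above could be sketched as follows; the feature dimensionality, the number of retained components and the synthetic data are illustrative assumptions only.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X = np.random.rand(300, 25)             # e.g. 25 hand-crafted features per voxel
y = np.random.randint(0, 2, size=300)   # binary structure / background labels

# Standardize, reduce the dimensionality of the feature pool with PCA,
# then classify with a kernel SVM.
model = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(kernel="rbf"))
model.fit(X, y)
labels = model.predict(np.random.rand(4, 25))
```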
Discussion
Generally, none of the presented methods can, on its own, handle the segmentation of brain subcortical structures in the presence of brain lesions. Typically, the methods discussed in this survey rely on the information available in a training set. However, subjects presenting brain lesions are usually not representative of a large set of patients, because lesions may strongly differ and produce arbitrary deformations of the subcortical structures. As a consequence, such subjects are not included in the training stage, and the deformations of the structures caused by the lesion therefore cannot be modeled. A summary of the referenced methods to segment subcortical structures is presented in Table 3.1. Additionally, details of the validation process for these methods are presented in Tables 3.2 and 3.3. The definition and description of a validation process is of vital importance to evaluate segmentation methods in medical images. Nevertheless, since this process is not standardized, many works do not fully report these details. In these two tables, we did our best to summarize this important information.
Model-based approaches, such as atlas or statistical models, tend to perform reasonably well when there is no large anatomical deviation between the training set and the input case to analyze. Nevertheless, these approaches might completely fail if the shape variability is not properly modeled, which often occurs in the presence of brain lesions. In addition to shape variability, registration plays an important role in atlas-based approaches. Registrations with a large initial dissimilarity in shape between the atlases and the target might not be handled properly. This can lead to inappropriate weights when there are initially large shape differences, resulting in incorrect image correspondences established by the atlas registration. On the other hand, statistical model approaches are only capable of generating a plausible range of shapes, whereas the presence of a tumor might deform a given structure into an unpredictable shape. This causes the failure of SM approaches, because of their inability to generate new, unknown shapes which differ considerably from the shapes in the training set.
In the context of SMs, PCA was originally used in a framework called Active Shape Model (ASM) [START_REF] Cootes | Active shape modelstheir training and application[END_REF] and has become a standard technique for shape analysis in segmentation tasks, as well as the preferred methodology when trying to fit a model to new image data. Compared to the ASM, the AAM makes excessive use of memory when it creates the 3D texture model, and the implementation of the ASM is relatively easier than that of the AAM. While ASMs search around the current location and along profiles, AAMs only examine the image under the current area of interest, which generally gives ASMs a larger capture range. However, the use of information solely around the model points means that ASMs may be less reliable, since, unlike AAMs, they do not profit from all the texture information available across a structure. Another interesting advantage of AAMs, reported by [START_REF] Cootes | A unified framework for atlas matching using active appearance models[END_REF], is related to the number of landmarks required to build a statistical model. Compared to ASMs, AAMs can build a convincing model with a relatively small number of landmarks, since any extra shape variation may be encoded by additional modes of the texture model. Consequently, although the ASM is faster and achieves more accurate feature point location than the AAM, the AAM gives a better match to the image texture, since it explicitly minimizes texture errors. Furthermore, the ASM is less powerful in detecting the global minimum and may converge to a local minimum due to multiple nearby edges in the image. These situations usually make the AAM more robust than the ASM. Although the main advantage of using PCA in SMs is to constrain the segmentation task to the space spanned by the eigenvectors and their modes of variation, it has two major limitations. First, the deformable shapes that can be modeled are often very restricted. Second, finer local variations of the shape model are usually not encoded in these eigenvectors. Consequently, new instances containing these small variations will not be properly fitted by the model instance.
Contrary to statistical models, DM provide flexibility and do not require explicit training, though they are sensitive to initialization and noise. SMs may lead to greater robustness, but they are more rigid than DM and may be over-constrained, not generalizing well to the unsampled population, particularly for small amounts of training data relative to the dimensionality. This situation can appear with new input examples presenting pathologies, lesions or high variance with respect to the training set. Models having local priors similar to the DM formulation do not have this problem: they will easily deform to the highly complex shapes found in the unseen image. Hence, many methods attempt to find a balance between the flexibility of the DM and the strict shape constraints of the SM by fusing learned shape constraints with the deformable model.
Notwithstanding, some important limitations have to be taken into account when working with generic parametric DM. First, if the stopping criterion is not defined properly, or the boundaries of the structures are noisy, DM may get stuck in a local minimum which does not correspond to the desired boundary. Second, in situations where the initial model and the desired object boundary differ greatly in size and shape, the model must be reparameterized dynamically to faithfully recover the object boundary. Methods for reparameterization in 2D are usually straightforward and require moderate computational overhead; however, reparameterization in 3D requires complicated and computationally expensive methods. Further, parametric DM have difficulties with topological adaptation, since a new parameterization must be constructed whenever a topology change occurs, which may require sophisticated schemes. This issue can be overcome by using LSs. Moreover, as DM represent a local search, they must be initialized near the structure of interest.
By introducing machine learning methods, algorithms developed for medical image processing often become more intelligent than conventional techniques. Improvements in the resulting relative overlaps came from the application of machine learning methods including ANN and SVM [START_REF] Powell | Registration and machine learning-based automated segmentation of subcortical and cerebellar brain structures[END_REF]. A comparison carried out in this work between four methods (template based, probabilistic atlas, ANN and SVM) showed that the machine learning algorithms outperformed the template and probabilistic-based methods in terms of relative overlap. There was also little disparity between the ANN- and SVM-based segmentation algorithms. ANN training took significantly longer than SVM training, but the trained ANN can be applied more quickly to segment the regions of interest. It was reported that it took a day to train an ANN for the classification of only one structure from the others, even though randomly sampled data were used instead of the whole dataset.
Machine learning techniques have therefore been shown to outperform other, more traditional, approaches in segmenting brain structures. Recent developments in medical imaging acquisition techniques have increased the complexity of image analysis, bringing new challenges in which the analysis of large amounts of data is required. In this context, we believe that machine learning techniques are well suited to deal with these new challenges.
However, a new area of machine learning has recently emerged with the intention of moving machine learning closer to one of its original purposes: Artificial Intelligence. This area is known as deep learning. Recent progress in using deep networks for image recognition, speech recognition and other applications has shown that they currently provide the best solutions to many of these problems. Therefore, we consider the use of deep learning to address the problem of segmenting brain structures in radiation therapy. The next chapter introduces the reader to the context of deep learning and its use in this dissertation.

Chapter 4
Our Contribution

"I have not failed. I have just found 10,000 ways that will not work."
Thomas A. Edison
This chapter introduces the main contributions of this thesis. The typical setting of a machine learning classifier mainly involves two elements: the learning method and the set of features. On the one hand, we propose to employ a stack of denoising auto-encoders in a deep fashion to segment the OARs. On the other hand, we propose the use of new features to achieve better performance. These two components of the classifier are the cornerstone of this dissertation. To put our proposal in context, some fundamental notions of machine learning, such as the representation of the data and the classification task, are briefly presented in the first section. Next, the deep learning technique employed in our work is introduced. Then, in Section 4.3, the features proposed throughout our work are detailed. Finally, in the last section of this chapter, the steps to train the deep network are explained.
Introduction to Machine Learning
The endeavor to understand intelligence implies building theories and models of brains and minds, both natural as well as artificial. From the earliest writings of India and Greece, this has been a central problem in philosophy. With the arrival of the digital computer in the 1950's, this became a central concern of computer scientists as well. Thanks to the parallel development of the theory of computation, a new set of tools with which to approach the problem through analysis, design, and evaluation of computers and programs exhibiting some aspects of intelligent behavior was provided. The ability to recognize and classify patterns or to learn from experience were some of these intelligent behaviors [START_REF] Honavar | Artificial intelligence: An overview[END_REF].
Among the different ways to define the notion of intelligence, we interpret it as the ability to take the right decisions, according to some criterion. Taking appropriate decisions generally requires some sort of knowledge, which is utilized to interpret sensory data; decisions are then taken based on that information. Nowadays, as a result of all the programs that humans have crafted, computers possess a form of intelligence of their own. This allows computers to easily carry out tasks that might be intellectually difficult for human beings. Nevertheless, tasks that are effortlessly done by humans and animals might still remain out of reach for computers. Many of these tasks fall under the label of Artificial Intelligence (AI).
The reasons for failure in such tasks can be summarized as the lack of explicit information when trying to transfer the knowledge to the machine; in other words, situations in which a computer program cannot be written directly to solve a given problem. This commonly occurs when we, humans, know how to perform an action or a task but are not able to explain our expertise. Learning is therefore required for the machine to execute such a task. In this way, computers learn from experience and understand the world in terms of a hierarchy of concepts, where each concept is defined in terms of its relation to simpler concepts. This hierarchical organization allows the computer to learn complex concepts by building them from simpler ones. The capability of AI-based systems to acquire their own knowledge by extracting patterns from raw data is known as machine learning (ML). Consider, as an example, the problem of speech recognition. This task is apparently done without any difficulty, but explaining how we do it is not straightforward. Due to differences in gender, age or accent, for example, there exists a speaker variability which makes different people utter the same word differently. We can easily recognize who speaks, or to which kind of population a given utterance belongs, because of our experience. In machine learning, by contrast, the approach consists of gathering a large collection of sample utterances from different people and learning how to map them to words. Thus, the machine learns how to automatically extract the algorithm to perform this task. In short, machine learning involves training a computer system to perform some task, rather than directly programming the system to perform the task.
The "Task"
One of the main reasons why machine learning has been especially interesting is the variety of tasks that can be achieved with it. From an engineering point of view, ML has brought us the capability to approach tasks that would be too hard to solve with hand-crafted computer programs. From a scientific point of view, understanding machine learning has provided us with knowledge of the principles that govern intelligent behavior, which establishes the basis to accomplish certain tasks.
Hence, the learning process itself is not the aforementioned task; learning is the process of acquiring the ability to achieve the task. For instance, if we want a car to be able to drive autonomously, then driving is the task. To accomplish the task we could either program the car to learn to drive, or instead directly write a computer program that specifies how to drive manually. We therefore see that machine learning can be employed to solve many kinds of tasks. Nevertheless, one of the most common tasks, which is also the task addressed in this dissertation, is classification.
Classification entails assigning an observation to a category or class. To solve this task, the learning algorithm is typically asked to build a function f : R^n → {1, ..., k} which can be applied to any input. The output of this function, f(x), can then be interpreted as an estimate of the class to which x belongs. Methods used for classification often predict the probability that an observation belongs to each of the categories, or classes, of a qualitative variable, and use these probabilities as the basis for the final classification. Consider object recognition as an example of classification: an image usually represents the input x, and the output f(x) is a numeric value which identifies the object in the image.
Learning how to classify objects into one of a pre-specified set of categories or classes is a characteristic of intelligence that has been of keen interest to researchers in psychology and computer science. Identifying the common core characteristics of a set of objects that are representative of their class is of enormous use in focusing the attention of a person or computer program. For example, to determine whether an animal is a zebra, people know that looking for stripes is much more meaningful than examining its tail or ears. Stripes alone are not sufficient to form a class description for zebras, since tigers have them also, but they are certainly one of the important characteristics. Thus, stripes strongly figure in our concept or generalization of what zebras are. The ability to perform classification and to be able to learn to classify gives people and computer programs the power to make decisions. The efficacy of these decisions is affected by performance on the classification task, which in turn strongly depends on the representation of the data.
Data Representation
The choice of data representation plays a crucial role in the performance of an ML-based classifier. In a typical machine learning task, data are represented as a table of examples or instances. Each instance is described by a fixed number of measurements, or features, along with a label that denotes its class. Features, which are also sometimes called attributes, are typically of one of two types: nominal or numeric. While the former are members of an unordered set, the latter are represented by real numbers. Table 4.1 shows ten instances of benign and malignant tumors according to some of their characteristics. Each instance is a tumor described in terms of the attributes size, homogeneity and shape, along with the class label which indicates whether the tumor is benign or malignant. During learning, the correlation between these features and the various outcomes is learned, and this is then employed to make predictions on new, unseen instances. To illustrate the importance of selecting the proper representation of the data for a given problem, two different representations of the same data are shown in Figure 4.1. The available data are sampled according to the points' locations or coordinates. A simple classification task would be to separate the two data categories by just drawing a line between the two groups. However, whereas in the example where the data are represented by Cartesian coordinates the task is impossible, in the example representing the data with polar coordinates the task becomes simple to solve with a vertical line, as illustrated in the sketch below.
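The following small NumPy example reproduces this situation with two synthetic concentric rings of points: they cannot be separated by a line in Cartesian coordinates, but a single threshold on the radius separates them perfectly once expressed in polar coordinates. The data are, of course, synthetic and chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two concentric "rings": class 0 (inner) and class 1 (outer), not linearly
# separable in Cartesian coordinates.
angles = rng.uniform(0, 2 * np.pi, size=(2, 200))
radii = np.stack([rng.uniform(0.5, 1.0, 200),     # class 0
                  rng.uniform(2.0, 2.5, 200)])    # class 1
x, y = radii * np.cos(angles), radii * np.sin(angles)

# In polar coordinates, a threshold on the radius (a "vertical line" in the
# (r, theta) plane) separates the two classes perfectly.
r = np.sqrt(x**2 + y**2)
threshold = 1.5
accuracy = np.mean((r > threshold) == np.array([[False], [True]]))
print(accuracy)   # 1.0 for this synthetic example
```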
This dependence on data representations is a phenomenon that commonly appears throughout computer science. Operations such as searching a collection of data can proceed exponentially faster if the collection is structured and indexed intelligently. Thus, we can assume that many AI tasks can be easily solved by designing the proper set of features for a specific task. For example, as illustrated in the case of tumor characterization (Table 4.1), a useful feature for representing a tumor is its shape. It may be useful for tumor characterization because the type of tumor is often determined, together with other factors, by the nature of its shape. Shape therefore gives a strong clue as to whether a tumor is benign or malignant.
However, for many tasks, knowing which features should be extracted is not trivial. For instance, following the tumor example, suppose that we would like to write a computer program to detect tumors in medical images. We, or doctors, know what tumors may look like, so we might like to use the appearance of a tumor as a feature. Unfortunately, it is very difficult to describe exactly what a tumor looks like in terms of pixel values. This becomes even harder when combining multiple image sequences. One way to tackle this problem is to use ML to discover not only the mapping from a representation to an output, but also the data representation itself. This approach is known as representation learning.
We have seen that, in general, a good data representation is one that makes the subsequent learning task easier. Generally, designing features aims at separating the factors of variation that explain the observed data. Hence, hand-designed representations of the data usually provide satisfactory classification performance. Nevertheless, learned representations typically result in much better performance, since the best data configuration is represented in a more compressed and meaningful way. Although sophisticated algorithms exist to learn data representations, factors or sources of variation still introduce a major source of difficulty in many real-world AI applications: they influence every single piece of observed data. In such situations, the factors of variation must be disentangled and the irrelevant factors discarded. Nevertheless, this is not straightforward, and a complex understanding of the data is required to identify such high-level abstract features. To solve this central issue in representation learning, representations that are expressed in terms of other, simpler representations are introduced. This is exploited in deep learning, which will be detailed later on.
Learning Algorithms
A learning algorithm, or an induction algorithm, forms concept descriptions from known data or experience. Concept descriptions are often referred to as the knowledge or model that the learning algorithm has induced from the input data. It then models the function that will perform the classification task from the representation of the given data. Knowledge may be represented differently from one algorithm to another.
Advantageously, while most conventional computer programs are explicitly programmed for each process, ML-based systems are able to learn a given task, regardless of its complexity. Following the "divide and conquer" principle, a complex problem can be decomposed into simpler tasks in order to understand and solve it. Artificial Neural Networks (ANN) represent one approach to achieve this. An ANN is a massively parallel computing system consisting of an extremely large number of simple processors, i.e. neurons, with many interconnections between them, i.e. weights. Learning in an ANN is performed using algorithms designed to optimize the strength of the connections in the network. A network can be subject to supervised or unsupervised learning. For learning to be referred to as supervised, an external criterion has to be used and matched by the network output. Otherwise, learning is termed unsupervised, or self-organizing. In this approach, no sample outputs are provided to the network against which it can measure its predictive performance for a given vector of inputs. As a result, there is more interaction between neurons, often in the form of feedback and intralayer connections, which promotes self-organization. A detailed explanation of ANNs can be found in Appendix A.
The purpose of this section is to review the deep learning technique explored in this work, which can be applied to the problem of segmenting critical structures on MRI scans during radiation treatment planning for brain cancer. Throughout this thesis, two learning algorithms are used as a basis for comparing performance. The first of these learning algorithms is the Support Vector Machine, which constitutes one of the most successful classifiers among classical machine learning techniques. Nevertheless, it is important to note that, during the research conducted for this work, a gap was found in the state of the art, as no deep architectures seem to have been fully explored yet to tackle the problem of brain structure segmentation on MRI. We take a step in this direction in the framework of this project and propose a deep learning classification system based on Stacked Denoising Autoencoders (SDAE). Since SVM does not represent the core of this thesis and is only employed for comparison purposes, its theoretical introduction has been included in Appendix B.
Deep Learning
Deep learning is a new subfield of machine learning that focuses on learning deep hierarchical models of data. Modern deep learning research takes much of its inspiration from the neural network research of previous decades. Whereas most current learning algorithms correspond to shallow architectures with 1 to 3 levels of abstraction, the mammalian brain is organized in a deep architecture with multiple levels, each level corresponding to a different cortical region. Inspired by the architectural depth of the brain, neural network researchers had wanted for decades to train deep multilayer neural networks, but no successful attempts were reported before 2006 (with the exception of convolutional neural networks).
Historical context
Inspired by the understanding of biological neurons, straightforward algorithms were proposed to create artificial neural networks in the 1960s [START_REF] Rosenblatt | The perceptron: a probabilistic model for information storage and organization in the brain[END_REF]. Although this discovery created great excitement and expectations in the scientific community, the initial enthusiasm soon declined because of the inability of these simple learning algorithms to learn representations. This shortcoming in learning what the hidden layers of the network should represent led to a strong influence of symbolic computation and expert systems in the Artificial Intelligence domain during the subsequent years. The introduction of the backpropagation algorithm to learn patterns that were not linearly separable [START_REF] Rumelhart | Learning internal representations by error propagation[END_REF] made it possible to use neural networks to solve problems which were previously insoluble, and caused a revival of research on neural networks. Later, in the 1990s and 2000s, despite the remarkable results of artificial neural networks on some tasks [START_REF] Bengio | A neural probabilistic language model[END_REF], other approaches dominated the field [START_REF] Cortes | Support-vector networks[END_REF][START_REF] Burges | A tutorial on support vector machines for pattern recognition[END_REF][START_REF] Scholkopf | Learning with kernels: support vector machines, regularization, optimization, and beyond[END_REF].
One of the main reasons for abandoning artificial neural networks in favor of these more limited approaches was the difficulty of training deep networks. Training deep architectures was a difficult task, and classical methods that had proved effective when applied to shallow architectures were not as efficient when adapted to deep architectures. Simply adding more layers did not necessarily lead to better solutions; on the contrary, as the number of hidden layers increased, i.e. as the architecture got deeper, it became more difficult to obtain good generalization. For example, the deeper the network, the smaller the impact of the back-propagation algorithm on the first layers: the error back-propagated from the output layer became smaller each time a layer was traversed, so that in practice the deep multilayer network barely learned. Gradient-based training of deep supervised multi-layer neural networks starting from random initialization thus tended to get stuck in local minima [START_REF] Bengio | Greedy layer-wise training of deep networks[END_REF]. Additionally, a neural network composed of three layers, i.e. with only one hidden layer, was mathematically demonstrated to be a universal approximator [START_REF] Cybenko | Approximation by superpositions of a sigmoidal function. Mathematics of control[END_REF]. As a consequence, the solutions obtained with deeper networks were poor, with worse performance than shallow networks. Hence, until a few years ago, most machine learning techniques exploited shallow architectures, where networks were typically limited to one or two hidden layers.
It was not until 2006 that the concept of greedy layer-wise learning was introduced [START_REF] Bengio | Greedy layer-wise training of deep networks[END_REF][START_REF] Hinton | A fast learning algorithm for deep belief nets[END_REF][START_REF] Hinton | Reducing the dimensionality of data with neural networks[END_REF]. This concept relies on a semi-unsupervised learning procedure: unsupervised learning is used in a first stage to initialize the parameters of the layers, one layer at a time, and a fine-tuning of the whole system is then carried out on a supervised task. Since then, deep structured learning, more commonly known as deep learning or hierarchical learning, has emerged as a new area of machine learning research [START_REF] Hinton | A fast learning algorithm for deep belief nets[END_REF][START_REF] Bengio | Deep learning of representations for unsupervised and transfer learning[END_REF], impacting a wide range of research fields.
Advantages with respect to shallow architectures
We have seen that a simple neural network with two hidden layers already represents, in theory, a universal function approximator capable of approximating any function to any arbitrary accuracy. However, one of the main benefits of using deep networks lies in computational efficiency. Indeed, complex functions can often be approximated with the same accuracy using a deeper network that has a much smaller total number of units compared to a typical two-hidden-layer network containing large hidden layers. The size of the training set is often a limiting factor when using neural-network-based systems. By employing a deeper network instead, models with fewer degrees of freedom, which require smaller datasets to train [START_REF] Schwarz | Estimating the dimension of a model[END_REF], are built. This reduces the size of the required training dataset.
Another, probably more compelling, factor is that typical classification approaches must generally be preceded by a feature selection step, in which the most discriminative features for a given problem are privileged. Such a step, however, is not needed in deep learning-based classification schemes. What differentiates deep learning approaches from other conventional machine learning techniques, therefore, is their ability to automatically learn features from data, which largely contributes to improvements in terms of accuracy. In other words, deep learning learns a better and more compact representation of the input data. This represents an important advantage and removes a level of subjectivity from conventional approaches, where the researcher typically has to decide which set of features must be tried. With the inclusion of deep learning techniques in the classification scheme, this step is thus avoided.
Furthermore, as shown in the previous section, one of the problems of classical shallow networks is the difficulty of training networks with more than two or three hidden layers. By employing a learning algorithm that greedily trains one layer at a time, deeper networks can be used. Apart from allowing the use of networks with more hidden layers, pre-training each layer with an unsupervised learning algorithm can lead to much better results [START_REF] Erhan | Why does unsupervised pre-training help deep learning?[END_REF][START_REF] Palm | Prediction as a candidate for learning deep hierarchical models of data[END_REF]. Unsupervised pre-training indeed makes it possible to achieve good generalization performance when the training set is limited in size, by positioning the network in a region of the parameter space where the supervised gradient descent is less likely to fall into a poor local minimum of the loss function.
It is worth highlighting that deep learning approaches have recently been breaking records in several domains, such as speech, signal, image and text mining and recognition, improving the accuracy of state-of-the-art classification methods by sometimes more than 30%, where the prior decade struggled to achieve improvements of barely 1-2% [START_REF] Krizhevsky | Imagenet classification with deep convolutional neural networks[END_REF][START_REF] Le | Building high-level features using large scale unsupervised learning[END_REF].
The main shortcoming of deep learning techniques, which is in fact the flip side of one of their advantages, is the large amount of data required to craft the features in an unsupervised manner during the first stage.
Different levels of abstraction
Following the analogy with the human brain, the process of object recognition in the visual cortex begins in the low-level primary area V1. Then, the process proceeds in a roughly bottom-up fashion through areas V2 and V4, ending in the inferotemporal cortex (IT), figure 4.2. Once the information reaches the IT, it travels to prefrontal areas, where it plays a role in perception, action, planning and memory. These hierarchically organized circuits in the human brain exploit circuit modularity and reuse general subcircuits in order to economize on space and energy consumption. Thus, in a hierarchical model, lower layers might include dictionaries of features that are general and yet applicable in the context of many specific classification tasks.
We have seen that deep learning is a kind of representation learning in which there are multiple levels of features. These features are automatically discovered and composed together across the various levels to produce the output. Each level represents abstract features that are discovered from the features represented in the previous level; hence, the level of abstraction increases with each level. This type of learning enables discovering and representing higher-level abstractions. In neural networks, the multiple layers correspond to multiple levels of features, and these layers compose the features to produce the output. While the first layers tend to be more generic, the last layers are often strongly task-specific. Therefore, the higher the layer, the more specialized the features.
Convolutional neural networks
Among deep learning approaches, convolutional neural networks (CNNs) have proven to be very powerful for classifying medical images. These artificial networks are made up of convolutional, pooling and fully-connected layers, and are characterized by three main properties: local connectivity of the hidden units, parameter sharing and the use of pooling operations.
A CNN consists of a succession of layers which perform several operations on the input data. First, convolutional layers C convolve the images presented at their inputs with a predefined number of kernels, k. These kernels have a certain size, s, and are typically followed by activation units that rescale the convolution results in a non-linear manner. Pooling layers reduce the dimensionality of the responses produced by the convolutional layers through downsampling; different strategies, such as average or max-pooling, can be adopted. Finally, fully connected layers are responsible for extracting compact, high-level features from the data. A typical workflow for a convolutional neural network is shown in figure 4.3. In these networks, two- or three-dimensional patches are commonly fed into the deep network, which learns, in an unsupervised manner, the best feature representation of the given patches. In other words, it learns a hierarchical representation of the input data and is able to decode the important information contained in the data. By doing this, a deep network is able to provide a hierarchical feature representation of each patch and ensure discriminative power for the learned features. Networks based on convolutional filters, i.e. CNNs, are well suited to data with a grid-structured representation, such as 2D or 3D image patches. However, when the input is composed of features that do not present a grid-based representation, CNNs might not be the best solution. Valuable information inherited from classical machine learning approaches to brain structure segmentation would therefore not be included in the CNNs. This knowledge may come, for example, in the form of likelihood voxel values, voxel location, or textural information, which is greatly useful to segment structures that share similar intensity properties. Because we wish to employ arrays composed of the concatenation of different features, which will be introduced in Section 4.3, we consider the use of denoising auto-encoders (DAE) instead, which can deal with such feature arrays. Another reason for employing DAEs is the limited amount of training and labeled data. Instead of random initialization of the network weights, the values are obtained by using DAEs, which act as a pre-training step in an unsupervised fashion. Thanks to this, the network can be trained with a limited amount of data while avoiding overfitting.
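To make the convolution and pooling operations concrete, the following minimal NumPy sketch (illustrative only; the kernel, the ReLU non-linearity and the pooling size are assumptions, not the configuration of any network used in this work) shows how a single-channel response map is produced and downsampled:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D convolution (cross-correlation, as in most CNNs) of one channel."""
    kh, kw = kernel.shape
    out_h, out_w = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.empty((out_h, out_w))
    for r in range(out_h):
        for c in range(out_w):
            out[r, c] = np.sum(image[r:r + kh, c:c + kw] * kernel)
    return out

def relu(x):
    """Non-linear activation applied to the convolution responses."""
    return np.maximum(x, 0.0)

def max_pool(feature_map, size=2):
    """Non-overlapping max-pooling that downsamples the response map."""
    h, w = feature_map.shape
    h, w = h - h % size, w - w % size
    blocks = feature_map[:h, :w].reshape(h // size, size, w // size, size)
    return blocks.max(axis=(1, 3))

# Example: one convolution + activation + pooling stage on a random patch.
patch = np.random.rand(16, 16)
kernel = np.random.randn(3, 3)          # one of the k learned kernels of size s=3
pooled = max_pool(relu(conv2d(patch, kernel)))
```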
Auto-Encoders
Autoencoders are a method for performing representation learning, an unsupervised pretraining process during which a more useful representation of the input data is automatically determined. Representation learning is important in machine learning since the performance of machine learning methods is heavily dependent on the choice of data representation in which they are applied. For many supervised classification tasks, the high dimensionality of the input data means that the classifier requires a huge number of training examples in order to generalize well and not overfit. One solution is to use unsupervised pretraining to learn a good representation for the input data and during actual training, transform the input examples into an easier form for the classifier to learn. Autoencoders are one such representation learning tool.
Classical auto-encoders (AE) have recently been developed in the deep learning literature in different forms [START_REF] Vincent | Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion[END_REF]. In its simplest representation, an AE is formed by two components: an encoder h(·) that maps the input x ∈ R^d to some hidden representation h(x) ∈ R^{d'}, and a decoder g(·), which maps the hidden representation back to a reconstructed version of the input x, so that g(h(x)) ≈ x (Fig. 4.4). Therefore, an AE is trained to minimize the discrepancy between the data and its reconstruction, i.e. the difference between the actual output vector and the expected output vector, which is the input vector itself. As a result, AEs offer a method to automatically learn features from unlabeled data, allowing for unsupervised learning. Let us formulate the autoencoder in more detail. A traditional autoencoder takes an input x ∈ [0, 1]^d and first maps it, with the encoder, to a hidden representation y ∈ [0, 1]^{d'} through a deterministic mapping, as follows
$$y = f_{\theta}(x) = s(Wx + b) \qquad (4.1)$$
which is parameterized by θ = {W, b}. Here, s is a non-linearity such as the sigmoid, W is a d' × d weight matrix and b is the bias vector. The resulting latent representation y is then mapped back, with the decoder, to a "reconstructed" vector z ∈ [0, 1]^d of the same shape as x. This reconstruction is defined as
$$z = g_{\theta'}(y) = s(W'y + b') \qquad (4.2)$$
where the parameterization is given by θ' = {W', b'} in this case. The weight matrix W' of the reverse mapping may optionally be constrained by W' = W^T, i.e. to be the transpose of the forward mapping. If this happens, the auto-encoder is said to have tied weights. Each training sample x^(i) is thus mapped to a corresponding y^(i) and a reconstruction z^(i). In other words, z can be seen as a prediction of the input x, given the latent representation y. The parameters of this model are optimized such that the average reconstruction error is minimized
$$\theta^{*}, \theta'^{*} = \arg\min_{\theta, \theta'} \frac{1}{n}\sum_{i=1}^{n} L\!\left(x^{(i)}, z^{(i)}\right) = \arg\min_{\theta, \theta'} \frac{1}{n}\sum_{i=1}^{n} L\!\left(x^{(i)}, g_{\theta'}\big(f_{\theta}(x^{(i)})\big)\right) \qquad (4.3)$$
where L(•) is a loss function such as the traditional squared error (for real-valued x)
$$L(x, z) = \lVert x - z \rVert^{2} \qquad (4.4)$$
Alternative loss functions can be used in (4.3). For example, if x and z are interpreted as either bit vectors or vectors of bit probabilities, the cross-entropy reconstruction loss can be used
$$L_{H}(x, z) = -\sum_{k=1}^{d}\left[x_{k}\log z_{k} + (1 - x_{k})\log(1 - z_{k})\right] \qquad (4.5)$$
Using the cross-entropy reconstruction formulation in (4.3), the average reconstruction error can then be defined as
$$\theta^{*}, \theta'^{*} = \arg\min_{\theta, \theta'} \mathbb{E}_{q^{0}(X)}\!\left[L_{H}\big(X, g_{\theta'}(f_{\theta}(X))\big)\right] \qquad (4.6)$$
where q^0(X) denotes the empirical distribution associated with the n training inputs and E refers to the expectation
$$\mathbb{E}_{p(X)}[f(X)] = \int p(x)\, f(x)\, dx \qquad (4.7)$$
To compute the expectation in Eq. 4.7, we have assumed X and Y to be two random variables with joint probability density p(X, Y) and marginal distributions p(X) and p(Y). Note that, in the general auto-encoder framework, other forms of parameterized functions for the encoder or decoder, as well as other suitable choices of the loss function (corresponding to a different p(X, Y)), may be used. In particular, the usefulness of more complex encoding functions was investigated in [START_REF] Larochelle | Deep learning using robust interdependent codes[END_REF]. According to all this, it can be said that training an auto-encoder to minimize the reconstruction error amounts to maximizing a lower bound on the mutual information between the input X and the learned representation Y. Intuitively, if a representation allows a good reconstruction of its input, it means that it has retained much of the information that was present in that input.
The autoencoder yields lower reconstruction errors than other related batch algorithms based on matrix factorization. It generalizes efficiently and accurately to new inputs, with no expensive computations, which makes autoencoders fundamentally different from classical matrix factorization techniques. An example of the neural encoding of a handwritten digit input and its corresponding reconstruction is shown in figure 4.5. The input is represented by x, and its reconstruction by x̂. The input and the output are connected with the hidden layer h by the weights W and W^T, respectively. The weights W encode the input into the hidden layer, whereas the weights W^T decode the information in the hidden layer into the output, i.e. the reconstructed input.
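As an illustration of Eqs. (4.1)-(4.4), the following NumPy sketch implements a tied-weight auto-encoder with a sigmoid non-linearity and a squared-error loss; the hidden size, learning rate and initialization scale are arbitrary assumptions, not the settings used in this work:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

class TiedAutoencoder:
    """Minimal tied-weight auto-encoder: y = s(Wx + b), z = s(W^T y + b')."""
    def __init__(self, d, d_hidden, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(0.0, 0.01, size=(d_hidden, d))  # encoder weights
        self.b = np.zeros(d_hidden)                          # encoder bias
        self.b_prime = np.zeros(d)                           # decoder bias (W' = W^T)

    def encode(self, x):
        return sigmoid(self.W @ x + self.b)                  # Eq. (4.1)

    def decode(self, y):
        return sigmoid(self.W.T @ y + self.b_prime)          # Eq. (4.2), tied weights

    def reconstruction_error(self, x):
        z = self.decode(self.encode(x))
        return np.sum((x - z) ** 2)                          # squared-error loss, Eq. (4.4)

    def train_step(self, x, lr=0.1):
        # One gradient step on the squared-error loss with tied weights.
        y = self.encode(x)
        z = self.decode(y)
        dz = 2.0 * (z - x) * z * (1.0 - z)     # gradient w.r.t. decoder pre-activation
        dy = (self.W @ dz) * y * (1.0 - y)     # back-propagated to encoder pre-activation
        # W receives gradient contributions from both the encoder and decoder paths.
        self.W -= lr * (np.outer(dy, x) + np.outer(y, dz))
        self.b -= lr * dy
        self.b_prime -= lr * dz
```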
Denoising Auto-Encoders
One serious potential issue when working with AEs is that, if there is no constraint other than minimizing the reconstruction error (4.3), then an AE with n inputs and an encoding of dimension at least n could potentially just learn the identity function, for which many encodings would be useless, leading to simply copying the input. This means that such an AE would not differentiate test examples from other input configurations. There are different ways in which an AE with more hidden units than inputs can be prevented from learning the identity while still capturing valuable information about the input in its hidden representation. Adding randomness in the transformation from input to reconstruction is one option, which is exploited in Denoising Auto-Encoders (DAEs) [START_REF] Vincent | Extracting and composing robust features with denoising autoencoders[END_REF][START_REF] Maillet | Steerable Playlist Generation by Learning Song Similarity from Radio Station Playlists[END_REF][START_REF] Vincent | Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion[END_REF][START_REF] Glorot | Domain adaptation for large-scale sentiment classification: A deep learning approach[END_REF][START_REF] Vincent | A connection between score matching and denoising autoencoders[END_REF][START_REF] Mesnil | Unsupervised and Transfer Learning Challenge: a Deep Learning Approach[END_REF]. To force the hidden layer to discover more robust features and prevent it from simply learning the identity, a slight modification of the normal AE setup is made by corrupting the input x before mapping it into the hidden representation. This leads to a partially destroyed version x̃ obtained by means of a stochastic mapping x̃ ∼ q_D(x̃ | x). Therefore, to convert an AE into a DAE, only a stochastic corruption step that modifies the input needs to be added, which can be done in many ways.
Thus, following the formulation of the classical AE in Section 4.2.5, the corrupted input x̃ is mapped to a hidden representation
$$y = f_{\theta}(\tilde{x}) = s(W\tilde{x} + b) \qquad (4.8)$$
from which z can be reconstructed (Figure 4.6).
$$z = g_{\theta'}(y) = s(W'y + b') \qquad (4.9)$$
As before, the parameters of the model are trained to minimize the average reconstruction error L H (x, z) (4.5) over a training set.
Hence the DAE tries to predict the uncorrupted values from the corrupted ones, for randomly selected subsets of corrupted (missing) entries. The DAE is therefore a stochastic version of the AE.
Let us now define the following joint distribution

$$q^{0}(X, \tilde{X}, Y) = q^{0}(X)\, q_{D}(\tilde{X} \mid X)\, \delta_{f_{\theta}(\tilde{X})}(Y) \qquad (4.10)$$

where δ_u(v) puts mass 0 when u ≠ v. Thus Y is a deterministic function of X̃. Note also that the joint distribution q^0(X, X̃, Y) is parameterized by θ. The objective function minimized by stochastic gradient descent becomes

$$\arg\min_{\theta, \theta'} \mathbb{E}_{q^{0}(X, \tilde{X})}\!\left[L_{H}\big(X, g_{\theta'}(f_{\theta}(\tilde{X}))\big)\right] \qquad (4.11)$$
Therefore, from the point of view of the stochastic gradient descent algorithm, in addition to picking an input sample from the training set, we also produce a random corrupted version of it, and take a gradient step towards reconstructing the uncorrupted version from the corrupted one. In this way, the denoising auto-encoder cannot learn the identity, unlike the basic auto-encoder, thus removing the constraint that d' < d or the need to regularize specifically to avoid such a trivial solution.
Types of corruption. Corruption processes can be incorporated in many ways. The most common corruption processes are:
• Additive isotropic Gaussian noise (GS): x̃ | x ∼ N(x, σ²I);
• Masking noise (MN): a fraction v of the elements of the input x, that can be randomly selected, is forced to be 0;
• Salt-and-pepper noise (SP): a fraction v of the elements of the input x, that can be randomly selected, is set to their minimum or maximum possible value (typically 0 or 1) according to a fair coin flip.
Additive Gaussian noise is a very common noise model and a natural choice for real-valued inputs. Salt-and-pepper noise will also be considered, as it is a natural choice for input domains that are interpretable as binary or near-binary, such as black and white images or the representations produced at the hidden layer after a sigmoid squashing function. For example, in [START_REF] Vincent | Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion[END_REF], the stochastic corruption process consists in randomly setting some of the inputs to zero.
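The three corruption processes can be sketched in NumPy as follows; the corruption fractions and noise level are illustrative assumptions. A DAE training step then encodes the corrupted x̃ while the reconstruction loss is still computed against the clean x:

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_noise(x, sigma=0.1):
    """Additive isotropic Gaussian noise: x_tilde ~ N(x, sigma^2 I)."""
    return x + rng.normal(0.0, sigma, size=x.shape)

def masking_noise(x, v=0.25):
    """Masking noise: a fraction v of randomly chosen entries is forced to 0."""
    x_tilde = x.copy()
    x_tilde[rng.random(x.shape) < v] = 0.0
    return x_tilde

def salt_and_pepper(x, v=0.25, lo=0.0, hi=1.0):
    """Salt-and-pepper: a fraction v of entries is set to lo or hi by a fair coin flip."""
    x_tilde = x.copy()
    corrupt = rng.random(x.shape) < v
    coin = rng.random(x.shape) < 0.5
    x_tilde[corrupt & coin] = lo
    x_tilde[corrupt & ~coin] = hi
    return x_tilde
```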
Stacked Denoising Auto-Encoders
Several DAEs can be stacked to form a deep network by feeding the hidden representation of the DAE of the layer below as input to the current layer [START_REF] Vincent | Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion[END_REF] (Figure 4.7), leading to what is known as a Stacked Denoising Autoencoder (SDAE). In this configuration, DAEs are stacked and trained bottom-up in an unsupervised fashion, followed by a supervised learning phase to train the top layer and fine-tune the entire architecture.
Weights between layers of the network are initially learned via an unsupervised pre-training step. Unsupervised pre-training of such an architecture is done greedily, i.e. one layer at a time. Each layer is trained as a DAE by minimizing the reconstruction of its input. Once the first k layers are trained, the (k+1)-th layer can be trained, because the latent representation from the layer below can then be computed.
Once all the weights of the network have been computed in an unsupervised manner, the highest-level representation of the network can be fed into a standalone supervised algorithm. Alternatively, and as in this work, a logistic regression layer can be added on top of the encoders. This yields a deep neural network amenable to supervised learning. The network then goes through a second stage of training called fine-tuning, where the prediction error is minimized on a supervised task [START_REF] Vincent | Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion[END_REF]. A gradient-based procedure such as stochastic gradient descent is employed in this stage. The hope is that the unsupervised initialization in a greedy layer-wise fashion has put the parameters of all the layers in a region of parameter space from which a good local optimum can be reached by local descent. The unsupervised pre-training helps to mitigate the difficult optimization problem of deep networks by better initializing the weights of all layers [START_REF] Bengio | Greedy layer-wise training of deep networks[END_REF].
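A hedged sketch of the greedy layer-wise pre-training loop is given below; `train_dae` is a hypothetical helper that fits a single DAE on (corrupted, clean) pairs and returns its weights and encoding function, and the layer sizes are arbitrary:

```python
import numpy as np

def pretrain_stack(data, layer_sizes, corrupt, train_dae):
    """Greedy layer-wise unsupervised pre-training of a stack of DAEs.

    data        : (n_samples, d) array of unlabeled training vectors.
    layer_sizes : hidden-layer sizes, e.g. [256, 128, 64] (illustrative).
    corrupt     : corruption function, e.g. masking_noise.
    train_dae   : assumed helper fitting one DAE; returns (W, b, encode_fn).
    """
    stack, layer_input = [], data
    for size in layer_sizes:
        # Train the current layer as a DAE on the representation of the layer below.
        W, b, encode = train_dae(corrupt(layer_input), layer_input, size)
        stack.append((W, b))
        # Only the clean (uncorrupted) representation is propagated upwards.
        layer_input = encode(layer_input)
    # The returned weights initialize the supervised network before fine-tuning.
    return stack
```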
Logistic Regression
Regression analysis is a well-explored field of mathematical statistics which has been used for many years. In this type of analysis, given a set of observations, regression analysis can be employed to find the model that best fits the observed data. For instance, in linear regression, given the i-th example of a set of samples, x^(i), the value y^(i) is predicted by a linear function y = h_θ(x) = θ^T x. Although the linear regression model is simple and frequently used, it is not adequate for some purposes, such as our goal, i.e. classification. Here, we aim to predict binary values, such as labels (y^(i) ∈ {0, 1}). A linear model places no bounds on the values the response variable can take, so y can take arbitrarily large or small values; however, it is desirable to bound the response to values between 0 and 1. For this we need something more powerful than linear regression. Logistic regression employs a different hypothesis class to predict the probability that a given sample belongs to class A ('1') versus the probability that it belongs to class B ('0'). In particular, the function learned is of the form:
$$P(y = 1 \mid x) = h_{\theta}(x) = \frac{1}{1 + \exp(-\theta^{\top} x)} \equiv \sigma(\theta^{\top} x), \qquad P(y = 0 \mid x) = 1 - P(y = 1 \mid x) = 1 - h_{\theta}(x) \qquad (4.12)$$
The function $\sigma(z) \equiv \frac{1}{1 + \exp(-z)}$
is widely employed, and is often referred to as the sigmoid or logistic function. It squeezes the value of h_θ(x) into the range [0, 1], so that h_θ(x) can be interpreted as a probability. The goal is therefore to find a value of θ that makes the probability P(y = 1 | x) large when x belongs to class A and small when x belongs to class B. Imagine that we have a set of training samples with binary labels, {(x^(i), y^(i)) : i = 1, ..., m}. To measure how well a given hypothesis h_θ(x) fits the training dataset, a cost function is defined as follows:
$$J(\theta) = -\sum_{i}\left[ y^{(i)} \log\!\big(h_{\theta}(x^{(i)})\big) + \big(1 - y^{(i)}\big)\log\!\big(1 - h_{\theta}(x^{(i)})\big)\right] \qquad (4.13)$$
The next step is to learn to classify the training data by minimizing J(θ) in order to find the best choice of θ. Once training has been performed, new points can be classified as class A or B by simply checking which of the two classes is most probable: if, for a given sample, P(y = 1 | x) > P(y = 0 | x), it will be labeled as class A ('1'); otherwise, it will belong to class B ('0'). To minimize J(θ), the same tools typically employed for linear regression can be applied. This means providing a function that computes J(θ) and ∇_θ J(θ) for any requested choice of θ. The derivative of J(θ) can be written as:
$$\frac{\partial J(\theta)}{\partial \theta_{j}} = \sum_{i} x_{j}^{(i)}\left(h_{\theta}(x^{(i)}) - y^{(i)}\right) \qquad (4.14)$$
If this is written in its vector form, the entire gradient can be expressed as:
$$\nabla_{\theta} J(\theta) = \sum_{i} x^{(i)}\left(h_{\theta}(x^{(i)}) - y^{(i)}\right) \qquad (4.15)$$
See Andrew Ng's lecture notes for a complete explanation of logistic regression [167].
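For completeness, a minimal NumPy sketch of batch gradient descent on the cost of Eq. (4.13), using the gradient of Eq. (4.15), could look as follows; the learning rate and number of iterations are arbitrary assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logistic_regression(X, y, lr=0.1, n_iters=1000):
    """Batch gradient descent on J(theta) of Eq. (4.13).

    X : (m, d) design matrix, y : (m,) binary labels in {0, 1}.
    """
    theta = np.zeros(X.shape[1])
    for _ in range(n_iters):
        h = sigmoid(X @ theta)              # h_theta(x) for every sample
        grad = X.T @ (h - y)                # gradient of Eq. (4.15)
        theta -= lr * grad / len(y)
    return theta

def predict(theta, X):
    """Label a sample as class A ('1') when P(y=1|x) > P(y=0|x)."""
    return (sigmoid(X @ theta) > 0.5).astype(int)
```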
Features used for classification
Whatever the efficacy of the machine learning strategy applied, the choice of relevant features is crucial in classification problems. Recent research on the segmentation of clinical brain structures by machine learning techniques has tended to focus on the use of several learning algorithms rather than on the addition of more discriminative features to the classification scheme. The traditional features explained in Chapter 3, Section 3.7.1 have been commonly employed when segmenting brain structures, with considerable success. However, the use of alternative features may (i) improve classification performance, while (ii) reducing, in some cases, the number of features used to describe the texture information of a given region. Apart from the application of SDAEs to the OARs segmentation problem, one of the main contributions of this work is the use of features that have not previously been employed to segment brain structures.
Among the full set of OARs involved in the RTP, some present a rather homogeneous texture and weaker shape variation than the others; this group includes the brainstem, the eyes and the lenses. In contrast, there are OARs whose texture is more heterogeneous, whose shape variations across patients are more pronounced and/or whose small size and variable localization make automatic segmentation more complex; this second group comprises the optic nerves, the optic chiasm, the pituitary gland and the pituitary stalk. Because of the dissimilarities between the characteristics of both groups, some of the suggested features are organ dependent and are not suitable for all the organs investigated in this work. While the segmentation of some organs will exploit, for example, the Geodesic Distance Transform and the 3D local binary pattern to achieve better results, the segmentation of others will make use of texture and contextual analysis to improve the results.
Gradient and contextual features
In the image domain, the image gradient can be seen as a directional change in the intensity or color of an image. The image gradient is composed of two components: a horizontal and a vertical component (Figure 4.9). The horizontal component shows the variation of gray levels along the horizontal direction, usually from left to right. This change is encoded in the grey level of the image showing the horizontal component: mean levels represent no change, bright levels represent a variation from a dark value to a brighter one, and dark levels represent a change from a bright value to a darker one. Analogous observations can be made for the vertical component, which shows image variations in the vertical direction, in a top-to-bottom fashion. Combining both components, the magnitude and the orientation (Fig. 4.10) of the gradient can be obtained.
Although the image gradient brings a more exhaustive description of an instance, i.e. a single voxel, supplementary knowledge has been included in the features vector. This is the case of the augmented features vector. The term augmented features vector, and the inclusion of gradient and contextual features into it, was introduced by [START_REF] Bai | Multi-atlas segmentation with augmented features for cardiac MR images[END_REF]. In their work, the gradient orientations of all the voxels in each patch were used. Following their work, contextual features are used to describe the relative relations between an image patch and its surroundings. For each voxel v, a number of regions around its surroundings are sampled, radiating from voxel v at equal angular intervals and at different radii (Fig. 4.11). To obtain a continuous description of the context, the intensity difference between the voxel v and a patch P is defined:
$$d_{v,P} = \mu_{P} - I_{v} \qquad (4.16)$$
where µ P is the mean intensity of the patch P and I v is the intensity of the voxel v. In addition, a compact and binary context description is obtained by employing the Binary Robust Independent Elementary Features (BRIEF) descriptor [START_REF] Calonder | BRIEF: Computing a local binary descriptor very fast[END_REF]:
$$b_{v,P} = \begin{cases} 1 & I_{v} < \mu_{P} \\ 0 & \text{otherwise} \end{cases} \qquad (4.17)$$
Then, for each patch, the contextual feature includes both the continuous and binary descriptor for all the neighbor regions sampled.
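A possible NumPy sketch of these contextual features for a 2D slice is shown below; the number of sampled directions, the radii and the patch size are illustrative assumptions rather than the exact settings used in this work:

```python
import numpy as np

def contextual_features(image, v, radii=(4, 8), n_angles=8, patch=3):
    """Continuous (Eq. 4.16) and binary BRIEF-like (Eq. 4.17) context descriptors
    for a pixel v = (row, col) of a 2D slice."""
    r0, c0 = v
    iv = image[r0, c0]
    half = patch // 2
    continuous, binary = [], []
    for radius in radii:
        for k in range(n_angles):
            angle = 2.0 * np.pi * k / n_angles
            # Sample a region center radiating from v at equal angular intervals.
            r = int(np.clip(round(r0 + radius * np.sin(angle)), 0, image.shape[0] - 1))
            c = int(np.clip(round(c0 + radius * np.cos(angle)), 0, image.shape[1] - 1))
            region = image[max(r - half, 0):r + half + 1,
                           max(c - half, 0):c + half + 1]
            mu = region.mean()
            continuous.append(mu - iv)            # d_{v,P} = mu_P - I_v
            binary.append(1 if iv < mu else 0)    # b_{v,P}
    return np.array(continuous), np.array(binary)
```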
Features from texture analysis
In addition to the information extracted from the context, texture analysis (TA) has proven to be a potentially valuable and versatile tool in neuro MR imaging [START_REF] Kassner | Texture analysis: a review of neurologic MR imaging applications[END_REF]. MR images contain a lot of microscopic information that may not be assessed visually, and texture analysis provides the means to obtain this information. Therefore, we also considered the use of some of these features, namely first-order statistical features and spectral features. TA can be divided into categories such as structural, model-based, statistical and transform-based, according to the means employed to evaluate the inter-relationships of the pixels. Statistical methods are the most widely used in medical images. In these methods, the spatial distribution of grey values is analyzed by computing local features at each point in the image and deriving a set of statistics from the distributions of the local features. Local features are defined by the combination of intensities at specific positions relative to each point in the image. In the literature, these features have mainly been employed for the classification of images [START_REF] Aggarwal | First and second order statistics features for classification of magnetic resonance brain images[END_REF] or for the characterization of healthy and pathological human cerebral tissues [172]. Nevertheless, their use as a discriminant factor in the segmentation of critical structures in brain cancer has not been investigated yet.
To quantitatively describe the first-order statistical properties of an image patch P, useful image features can be obtained from the histogram. In the proposed work the following features were employed: mean, variance, skewness, kurtosis, energy and entropy. The mean gives the average intensity level of the image or texture being examined, whereas the variance describes the variation of intensity around the mean. Skewness is a measure of symmetry, or more precisely, the lack of symmetry: a distribution, or data set, is symmetric if it looks the same to the left and right of the center point. The skewness of a normal distribution is zero, and any symmetric data should have a skewness near zero; negative values indicate data skewed to the left and positive values data skewed to the right. Kurtosis is a measure of whether the data are peaked or flat relative to a normal distribution: data sets with high kurtosis tend to have a distinct peak near the mean, decline rather rapidly, and have heavy tails, whereas data sets with low kurtosis tend to have a flat top near the mean rather than a sharp peak, the uniform distribution being the extreme case. Energy is a measure of local homogeneity; energy values range from 0 to 1, and the higher the energy value, the greater the homogeneity of the texture, so that a constant image has an energy equal to 1. In contrast, entropy is a statistical measure of randomness that can be used to characterize the texture of the input image; it represents the opposite of the energy. A completely random distribution has a very high entropy because it represents chaos, while an image with a solid tone has an entropy value of 0.
Probability density of occurrence of the intensity levels can be obtained by dividing the value of intensity level histogram by the total number of pixels in an image:
$$P(i) = \frac{h(i)}{n_{x}\, n_{y}}, \qquad i = 0, 1, \ldots, G - 1 \qquad (4.18)$$
where n_x and n_y are the numbers of pixels in the horizontal and vertical image dimensions, respectively, and G represents the total number of gray levels in the image. The features obtained from the histogram are calculated as follows:
$$\text{Mean}: \quad \mu = \sum_{i=0}^{G-1} i\, p(i) \qquad (4.19)$$

$$\text{Variance}: \quad \sigma^{2} = \sum_{i=0}^{G-1} (i - \mu)^{2}\, p(i) \qquad (4.20)$$

$$\text{Skewness}: \quad \mu_{3} = \sigma^{-3} \sum_{i=0}^{G-1} (i - \mu)^{3}\, p(i) \qquad (4.21)$$

$$\text{Kurtosis}: \quad \mu_{4} = \sigma^{-4} \sum_{i=0}^{G-1} (i - \mu)^{4}\, p(i) \qquad (4.22)$$

$$\text{Energy}: \quad E = \sum_{i=0}^{G-1} \left[p(i)\right]^{2} \qquad (4.23)$$

$$\text{Entropy}: \quad H = -\sum_{i=0}^{G-1} p(i)\, \log_{2}\!\left[p(i)\right] \qquad (4.24)$$
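The following NumPy sketch computes the histogram-based features of Eqs. (4.18)-(4.24) for an image patch; the number of gray levels is an illustrative assumption:

```python
import numpy as np

def first_order_statistics(patch, n_levels=256):
    """Histogram-based features of Eqs. (4.18)-(4.24) for a patch whose
    intensities lie in [0, n_levels - 1]."""
    hist, _ = np.histogram(patch, bins=n_levels, range=(0, n_levels))
    p = hist / patch.size                               # P(i), Eq. (4.18)
    i = np.arange(n_levels)
    mean = np.sum(i * p)                                # Eq. (4.19)
    var = np.sum((i - mean) ** 2 * p)                   # Eq. (4.20)
    sigma = np.sqrt(var)
    skew = np.sum((i - mean) ** 3 * p) / sigma ** 3     # Eq. (4.21)
    kurt = np.sum((i - mean) ** 4 * p) / sigma ** 4     # Eq. (4.22)
    energy = np.sum(p ** 2)                             # Eq. (4.23)
    nz = p[p > 0]
    entropy = -np.sum(nz * np.log2(nz))                 # Eq. (4.24)
    return {"mean": mean, "variance": var, "skewness": skew,
            "kurtosis": kurt, "energy": energy, "entropy": entropy}
```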
Statistics-based features may lack the sensitivity to identify larger-scale or coarser changes in spatial frequency. To evaluate spatial frequencies at multiple scales, wavelet functions can be employed [START_REF] Mallat | A theory for multiresolution signal decomposition: the wavelet representation[END_REF]. The basic idea of the algorithm is to divide the input images into decomposed sub-images using the wavelet transform. A wavelet transform decomposes a signal into a hierarchy of sub-bands with a sequential decrease in resolution. The idea of using wavelets to extract information in a texture classification context is not entirely new. Specifically, in the medical field, the discrete wavelet transform (DWT) has been used in sub-domains such as image fusion, image resolution enhancement or image segmentation [START_REF] Jin | Wavelets in medical image processing: denoising, segmentation, and registration[END_REF]. In particular, DWT has been widely used for classifying MR brain images into normal and abnormal tissue [START_REF] John | Brain tumor classification using wavelet and texture based neural network[END_REF].
Geodesic Distance Transform Map
To encourage spatial regularization and contrast-sensitivity, geodesic distance transform map (GDTM) of the input image is used as additional feature. The addition of GDTM in the features vector used by the classifier exploits the ability of seed-expansion to fill contiguous, coherent regions without regard to boundary length. As explained in the work of Criminisi et al. [START_REF] Criminisi | Geos: Geodesic image segmentation[END_REF], given an image I defined on a 2D domain ψ, a binary mask M (with M (x) ∈ {0,1} ∀x) and an "object" region Ω with x ∈ Ω ⇐⇒ M (x) = 0, the unsigned geodesic distance of each pixel x from Ω is defined as:
$$D(x; M, \nabla I) = \min_{\{x' \,\mid\, M(x') = 0\}} d(x, x') \qquad (4.25)$$

where d(x, x') is the geodesic distance between the two pixels, which accumulates, along the connecting path, both the spatial length and an image-gradient term, as defined in the work of Criminisi et al.
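As an illustration only, a Dijkstra-style approximation of such a geodesic distance map on a 2D grid can be sketched as follows; it is not the exact formulation of Criminisi et al., and the 8-connectivity and the weighting factor gamma are assumptions:

```python
import heapq
import numpy as np

def geodesic_distance_map(image, seed_mask, gamma=1.0):
    """Dijkstra-style geodesic distance approximation: each step mixes spatial
    length and intensity change. `seed_mask` is True inside the object region."""
    h, w = image.shape
    dist = np.full((h, w), np.inf)
    heap = []
    for r, c in zip(*np.where(seed_mask)):
        dist[r, c] = 0.0
        heapq.heappush(heap, (0.0, int(r), int(c)))
    moves = [(-1, 0), (1, 0), (0, -1), (0, 1), (-1, -1), (-1, 1), (1, -1), (1, 1)]
    while heap:
        d, r, c = heapq.heappop(heap)
        if d > dist[r, c]:
            continue  # stale entry
        for dr, dc in moves:
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w:
                spatial = np.hypot(dr, dc)
                step = np.sqrt(spatial ** 2 +
                               gamma ** 2 * (image[nr, nc] - image[r, c]) ** 2)
                if d + step < dist[nr, nc]:
                    dist[nr, nc] = d + step
                    heapq.heappush(heap, (d + step, nr, nc))
    return dist
```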
3D Local Binary Texture Pattern
In order to capture the neighborhood appearance of the voxel under examination with the fewest number of features, local binary patterns (LBP) are investigated. The idea of LBP is to assign a pattern code to each voxel. Particularly, an extended version of the 3D-LBP presented by [START_REF] Montagne | 3D Local Binary Pattern for PET image classification by SVM, Application to early Alzheimer disease diagnosis[END_REF] (Fig. 4.14) is proposed. In their work, classical LBP [START_REF] Pietikäinen | Rotation-invariant texture classification using feature distributions[END_REF] were adapted by selecting the 6 nearest voxels and ordering them to create the encoding patterns. By encoding patterns in that manner, 2^6 = 64 possible patterns would be created. However, those 64 possible combinations were merged into 10 different groups according to geometrical similarities (Figure 4.14). In accordance with this classification, each group contains the patterns that have the same number of neighbor voxels with a gray level higher than the central voxel c. Thus, rotation invariance within each group is kept. These groups are defined with (Table 4.2):
$$\mathrm{card}(c) = \sum_{i=0}^{P-1} s(g_{i} - g_{c}) \qquad (4.27)$$
where P = 6 is the number of neighboring voxels and R = 1 or R = 2 is the distance between the central voxel c and its neighbors i. By using R = 1, 2, both the micro- and macro-structure appearance of the texture are captured in the 3D-LBTP. In Eq. (4.27), card(c) gives the number of neighbors with a gray level higher than that of the central voxel c. In addition to the encoded value for the 3D patch structure proposed by [START_REF] Montagne | 3D Local Binary Pattern for PET image classification by SVM, Application to early Alzheimer disease diagnosis[END_REF], an additional texture value is included. Let g_high denote the gray values that are higher than the gray value of the center voxel c in the 3D-LBP, and let g_low denote the gray values that are lower than the gray value of the center voxel c. Then, the texture value added to the encoded structure value is defined as:
$$\mathrm{Texture}_{val} = \operatorname*{mean}_{i=0}^{m}\, g_{high}(i) \;-\; \operatorname*{mean}_{i=0}^{n}\, g_{low}(i) \qquad (4.28)$$
where m and n are the number of neighboring voxels with higher and lower values than the center voxel c, respectively. Thus, the introduction of the 3D-LBTP in the features vector will lead to 4 new features: 3D-LBP and Texture val for R = 1 and 2.
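A minimal sketch of the per-voxel computation of card(c) (Eq. 4.27) and Texture_val (Eq. 4.28) on the 6-neighborhood is given below; border handling and the tie-breaking for equal gray values are simplifying assumptions:

```python
import numpy as np

def lbp_3d_features(volume, voxel, radius=1):
    """card(c) and Texture_val for the 6-neighborhood at the given radius (R = 1 or 2)."""
    x, y, z = voxel
    gc = volume[x, y, z]
    offsets = [(radius, 0, 0), (-radius, 0, 0), (0, radius, 0),
               (0, -radius, 0), (0, 0, radius), (0, 0, -radius)]
    neighbors = np.array([volume[x + dx, y + dy, z + dz] for dx, dy, dz in offsets])
    higher = neighbors[neighbors > gc]     # g_high: neighbors above the center value
    lower = neighbors[neighbors <= gc]     # g_low: remaining neighbors (ties counted here)
    card = higher.size                     # Eq. (4.27)
    texture_val = (higher.mean() if higher.size else gc) - \
                  (lower.mean() if lower.size else gc)   # Eq. (4.28)
    return card, texture_val

# Computing the descriptor at R = 1 and R = 2 yields the 4 features added to the vector.
```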
Training the deep network
This section presents the way we combine the deep network with the proposed features. First, pre-processing required for the images to be used in this work is explained. Next, training and classification of the network are detailed.
Pre-processing
Pre-processing involves any of the diverse processes that help the segmentation algorithm to produce a more accurate model. Ideally, the segmentation process should be fully automatic, not requiring any user interaction; nevertheless, this rarely happens. Typical pre-processing steps include, for example, the registration of images to a common coordinate space, intensity normalization, resampling of images to the same resolution or bias field correction. Only the pre-processing methods applied to the images in this thesis are explained below.
Resampling
The MR resolution is not always the same. In particular, differences in resolution often concern the x and y coordinates. Hence, to make both the training and the classification more homogeneous, images whose resolution differed from 1 mm x 1 mm x 1 mm were resampled to this resolution.
Patient Alignment
If the whole set of images in a study are first aligned to a common template, a specific region of interest is then already in approximately the same region of the coordinate space for all subjects across the study. This fact makes learning patterns easier and reduces the search space for a particular region of interest. It is therefore a common practice in brain segmentation approaches to apply some sort of registration technique to the MRI images to make them as similar as possible to a common MRI template. Some approaches require a rigid registration step to align the images [START_REF] Moghaddam | Automatic segmentation of brain structures using geometric moment invariants and artificial neural networks[END_REF][START_REF] Kim | Multi-structure segmentation of multi-modal brain images using artificial neural networks[END_REF]. However, in the proposed approach, and as in [START_REF] Magnotta | Measurement of Brain Structures with Artificial Neural Networks: Two-and Three-dimensional Applications 1[END_REF] and [START_REF] Powell | Registration and machine learning-based automated segmentation of subcortical and cerebellar brain structures[END_REF], MRI T1 images were spatially aligned such that the anterior commissure and posterior commissure (AC-PC) line was horizontally oriented in the sagittal plane, and the inter hemispheric fissure was aligned on the two other axes. This process therefore represents the initialization step for the segmentation of a new target patient.
It is worthwhile to describe the coordinate system used to define neuroanatomical locations in normalized images. The 'Talairach' coordinate system specifies locations relative to their distance from the anterior commissure (AC). The AC is a thin white matter tract between the olfactory areas of each hemisphere which, despite being a small region, is an easy spot to localize, making it an ideal origin for the coordinate system. Each location is described by three numbers, each giving the distance in millimeters from the AC: X is the left/right dimension, Y is the posterior/anterior dimension, and Z is the ventral/dorsal dimension. Figure 4.15 shows the location of the AC (blue dot) on a midsagittal view. Note that the axial plane in Talairach space officially lies immediately dorsal to the AC and ventral to the posterior commissure (PC, yellow dot).
Image Normalization
Image normalization is a process that changes the range of pixel intensity values. Here, images are normalized by setting their mean to zero and their variance to one: the intensity values are shifted and scaled so that the voxels in the image have zero mean and unit variance. The NormalizeImageFilter from the Insight Segmentation and Registration Toolkit (ITK) [START_REF] Johnson | The ITK Software Guide[END_REF] was used to normalize the images.
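The same zero-mean, unit-variance operation can be sketched in a few lines of NumPy (the ITK filter was used in practice; this is only an illustration of what it computes):

```python
import numpy as np

def normalize_image(img):
    """Shift and scale intensities so the volume has zero mean and unit variance."""
    img = img.astype(np.float64)
    return (img - img.mean()) / img.std()
```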
Training
Supervised learning based approaches involve the existence of two distinct groups of images: training and testing images. The first group is composed of images that have been manually segmented by experts; manually labeled images are often referred to as reference or standard contours. This set of images, comprising both clinical images and manual labels, is utilized to learn the patterns associated with a particular structure. On the other hand, the testing group is composed of an independent set of images, not included in the training set, which is used to validate how well the patterns were learned. To compare an algorithm's performance with the reference contour defined by an expert, the testing images must also be manually segmented.
The whole process used in the training step is shown in figure 4.16. The first step of the training consists of creating a common binary mask for each of the OARs. This mask was computed by applying an "or" operation to all the reference masks in the training set for a given OAR. It was employed to prune the voxels in all the images, both in training and in classification, thus reducing the search region of each OAR. Therefore, only voxels located inside the common mask are taken into account when extracting the features fed to the classifier. Once all the features are extracted, scaling is applied to all of them. Finally, the training model is computed for the desired classifier.
Probability map and common mask creation.
A detailed example of how the probability map and the common mask are created during the training phase is shown in Fig. 4.17. The masks contained in the training set are added into a volume to create a probability map for each OAR, which yields voxel-wise continuous probabilistic measures in the range [0, 1] indicating the likelihood of the organ class. This map represents the frequency with which an OAR appears in the training set and therefore the probability of a given voxel to belong to the structure. The probability map is also used to reduce the number of samples that are fed into the classifier. From this map, a region of interest (ROI) mask is generated. The pruning criterion is based on the probability of a voxel to belong to any of the structures of interest: any voxel with a probability higher than zero is taken into account to create the common mask of each structure, which will be used to prune the voxels in the feature extraction stage. To ensure that the OARs of unseen patients lie inside this common mask, a security margin was added to the generated mask by applying a morphological dilation.
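A hedged NumPy/SciPy sketch of this probability map and dilated common mask construction is given below; the dilation margin is an illustrative assumption:

```python
import numpy as np
from scipy.ndimage import binary_dilation

def probability_map_and_mask(reference_masks, margin=2):
    """Voxel-wise probability map from the binary reference masks of the
    training set, plus a dilated common ROI mask used to prune voxels."""
    stack = np.stack(reference_masks).astype(np.float64)  # (n_subjects, X, Y, Z)
    prob_map = stack.mean(axis=0)                          # values in [0, 1]
    common_mask = prob_map > 0                             # any voxel ever labeled
    # Security margin so the organs of unseen patients stay inside the mask.
    common_mask = binary_dilation(common_mask, iterations=margin)
    return prob_map, common_mask
```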
Features extraction
Traditional features used in the segmentation of brain structures were already introduced in Section 3.7.1, and the newly proposed features were detailed in Section 4.3. However, it is important to note that the features employed in the classification may vary slightly from one organ to another, depending on the characteristics of the organ to segment. For example, the use of additional spatial information, such as the distance to the center of the brain and the angle with respect to the horizontal, can help the segmentation of symmetric thin structures, such as the optic nerves. On the other hand, the use of features that encourage spatial regularization over the entire structure improves the classification of large and/or well-defined structures, such as the brainstem or the eyes. Each structure therefore requires its own specific descriptors.
Features are extracted for the voxels that belong to the inner part of the ROI mask defined in the previous section. Hence, we avoid analyzing voxels that do not provide any relevant information for our problem.
Scaling
As explained in [START_REF] Sarle | Neural network FAQ. Periodic posting to the Usenet newsgroup comp ai neural-nets[END_REF], scaling the features before applying non-scale invariant techniques, such as SDAE, is very important for a good performance of the classifier. Among the main advantages of scaling, it can be mentioned that it helps to: 1) avoid attributes in greater numeric ranges dominating those in smaller numeric ranges, and 2) avoid numerical difficulties during the calculation. Complications in the calculations can be caused by large attribute values when the inner products of feature vectors are used to compute the kernel values.
The ranges [-1, +1] and [0, +1] are typically employed to scale the attributes of the feature vectors. The range selected to scale the training data must be consistent with the range used to scale the testing data. This means that the same scaling factors must be used for both training and classification data; they must not be scaled separately. Imagine that we have a features vector of intensities with 8 attributes indicating grey levels in the training set, which we scale to the range [0, +1] (second row of Table 4.3). For a given features vector in the testing set, if scaling is done independently of the data contained in the training set (fourth row of Table 4.3), the scaled values are not correlated with those of the training set. As a consequence, the classification performance will be unsatisfactory in comparison with correctly scaled features (row five of Table 4.3). In Appendix B of [START_REF] Hsu | A practical guide to support vector classification[END_REF], a real example showing the difference in classification accuracy between wrongly and correctly scaled values is detailed. The performance of classification algorithms also depends on the selection of their parameters, which must be carefully chosen by the user. However, the suitable combination of parameters depends on the training data. As a result, the parameter choice of the different classifiers can be viewed as an optimization process, in which parameter values are iteratively modified until a satisfactory result is achieved. The best parameters may be selected by users based on a priori knowledge and/or expertise [START_REF] Scholkopf | Learning with kernels: support vector machines, regularization, optimization, and beyond[END_REF]. Nevertheless, this often requires the user to manually test a wide range of parameters and select the best combination of them, which is time-consuming. Additionally, a risk of overfitting prevails when different settings for the classifiers are evaluated: parameters can be adjusted until the classification performs optimally, allowing a "leakage" of knowledge about the testing set into the model, which would no longer provide a generalizable measure of its performance. To tackle these issues, validation techniques for model selection have been adopted. The next section introduces the use of cross-validation to select a successful combination of the classifier's parameters.
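A minimal sketch of the consistent scaling described above, where the scaling factors are estimated on the training set only and then re-applied unchanged to the testing data, could look as follows:

```python
import numpy as np

def fit_scaler(train_features, lo=0.0, hi=1.0):
    """Per-attribute scaling factors estimated on the training set only."""
    mins = train_features.min(axis=0)
    maxs = train_features.max(axis=0)
    span = np.where(maxs > mins, maxs - mins, 1.0)   # guard against constant attributes
    return mins, span, lo, hi

def apply_scaler(features, scaler):
    """Apply the *training* scaling factors to training or testing features."""
    mins, span, lo, hi = scaler
    return lo + (hi - lo) * (features - mins) / span

# scaler = fit_scaler(train_X); train_scaled = apply_scaler(train_X, scaler)
# test_scaled = apply_scaler(test_X, scaler)   # same factors, never refitted on test data
```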
Cross-validation for model selection
In this section we consider how to use methods of cross-validation (CV) for model selection. The parameters of a classifier have to be optimized based on the training available data. An independent testing set is therefore required for making a reliable assessment of the applicability of the classifier to new data. Cross-validation provides a simple way to measure this generalization performance when no such test data are available. A common strategy is to separate the training data set into two disjoint sets. One of these sets is actually used for training, and the other, the validation set, which is used to monitor the performance. The prediction accuracy obtained from the unknown set more precisely reflects the performance on classifying an independent data set. The performance on the validation set is used as a proxy for the generalization error and model selection achieved using this measure.
In practice, a shortcoming of the hold-out method is that only a fraction of the full dataset can be used for training. In addition, if the validation set is small, the measured performance might have a large variance. To minimize these problems, CV is very often used in the k-fold setting: the data is split into k disjoint, equally sized subsets. Validation is then done on a single subset and training is done using the union of the remaining (k-1) subsets. The entire procedure is repeated k times, each time with a different subset for validation. Thus, a large fraction of the data can be used for training, and all cases appear as validation cases. The price is that k models must be trained instead of one. Typical values for k are in the range 3 to 10, and 10-fold CV has been shown to be accurate enough for model selection [START_REF] Breiman | Submodel selection and evaluation in regression. The X-random case[END_REF].
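The k-fold procedure can be sketched as follows; `train_fn` and `score_fn` are hypothetical callables standing in for the classifier training and the validation-accuracy computation:

```python
import numpy as np

def k_fold_indices(n_samples, k=10, seed=0):
    """Split sample indices into k disjoint, roughly equally sized folds."""
    idx = np.random.default_rng(seed).permutation(n_samples)
    return np.array_split(idx, k)

def cross_validate(X, y, k, train_fn, score_fn):
    """Train on k-1 folds, validate on the held-out fold, average the scores."""
    folds = k_fold_indices(len(y), k)
    scores = []
    for i, val_idx in enumerate(folds):
        train_idx = np.concatenate([f for j, f in enumerate(folds) if j != i])
        model = train_fn(X[train_idx], y[train_idx])
        scores.append(score_fn(model, X[val_idx], y[val_idx]))
    return float(np.mean(scores))
```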
The following subsection highlights and details the task of parameters selection for the SDAE approach followed in this thesis. Parameters selection for SVM are detailed in Appendix B.
SDAE Parameter Setting
One of the most crucial, and at the same time most complex, decisions to make when working with any kind of neural network is the architecture configuration. This comprises the choice of the depth of the network, as well as of the number of hidden units in each layer. Trying to find the best network configuration by performing a grid search becomes much harder than in the case of the SVM, where only two parameters were searched. The strategy followed to find a suitable network structure was based on the error convergence during training: the faster the convergence and the lower the error, the more suitable the network structure. In order to constrain the search and avoid having to test hundreds or even thousands of different network architectures, typical network configurations were employed, in which the size of layer l+1 is half that of the preceding layer l.
In an SDAE there are other parameters that must be carefully selected. These include the layer-wise learning rate, the activation function and the corruption level of the denoising autoencoder.
Classification
Classification is done one class at a time; that is, a binary classifier is used for each of the structures. In this context, the classes for each classifier are one structure of interest and the background. The classification scheme, although very similar, is slightly different from the scheme used during the training phase. Figure 4.19 presents the pipeline followed to segment a new, or target, patient.
Pre-processing
The pre-processing steps required for classifying a target patient are the same as those presented in the training section (Section 4.4.1): resampling, patient alignment and image normalization.
Features extraction
Voxel pruning is done with the common mask generated during training (Section 4.4.2.1), for each new target patient and each OAR. Then, the features to be used in the classifier are extracted from the voxels inside each ROI. As in the training phase, the features vary slightly from one organ to another. However, for a given organ, the features composing the feature array are the same in both training and classification.
Classification
Classification basically consists of applying the weights learned during the training stage to each input sample. Thus, once the features of all samples have been extracted and scaled, they are fed into the trained network. The input features are multiplied by the learned weights of the first layer; the output of the first layer is multiplied by the weights of the second layer, and this process is repeated until the last layer, which outputs a value indicating whether the sample belongs to the OAR class.
Post-processing
After classification, a post-processing step, consisting mainly of a filter applying morphological operations, was introduced before providing the output. In particular, a closing operation was employed to remove small isolated regions and fill small holes.
Chapter 5
Materials and Methods
"The best time to plant a tree was 20 years ago. The second best time is now."
Chinese Proverb
In this chapter the materials employed to conduct this work, as well as to evaluate the performance of the proposed approach are presented. First section introduces the software used to develop all the content of this thesis. Then, imaging data employed on the experiment is presented. Medical imaging analysis, and particularly segmentation, often lacks from a universal ground truth. Thus, multiple observers are typically required to manually delineate a set of structures on a group of patients, from which reference contours can be therefore generated. This second section details the process followed to generate the reference standard. Third section details the evaluation metrics employed to analyze results and how important they are for the assessment of our proposed classification scheme in clinical context. To evaluate whether there exist significant differences between groups, statistical analysis are often employed. This type of analysis is described in last section.
Software
All the code that has been employed in this thesis has been implemented using the following platforms: MATLAB( The MathWorks Inc., Natick, MA, 2000) and Microsoft Visual Studio (MSVS) 2010.
There are two main processes in the code developed in this thesis: image processing step (i.e. features extraction) and learning/classification. For the former step, a whole set of functions were developed in MSVS 2010 by using C++ programming language. The learning and classification steps for the deep networks were implemented on MATLAB based on the toolbox provided by Palm [START_REF] Palm | Prediction as a candidate for learning deep hierarchical models of data[END_REF]. The publicly available library libsvm [START_REF] Chang | LIBSVM: A library for support vector machines[END_REF] was used to compare our classification scheme with SVM.
Apart from the research contribution provided by this work, we also aim at developing a prototype, which is why we also employed MSVS. The main program runs on this platform. To connect MSVS with the MATLAB functionalities of the deep learning toolbox, the MATLAB run-time compiler was employed to create the dynamic libraries (i.e. DLLs) to be included in the MSVS project. The whole process is as follows:
1. Features extraction is performed by employing functions implemented in C++.
• All features have been extracted.
• An array containing all the features is created.
2. The MATLAB dll that contains the deep learning functionalities is called from MSVS.
• Either training or classification is performed.
• According to the operational mode (training/classification) some information is received (trained model or an array containing the predicted labels).
3. The segmentation is reconstructed by employing C++ code.

For the manual labeling, Artiview 3.0 (AQUILAB) was used by the observers who participated in the study of this thesis.
Method validation
Validation of medical image processing methods is of crucial importance because the performance of such methods can have an impact on the performance of the larger systems in which they are embedded. Definition of a standard protocol for validation may therefore have a high relevance to facilitate the complete and accurate reporting of validation studies and results and the comparison of such studies and results. Following the guidelines suggested by Jannin et al. [START_REF] Jannin | Model for defining and reporting referencebased validation protocols in medical image processing[END_REF] towards this standardization we designed the validation protocol of our method.
Validation objective
The validation hypothesis is the following: in the clinical context of the segmentation of organs at risk in brain cancer patients undergoing radiotherapy or radiosurgery, a segmentation method based on a stack of denoising auto-encoders fed with a wide range of image-based features extracted from MR-T1 images is able to segment those organs at risk with an accuracy that is significantly better than other state-of-the-art methods and that lies within the inter-expert variability.
Validation process
The validation process is performed on a validation dataset, whose detailed description is of high importance. The image data employed in this experiment consist of clinical images, described in Section 5.3.1. Given the validation datasets, the outcome of the segmentation method has to be validated. The segmentation method computes an estimate of the reference standard, the reference standard being the theoretical ideal result. In this work, the reference was provided by expert observers (Sections 5.3.2 and 5.3.3). By comparing the outcomes of the segmentation method and the reference standard, a validation criterion aims at characterizing different properties of the method to be validated, which may include accuracy, robustness or efficiency, for example. The evaluation metrics employed in this work to validate our segmentation method are introduced in Section 5.5. To make these comparisons, the output volumes of the segmentations are used. It is common to compare the results of a proposed method against a well-known state-of-the-art method; in our case, support vector machines (SVM) were chosen for comparison purposes. The last part of the validation process comprises the analysis of results (Section 6.2). First, the results computed by the proposed segmentation method and the reference method, i.e. SVM, are compared. The segmentation results are also compared against the manual annotations. Then, the comparison results are tested against the validation hypothesis (Section 5.2.2) in order to provide the validation result.
Imaging Data
Dataset
MRI data from 15 patients who underwent Leksell Gamma Knife Radiosurgery were used in this work. Two different MRI facilities were employed to acquire images according to the radiosurgery planning protocol (Table 5.1). Pathologies in this dataset included trigeminal neuralgia, metastases, and brainstem cavernoma. Although the employed dataset was limited in size, it was representative of the population. Examples of the original input sequences from several patients are shown in Figure 5.2. In this figure, axial slices showing some tumors on these patients are presented.
Experiments were performed retrospectively on all the patients. All analyzed data were collected as part of routine diagnosis and treatment. Prior to being processed, all images were anonymized. Patients were diagnosed and treated according to national guidelines and agreements; therefore, no consent approval for our study was required.

Table 5.1: Acquisition parameters of the two MRI devices.
Figure 5.3 shows the intensity profile of some OARs for a given patient. From this image, it can be seen that the structures share intensity bands, which makes it impossible to rely on voxel intensity values alone to separate them. In addition, some properties of the OARs across the patients included in this study are presented in Table 5
Manual Contouring
Altogether, four experts participated in this experiment: two neurosurgeons, one physician and one medical physicist. All of them were trained and qualified for radiosurgery delineation. However, the number of available manual contours differed from one OAR to another. The composition of the manually labeled dataset, per patient, was: four manual contours of the brainstem in 9 patients, three manual contours of the optic nerves, optic chiasm, pituitary gland and pituitary stalk in 15 patients, and only one manual contour of the eyes and lenses in 15 patients. The reason for having only one manual contour per patient for the eyes and lenses is that they do not represent complex structures to segment; less inter-observer variation is expected, making it meaningless to employ several contours to generate a reference standard. The delineation protocol was described before the contouring session. Artiview 3.0 (Aquilab) was used, after a training session, to produce DICOM-RT contour structures. Average manual segmentation times per organ are listed in table 5 The pie chart in Figure 5.4 represents the total time for manual segmentation averaged over all the patients, with the sections showing the time per delineated OAR. It can be observed that the brainstem, the eyes and the optic nerves were the structures on which the experts spent the most time during the segmentation task.
Simulated Ground Truth
To conduct a validation analysis of the quality of image segmentation, a voxel-wise reference standard is typically required. Nevertheless, image segmentation in the medical domain often lacks a universally known ground truth. Even though a single manual rater provides realistic data, contours may suffer from intra- and inter-observer variability. Thus, a number of observers and target patients sufficient for a sound statistical analysis is often required. Accordingly, this study has been designed to quantify variation among clinicians in delineating OARs and to assess our proposed classification scheme in this context. Therefore, available manual contours from the experts were used to create the simulated ground truth, which will be onwards referred to as reference. Reference contours have been obtained in this thesis by using the computationally simple concept of probability maps. In this method, which is analogous to the voting rule approach, probability maps are thresholded at a variable level in order to create the mask. The threshold was fixed at 50%, or at 75%, depending on whether the number of available manual contours from the physicians was three or four, respectively. Hence, reference contours for big structures such as the brainstem are generated by thresholding the probability map at 75% of the maximum level. For small structures, however, the threshold level is fixed at 50% of the probability map values. This choice of thresholds corresponds to the values proposed by the work of Biancardi et al. [START_REF] Biancardi | A comparison of ground truth estimation methods[END_REF]. In their work, threshold values of 50% and 75% tended to produce consistently large or small estimates, respectively. Figure 5.5 shows the generation of the reference standard for the brainstem and the optic nerves in our study. When only one manual contour was available, it was directly employed as reference standard (i.e. for eyes and lenses).

Figure 5.5: Creation of a reference contour example. In the left row, the contours from the observers in a 2D axial slice. In the middle row, the contour overlap is shown. Last, in the right row, the reference contour is created by the majority voting rule from the overlap map.
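As an illustration, the sketch below shows how such a reference contour could be derived from a set of aligned binary masks by thresholding their voxel-wise agreement; function and variable names are hypothetical and the actual implementation used in this work may differ.

import numpy as np

def reference_contour(manual_masks, threshold):
    """Majority-vote reference: threshold the voxel-wise agreement map.

    manual_masks: list of aligned binary volumes (same shape), one per rater.
    threshold: fraction of raters that must agree, e.g. 0.5 (three raters)
               or 0.75 (four raters), as used for small and large structures.
    """
    prob_map = np.mean(np.stack(manual_masks, axis=0), axis=0)
    return (prob_map >= threshold).astype(np.uint8)

# Example: four brainstem contours -> 75% agreement required
# reference = reference_contour([m1, m2, m3, m4], threshold=0.75)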
Due to differences between observers, the generated reference might not always be satisfactory and could behave as corrupted data, particularly if it is employed for learning. To ensure this did not happen, an external expert reviewed the generated reference contours and performed small modifications when needed.
Leave-One-Out-Cross-Validation
Typical validation techniques to evaluate the performance of a classifier involve separating the available dataset into two independent groups: a training group and a testing group. Accordingly, the training group is used to train the classifier, whereas the testing group is employed to evaluate its performance. Nevertheless, there are cases where the availability of images is limited and such a division cannot be made if a relevant evaluation is envisaged. Such is the case of the dataset employed in this thesis. In these situations, a strategy called leave-one-out cross-validation (LOOCV) is usually employed. LOOCV is closely related to the validation set approach explained in section 4.4.2.4.1; the difference lies in its attempt to address the drawbacks of the latter. Like the k-fold CV approach, LOOCV involves splitting the training set into two parts. However, instead of creating k subsets of comparable size, a single observation (x_1, y_1) is used for the validation set, and the remaining observations (x_2, y_2), ..., (x_n, y_n) are used to carry out the training. The learning method is fit on the n - 1 training observations, and a prediction ŷ_1 is made for the excluded observation, using its value x_1. The procedure is then repeated by employing the observation (x_2, y_2) as validation set and training the statistical learning process on the n - 1 remaining observations, (x_1, y_1), (x_3, y_3), ..., (x_n, y_n), and so on. This variation of CV can be seen as k-fold cross-validation where k is equal to the number of samples in the sample set. There is no need to generate random permutations for leave-one-out cross-validation and repeat the process, because the training and validation datasets for each of the folds are always the same, and therefore the result of the accuracy estimation is deterministic.
One of the major advantages of using LOOCV over the validation set approach is that it has less bias. Recall that, in the validation set method, the training set is commonly half the size of the entire dataset. When using LOOCV, on the other hand, the statistical learning approach is repeatedly fitted on training sets containing n - 1 observations. As a result, LOOCV tends not to overestimate the test error rate as much as the validation set approach does. Second, since there is no randomness in the training and validation groups, performing LOOCV multiple times will always produce the same outcome, contrary to the validation set approach, where results differ because of the randomness introduced when creating the training and validation sets.
Unlike in Section 4.4.2.4.1, where the partitioning of the data was done by grouping single instances randomly selected from all the patients into the different subsets, in this stage a patient is considered as a sample. That is, during model selection each observation represented a voxel and its features, whereas in classification an observation is assumed to be a patient.
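A minimal sketch of this patient-wise leave-one-out loop is given below; it assumes patients are identified by integer ids and omits the actual training and segmentation calls.

def loocv_splits(patient_ids):
    """Yield (training ids, held-out id) pairs, one fold per patient."""
    for held_out in patient_ids:
        train = [p for p in patient_ids if p != held_out]
        yield train, held_out

# With 15 patients this produces 15 folds, each trained on the 14 remaining patients.
for train_ids, test_id in loocv_splits(list(range(1, 16))):
    # fit the classifier on the voxels of train_ids, then segment patient test_id
    pass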
Evaluation metrics
Medical image segmentation is an important processing step in medical image analysis. Segmentation methods with high precision, high reproducibility and low bias are a main goal in radiotherapy because they directly impact its results. Consequently, assessing the accuracy and the quality of a segmentation algorithm is of great importance. There are different quality aspects in medical image segmentation, according to which different types of segmentation errors can be defined. Evaluation metrics are expected to indicate some or all of these errors, depending on the data and on the segmentation task. Requirements of medical segmentation evaluation were categorized by [START_REF] Fenster | Evaluation of segmentation algorithms for medical imaging[END_REF] into accuracy, precision as a measure of repeatability, and efficiency. The accuracy category represents the degree of agreement of the segmentation with respect to the reference contours. Under this category, two quality aspects were mentioned, namely the contour, or delineation of the boundary, and the size, or volume, of the segmented object.
As pointed out by [6], evaluation methods have lacked consensus as to comparison metrics. Since each metric yields different information, the choice of metrics is important and must be considered in the appropriate context. Although volume-based metrics, such as the Dice Similarity Coefficient (DSC) [START_REF] Dice | Measures of the amount of ecologic association between species[END_REF], have been broadly used to compare volume similarities, they are fairly insensitive to edge differences when those differences have a small impact on the overall volume. Therefore, two segmentations with a high degree of spatial overlap may exhibit clinically relevant differences at the edges. As a consequence, distance-based metrics, such as the Hausdorff distance, are also used to evaluate segmentation results.
Let us now introduce some metric definitions that will be used throughout this chapter. Let a medical volume be represented by a point set X = {x_1, ..., x_n}, where x_n represents voxel n, and let |X| = w × h × d = n, where w, h and d are the width, height and depth of the grid on which the volume is defined. To facilitate the understanding of the following sections, let us assume that we only deal with segmentations that have two classes: the class or structure of interest and the background. We will use the number 1 to refer to the class of interest, and the number 2 to refer to the background.

Let V_ref denote the volume used as reference, which is represented by the partition {V^1_ref, V^2_ref} of X. The assignment function f^i_ref(x) then provides the membership of the voxel x in the subset V^i_ref, where:
f^i_{ref}(x) = \begin{cases} 1 & \text{if } x \in V^i_{ref} \\ 0 & \text{if } x \notin V^i_{ref} \end{cases} \qquad (5.1)
On the other hand, let V_a refer to the automatic segmentation to be evaluated, which is represented by the partition {V^1_a, V^2_a} of X. Similarly to the case of the reference volume, the assignment function f^i_a(x) provides the membership of x in the class V^i_a, and is defined analogously.
Spatial overlap based metrics
Spatial overlap based metrics can be derived from the four basic cardinalities of the so-called confusion matrix: the true positives (TP), the false positives (FP), the true negatives (TN) and the false negatives (FN).
Basic cardinalities
Let V_ref and V_a be two segmentations. The confusion matrix gathers the four common cardinalities which reflect the overlap between them: TP, FP, TN and FN. For each pair of classes, i in the reference V_ref and j in the automatic segmentation V_a, the cardinality m_ij counts the voxels assigned to class i by the reference and to class j by the automatic segmentation:
m_{ij} = \sum_{n=1}^{|X|} f^i_{ref}(x_n) \, f^j_a(x_n) \qquad (5.2)
where TP = m_11, FP = m_21, FN = m_12 and TN = m_22, with class 1 denoting the structure of interest and class 2 the background. In simple terms, TP are the positive samples correctly labeled by the classifier, while TN denotes the negative samples correctly labeled. FP are the negative samples incorrectly classified as positive, i.e. the classifier erroneously indicates the presence of a condition, such as a disease, when in reality it is absent. Last, and contrary to FP, FN represents an error indicating no presence of a condition when it actually exists. In the medical domain, and more generally in binary classification, it is common practice to directly use these basic cardinalities to assess the performance of a classifier.
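For binary masks stored as numpy arrays, these cardinalities reduce to simple logical operations, as in the hedged sketch below (array names are illustrative).

import numpy as np

def cardinalities(ref_mask, auto_mask):
    """TP, FP, FN, TN between a reference and an automatic binary mask."""
    ref, auto = ref_mask.astype(bool), auto_mask.astype(bool)
    tp = int(np.sum(ref & auto))    # structure voxels found by the classifier
    fp = int(np.sum(~ref & auto))   # background voxels labeled as structure
    fn = int(np.sum(ref & ~auto))   # structure voxels missed by the classifier
    tn = int(np.sum(~ref & ~auto))  # background voxels correctly ignored
    return tp, fp, fn, tn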
Dice Similarity Coefficient
The Dice Similarity Coefficient (DSC) has been broadly used in the field of segmentation as a measure of spatial overlap [START_REF] Dice | Measures of the amount of ecologic association between species[END_REF]. It compares a pair of volumes (binary masks) and provides a similarity index between the two structures, defined as the ratio of twice the common volume to the sum of the individual volumes. Following the nomenclature already introduced, the Dice similarity coefficient is defined as
DSC = \frac{2\,|V^1_{ref} \cap V^1_a|}{|V^1_{ref}| + |V^1_a|} = \frac{2\,TP}{2\,TP + FP + FN} \qquad (5.3)
According to Eq. 5.3, DSC values close to 1 reflect high spatial agreement, while DSC values close to 0 show poor agreement between the volumes.
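A direct translation of Eq. 5.3 for two binary masks might look as follows; the handling of two empty masks is an assumption, not something specified in the text.

import numpy as np

def dice(ref_mask, auto_mask):
    """Dice similarity coefficient between two binary masks (Eq. 5.3)."""
    ref, auto = ref_mask.astype(bool), auto_mask.astype(bool)
    intersection = np.sum(ref & auto)
    denom = ref.sum() + auto.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0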
Sensitivity and specificity
In addition to volume and distance-based metrics, sensitivity and specificity were also investigated. Sensitivity measures the percentage of actual positive voxels which are correctly identified, whereas specificity measures the percentage of negative voxels which are correctly identified. To compute them, the numbers of true positive (TP), true negative (TN), false positive (FP), and false negative (FN) voxels were determined. These two metrics are defined as follows:
Sensitivity = Recall = TPR = \frac{TP}{TP + FN} \qquad (5.4)
Specificity = TNR = \frac{TN}{TN + FP} \qquad (5.5)
The sensitivity may be equal to 1 for a poor segmentation much bigger than the ground truth. The specificity is therefore the necessary counterpart of the sensitivity, but it may tend to 1 for a very poor segmentation that does not detect the object of interest at all. Consequently, a good segmentation system should have both high sensitivity and high specificity values. It is worth noticing that both measures are very sensitive to the size of the structure of interest; they penalize errors in small segments more than in large segments [START_REF] Fenster | Evaluation of segmentation algorithms for medical imaging[END_REF].
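The sketch below computes both measures voxel-wise from the basic cardinalities; it assumes non-degenerate masks (at least one positive and one negative voxel in the reference).

import numpy as np

def sensitivity_specificity(ref_mask, auto_mask):
    """Voxel-wise sensitivity (Eq. 5.4) and specificity (Eq. 5.5)."""
    ref, auto = ref_mask.astype(bool), auto_mask.astype(bool)
    tp = np.sum(ref & auto)
    fn = np.sum(ref & ~auto)
    tn = np.sum(~ref & ~auto)
    fp = np.sum(~ref & auto)
    return tp / (tp + fn), tn / (tn + fp)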
Receiver operating characteristic (ROC) analysis is usually employed to analyze classifier performance. In this evaluation, curves defining the relation between the sensitivity and (1 - specificity) are plotted. If the ROC analysis is considered from a radiotherapy point of view, FN and FP voxels must be taken into consideration when analyzing the segmentation performance. While FN voxels might lead to overirradiation of OAR voxels, FP voxels could result in a possible underirradiation of target volume voxels. Thus, the higher the sensitivity, the lower the risk of overirradiation of normal tissue, and the higher the specificity, the lower the risk of underirradiation of tumor tissue. Following the suggestion of [START_REF] Andrews | Benefit, risk, and optimization by ROC analysis in cancer radiotherapy[END_REF], instead of employing ROC curves to evaluate the performance of a given classifier, the ROC space is used. The ROC space can be divided into four sub-spaces, as shown in Figure 5.6. Results falling in the top-left sub-space indicate acceptable contours, with the OAR spared and the PTV covered. Results lying in the top-right sub-space present a high risk, since the OAR may be spared but the PTV is not covered. Contours whose ROC representation falls in the bottom-left sub-space are considered poor: although the PTV is covered, the OAR is not spared. Last, the bottom-right sub-space of the ROC subdivision contains the unacceptable contours, with the OAR not spared and the PTV not covered.
Volume based metrics
It is important to note that, although the concept of OAR is purely oncological or anatomical, a representation of these volumes is used in the planning process. Therefore, the defined volume of a critical structure plays a crucial role in the planned dose distribution. As can be seen in table 2.1, where the dose limits for the OARs in both radiotherapy and radiosurgery are defined, variations in the volume may lead to variations in the planned dose, especially in the case of radiosurgery.
Consequently, volume based metrics are of significant importance when generating contours to be used in the RTP. As their name indicates, volume based metrics consider the volumes of the segmentations to indicate similarity. To measure volume differences of manual and automatic contours with respect to the reference, we consider the following formula
\Delta V(\%) = \frac{V^1_a - V^1_{ref}}{V^1_{ref}} \times 100 \qquad (5.6)
which will be referred to as the relative Volume Difference (rVD). An important point to note is that, while the absolute value of ∆V(%) will be used to plot rVD values and to compute mean values, the signed values directly obtained from Eq. 5.6 (either negative or positive) will be used in the statistical analysis. The reason for employing absolute values of rVD to compute means is to evaluate total relative differences between contours. If, for example, we consider two contours that differ from the reference standard by -10% and 10%, the mean would be 0 if the signs were taken into account; however, both contours differ from the reference by 10%, independently of the sign, leading to a mean deviation of 10%.
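A possible implementation of Eq. 5.6 is sketched below; the signed value is returned so that the caller can decide whether to take its absolute value (for plots and means) or keep the sign (for the statistical analysis).

def relative_volume_difference(ref_mask, auto_mask, voxel_volume=1.0):
    """Relative volume difference in % (Eq. 5.6), signed.

    voxel_volume is optional: the ratio is unaffected by it as long as both
    masks share the same voxel grid.
    """
    v_ref = float(ref_mask.sum()) * voxel_volume
    v_auto = float(auto_mask.sum()) * voxel_volume
    return (v_auto - v_ref) / v_ref * 100.0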
Spatial distance based metrics
Volume-based metrics are not sufficient to capture edge dissimilarities that have a small impact on the overall segmented volume. If a given segmentation is to be used in the RTP, an analysis of the shape fidelity of the segmentation outline is highly recommended: any under-inclusion in the OAR delineation might leave part of the healthy tissue exposed to radiation. Spatial distance based metrics have also been widely employed in the literature to evaluate image segmentations as dissimilarity measures. They are strongly recommended when the overall segmentation accuracy is crucial, as in the case of its inclusion in the RTP. Therefore, a surface distance measure (the Hausdorff distance [START_REF] Huttenlocher | Comparing images using the Hausdorff distance. Pattern Analysis and Machine Intelligence[END_REF]) was also used to evaluate the segmentation results.
Hausdorff Distance
The Hausdorff Distance is a mathematical construct to measure the "closeness" of two sets of points that are subsets of a metric space. It represents the "maximum distance of a set to the nearest point in the other set". More formally, the directed Hausdorff distance from the finite point set X = {x_1, ..., x_p} to the finite point set Y = {y_1, ..., y_p} is a maximin function,

h(X, Y) = \max_{x \in X} \min_{y \in Y} \| x - y \|

and the (symmetric) Hausdorff distance between the two sets is defined as

H(X, Y) = \max( h(X, Y), h(Y, X) ) \qquad (5.7)

Using Figure 5.7 as an example: ROI X and ROI Y are two different segmentation proposals in a single MRI slice under evaluation. Somewhere on the edge of ROI X there is a point, x, that is further away from any point on Y than all other points on X's edge. This point has a minimum distance, l_2, to ROI Y; this is the directed Hausdorff distance from X to Y. Similarly, on the edge of ROI Y there is a point y that is further away from any point on X than all other points on the edge of Y. The minimum distance, l_1, from point y to a point on the edge of X is the directed Hausdorff distance from Y to X. The maximum of these two values (the longer of the two lines), in this case l_1, is the Hausdorff distance between ROI X and ROI Y in this MRI slice.
Thus, this distance can be used to determine the degree of resemblance between two objects that are superimposed on one another [START_REF] Huttenlocher | Comparing images using the Hausdorff distance. Pattern Analysis and Machine Intelligence[END_REF].
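One way to compute it for binary masks is sketched below, using SciPy's directed Hausdorff distance over the foreground voxel coordinates; the voxel spacing argument is an assumption (it should come from the image header), the masks are assumed non-empty, and surface-only variants of the metric are also common.

import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff_distance(ref_mask, auto_mask, spacing=(1.0, 1.0, 1.0)):
    """Symmetric Hausdorff distance between two binary masks, in mm."""
    ref_pts = np.argwhere(ref_mask > 0) * np.asarray(spacing)
    auto_pts = np.argwhere(auto_mask > 0) * np.asarray(spacing)
    h_ra = directed_hausdorff(ref_pts, auto_pts)[0]  # h(X, Y)
    h_ar = directed_hausdorff(auto_pts, ref_pts)[0]  # h(Y, X)
    return max(h_ra, h_ar)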
Efficiency
Efficiency describes the practical viability of a segmentation method. Two factors need to be considered to fully characterize efficiency: the computational time and the human operator time required to complete the segmentation of each study in a routine setting in the application domain. As already presented, user interaction is minimized to a simple alignment of the target patient before sending it to the classification process. Therefore, to assess efficiency, the computational time required for algorithm execution should be measured and analyzed.
Processing Time
For comparison purposes, the segmentation time observed for each physician when manually segmenting the OARs was recorded. The segmentation of each structure was timed individually, both for the manual segmentation and for the automatic contours. The total time per patient for each of the methods was then compared, as well as the time consumed per structure. For the purpose of our application, we consider two different times throughout the whole segmentation process: the feature extraction time and the classification time.
Statistical analysis
Among the different types of inferential statistical tests, analysis of variance (ANOVA) is the most suitable one for the purpose of our evaluation. ANOVA is a parametric method for comparing the means of several groups and it tests the significance of group differences between two or more groups. It is important to point out that it only determines that there is a difference between groups, but it does not tell us which group differs. In the ANOVA setting, the observed variance in a particular variable is partitioned into components attributable to different sources of variation. In its simplest form, ANOVA provides a statistical test of whether or not the means of several groups are equal, and therefore generalizes the t-test to more than two groups.
The first of the techniques encompassed in ANOVA approaches is the one-way ANOVA. It is used to determine whether there are any significant differences between the means of two or more groups. However, one of the assumptions is that samples contained in the groups must be independent, which is not the case in our study.
Nevertheless, the one-way repeated measures ANOVA is the equivalent of the one-way ANOVA for related, not independent, groups. A repeated measures ANOVA is also referred to as a within-subjects ANOVA or an ANOVA for correlated samples. All these names reflect the nature of the repeated measures ANOVA, that of a test to detect overall differences between related means. A one-way repeated measures ANOVA compares how a within-subjects experimental group performs in three or more experimental conditions, i.e. it is used when a single group has been measured several times. The analysis tests whether the mean of any of the individual experimental conditions differs significantly from the aggregate mean across the experimental conditions.
In particular, by employing statistical analysis we aim at demonstrating that the volume and surface differences between SVM and our SDAE-based classification system are statistically significant. On the other hand, we also employed statistical analysis between the manual annotations and the contours generated by our system. In this case, we expect to show that, although results from some manual observers were better than those provided by our approach, the differences were not statistically significant.
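As an illustration, a one-way repeated measures ANOVA of this kind can be run with statsmodels' AnovaRM, as sketched below on a hypothetical long-format table (the DSC values shown are made up for the example).

import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Hypothetical long-format data: one DSC value per patient and per method.
df = pd.DataFrame({
    "patient": [1, 1, 2, 2, 3, 3],
    "method":  ["SVM", "SDAE", "SVM", "SDAE", "SVM", "SDAE"],
    "DSC":     [0.77, 0.83, 0.79, 0.85, 0.75, 0.82],
})

# The patient is the subject; the segmentation method is the within-subject factor.
result = AnovaRM(data=df, depvar="DSC", subject="patient", within=["method"]).fit()
print(result.anova_table)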
Chapter 6 Experiments and Results
" There is only one way to avoid criticism: do nothing, say nothing, and be nothing.
" Aristotle
This chapter focuses on the experiments that were carried out to obtain the results presented in this work, and on how they were implemented. The setting up of these experiments is detailed in the first section of the chapter. The parameterization of all the values involved in any step of the proposed workflow is detailed in this section. This includes steps such as the generation of the probability map or common mask, the composition of the features vector, how features were extracted and the choice of SDAE parameters, for example. The second section presents the results obtained from the experiments. For comparison purposes, the proposed method is always compared against a classifier based on SVM. Manual observers are also taken into account to evaluate the performance of our proposed scheme in clinical settings. The main objective of this section is to demonstrate that our proposed scheme outperforms SVM when classifying OARs in brain cancer, and that its results lie within the variability of the experts. Accordingly, the results are subdivided into subsections. The last section summarizes the results and presents a discussion about them, including a comparison with other methods proposed in the literature to segment OARs in brain cancer.
Experiments set-up
Chapter 4 presented the theory behind the parameterization of the proposed workflow. In this section, the values obtained through the parameterization employed in each of the steps are detailed.
Parametrization
In previous sections, a detailed theoretical explanation of how training and classification are performed was given, together with the reasoning behind the procedures followed to train the learning based systems. In the following sections, the parameters used in each of these processes are presented.
Probability map and common mask creation
We have previously seen that the first step right after aligning the images contained in the training set is to generate a spatial probabilistic distribution map (SPDM) for each of the OARs. To generate the SPDM, the aligned manual labels are added into a single volume. The resulting image is then smoothed by using a Gaussian filter with a kernel size of 3x3x3. To reduce the number of input samples while keeping those containing consistent information, the voxel space was first binarized by setting values greater than 0.005 to 1, and the others to 0. Then, a dilation operation with a square kernel of size 3x3x3 was applied over the binary image. Only those voxels that belonged to the inner part of the dilated image were kept to extract the features.
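A hedged sketch of this step is shown below; the Gaussian smoothing is approximated with a small sigma (the text specifies a 3x3x3 kernel) and the function name is illustrative.

import numpy as np
from scipy import ndimage

def spdm_and_common_mask(aligned_labels, sigma=1.0, threshold=0.005):
    """SPDM of one OAR and the dilated mask inside which features are extracted.

    aligned_labels: list of aligned binary label volumes, one per training patient.
    """
    spdm = np.mean(np.stack(aligned_labels, axis=0), axis=0)   # add labels, normalize
    spdm = ndimage.gaussian_filter(spdm, sigma=sigma)          # smooth the map
    binary = spdm > threshold                                  # binarize at 0.005
    common_mask = ndimage.binary_dilation(binary, structure=np.ones((3, 3, 3)))
    return spdm, common_mask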
Composition of the features vector
As introduced in 4.3, dissimilarities between the characteristics of the OARs make some of the suggested features organ dependent, i.e. not suitable for all the organs investigated. Thus, two groups of OARs have been identified: large and/or well-defined organs with no large shape variations, and organs whose texture is heterogeneous and/or which present large shape variations and whose localization also varies substantially. From now onward, they will be referred to as group A and group B, respectively. See table 6.1 for the classification of the OARs into both groups.
Group A: Brainstem, Eyes, Lenses
Group B: Optic nerves, Pituitary gland, Pituitary stalk, Chiasm

Table 6.1: Classification of the OARs in groups A or B.
To demonstrate that including the proposed features for each group positively impacts the segmentation performance, different features sets have been evaluated. The first set for each group is composed of features that have already been proposed in other works; this set will be referred to as classical features in all the groups. Several features sets were investigated depending on the OARs group. A complete list describing the composition of the features vectors used is presented in table 6.2, and the next paragraphs present the details of how the features were extracted for each of the groups.

Following the same reasoning, intensity neighborhood properties in OARs of group C were extracted in a patch of size 5. Nevertheless, instead of extracting the information from a three-dimensional vicinity, only the 2D space was taken into account. Therefore, the vector sizes for classical features were 39, 137 and 34, for groups A, B and C, respectively. The features suggested to segment OARs of group A include the use of a geodesic distance transform map (GDTM) (section 4.3.3), the proposed 3D-Local binary texture pattern (3D-LBTP) (section 4.3.4) and the gradient value of the voxel under examination. The GDTM was generated by employing the 3D input image. To calculate the value of the GDTM at each voxel, we used a patch of size 3x3x3 and λ was set to 0.75. As detailed in section 4.3.4, 6 voxels around the central voxel and radii equal to 1 and 2 were employed to capture the neighborhood appearance. In total, 4 values were extracted for this feature: 1 texture and 1 binary value at each radius. This led to a features set composed of 19 features.
The features proposed to segment the OARs belonging to group B are divided into three sets: the augmented, textural and augmented-enhanced features vectors. In addition to features specific to each set, they include the features described for the classical features set. Gradient information was extracted on a two-dimensional patch of size 5x5 around each voxel for each of the gradient properties (horizontal and vertical gradient values, as well as gradient orientation); thus, 75 gradient values were obtained for each voxel. In addition to the gradient, contextual features were also included in this set. As in the work of [START_REF] Bai | Multi-atlas segmentation with augmented features for cardiac MR images[END_REF], regions of size 3x3x1 voxels were sampled around the voxel under examination by radiating from it every 45°, at four different radii: 4, 8, 16 and 32. By combining the continuous and the binary value at each sampled patch, this led to a total of 64 contextual features for each voxel. The textural features set comprises features related to texture. To compute first-order textural features, patches of size 3x3x3 were extracted around each voxel. Additionally, for the skewness, kurtosis and entropy, an additional patch of size 5x5x5 was also employed, leading to two values of these features for each voxel (one value per patch). Other patch configurations were also investigated; in particular, patches of size 7, 9 and 11 were included in the features vector. However, their inclusion did not lead to a significant performance improvement, while it considerably increased the computation time needed to extract the features; therefore, they have not been included in our evaluation. Regarding the use of wavelet-based features, the first to fourth order high-pass components from the discrete wavelet decomposition were employed. The total number of features used in each features set is shown in table 6.2. Last, the features set named augmented-enhanced features vector (AE-FV) encompasses all the sets previously presented for the OARs of group B. Therefore, the sizes of the features sets are as follows: 137 for the classical set, 276 for the augmented set, 149 for the textural set and 288 for the proposed AE-FV set.
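To give a flavour of the feature extraction, the sketch below computes a few of the first-order statistics (mean, variance, skewness, kurtosis and entropy) on a patch around a voxel; it does not reproduce the full AE-FV set (gradient, contextual, 3D-LBTP, GDTM or wavelet features), and the histogram binning used for the entropy is an assumption.

import numpy as np
from scipy.stats import skew, kurtosis

def first_order_patch_features(volume, x, y, z, half=1, bins=16):
    """First-order statistics of the (2*half+1)^3 patch centred on (x, y, z)."""
    patch = volume[x - half:x + half + 1,
                   y - half:y + half + 1,
                   z - half:z + half + 1].ravel().astype(float)
    counts, _ = np.histogram(patch, bins=bins)
    p = counts[counts > 0] / counts.sum()
    entropy = -np.sum(p * np.log2(p))
    return np.array([patch.mean(), patch.var(),
                     skew(patch), kurtosis(patch), entropy])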
Features scaling
Figure 6.1 shows the distribution of some features representing optic chiasm and non optic chiasm samples for one patient. For visualization purposes, only a few features have been selected. The idea is to show that the features included in the vector incorporate additional discriminative information for the segmentation. To avoid features with greater values dominating the classification, the features vector was normalized before training or testing. Except for the BRIEF descriptor features, all the rest were normalized in the range [-1, 1]. The same scaling factors applied during training are employed in the classification. To demonstrate that normalizing the feature values does not affect their discriminative power, the distribution of the normalized features is plotted in the lower row of figure 6.1.

The two parameters that can be tuned in the RBF kernel and which depend on the input data are C and γ. A coarse grid search, followed by a finer search, was performed to find the best combination of both parameters. For example, for the brainstem it was found from this search that the best values for C and γ were approximately 6 and 5.5, respectively, with an accuracy close to 97% and a precision of nearly 95% (Fig. 6.2). These values for C and γ were kept for training and classification in all the features sets.
Figure 6.2: Accuracy (Coarse Search) and Precision (Coarse Search) maps over the RBF kernel parameters C and γ, both axes on a log2 scale.
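A hedged sketch of the feature scaling and of the coarse grid search is given below, using scikit-learn with synthetic data in place of the real feature matrices; the grid bounds are illustrative and a finer search around the best pair would follow.

import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import MinMaxScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 19))          # synthetic stand-in for the feature matrix
y = rng.integers(0, 2, size=500)        # synthetic voxel labels

# Features scaled to [-1, 1] with factors learned on the training folds only.
pipe = make_pipeline(MinMaxScaler(feature_range=(-1, 1)), SVC(kernel="rbf"))
coarse_grid = {"svc__C": 2.0 ** np.arange(-2, 7, 2),
               "svc__gamma": 2.0 ** np.arange(-6, 4, 2)}
search = GridSearchCV(pipe, coarse_grid, scoring="accuracy", cv=3)
search.fit(X, y)
print(search.best_params_)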
Parameter setting for SDAE
The deep network used in the proposed classification scheme was formed by stacking DAEs (Fig. 6.3). The weights between the layers of the network are initially learned via the unsupervised pre-training step. Once all the weights of the network have been computed in an unsupervised manner, a supervised refinement is carried out by using the labeled classes, and the final values of the network weights are updated (Sec. 4.2.7). The stack of DAEs forms the intermediate layers of the deep network (see Figure 6.3). Nevertheless, defining the number of hidden layers, as well as their size, is not an easy task. Training was run multiple times with different configurations of the deep architecture to find a proper combination of parameters. As introduced in section 4.4.2.4.2, the strategy followed to find a network configuration is based on the error convergence during training. The curve plotted in figure 6.4 shows the progression of this error for several network configurations. With this procedure we obtained two optimal network configurations, depending on the number of elements composing the features vector (Table 6.2). The architecture of the networks aimed at segmenting the OARs of group A was composed of 4 hidden layers, with 100, 50, 25 and 10 units, from input to output, respectively. On the other hand, for the OARs of group B, the network was composed of 4 hidden layers, with 400, 200, 100 and 50 units, from input to output, respectively. The learned representation of the input therefore had a dimensionality of 10 for the structures of group A and 50 for the structures of group B.
Since our network is composed of 4 hidden layers, the weight vectors {W_1, W_2, W_3, W_4} are initially learned during the unsupervised pre-training. The denoising corruption level of the DAEs was set to 0.5, since a noise level of 50% has already been shown to perform well in other problems [START_REF] Vincent | Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion[END_REF]. Following the same architecture as in the unsupervised pre-training, four hidden layers of DAEs were used for the fine-tuning step, with the same number of units as before. At the end of the last layer of DAEs, a logistic regression layer with the sigmoid activation function is used as output.
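The sketch below, written with PyTorch, illustrates the idea of greedy unsupervised pre-training of stacked denoising autoencoders followed by a supervised sigmoid output; it is a simplified stand-in for the actual implementation (optimizer, number of epochs and data are assumptions), not a reproduction of it.

import torch
import torch.nn as nn

class DenoisingAE(nn.Module):
    """One DAE layer: mask 50% of the inputs and reconstruct the clean vector."""
    def __init__(self, n_in, n_hidden, corruption=0.5):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_in, n_hidden), nn.Sigmoid())
        self.decoder = nn.Sequential(nn.Linear(n_hidden, n_in), nn.Sigmoid())
        self.corruption = corruption

    def forward(self, x):
        noisy = x * (torch.rand_like(x) > self.corruption).float()  # masking noise
        code = self.encoder(noisy)
        return self.decoder(code), code

def pretrain(daes, data, epochs=10, lr=1e-3):
    """Greedy layer-wise unsupervised pre-training of the stacked DAEs."""
    current = data
    for dae in daes:
        opt = torch.optim.Adam(dae.parameters(), lr=lr)
        for _ in range(epochs):
            recon, _ = dae(current)
            loss = nn.functional.mse_loss(recon, current)
            opt.zero_grad()
            loss.backward()
            opt.step()
        with torch.no_grad():
            current = dae.encoder(current)   # feed the codes to the next layer
    return daes

# Hidden sizes used for group B (400-200-100-50) on the 288-feature AE-FV input.
sizes = [288, 400, 200, 100, 50]
daes = [DenoisingAE(sizes[i], sizes[i + 1]) for i in range(len(sizes) - 1)]
X = torch.rand(1024, 288)                    # illustrative unlabeled training voxels
daes = pretrain(daes, X)

# Supervised fine-tuning: stack the encoders and append the sigmoid output unit,
# then train the whole network with a binary cross-entropy loss on labeled voxels.
classifier = nn.Sequential(*[d.encoder for d in daes],
                           nn.Linear(sizes[-1], 1), nn.Sigmoid())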
In practice, it is very common to have a training set with an unbalanced number of positive and negative samples. Not choosing a proper balance between them might lead to unsatisfactory results. Since the performance of the classifier depends on the available data, there exists no rule to define the best balance
between positive and negative samples in a training set. Therefore, we evaluated the impact of different unbalanced training sets on the DSC. Figure 6.5 plots the evolution of the DSC in relation to the proportion of positive and negative samples for a patient in a given OAR. Since the number of negative samples was much higher than the number of positive samples, we employed all the positive samples and increased the number of negative samples in steps. We observed that, in general, the DSC increased up to a number of negative samples equal to 32 times the number of positive ones; increasing the number of negative samples beyond that did not significantly improve the DSC values. This behavior can be attributed to the amount of data often required by deep learning methods to learn input representations. Some structures in our experiment were composed of an average number of voxels ranging from 80 to 785. This is the case, for instance, of the chiasm, whose mean volume comprised 235 voxels. Due to the limited available dataset, the training set for the chiasm in the balanced case was composed of nearly 6580 voxels (235 × 14 patients × 2). This number of samples proved to be insufficient to provide the best volume similarity performance in our experiment. A low amount of available training samples makes the situation even worse in cases presenting a large variability between samples, such as the optic nerves. Regardless of the sample type, i.e. either negative or positive, adding more samples to the training set increased the volume similarity performance. Thus, for training purposes, the number of negative samples was set to 30 times the number of positive samples, when that amount of negative samples was available; otherwise, all the available samples were used to train the classifier.
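A possible way to build such an unbalanced-but-capped training set is sketched below (the 30:1 cap and the random seed are parameters, not prescriptions).

import numpy as np

def subsample_negatives(X, y, ratio=30, seed=0):
    """Keep all positive voxels and at most `ratio` times as many negatives."""
    rng = np.random.default_rng(seed)
    pos = np.flatnonzero(y == 1)
    neg = np.flatnonzero(y == 0)
    n_neg = min(len(neg), ratio * len(pos))
    keep = np.concatenate([pos, rng.choice(neg, size=n_neg, replace=False)])
    rng.shuffle(keep)
    return X[keep], y[keep]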
Leave-one-out-cross-validation
As explained in section 5.4, we employ this strategy to evaluate our method. The technique consists in leaving one of the patients of the dataset out and training the classifier on the remaining patients. This process is repeated as many times as there are patients. Thus, taking into account that our dataset is composed of 15 patients, we use 14 patients for training and 1 for classification, and repeat this 15 times, leaving a different patient out in each iteration.
Results
Since SVM has proven to be a state-of-the-art classifier, we use it in this thesis for comparison purposes. To demonstrate that employing a deep network scheme to classify OARs can outperform SVM, different configurations were evaluated. The configurations differ in: i) the use of either SVM or SDAE for classification and ii) the features set used among those described in Section 6.1.1.2. Accordingly, the first configuration is always composed of SVM and classical features, which will be referred to as SVM 1. Next, the classical features are employed in the SDAE based system, which leads to the configuration known as SDAE 1. Depending on the OARs group, several configurations are evaluated (Table 6.2); they will be referred to as SDAE n, where n denotes the features group used. Finally, SVM is employed with the last features set of each configuration, i.e. the proposed set, leading to the SVM 2 or SVM AE-FV setting for organs from group A or B, respectively.
The structures considered as OARs in the present work differ from one another in texture and/or shape appearance. Nevertheless, as explained in section 4.3, despite these differences, some structures present a certain homogeneity in texture, and their variation in shape and location is less pronounced than in others. Therefore, the OARs are classified into two main groups, A and B. As a reminder, group A is composed of the brainstem, eyes and lenses, while the optic nerves, optic chiasm, pituitary gland and pituitary stalk belong to group B. Because the proposed features vary between groups A and B, the results for both groups are presented separately.
Additionally, the number of patients and manual contours available to evaluate the performance of the proposed approach differed across OARs (see section 5.3 for the dataset composition). Hence, results comparing manual and automatic contours are presented when available.
OARs group A
This section presents the results of the automatic approaches to segment the OARs that belong to group A. For evaluation purposes, in those organs separately present in both the left and right brain sides, each side is analyzed individually. With regard to the features sets employed, we refer to table 6.2, where the different groups of features were presented. Following this definition, and as previously explained, the deep learning scheme that employs classical features will be referred to as SDAE 1, whilst the one employing the proposed set will be referred to as SDAE 2 in this section. In addition, the setting employing SVM and classical features will be referred to as SVM 1, while the configuration employing the proposed features will be referred to as SVM 2.
Comparison with respect to the reference standard
Performance of the four automatic configurations with respect to the reference standard is evaluated in this section. The objective is to quantitatively demonstrate that our proposed learning scheme outperforms the SVM settings, as well as the SDAE scheme configured with classical features.
Dice Similarity Coefficients. The Dice similarity coefficients obtained by the automatic segmentations of the OARs of group A are plotted in Figure 6.6. Box plots are grouped for each OAR. Inside each group, results for the reference SVM 1, the SVM scheme employing the proposed features (SVM 2), the SDAE setting with classical features (SDAE 1) and our proposed system (SDAE 2) are displayed. The median value of each group was taken as the 50th percentile of the distribution, q 50. To calculate the first and third quartiles, i.e. q 25 and q 75, the medians of the elements lower and higher than q 50 were respectively employed; the interquartile range (IQR) was then equal to q 75 - q 25. The lower and upper inner fences were estimated by taking 1.5×IQR from the corresponding quartile rather than the max or min. Last, outliers were those values that were either 1.5×IQR or more above the third quartile, or 1.5×IQR or more below the first quartile. Looking across all structures, the segmentations produced by the SVM 1 system achieved the lowest results in comparison with the other three settings. While it reported a mean DSC of 0.77 (± 0.05) over all the structures, a mean value of 0.79 (± 0.04) was achieved by the SVM 2 setting. The schemes based on deep learning, i.e. SDAE 1 and SDAE 2, obtained mean DSC values of 0.83 (± 0.05) and 0.85 (± 0.05), respectively. Decomposing into single structures, it can be observed that solely by employing SDAE in the classification scheme instead of SVM, the segmentation performance improved for all the structures and the variability was reduced. If, in addition, the proposed features are fed into the classifier, the performance improves further in most of the OARs, particularly in the SDAE frameworks. Specifically for the proposed configuration, SDAE 2, the mean DSC for the brainstem was greater than 0.9 (0.92 ± 0.02) and close to 0.9 for both eyes; for both lenses, the mean DSC was nearly 0.75. Furthermore, the overall minimum DSC in large structures was typically above 0.85, while for small organs this minimum value was just below 0.7.
The automatic segmentations presented small but significant (p < 0.05) differences across the machine and deep learning settings when conducting a within-subjects ANOVA test on the DSC of all the groups. The small p-value indicated that at least one method significantly differed from the others. Paired repeated measures ANOVAs (Table 6.4) show the p-values obtained when comparing results between only two groups. This table points out that differences were particularly pronounced for the scheme employing SVM combined with classical features as classifier (first row of each group), whose p-values were lower than 0.05. Regarding the inclusion of the proposed features in the deep learning scheme, with the exception of the brainstem (p = 0.0181), no significant differences were found on the DSC between the groups using SDAE as classifier.
Brainstem      SVM 1    SVM 2    SDAE 1   SDAE 2
SVM 1          1        0.0307   0.0158   6.3953x10^-4
SVM 2          -        1        0.7379   0.1440
SDAE 1         -        -        1        0.2885
SDAE 2         -        -        -        1
Table 6.5: Paired ANOVA tests for the Hausdorff distances between the automatic approaches to segment OARs from group A.
The ANOVA test demonstrated that differences also existed between the automatic segmentations in terms of Hausdorff distances (p < 0.05). As in the case of the Dice similarities, the significant differences mainly came from the SVM 1 setting, particularly when compared with the SDAE groups (Table 6.5). Although the results plotted in Figure 6.7 show that including the proposed features in the classification scheme slightly decreased the HD values, the ANOVA analysis indicates that no significant differences existed between SDAE 1 and SDAE 2 with regard to HD. With the exception of the brainstem, where the difference in mean HD was larger, the improvement is therefore marginal in terms of surface difference.

Relative Volume differences. Figure 6.8 plots the relative volume difference (rVD) distributions of the four automatic schemes for each of the organs from group A. The schemes employing SVM as classifier presented the largest volume differences for all the structures; such differences in volume often doubled those obtained by the SDAE classifiers. For example, while the volumes generated by SVM 1 and SVM 2 when segmenting the lenses were sometimes around 100-120% larger than the reference standard, these differences were reduced to 50-55% when employing SDAE classifiers. Differences in the generated volumes were statistically significant across the four groups (p < 0.05). Results from volume differences showed more dissimilarities in the paired ANOVA tests than the previous metrics (Table 6.6). Paired ANOVA tests pointed out that the differences between volumes generated by the SVM 1 framework and volumes generated with the other three settings were statistically significant in nearly all the structures. A similar situation was observed for the SVM 2 configuration, whose generated volumes often presented statistically significant differences in comparison with those generated by the deep networks. Last, the volumes generated by the two SDAE settings did not present significant differences, with the exception of the brainstem, where the paired ANOVA test provided a p-value of 0.0188.
Sensitivity and specificity. Mean sensitivity and specificity values across the OARs of group A for the four classifier configurations are reported in table 6.7. The sensitivity and specificity obtained with the proposed SDAE 2 framework were commonly among the top-ranked results for all the organs of group A. In particular, the sensitivity values achieved by the proposed setting were the highest for the brainstem and lenses, and among the two highest when segmenting the eyes. Furthermore, the standard deviation of the sensitivity was reduced in the configurations that employed SDAE as classifier. We can also observe that the inclusion of the proposed features in the SDAE schemes slightly improved the sensitivity values across all the structures. Nevertheless, this trend was not observed in the settings employing SVM as classifier; for example, the combination of the proposed features with SVM, SVM 2, achieved lower sensitivity values than SVM 1 when segmenting both eyes. In terms of specificity, however, results varied across the four configurations. For large structures, the configurations including the proposed features reported the highest specificity values in comparison with the classical features sets. On the contrary, for small organs, i.e. the lenses, the classical features settings achieved marginally higher results than their counterparts with the proposed features. Differences between SVM and SDAE settings employing the same features set mainly came from the brainstem and lenses. Mean specificity values obtained from the brainstem segmentations were around 5% higher in the SDAE than in the SVM configurations. In the case of the lenses segmentations, the highest specificity values went to the SVM side, with mean values 5-10% higher than the SDAE settings.
A good classifier should ideally be a combination of both high sensitivity and specificity values. We can thereby say that SDAE settings were better classifiers than configurations employing SVM. Additionally, the introduction of proposed features into the classifier improved, although marginally, sensitivity and specificity values of segmentations of OARs from this group.
Following the ROC subdivision presented in Section 5.5.1.3, figure 6.9 is presented. In this figure, crosses indicate the correspondence between the sensitivity and (1 - specificity) for each patient for the four automatic settings; each cross represents a single patient and its color indicates the setting employed. It can be observed that for the four analyzed configurations, nearly all results lie in the top-left sub-space, which indicates that the contours would be considered acceptable for RTP. However, the automatic lenses contours of two patients lie in the "high risk" region when employing SDAE 1 and SDAE 2. Furthermore, it is important to note that, for both lenses, some results come dangerously close to the "high risk" and "poor" regions. While some segmentations generated by the SDAE based classifiers are closer to the "high risk" region, segmentations generated by SVM are typically closer to the "poor" region.
Comparison across manual contours and the proposed scheme
This section evaluates the manual segmentations in relation to the generated reference standard and compares them with the automatic segmentations obtained with our approach. The goal is to quantitatively demonstrate that the segmentations generated by our proposed learning scheme lie within the variability of the experts. Since the brainstem was the only structure in group A for which we obtained more than one manual contour per patient, this section only contains results for the brainstem.

Dice similarity coefficients. The proposed approach achieved a mean DSC value of 0.92, with a minimum value of 0.89 and a maximum value of 0.93. It can be observed in figure 6.11 (left) that the mean DSC achieved by the proposed system is higher than the values reported by the manual segmentations when compared with the reference standard. The within-subjects ANOVA test conducted on the DSC of all the groups (p < 0.05) indicated that there were significant differences among them. These differences were especially pronounced for observers 1 and 4. On the right side of this figure, the ANOVA multi-group comparison is presented. The proposed scheme is represented by a blue line, while the groups whose means are significantly different from the SDAE group are drawn in red; these groups correspond to observers 1 and 4.
Figure 6.12: Hausdorff distance results of the manual observers and our proposed approach for the brainstem.
Hausdorff distances. The left side of figure 6.12 plots the Hausdorff distance distributions for the groups of manual and automatic contours. While the mean HD values for the four observers ranged from 6.52 to 10.09 mm, our proposed system achieved a mean HD of 5.87 mm. The minimum and maximum HD values obtained by the group of manual raters were 4.12 and 16.93 mm, respectively.
Although the minimum HD values were not decreased when employing the deep learning scheme, the maximum values were reduced to almost half with respect to several observers. Furthermore, the variability of the reported HD was also decreased by the proposed system. The within-subjects ANOVA test conducted on the HD of all the groups indicated that there were significant differences among them (p = 0.0225). However, despite the dissimilarities observed across the observers and the automatic approach, only the segmentations from observer 1 presented significant differences with respect to the automatic contours (Figure 6.12, right). In this figure, our approach is drawn in blue and the groups whose means are significantly different from it are displayed in red.
Figure 6.13: Volume difference results of the manual observers and our proposed approach for the brainstem.
Relative volume differences. The mean relative volume differences with respect to the reference contours across the four manual observers were 29.39%, 18.92%, 23.59% and 39.44%, for observers 1, 2, 3 and 4, respectively. By employing the deep learning based classification scheme, the relative volume difference was reduced to a mean value of 3.10% with respect to the reference volume. The within-subjects ANOVA test conducted on the volume differences of all the groups indicated that there were significant differences among them (p = 5.5216x10^-12). These differences come from the manual groups with respect to the automatic method (Figure 6.13). In this figure, the ANOVA multi-group comparison for volume differences is shown: the blue line represents the automatic SDAE setting, while the red lines symbolize the groups comprising the manual raters. As can be observed, the mean of the SDAE setting presents significant differences with respect to all the raters of the manual segmentation group.

Table 6.8 summarizes the performance of the manual annotations of the brainstem done by the four observers in comparison with the proposed approach. For the three metrics, the proposed approach significantly outperforms the manual annotations, particularly in terms of relative volume differences. Figure 6.14 shows a visual example of manual contours (top) and contours generated by our approach (bottom) when segmenting the brainstem, and their comparison with the reference standard. It can be observed from the manual contours that differences between manual raters usually come from the z axis and from areas where no visible anatomical boundaries exist.

Table 6.8: Comparisons across the four observers and the proposed approach when segmenting the brainstem.
OARs group B
This section presents the results of the automatic approaches to segment the OARs that belong to group B: the optic nerves, pituitary gland, pituitary stalk and chiasm. As in section 6.2.1, the organs separately present in both the left and right brain sides are split into left and right, which are analyzed individually. With regard to the features sets employed, we refer to table 6.2, where the different groups of features were presented. Following this definition, the deep learning scheme that employs classical features will be referred to as SDAE 1. The remaining settings will be referred to as SDAE Augmented, SDAE Textural and SDAE AE-FV, for the augmented, textural and AE-FV sets, respectively. As in the previous section, and to investigate the impact of employing a deep network as classifier instead of other classification schemes, SVM is used as reference. Both the classical and the AE-FV configurations in combination with SVM are included in the evaluation; these settings will be referred to as SVM 1 and SVM AE-FV, respectively.
Comparison with respect to the reference standard
Dice Similarity Coefficients. The Dice similarity coefficients obtained with the automatic segmentations with respect to the reference standard for the OARs of group B are plotted in Figure 6.15. Box plots are grouped for each OAR; inside each group, results for the SVM references and the several SDAE settings are displayed. Among all configurations, the SVM based classifiers presented the lowest overall mean DSC values, with 0.59 (± 0.16) and 0.64 (± 0.09) for SVM 1 and SVM AE-FV, respectively. Concerning the SDAE settings, the system that included our proposed features, SDAE AE-FV, achieved the highest mean DSC value over all the OARs. The values for the several SDAE configurations were: 0.69 (± 0.11), 0.74 (± 0.07), 0.74 (± 0.07) and 0.79 (± 0.06), for the classical, augmented, textural and AE-FV sets, respectively. Analyzing each structure separately, we can observe that, again, the mean DSC values from the SVM configurations were among the lowest ones. In this setting, adding the set of proposed features generally improved the mean DSC; nevertheless, it often remained below the mean values achieved by the SDAE based classifiers. Regarding the impact of the different features sets on the deep architectures, the use of classical features produced segmentations with acceptable mean DSC across all the OARs, but it did not outperform any of the other three features groups. The mean DSC values for SDAE 1 were 0.72 (± 0.09), 0.72 (± 0.10), 0.68 (± 0.12), 0.68 (± 0.10) and 0.67 (± 0.13) for the left optic nerve, right optic nerve, pituitary gland, pituitary stalk and chiasm, respectively. The introduction of either augmented or textural features improved the segmentation performance of the classifier, which is reflected in its mean DSC values. In the same order, the mean DSC values were 0.73 (± 0.04), 0.75 (± 0.06), 0.73 (± 0.08), 0.73 (± 0.09) and 0.74 (± 0.08) for the augmented features set, and 0.76 (± 0.05), 0.76 (± 0.06), 0.73 (± 0.08), 0.70 (± 0.10) and 0.75 (± 0.06) when employing the textural features set. Last, the use of the proposed features set, i.e. AE-FV, achieved the highest mean DSC values across all the structures, with values of 0.78 (± 0.05), 0.80 (± 0.06), 0.76 (± 0.06), 0.77 (± 0.08) and 0.83 (± 0.06), respectively.
The automatic segmentations presented significant differences (p < 0.05) across the automatic groups, according to the within-subjects ANOVA test on the DSC of all the groups. Paired repeated measures ANOVAs were conducted over the groups that employed only classical and proposed features; the objective was to evaluate whether the inclusion of the features set proposed in this thesis made a significant difference with respect to the classical features set.
Paired ANOVA (DSC)   SVM 1    SVM AE-FV   SDAE 1        SDAE AE-FV

Optic Nerve (L)
  SVM 1              1        0.1386      0.0001        5.9524x10^-7
  SVM AE-FV          -        1           3.8499x10^-5  2.1743x10^-6
  SDAE 1             -        -           1             0.0159
  SDAE AE-FV         -        -           -             1

Optic Nerve (R)
  SVM 1              1        0.0737      0.0008        2.8712x10^-7
  SVM AE-FV          -        1           0.0015        7.8942x10^-6
  SDAE 1             -        -           1             0.0138
  SDAE AE-FV         -        -           -             1

Pituitary Gland
  SVM 1              1        0.6793      0.9564        0.0173
  SVM AE-FV          -        1           0.7341        0.0472
  SDAE 1             -        -           1             0.0291
  SDAE AE-FV         -        -           -             1

Pituitary Stalk
  SVM 1              1        0.7635      0.7507        0.0147
  SVM AE-FV          -        1           0.4761        0.0014
  SDAE 1             -        -           1             0.0081
  SDAE AE-FV         -        -           -             1

Chiasm
  SVM 1              1        0.1503      0.0807        9.1865x10^-9
  SVM AE-FV          -        1           0.4281        8.4951x10^-8
  SDAE 1             -        -           1             0.0002
  SDAE AE-FV         -        -           -             1
Table 6.9: Paired ANOVA tests for the DSC between the automatic approaches to segment OARs from group B.
The results of these tests on the DSC values are presented in table 6.9, which shows the p-values obtained when comparing results between only two groups. The results demonstrate that no statistically significant differences existed between the two SVM based systems in any of the OARs of this group (p > 0.05). Regarding the use of deep networks, the combination of SDAE as classifier with classical features reported significant differences with respect to the SVM groups when segmenting both optic nerves. Our proposed scheme, however, presented differences on DSC values that were statistically significant with respect to the other groups in all the OARs.
Hausdorff distances. Figure 6.16 plots the distribution of the HD across the OARs for all the automatic frameworks. As in the case of the DSC distributions, the mean HD values over all the structures show that the SVM based classifiers presented the worst results. While SVM 1 and SVM AE-FV achieved an overall mean HD of 7.09 (± 5.23) and 6.63 (± 5.09) mm, respectively, the mean values for the SDAE settings were 5. While in some organs the mean HD values were lower for the augmented features based classifiers, in some other organs the textural features set achieved the lowest mean HD values. Nevertheless, the combination of both features sets into the AE-FV set led to the lowest mean HD values across all the structures. The mean HD values obtained with the proposed features set were 3.51 (± 0.87), 3.67 (± 0.67), 3.34 (± 1.09), 2.78 (± 0.76) and 3.29 (± 1.19) mm, for the left and right optic nerve, pituitary gland, pituitary stalk and chiasm, respectively.
Paired repeated measures ANOVAs conducted on the HD values (Table 6.10) indicate that including the proposed features in the SVM based classifier did not produce segmentations with significant differences with respect to the classical configuration (p > 0.05). Employing SDAE as classifier with the classical features set did not report statistically significant differences in four out of five structures; only the segmentations of the right optic nerve (p = 0.0098) showed significant differences between SVM and SDAE when employing the classical features set. Nevertheless, the differences between the two classifiers, i.e. SVM and SDAE, were significant when employing the proposed features over all the structures (p < 0.05).
Paired ANOVA (Hausdorff distances)   SVM 1    SVM AE-FV   SDAE 1   SDAE AE-FV

Optic Nerve (L)
  SVM 1              1        0.9318      0.5786        0.0014
  SVM AE-FV          -        1           0.4435        0.0034
  SDAE 1             -        -           1             0.0377
  SDAE AE-FV         -        -           -             1

Optic Nerve (R)
  SVM 1              1        0.0519      0.0098        7.7642x10^-7
  SVM AE-FV          -        1           0.3869        0.0001
  SDAE 1             -        -           1             0.0057
  SDAE AE-FV         -        -           -             1

Pituitary Gland
  SVM 1              1        0.3855      0.3358        0.0077
  SVM AE-FV          -        1           0.1008        0.0031
  SDAE 1             -        -           1             0.0836
  SDAE AE-FV         -        -           -             1

Pituitary Stalk
  SVM 1              1        0.8769      0.5265        0.0099
  SVM AE-FV          -        1           0.5917        0.0017
  SDAE 1             -        -           1             0.0616
  SDAE AE-FV         -        -           -             1

Chiasm
  SVM 1              1        0.7512      0.7921        0.0003
  SVM AE-FV          -        1           0.9461        0.0005
  SDAE 1             -        -           1             0.0165
  SDAE AE-FV         -        -           -             1
Table 6.10: Paired ANOVA tests for the Hausdorff distances between the automatic approaches to segment OARs from group B.
Regarding the use of the proposed features against the classical features in the SDAE settings, segmentations of both optic nerves and the chiasm presented significant differences between them, with p-values of 0.0377, 0.0057 and 0.0165, respectively.
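For reference, the symmetric Hausdorff distance reported here can be computed from two binary masks as in the following sketch; the SciPy-based implementation and the voxel-spacing handling are illustrative assumptions, not the exact code used in this work.

```python
# Sketch of the symmetric Hausdorff distance between two binary segmentation masks.
# Assumes 3D numpy boolean arrays and a voxel spacing given in mm.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff_mm(mask_a, mask_b, spacing=(1.0, 1.0, 1.0)):
    # Coordinates (in mm) of the foreground voxels of each mask.
    pts_a = np.argwhere(mask_a) * np.asarray(spacing)
    pts_b = np.argwhere(mask_b) * np.asarray(spacing)
    # Symmetric HD is the maximum of the two directed distances.
    d_ab = directed_hausdorff(pts_a, pts_b)[0]
    d_ba = directed_hausdorff(pts_b, pts_a)[0]
    return max(d_ab, d_ba)
```

For large volumes, restricting the point sets to boundary voxels keeps the computation tractable without changing the result.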
Relative Volume Differences. Distributions of relative volume differences of the six automatic schemes for each organ are plotted in Figure 6.17. Schemes employing SVM as classifier presented the largest volume differences for all the OARs of group B. Indeed, with the exception of the pituitary stalk, mean relative volume differences for the SVM based systems were double those reported by the SDAE settings, independently of the features set used. Taking results from each structure, it can be observed that employing either augmented or textural features in the SDAE settings did not reduce the mean rVD with respect to classical features. Actually, in some cases, such as both optic nerves, differences in volume were higher when employing one of these groups. However, the proposed features set, which comprises all these groups, achieved the lowest rVD among all the configurations. Mean relative volume differences for the six groups over the five OARs of this group (left optic nerve, right optic nerve, pituitary gland, pituitary stalk and chiasm) can be read from Figure 6.17.
Automatic segmentations presented significant differences (p < 0.05) across the automatic groups. Paired repeated measures ANOVAs (Table 6.11) indicate that differences between groups, in terms of volume differences, were significant in most of the cases. Results from the SVM based scheme that employed the proposed features were significantly different from those obtained by the classical setting when segmenting both optic nerves and the pituitary stalk. Concerning the use of SDAE as classifier, results from the SDAE settings were significantly different from the SVM settings in all the organs, with the exception of the pituitary stalk. In this case, with p-values of 0.0961 and 0.7652 for SDAE 1 and SDAE AE-FV, respectively, differences in volume were not statistically significant between SVM 1 and the two SDAE groups. On the other hand, in terms of volume differences, the impact of adding the proposed features into the deep learning scheme was statistically significant only when segmenting the pituitary stalk and chiasm (p = 0.0394 and p = 0.0068).
Paired ANOVA (Relative volume differences)   SVM 1    SVM AE-FV   SDAE 1        SDAE AE-FV
Optic Nerve (L)
  SVM 1                                      1        0.0099      3.2411x10^-8  9.3889x10^-9
  SVM AE-FV                                  -        1           1.3548x10^-6  3.2796x10^-7
  SDAE 1                                     -        -           1             0.2633
  SDAE AE-FV                                 -        -           -             1
Optic Nerve (R)
  SVM 1                                      1        0.0135      8.4021x10^-6  1.0771x10^-5
  SVM AE-FV                                  -        1           4.1064x10^-6  7.0974x10^-6
  SDAE 1                                     -        -           1             0.8851
  SDAE AE-FV                                 -        -           -             1
Pituitary Gland
  SVM 1                                      1        0.3047      0.0114        0.0024
  SVM AE-FV                                  -        1           0.0006        6.7363x10^-5
  SDAE 1                                     -        -           1             0.7741
  SDAE AE-FV                                 -        -           -             1
Pituitary Stalk
  SVM 1                                      1        0.0004      0.0961        0.7652
  SVM AE-FV                                  -        1           6.5727x10^-6  0.0005
  SDAE 1                                     -        -           1             0.0394
  SDAE AE-FV                                 -        -           -             1
Chiasm
  SVM 1                                      1        0.9514      0.0002        0.0021
  SVM AE-FV                                  -        1           2.4641x10^-8  2.4966x10^-7
  SDAE 1                                     -        -           1             0.0068
  SDAE AE-FV                                 -        -           -             1

Table 6.11: Paired ANOVA tests for volume differences between the automatic approaches to segment OARs from group B.
Sensitivity and specificity. Sensitivity and specificity across the OARs of group B for the six classifier configurations are reported in Table 6.12. In general, SDAE based classifiers achieved the highest sensitivity values, whereas SVM settings obtained the highest specificity rates. Mean sensitivity values for both SVM configurations commonly ranged between 60 and 70, with the exception of the pituitary stalk, where sensitivity was around 70 for SVM 1 and close to 80 for SVM AE-FV. Employing the SDAE system with classical features improved sensitivity, leading to values close to 80 for all the organs except the chiasm, whose mean sensitivity value was 71.67. Adding any single one of the investigated features sets (SDAE Augmented or SDAE Textural) typically increased sensitivity with respect to the classical settings, with mean values near 80 or slightly higher. Finally, the proposed system achieved sensitivity values greater than 80 in all the structures. In contrast, no clear pattern was noticed for the specificity. For instance, regarding the use of SVM with classical or proposed features, specificity increased when segmenting both optic nerves, whilst it decreased when segmenting the pituitary stalk or the chiasm. The combination of higher sensitivity and specificity metrics obtained from the AE-FV based classifier indicated that the proposed system correctly identified more tissue voxels than the other settings did, and was also better at rejecting tissue voxels that did not belong to the tissue class of interest.
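The overlap metrics used throughout this section (DSC, relative volume difference, sensitivity and specificity) can be computed voxel-wise from an automatic and a reference binary mask, as in the minimal sketch below; whether rVD is taken signed or in absolute value follows the convention discussed later for the ANOVA tests.

```python
# Sketch of the voxel-wise evaluation metrics reported in this section, computed from
# a binary automatic mask and a binary reference mask (both numpy boolean arrays).
import numpy as np

def segmentation_metrics(auto, ref):
    auto, ref = auto.astype(bool), ref.astype(bool)
    tp = np.logical_and(auto, ref).sum()
    fp = np.logical_and(auto, ~ref).sum()
    fn = np.logical_and(~auto, ref).sum()
    tn = np.logical_and(~auto, ~ref).sum()
    dsc = 2.0 * tp / (2.0 * tp + fp + fn)                # Dice similarity coefficient
    rvd = 100.0 * (auto.sum() - ref.sum()) / ref.sum()   # signed relative volume difference (%)
    sensitivity = 100.0 * tp / (tp + fn)
    specificity = 100.0 * tn / (tn + fp)
    return dsc, rvd, sensitivity, specificity
```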
The performance of the automatic delineations according to the features set employed is also compared by using ROC region analysis (Fig. 6.18). In this figure, crosses indicate the correspondence between sensitivity and (1 - specificity) for each patient and for each of the six automatic settings. Each cross therefore represents a single patient, and its color indicates the setting employed. First, it can be observed that for the six configurations nearly all results lie in the top-left sub-space, which indicates that the contours would be considered acceptable for RTP. Nevertheless, there are cases which should be taken into consideration. Some contours generated by SDAE 1 approach, or lie inside, the "high risk" area when segmenting both optic nerves. In addition, although contours provided by both SVM configurations lie inside the "acceptable" area, they dangerously surround the "poor" region, where the OARs are not spared. Automatic segmentations of the pituitary gland and pituitary stalk, from all the settings, also presented some contours that lie outside the acceptable region. Two pituitary gland contours from SDAE Augmented, one from SDAE AE-FV and two from both SVM 1 and SVM AE-FV were in the "poor" and "high risk" regions. Again, several contours generated by both SVM settings were very close to the "poor" region. In the case of the pituitary stalk, one contour from each SDAE configuration and one from SVM AE-FV were found in the "high risk" region. Moreover, additional automatic contours from several settings were close to the line dividing the "acceptable" and "high risk" regions. Finally, only a few contours generated by the two settings employing classical features, SVM 1 and SDAE 1, were not in the "acceptable" region when segmenting the chiasm. As in some previous cases, automatic contours generated by SVM 1 were very close to the "poor" region. Figure 6.19 displays the automatic contours generated by the evaluated configurations. To investigate the effect of the classifier on the segmentation, segmentations from configurations employing either SVM or SDAE are presented on the top row of this figure. Visual results show that SVM based classifiers provided contours much larger than the reference. This was particularly noticeable in the contours from the SVM setting employing classical features. In the case of the chiasm, for example, the SVM configurations were not capable of distinguishing between the chiasm and the pituitary stalk. In contrast, classifiers based on SDAE correctly classified the chiasm, avoiding the neighboring region of the pituitary stalk. A comparison of the impact on segmentation performance of adding the different features sets to the SDAE settings can be seen in the bottom row. Including either augmented or textural features in the classification system typically improved segmentations with respect to classical features. Nevertheless, combining all features into the AE-FV set achieved the best contours among the SDAE frameworks.
Figure 6.19: Automatic contours generated by the SVM 1, SVM AE-FV, SDAE 1, SDAE Augmented, SDAE Textural and SDAE AE-FV configurations.
Comparison across manual contours and the proposed scheme
As for the OARs of group A, this section evaluates manual segmentations in relation to the reference standard and compares them with the automatic segmentations obtained with our approach. The goal is again to quantitatively demonstrate that segmentations generated by our proposed learning scheme lie within the variability of the experts. Additionally, in cases where the automatic segmentation performance does not lie within the expert variability, we aim at demonstrating that no significant differences exist between the manual raters and the contours generated by our approach.
Dice Similarity Coefficients. Dice similarity coefficient distributions for the three observers and our proposed approach across all the OARs of group B are plotted in Figure 6.20. Each box group contains several columns representing the distributions of the segmentation results for a given organ. While the first three columns of each box group represent the manual raters, the last column represents our automatic method. Mean DSC over all the OARs of group B is distributed as follows: 0.83(± 0.07) for observer 1, 0.75(± 0.09) for observer 2, 0.76(± 0.10) for observer 3 and 0.79(± 0.06) for our automatic approach. Looking at individual structures, mean DSC values obtained from segmentations made by observer 1 were 0.81(± 0.09), 0.82(± 0.08), 0.86(± 0.03), 0.84(± 0.06) and 0.84(± 0.06), for left optic nerve, right optic nerve, pituitary gland, pituitary stalk and chiasm, respectively. In the same order, mean DSC values were 0.73(± 0.08), 0.74(± 0.11), 0.82(± 0.05), 0.72(± 0.06) and 0.73(± 0.09) for segmentations of observer 2 and 0.80(± 0.05), 0.81(± 0.04), 0.75(± 0.15), 0.77(± 0.05) and 0.69(± 0.10) for segmentations of observer 3. Finally, our proposed system achieved mean DSC values of 0.78(± 0.05), 0.80(± 0.06), 0.80(± 0.08), 0.77(± 0.08) and 0.83(± 0.06), respectively. It can be observed in Figure 6.20 that the mean DSC achieved by the proposed system always lies between the highest and lowest values reported by the manual segmentations when compared with the reference standard.
Results from the ANOVA analysis conducted between all the groups together, as well as between the results provided by the proposed automatic approach and each of the manual raters, are presented in Table 6.13. The within-subjects ANOVA tests conducted on the DSC values of all the groups indicated that there were significant differences among them (p < 0.05) in all the OARs. Values from the paired ANOVA tests indicated that there were no significant differences on DSC values between observer 3 and our method in four out of five OARs. Only DSC results for the chiasm presented significant differences between observer 3 and our method. However, if we look at the mean DSC distributions (Figure 6.20), we can observe that in this case our approach outperformed observer 3 in terms of DSC. Significant differences (p < 0.05) between observer 2 and our approach come from the segmentation of the left and right optic nerves, pituitary stalk and chiasm. Nevertheless, and as in the previous case, the mean DSC distributions (Figure 6.20) indicate that in these cases our approach outperformed observer 2 in terms of DSC. Regarding the comparison with observer 1, although DSC values were higher for this observer, only segmentations of the pituitary gland and pituitary stalk presented significant differences with respect to the results provided by our approach. An example of multi-group ANOVA comparison between the four groups (three manual and one automatic) is shown in Figure 6.21. It displays the multi-group comparison of DSC results when segmenting the chiasm. Blue indicates the group representing the results from the proposed approach. Whilst the groups with means significantly different from our method are drawn in red, grey indicates that the results from group 1, i.e. observer 1, do not present significant differences with respect to our approach.
Hausdorff distances. Figure 6.22 plots the Hausdorff distance distributions for the group of manual observers and the proposed automatic approach. Mean HD values for the three observers ranged from 1.78 to 4.47 mm across all the OARs. Maximum and minimum values for the automatic approach ranged from 2.58 to 3.67 mm. Mean HD values for each of the OARs for observer 1 were 2.38(± 0.47), 2.91(± 2.17), 1.81(± 0.52), 1.78(± 0.41) and 2.27(± 0.97) mm for left optic nerve, right optic nerve, pituitary gland, pituitary stalk and chiasm, respectively. Manual delineations from observer 2 provided mean HD values of 4.47(± 1.96), 3.93(± 1.89), 2.42(± 0.31), 2.14(± 0.61) and 3.56(± 1.05) mm, while mean HD values for observer 3 were 3.16(± 1.32), 2.86(± 0.85), 2.70(± 1.08), 1.96(± 0.68) and 3.35(± 0.99) mm, respectively. Finally, contours automatically generated by our approach provided the following mean HD values: 3.51(± 0.87), 3.67(± 0.67), 3.09(± 0.85), 2.78(± 0.76) and 3.29(± 1.19) mm, in the same order. Although minimum HD values were not decreased when employing the deep learning scheme, they ranged inside the variability of the experts or very close to the values obtained by manual delineation. Furthermore, the variability of the reported HD values was decreased by the proposed system for some organs in comparison to some observers. Such is the case for both optic nerves in relation to observers 2 and 3. The variability of HD in segmenting the left optic nerve by observers 2 and 3 was 1.96 and 1.32, and for the right optic nerve 1.89 and 0.85, respectively.
When employing the proposed system, this variability decreased to 0.87 and 0.66 for the left and right optic nerve, respectively. The within-subjects ANOVA test conducted on the HD values of all the groups indicated that there were significant differences among them (Table 6.14) in all the OARs. Differences on HD values were significant between the automatic approach and observer 1 in almost all the OARs, as reported by the paired ANOVA tests. Nevertheless, when comparing HD values from our approach with those from observers 2 and 3, differences were not significant in most of the cases. As an example of statistically significant differences between observer 2 or 3 and our proposed approach, we can mention the HD results from the segmentations of the pituitary stalk for observer 3 and the proposed system (Figure 6.23). In addition to observer 3, differences with respect to observer 1 were also significant in this case. In this figure, blue represents the automatic group, while red represents the manual groups whose means were significantly different.
Figure 6.23: Multi-group comparison of HD results of the manual raters and our proposed approach for the pituitary stalk.
Relative Volume differences. The distribution of relative volume differences (rVD) across all the OARs for the four groups is plotted in Figure 6.24. Segmentations from observer 1 presented the lowest rVD among the four groups, with a mean value of 11.55% (± 12.78) over all the OARs. Mean rVD over all the OARs for segmentations of observers 2 and 3 were 22.80% (± 25.24) and 18.17% (± 15.11), respectively. Finally, segmentations generated by the proposed classification scheme provided a mean rVD of 17.24% (± 10.67) over all the organs. Isolating results by group and organ, segmentations from observer 1 achieved the lowest mean rVD values across all the OARs. These values were 10.34% (± 7.01), 8.78% (± 7.94), 5.69% (± 5.28), 24.51% (± 20.42) and 8.40% (± 8.31) for left optic nerve, right optic nerve, pituitary gland, pituitary stalk and optic chiasm, respectively. For both optic nerves and the pituitary stalk, contours from observer 2 obtained the highest mean rVD values, which were 26.26%, 22.78% and 26.12%, respectively. Observer 3 produced the segmentations of the chiasm with the highest mean rVD values. Finally, our method ranked last when segmenting the pituitary gland, with a mean rVD value of 18.09%. An important point to take into consideration for the paired ANOVA analysis is that signed values of relative volume differences are analyzed in these tests, instead of absolute values. Thus, results obtained by the ANOVA tests (Table 6.15) may not correspond with the graphics in Figure 6.24, where absolute volume differences were employed. Results extracted from the volume differences presented significant differences between groups in three out of five OARs, as indicated by the within-subjects ANOVA tests (p < 0.05). The paired ANOVA tests showed that rVD results were significantly different between observer 1 and our approach when segmenting both optic nerves and the pituitary stalk. In the same way, segmentations of both optic nerves presented significant differences, in terms of rVD, between observer 2 and our automatic approach. However, segmentations generated by our method did not show significant differences with respect to the segmentations of observer 3. An example of ANOVA multi-group comparison is shown in Figure 6.25.
Figure 6.25: Multi-group comparison of relative volume differences of the manual raters and our proposed approach for the left optic nerve.
The blue line represents the automatic setting. Red lines symbolize the manual rater groups whose means presented statistically significant differences with respect to it.
Some visual examples of manual contours and of contours generated by our approach are shown in Figure 6.26. These images display contours of the left and right optic nerves. From these images, it can be observed that the automatic contours (in red) typically lie within the variability of the manual contours (in blue, yellow and magenta). This is supported by the results presented in the previous section.
Segmentation time
To compare segmentation times across all the OARs we analyze several classifier configurations. Basically, we are interested in obtaining the times of the SVM 1 and SDAE 1 configurations to evaluate differences related to the employed classifier. In addition, the time for both classifiers containing the proposed features is also evaluated, which represents the last features vector for each group. For simplicity, we will refer to these configurations as SVM Last and SDAE Last, respectively. This is interesting in order to investigate whether adding more features to the classifier has repercussions on the classification time. Thus, SVM Last or SDAE Last for the OARs from group A represents the set of enhanced features in Table 6.2. On the other hand, SVM Last or SDAE Last for OARs belonging to group B is composed of the set of proposed features, AE-FV, which is presented in the same table. Table 6.16 presents mean segmentation times for the first and last features sets, as explained, for both SVM and SDAE classifiers. Mean times for SVM based systems ranged from a few seconds for small structures, to one or several minutes for large structures or structures presenting large shape variations. The use of the proposed features in the SVM classifiers modified segmentation times. While for OARs of group A segmentation time was reduced to nearly half in most cases when employing the proposed features set, segmentation time for OARs of group B increased. This is reasonable if we take into account that the size of the proposed features set was smaller in group A and larger in group B. On the other hand, SDAE based classification schemes achieved the segmentation in less than a second for all the OARs. Regarding the use of the proposed features, the same trend as in the SVM groups is observed.
Discussion
According to some structure characteristics, results have been divided into two groups. As a reminder, group A comprises organs with homogeneous texture and small shape variation, such as the eyes or brainstem. On the other hand, organs with heterogeneous texture and large variations in shape and location are included in group B. For instance, the optic nerves and the chiasm are contained in this second group. Results from the proposed system have been compared with a machine learning approach that has been widely and successfully employed for classification, i.e. support vector machines. In addition to the traditional spatial and intensity based features used in machine learning approaches, the inclusion of several features has been proposed and evaluated. These features are usually organ-dependent, and their evaluation has been performed according to the division into groups A and B.
Results provided in this work demonstrate that the proposed deep learning-based classification scheme outperformed all the classifier configurations taken into account in the present work. These configurations comprised either SVM or SDAE as classifier, and one of the features sets evaluated. The basic setting for each of the classifiers was composed of classical features, i.e. spatial and intensity based features. The addition of the novel features, i.e. the geodesic transform map and LBTP-3D for the OARs of group A and the AE-FV for the OARs of group B, to the classifier increased volume similarity while reducing Hausdorff distances. Across all the OARs, the proposed classification schemes for groups A and B achieved the best results for similarity, surface and volume differences. Sensitivity and specificity also benefited from the use of the proposed classification scheme. First, sensitivity values were higher in SDAE based configurations than in SVM based settings. Second, the inclusion of the suggested features in the classification scheme improved sensitivity values with respect to the other SDAE based settings. This trend was identified in all the OARs from both groups. Specificity values achieved by the proposed systems in both groups were among the top-ranked ones in around half of the cases. Unlike in the sensitivity case, specificity did not show any pattern with respect to either the classifier or the features set employed. Nevertheless, the combination of higher sensitivity and specificity metrics obtained from the proposed classifiers indicated that our system correctly identified more tissue voxels than the other settings did, and was also better at rejecting tissue voxels that did not belong to the tissue class of interest. Statistical analysis of the automatic segmentations demonstrated that results achieved by the proposed system were typically significantly different from the other groups. In particular, significant differences often came from both SVM settings in relation to the proposed scheme. In addition, significant differences existed between SDAE settings employing classical or proposed features in some OARs.
It is important to note that similarity metrics are very sensitive for small organs. Differences of only a few voxels can considerably increase or decrease the comparison values. Therefore, we consider that having obtained DSC values higher than 0.7 for small OARs is very satisfactory, in addition to good values for the other metrics. Even in the worst cases, where DSC was above 0.55-0.60 for all the organs analyzed, the automatic contours can be considered a good approximation of the reference. As an example, Figure 6.27 shows the best and worst segmentations for the left and right optic nerves. While the best segmentations achieved a DSC of 0.80 and 0.84 for the left and right optic nerve (top), respectively, DSC values for the worst segmentations were 0.64 and 0.60.
In the RTP context, a method that is capable of managing deformations and unexpected situations in the OARs is highly desirable. The employed dataset contained some cases where tumors inside the brainstem changed its texture properties. The proposed method correctly discarded voxels inside the brainstem that indeed belonged to tumor regions in some patients. In some others, however, tumor and brainstem were both considered as brainstem. Figure 6.28 presents a successful (top) and an unsuccessful (bottom) case. Images in the first column show the segmentations generated by the four settings of group A and the reference contours. While segmentations generated by the settings including the proposed features, i.e. SVM 2 and SDAE 2, successfully differentiate between brainstem and tumor in the top case, they included both in the segmentation of the brainstem in the bottom case. The reason for this effect mainly lies in the use of the geodesic distance transform map. This feature encourages spatial regularization and contrast sensitivity. To generate this transformation, as presented earlier, a binary mask is required. This mask is obtained in 3D from the probability map of the brainstem and is used to seed the geodesic map. Since this mask is eroded to ensure that it falls inside the brainstem, the binary mask used to generate the geodesic map may not appear in all the analyzed slices, particularly at both extremes. Hence, if some intensity values are not taken into account when starting the geodesic transform map, they will present differences in the geodesic map. This is the case of the patient shown in Fig. 6.28, top, where the tumor is located at the superior limit of the brainstem. Therefore, the geodesic map generated for this patient makes a large difference between homogeneous texture (brainstem) and heterogeneous texture (tumor), as can be seen in the top-right image of this figure. In contrast, the patient shown in the bottom row presents a tumor approximately in the middle of the brainstem. In this case, the binary mask employed to generate the geodesic map contains both brainstem and tumor regions. Consequently, the geodesic transform assigns similar values to these textures, since both are taken into account when creating the geodesic map (Fig. 6.28, bottom right).
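To make the mechanism above concrete, the following sketch shows one possible way to build such a contrast-sensitive geodesic distance map from an eroded seed mask, using a minimum-cost-path formulation from scikit-image. The cost definition, threshold, erosion size and weighting constant are illustrative assumptions and do not reproduce the exact formulation used in this work.

```python
# Illustrative sketch of a contrast-sensitive geodesic distance map seeded from an eroded
# binary mask derived from the organ probability map. Parameters are assumptions only.
import numpy as np
from scipy import ndimage
from skimage.graph import MCP_Geometric

def geodesic_map(image, prob_map, prob_thr=0.5, erosion_iter=2, gamma=10.0):
    # Seed mask: threshold the probability map and erode it so it falls inside the organ.
    seed = ndimage.binary_erosion(prob_map > prob_thr, iterations=erosion_iter)
    # Local cost: cheap to move through homogeneous regions, expensive across strong
    # gradients, which gives the map its contrast sensitivity.
    grad = ndimage.gaussian_gradient_magnitude(image.astype(float), sigma=1.0)
    cost = 1.0 + gamma * grad / (grad.max() + 1e-8)
    # Geodesic distance = minimum accumulated cost from any seed voxel.
    mcp = MCP_Geometric(cost)
    distances, _ = mcp.find_costs([tuple(p) for p in np.argwhere(seed)])
    return distances
```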
With regard to the comparison with manual annotations, the segmentation error we have obtained is comparable to the inter-rater difference observed when contours are delineated without time constraints. This is supported by the results obtained when comparing with the manual raters. In those comparisons, we can observe that segmentation results generated by the proposed approach lie within the variability of the experts in most cases. Statistical analysis of the results from the manual raters and the proposed classification scheme points out that differences among them were generally not statistically significant. In addition, in cases where differences were significant, our automatic classifier outperformed the manual rater that presented those significant differences. We can thereby say that automatic contours generated by the proposed classification system are similar to manual annotations. Therefore, their inclusion in RTP should not make a difference with respect to the use of manual contours.
Among the approaches proposed to segment brain structures included in the RTP, atlas-based techniques have attracted most of the research attention. In the evaluation made by Babalola et al., four approaches to segment the brainstem were compared: an atlas-based approach called Classifier Fusion and Labelling (CFL), two statistics-based models, Profile Active Appearance Models (PAM) and Bayesian Appearance Models (BAM), and an Expectation-Maximisation-based approach (EMS). The CFL method provided the most accurate results, with a mean DSC of 0.94 and a mean percentage rVD of 3.98 for the brainstem. However, the segmentation time for all the OARs was reported to be 120-180 minutes. The two statistics-based models, PAM and BAM, provided DSC values of 0.88 and 0.89, and mean percentage rVD of 6.8 and 7.8, respectively. Segmentation time was less than 1 min per structure for the first statistical approach and 5 min for the second one. Nevertheless, while the PAM approach required a pre-registration step which took around 20 min, the linear registration required by BAM took around 3 min. The last approach, EMS, underperformed the other three approaches, with a mean DSC of 0.83, a mean percentage rVD of 21.10, and 30 minutes to segment all the OARs involved.
Other structures, such as the optic nerves and the optic chiasm, have also benefited from the trend to employ atlas-based approaches for segmentation. An atlas-navigated optimal medial axis and deformable model algorithm (NOMAD) to segment these two structures in MRI and CT images was presented in the work of Noble and Dawant. Ten CT/MRI pairs were used for evaluation purposes. Mean DSC values achieved for the testing set were just below 0.8 for both optic nerves and 0.8 for the chiasm. In their work, they also reported that the segmentation error obtained was comparable to the inter-rater difference observed when contours were delineated without time constraint in a laboratory setting. Segmentation of the optic nerves in a test volume required approximately 20 minutes. As an alternative to atlas-based methods, Bekes et al. proposed a geometrical model-based segmentation technique. In addition to the optic chiasm and nerves, the eyes and lenses were also included in the evaluation, where sensitivity and specificity were used instead of DSC. Mean sensitivity values of 97.71, 97.81, 65.20 and 76.79 were achieved by their method when segmenting the eyes, lenses, optic chiasm and optic nerves, respectively. Analogously, the reported mean specificity values were 98.16, 98.27, 93.50 and 99.06. The running time for all the structures was around 6-7 seconds for a whole CT volume. Whilst the segmentation of the eyes and lenses was satisfactory, the segmentation of the optic nerves and chiasm was below their expectations. The repeatability and reproducibility of the automatic results made the method unusable for RTP for these two challenging structures.
In addition to the presented works, where segmentation is performed on healthy patients, other works have focused on evaluating the segmentation performance of one or a set of OARs in a radiotherapy context. Bondiau et al. [5] presented a study aiming to evaluate an atlas-based method to segment the brainstem in a clinical context. To carry out such evaluation, a total of 7 experts and 6 patients were employed. The automatic method achieved a mean sensitivity value of 0.76, which was below the mean sensitivity of any of the experts. However, only in 2 out of the 6 cases did the automatic approach present the lowest sensitivity value. In the other four cases, sensitivity was within the expert variation. With regard to specificity, the means of the experts ranged from 0.86 to 0.99, whilst it was 0.97 for the automatic approach. Volume measurements revealed that, although the automatic results mostly lie within the variability of the experts, the method tended to underestimate the segmented volume with respect to the mean of the manual delineations. With these results, the authors suggested that this method provided a good trade-off between accuracy and robustness. Additionally, the reported results could be comparable to those from the experts. The total duration of the automatic segmentation process to obtain a fully labeled MRI was reported to be 20 min.
In the work of Isambert et al., another atlas-based segmentation (ABAS) software, which is included in a commercial solution for RTP, was also evaluated in a clinical radiotherapy context. Automatic segmentations of the brainstem, cerebellum, eyes, optic nerves, optic chiasm and pituitary gland of 11 patients on MRI T1-weighted images were evaluated. It was found that for large organs the reported DSC values were higher than 0.8, whereas for smaller structures DSC was lower than 0.4. More specifically, the mean DSC distribution across all the OARs was: 0.85, 0.84, 0.81, 0.38, 0.41 and 0.30, for the brainstem, cerebellum, eyes, optic nerves, optic chiasm and pituitary gland, respectively. With the exception of the optic nerves, the atlas-based approach underestimated all the volumes, from 15% in the case of the brainstem to 50% when segmenting the optic chiasm. The mean time required to automatically delineate the set of 6 structures was 7-8 min. Following the ROC analysis that we also employed in the present work, segmentations generated by the automatic approach were clinically acceptable for the brainstem, eyes and cerebellum. On the other hand, all the segmentations of the optic chiasm, and most of the segmentations of the optic nerves and pituitary gland, were considered as poor.
In a more recent study in the RTP context, Deeley et al. [6] compared manual and automated approaches to segment brain structures in the presence of space-occupying lesions. The objective of this work was to characterize expert variation when segmenting OARs in brain cancer, and to assess an automatic segmentation method in such a context. To achieve the automation of the segmentation process, a registration-driven atlas-based algorithm was employed. A set comprising the brainstem, optic chiasm, eyes and optic nerves was evaluated. The main results disclosed in their evaluation showed that the analyzed automatic approach exhibited mean DSC values between 0.8-0.85 for larger structures, i.e. the brainstem and eyes. In contrast, the DSC values reported for smaller structures, i.e. the optic chiasm and optic nerves, were 0.4 and 0.5, respectively. Results demonstrated that although both manual and automatic methods generated contours of similar volumes, experts exhibited higher variation with respect to tubular structures. Coefficients of variation across all the patients ranged from 21-93% of the mean structure volume.
Although the presented works have been shown to perform well when segmenting some structures, most of them have proved ineffective when applied to a multi-structure environment. This situation is aggravated when small structures, such as the chiasm, are included in the segmentation. On the other hand, the works presenting the best results were also the longest ones in terms of processing time. These times ranged from one or two minutes to several minutes per structure. A summary of the performance of these previous works, as well as of our proposed method, is presented in Table 6.17. In this table we observe that, in terms of similarity metrics (volume and surface), our method beats all the other works in most situations. Additionally, a noteworthy aspect of our approach is its significantly low segmentation time, which is several orders of magnitude lower than the others. In order to provide a more relevant comparison between methods, Table 6.18 is added. In this table, the experimental set-up of each work shown in Table 6.17 is presented. With all this we may thereby say that the presented approach outperforms, to date, all the other segmentation methods for OARs in brain cancer.
Results also demonstrate that the proposed deep learning-based classification scheme outperformed all previous works when segmenting the set of OARs analyzed. Nevertheless, it is important to note that differences in data acquisition, as well as in the metrics used to evaluate the segmentation, often compromise comparisons to other works. Although it was not possible in this work to use the same datasets as those used in previous studies, the consistently higher performance our approach achieved, as indicated by the results, suggests that the method presented in this thesis outperforms previously presented approaches to segment these structures. Results show that by employing SDAE as classifier, segmentation time was significantly reduced in comparison to other classical machine learning approaches, such as SVM. This is particularly noteworthy if we take into consideration that most of the works referenced in this thesis to segment the involved structures are atlas-based, and therefore registration dependent. This makes their segmentation times very expensive in comparison with the proposed approach, which is between two and three orders of magnitude faster. The current implementation of the proposed system is not computationally optimized and the bottleneck of the process is the feature extraction step, whose processing time ranges between 1-6 seconds for each of the OARs. Although it is not an expensive stage, it represents more than 95% of the total process. Since the extraction of the features does not require difficult programming operations, its parallelization is easily affordable. This may substantially decrease the whole segmentation process, down to segmentation times below one second for an entire organ.
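A minimal sketch of such a parallelization is given below, using a process pool over the per-OAR feature extraction; extract_features is a hypothetical placeholder for the actual feature computation and not a function of this work.

```python
# Minimal sketch of parallelising the per-OAR feature extraction step with a process pool.
# `extract_features` is a hypothetical placeholder for the feature computation of one OAR.
from concurrent.futures import ProcessPoolExecutor

def extract_features(organ_name, image):
    # ... compute spatial, intensity, probability, geodesic/textural features for this OAR ...
    return organ_name, None  # placeholder result

def extract_all_features(image, organs):
    with ProcessPoolExecutor() as pool:
        futures = [pool.submit(extract_features, organ, image) for organ in organs]
        return dict(f.result() for f in futures)

# Example (hypothetical organ list):
# features = extract_all_features(mri_t1, ["brainstem", "optic_nerve_l", "optic_nerve_r"])
```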
One of the strengths of deep learning methods relies on their ability to transfer knowledge from human to machine. They 'learn' from a given training data set. Hence, for example, when no visible boundaries are present, the classifier uses the intelligence transferred from the doctors to perform the segmentation as they would do. As proof of this learning, the results presented in this thesis have shown how well the proposed system learned from the available dataset.
Nevertheless, one of the main concerns of this thesis was the generation of a simulated ground truth. It was obtained in this work by using the computationally simple concept of probability maps. In this method, which is analogous to the voting rule approach, probability maps were thresholded at a variable level in order to create the mask. Although threshold values were fixed according to the number of available contours, which also corresponded to the suggestion of Biancardi et al., thresholding probability maps at a static predetermined level may be problematic. Determining the most suitable threshold for each organ presents a challenge. A reasonable first choice is to fix its value to 50%, as it represents the threshold of the majority voting rule. Nevertheless, as pointed out in the work of Deeley [6], 50% might not be reliable with such a statistically small number of raters with unknown individual variance. Thus, an appropriate threshold value for one cohort of experts may not suit a different cohort. The same reasoning can be extended to different organs, where the consensus among raters is organ-dependent. Therefore, to be able to simulate a more consistent reference standard, we encourage further studies to involve more experts in the manual delineation step.
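The following sketch illustrates this probability-map construction and thresholding; the stacking of rater masks and the 0.5 threshold correspond to the majority voting rule discussed above, while the function and variable names are illustrative.

```python
# Sketch of the simulated reference standard: rater masks are averaged into a probability
# map which is thresholded at a fixed level (e.g. 0.5 for majority voting).
import numpy as np

def reference_standard(rater_masks, threshold=0.5):
    prob_map = np.mean(np.stack([m.astype(float) for m in rater_masks]), axis=0)
    return prob_map >= threshold

# Example with three raters: a voxel is kept if at least two of the three raters marked it.
# consensus = reference_standard([mask_obs1, mask_obs2, mask_obs3], threshold=0.5)
```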
Chapter 7
Conclusions and Future Work
"The two most important days in your life are the day you are born and the day you find out why."
Mark Twain
In this chapter we review the motivations for this work, as well as the important contributions of this thesis. Following this, possible future directions are also discussed.
Discussion
This dissertation addresses the problem of organ-at-risk segmentation in brain cancer, towards enabling its adoption in clinical routine. To achieve this, the work in this thesis puts forth a practical application, in the field of medical image segmentation, of one of the hottest research topics nowadays, i.e. deep learning.
Segmentation of medical images is a field that has spurred an overwhelming amount of research. However, open issues abound with regard to approaches to segment organs at risk in brain cancer and their usability in clinical practice. Nowadays, and up until a few years ago, atlas-based and statistical models have represented the most widely employed techniques to perform some sort of automatic delineation in medical images, particularly for brain structures. However, they present some disadvantages and therefore suffer from slow adoption in clinical routine.
Atlas-based segmentation approaches rely on registration techniques. In these methods, anatomical information is exploited by means of images already annotated by experts, referred to as atlases, which are matched to the patient under examination. To compute such a transformation, deformable registration is often used. After registration, the deformed contours are transferred to the target image. The quality of the deformed contours directly depends on the quality of the deformation. Nevertheless, this is difficult to evaluate. Furthermore, registration techniques encompass regularization models of the deformation field, whose parameters are complex to adjust, particularly in inter-patient cases. Another limitation of atlas-based methods is that the contours included in the atlases contain prior knowledge about the organs pictured in the image which is not commonly exploited. To perform the segmentation, contours are merely deformed. As a consequence, most of the information conveyed by the contours, such as shape or appearance, remains implicit and likely underexploited. Statistical models present an alternative to address this issue by making a more explicit use of such prior information to assist the image segmentation. Unlike atlases, images are not registered; instead, the shapes and, sometimes, the appearance of the organ are learned in order to be found in a target image. Because target points are searched in a locally constrained vicinity of the current estimation for each location, a sufficiently accurate initialization needs to be provided to make the model converge to the proper shape. Therefore, the search for shape and/or appearance requires an initialization. If the initial position is too distant from the searched object, in terms of translation, rotation or scale, this can lead to poor object identification. Details of these and other published works to segment brain structures were disclosed in Chapter 3.
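For illustration only, the sketch below outlines the generic atlas-based pipeline just described (registration of an annotated atlas followed by contour propagation) using SimpleITK; the file names, the affine-plus-demons choice and all parameters are assumptions and do not correspond to the pipeline of any specific cited work.

```python
# Illustrative sketch of an atlas-based pipeline: register the atlas onto the patient image,
# then propagate the atlas labels with the resulting transforms. Everything here is a
# placeholder meant to show the principle, not a validated registration setup.
import SimpleITK as sitk

patient = sitk.ReadImage("patient_T1.nii.gz", sitk.sitkFloat32)
atlas = sitk.ReadImage("atlas_T1.nii.gz", sitk.sitkFloat32)
atlas_labels = sitk.ReadImage("atlas_labels.nii.gz")

# 1) Affine registration of the atlas onto the patient image.
reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=32)
reg.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=200)
reg.SetInterpolator(sitk.sitkLinear)
reg.SetInitialTransform(
    sitk.CenteredTransformInitializer(patient, atlas, sitk.AffineTransform(3)), inPlace=False)
affine = reg.Execute(patient, atlas)
atlas_affine = sitk.Resample(atlas, patient, affine, sitk.sitkLinear, 0.0)

# 2) Deformable refinement (demons) and label propagation with nearest-neighbour interpolation.
demons = sitk.FastSymmetricForcesDemonsRegistrationFilter()
demons.SetNumberOfIterations(50)
field = sitk.Cast(demons.Execute(patient, atlas_affine), sitk.sitkVectorFloat64)
deform = sitk.DisplacementFieldTransform(field)
labels_affine = sitk.Resample(atlas_labels, patient, affine, sitk.sitkNearestNeighbor, 0)
propagated = sitk.Resample(labels_affine, patient, deform, sitk.sitkNearestNeighbor, 0)
```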
The objective of this thesis was therefore to propose an alternative to these existing methods that also addresses their limitations. In particular, an organ segmentation scheme based on a deep learning technique was suggested. This approach, like most machine learning based methods, attempts to reproduce the way radiation oncologists manually delineate the organs. First, all the information required to learn how to segment each of the organs at risk is extracted from images in which the organs were delineated. This information is transformed into a features array that serves as input of the network. Then, the network learns a hierarchical representation of the input, which is later employed for classification. The strength of deep architectures is to stack multiple layers of nonlinear processing, a process which is well suited to capture highly varying functions with a compact set of parameters. The deep learning scheme, based on a greedy layer-wise unsupervised pre-training, allows deep networks to be positioned in a parameter space region where the supervised fine-tuning avoids local minima. Deep learning methods achieve very good accuracy, often the best, for tasks where a large set of data is available, even if only a small number of instances are labeled. In addition, deformable registration techniques are no longer required in our approach, but only a simple manual rigid alignment. Even though we have not investigated a solution to automatically perform the required alignment, it should be easy to automate.
Details of the technique employed to create the deep network, as well as the features proposed to improve the segmentation performance, were introduced in Chapter 4, where the main contributions of this work were presented. The learning network is generated by stacking denoising auto-encoders in a deep architecture. To train a learning system, a set of features is commonly fed into the network. In particular, in deep learning architectures such as convolutional neural networks, restricted Boltzmann machines, or even auto-encoders, two- or three-dimensional patches from one or multiple images are typically employed as the features vector. From these patches, the network learns the most discriminative representation of the input in an unsupervised manner. Although such architectures have broken records in several domains, such as speech recognition or image classification, we consider that by using patches they do not fully exploit relevant information coming from traditional machine learning approaches to analyze medical images in general. This unexploited knowledge may come in the form of the likelihood of belonging to some class, the voxel location or textural properties, for example. Thus, the features set proposed in this work is very different from the features sets employed in most of the deep learning settings applied to medical imaging.
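A compact sketch of such a stacked denoising auto-encoder, with greedy layer-wise unsupervised pre-training followed by the addition of a supervised output layer, is given below. It is written in PyTorch purely for illustration; the layer sizes, corruption level and optimiser settings are placeholders and not the architecture used in this thesis.

```python
# Compact sketch of a stacked denoising auto-encoder (SDAE) classifier: greedy layer-wise
# unsupervised pre-training with masking noise, then a supervised output layer is added.
import torch
import torch.nn as nn

class DenoisingAE(nn.Module):
    def __init__(self, n_in, n_hidden, corruption=0.3):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_in, n_hidden), nn.Sigmoid())
        self.decoder = nn.Sequential(nn.Linear(n_hidden, n_in), nn.Sigmoid())
        self.corruption = corruption

    def forward(self, x):
        corrupted = x * (torch.rand_like(x) > self.corruption).float()  # masking noise
        return self.decoder(self.encoder(corrupted))

def pretrain_layer(ae, batches, epochs=10, lr=1e-3):
    opt, loss_fn = torch.optim.Adam(ae.parameters(), lr=lr), nn.MSELoss()
    for _ in range(epochs):
        for x in batches:                 # `batches` yields mini-batches of feature vectors
            opt.zero_grad()
            loss = loss_fn(ae(x), x)      # reconstruct the clean input from the corrupted one
            loss.backward()
            opt.step()

def build_sdae(batches, layer_sizes=(400, 200, 100), n_classes=2):
    encoders, current = [], list(batches)
    n_in = current[0].shape[1]
    for n_hidden in layer_sizes:          # greedy layer-wise pre-training
        ae = DenoisingAE(n_in, n_hidden)
        pretrain_layer(ae, current)
        encoders.append(ae.encoder)
        current = [ae.encoder(x).detach() for x in current]
        n_in = n_hidden
    # Stack the pre-trained encoders and add an output layer for supervised fine-tuning.
    return nn.Sequential(*encoders, nn.Linear(layer_sizes[-1], n_classes))
```

The returned stack would then be fine-tuned end-to-end with a standard cross-entropy loss on the labeled voxels.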
Typical machine learning schemes to segment medical images employ pixel or voxel intensity values, intensity of a neighboring area, likelihood of belonging to a given class and location. Although these hand-crafted features may be sufficient for some well-defined structures, they do not provide the best solution when attempting to segment challenging structures. Thus, additional features have been proposed in this thesis to enhance the discriminative power of the features vector. Since properties are different from one organ to another, some of the proposed features are organ-dependent. Hence, for example, for organs with homogeneous texture and small shape variations we have proposed features that encourage spatial regularization, such as the geodesic distance transform map. On the other hand, for organs with strong variations on shape and intensity we have suggested the combined use of contextual and textural properties. As a consequence, features set varies from one group of organs to another.
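As an illustration of how such a per-voxel features vector can be assembled, the sketch below combines the classical features with pre-computed organ-dependent maps; the argument names, the patch radius and the set of extra maps are illustrative assumptions.

```python
# Sketch of assembling a per-voxel feature vector from classical features (intensity,
# neighbourhood patch, spatial position, probability of belonging to the organ) plus
# pre-computed organ-dependent maps (e.g. geodesic or textural maps, see sketches above).
import numpy as np

def voxel_features(image, prob_map, extra_maps, voxel, patch_radius=1):
    z, y, x = voxel
    r = patch_radius
    patch = image[z - r:z + r + 1, y - r:y + r + 1, x - r:x + r + 1]
    feats = [image[z, y, x],           # voxel intensity
             prob_map[z, y, x],        # likelihood of belonging to the organ
             z, y, x]                  # spatial position
    feats.extend(patch.ravel())        # neighbourhood intensities
    feats.extend(m[z, y, x] for m in extra_maps)   # organ-dependent features
    return np.asarray(feats, dtype=np.float32)
```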
We designed an evaluation study to assess the performance of the proposed approach, quantify the variation among experts in segmenting organs at risk in brain cancer, and evaluate the proposed automatic classification scheme in this context. First, a reference standard was created from the manual contours, which served as the ground truth to compare with. To evaluate the performance of our approach, results were compared to those provided by a state-of-the-art machine learning classifier, i.e. support vector machines (SVM). In the second part of the evaluation, the automatic contours generated by the proposed approach were also compared to the contours manually annotated by the experts.
Results demonstrated that by merely employing a network composed of stacked denoising auto-encoders, segmentation performance increased with respect to SVM. Additionally, when the proposed features were included in the features set, the reported results showed a noticeable improvement in segmentation performance. Across the experiments we noticed that the segmentations of OARs of group B, in some patients, were highly different when using either augmented or textural features. While in some patients the features set composed of augmented features achieved the best results, in other patients the best result was obtained by the textural features set. Nevertheless, when combining both of them, results were more homogeneous, which can be observed in the standard deviations reported in the results section. For the other groups, the classification performed with the proposed features set outperformed the classical set in most cases.
Even though the presented work is not pioneering in the evaluation of automatic segmentation of OARs in the context of brain radiation therapy, it presents important improvements with respect to the others (see Table 6.17). Large structures, such as the eyes or brainstem, have been successfully segmented in all previous works evaluating segmentation performance in the brain cancer context. In contrast, the segmentation of small structures was not always satisfactory. By employing the proposed classification system we: i) improved the segmentation performance of structures already successfully segmented and ii) provided a satisfactory segmentation for those structures whose segmentation could not always be achieved. Furthermore, all the presented works analyzing OAR segmentation in the radiotherapy context are atlas-based and thus registration dependent. This makes segmentation times exceed several minutes, which might be clinically impractical in some situations. In addition to the segmentation times, other disadvantages of atlas-based methods have already been discussed. Our method, however, performs the segmentation in a few seconds for each single OAR. A noteworthy point is that feature extraction represented nearly 97.5% of the whole segmentation process. Since this stage is composed of simple and independent image processing steps, it can be easily parallelized. By doing this, the total segmentation time may be drastically reduced to less than a second per structure. Another remarkable difference with respect to some other approaches is that the proposed system does not require the combination of more than one image modality.
When comparing the results with the manual contours, it can be observed that they lie inside the variability of the observers. Statistical tests demonstrated that there were no significant differences between automatic and manual delineations in many of the cases. All this, together with the remarkably low segmentation time reported in the experiments, makes this technique suitable for use in clinical routine. Therefore, the introduction of such a technique may help radiation oncologists save time during the RTP, as well as reduce the variability in OAR delineation.
This thesis has therefore represented a first step in developing and exploring deep denoising auto-encoders applied to the segmentation of organs at risk in brain cancer. Its evaluation has been assessed in a multi-rater context. In addition, the method operates without being subject to fatigue or inattentiveness, which can affect human measurements and diminish reliability in studies of large samples over a long time.
Future work
In this thesis, we have proposed an approach that solely employs information extracted from magnetic resonance images (MRI). More specifically, only the T1 sequence from the MRI set is used. Nevertheless, having employed exclusively T1 sequences might have underestimated the power of our approach. The reason is that contouring some OARs on FLAIR or T2 sequences would probably have improved the inter-observer reproducibility without degrading the learning and the automatic segmentation. However, not all the sequences were available for all the patients contained in the employed dataset. We have also shown that we utilize the features vector for both training and classification. Including additional information in this vector is straightforward. Since MRI sequences other than T1 are typically acquired for treatment planning and diagnosis, we suggest combining MRI-T1 with other modalities, such as T2 for example, when available. The combination of different MR sequences can enhance the segmentation, particularly in those regions where these image sequences are complementary. Independently of the sequence added, any relevant information included in the classifier may help to improve the segmentation performance. We therefore encourage future research on this topic to include other image sequences in both the contouring and the learning/classification steps. Another main direction for future research is to examine the contribution of other image properties as features during training and classification.
In this work, good segmentation performance has been reported by the proposed classification scheme by training huge networks with a relatively small amount of data. Indeed, these networks were sometimes composed of several million parameters, while the number of training samples was of the order of several thousand. Even though the trained deep networks overfit the training dataset, they still generalized fairly well to unseen samples. This may be explained by the fact that brain MRI images are highly structured, often presenting small variability between regions from one brain to another. Using more patient cases in the training set, aiming to capture more variability, would ideally be the best solution to prevent overfitting. Additionally, this increase of the training set might also positively impact classification performance. Unfortunately, labeled datasets are rare and difficult to obtain. Consequently, the generation of artificial MRI cases from existing ones, i.e. data augmentation, should be considered in further work. This could include small transformations such as rotations, scaling, noise or some other small distortions.
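A minimal sketch of such augmentation applied to an image/label pair is shown below; the transformation ranges are illustrative assumptions.

```python
# Sketch of simple MRI data augmentation as suggested above: small rotations, scaling and
# additive noise applied jointly to an image and its label map. Ranges are illustrative.
import numpy as np
from scipy import ndimage

def augment(image, labels, rng=np.random.default_rng()):
    angle = rng.uniform(-5.0, 5.0)                 # small in-plane rotation (degrees)
    zoom = rng.uniform(0.95, 1.05)                 # small isotropic scaling
    img = ndimage.rotate(image, angle, axes=(1, 2), reshape=False, order=1)
    lab = ndimage.rotate(labels, angle, axes=(1, 2), reshape=False, order=0)
    img = ndimage.zoom(img, zoom, order=1)
    lab = ndimage.zoom(lab, zoom, order=0)
    img = img + rng.normal(0.0, 0.01 * img.std(), img.shape)   # mild additive noise
    return img, lab
```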
In addition, in our experiments, and in most of the proposed works employing deep architectures, the number of hidden units in each layer, as well as the number of layers, is manually determined. Therefore, the network architectures employed might not necessarily be optimal. By employing deep learning, a better representation of an input is automatically extracted. However, the network architecture is still manually tuned. I believe that performing more intensive studies, such as learning the optimal network structure from the input data for its practical use in a clinical setting, would bring more power to neural networks.
The work presented in this thesis has been mostly developed within an enterprise, in an industrial environment. As such, one of the main goals of the company is to integrate this work into its products. The code developed represents a first functional prototype that can be employed to obtain results such as the ones presented in this dissertation. Nevertheless, its use in clinical routine in its current state still requires some effort from the development side. The development of an optimized prototype would represent one of the first tasks to carry out. Although its current performance allows a structure to be segmented in a relatively small amount of time, this process can still be optimized by programming the feature extraction step on GPU. In addition to processing time, user experience is of high importance when trying to develop software that will be employed by non-expert users through an interface. Before deploying a clinically usable version of the final product, a clinical validation with a larger dataset should also be envisaged.
Introduction
Cancer represents a group of common, non-communicable, chronic and potentially lethal diseases affecting most families in developed countries, and an increasingly important contributor to premature death within the population of these countries [2]. In particular, brain tumors are the second most frequent cause of cancer death in men aged 20 to 39 and the fifth most common cause of cancer in women aged 20 to 39 [4].
A brain tumor is any mass caused by an abnormal or uncontrolled growth of cells arising inside or near the brain. In general, these tumors are classified according to several factors, including their location, the type of cells involved, and their growth rate. So-called "primary" brain tumors grow more or less rapidly, are located in the brain parenchyma and have no capacity to spread to distant sites. Their growth rate is notably a factor of malignancy, and they are thus classified as "benign" (e.g. neurinomas, meningiomas) or "malignant" (e.g. low-grade glioma, glioblastoma). Tumors originating from a distant location (e.g. lung, liver, breast) generally grow faster and are called "secondary". These are brain metastases resulting from a cancer of extra-cerebral origin, and they are always malignant. However, whether primary or secondary, benign or malignant, brain tumors always remain potentially disabling and critical for the patient's survival.
La radiothérapie (RT) et la radiochirurgie (SRS) sont parmi l'arsenal de techniques disponibles pour traiter les tumeurs cérébrales. Le terme radiothérapie décrit les applications médicales des rayonnements ionisants pour détruire les cellules malignes en endommageant leur ADN [START_REF] Ward | DNA damage produced by ionizing radiation in mammalian cells: identities, mechanisms of formation, and reparability. Progress in nucleic acid research and molecular biology[END_REF]. La RT est souvent organisé en deux phases: la planification et la délivrance. Les images sont acquises, les régions d'intérêt sont identifiées et la balistique est planifiée à partir de ces données d'imagerie. Le traitement planifié est ensuite délivré au patient. Afin de calculer la dose a délivrer, la position des volumes cibles doit être précisément déterminée.
Un objectif majeur de RT est de priver les cellules cancéreuses de leur potentiel de multiplication et éventuellement tuer les cellules cancéreuses. Cependant, le rayonnement créée également des lésions aux tissus sains. Par conséquent, l'objectif principal de la radiothérapie est délivrer une dose importante à la tumeur, tout en veillant à ce que les tissus sains avoisinant soient épargnés autant que possible. En particulier pour les traitements radiochirurgicaux, où la dose de rayonnement est considérablement plus élevée et délivrée en séance unique, des erreurs de configuration ou de localisation peu-vent entraîner une surdose sévère du tissu sain adjacent. Cette surexposition aux rayonnements peut conduire à des complications sévères, progressives et irréversibles, qui se produisent souvent des mois ou des années après le traitement. Ces structures critiques à conserver sont désignées comme organes à risque (OAR). Dans la RT cérébrale, les nerfs optiques, le chiasma, le tronc cérébral, les yeux, le cristallin, l'hippocampe et l'hypophyse sont généralement considérés comme OARs.
Au cours des dernières décennies, l'imagerie médicale, initialement utilisée pour la visualisation des structures anatomiques, a évolué pour devenir un outil essentiel au diagnostic, au traitement et au suivi de l'évolution des pathologies. En particulier, dans l'oncologie, l'évolution des techniques d'imagerie a permis d'améliorer la compréhension du cancer, de son diagnostic à sa prise en charge thérapeutique et du suivi évolutif. Les techniques d'imagerie médicale avancées sont donc utilisées pour la chirurgie et pour la radiothérapie. Il existe un large éventail de modalités d'imagerie médicale. Les premières méthodes d'imagerie, invasives et parfois risquées, ont depuis été abandonnées en faveur de modalités non-invasives, de haute résolution, telles que le scanner (CT) ou, en particulier, l'imagerie par résonance magnétique (IRM). L'IRM possède une sensibilité plus élevée pour détecter une tumeur, ou des changements au son sein et un meilleure contraste pour délimiter les structures cérébrales saines. Pour ces raisons, et parce que l'IRM ne repose pas sur des rayonnements ionisants, l'IRM a progressivement supplanté le CT comme pilier de l'imagerie en neuro-oncologie clinique, devenant la modalité de référence pour le diagnostic, le suivi et la planification des traitements de lésions cérébrales [START_REF] Sheehan | Controversies in Stereotactic Radiosurgery: Best Evidence Recommendations[END_REF].
Parce que RT et SRS s'appuient sur une irradiation importante, la tumeur et les tissus sains environnants doivent être précisément définies. En particulier pour les OARs pour lesquels la connaissance de leur localisation et de leur forme est nécessaires pour évaluer et limiter le risque de toxicité sévère. Parmi les modalités d'image disponibles, les images IRM sont largement utilisées pour segmenter la plupart des OARs. Dans la pratique, cette délinéation est principalement réalisée manuellement par des experts avec éventuellement un faible support informatique d'aide à la segmentation [START_REF] Whitfield | Automated delineation of radiotherapy volumes: are we going in the right direction? The British journal of radiology[END_REF]. Il en découle que le processus est fastidieux et particulièrement chronophage avec une variabilité inter ou intra observateur significative. Une part importante du temps médical s'avère donc nécessaire à la segmentation de ces images médicales. Si en automatisant le processus, il devient possible d'obtenir un ensemble plus reproductible des contours acceptés par la majorité des oncologues, cela permet d'améliorer la planification et donc la qualité du traitement. En outre, toute méthode de réduction du temps nécessaire à cette étape contribue à une une utilisation plus efficace des compétences de l'oncologue.
Pour remédier à ces problématiques, divers systèmes assistés par ordinateur pour (semi-) automatiquement segmenter les OARs ont été proposés et publiés au cours des dernières années. Néanmoins, la segmentation (semi-)automatique des structures cérébrales reste encore difficile, en l'absence de solution générale et unique. De plus, en raison de l'augmentation du nombre de patients à traiter, les OARs ne peuvent pas toujours être segmentés avec précision, ce qui peut conduire à des plans sous-optimaux [START_REF] D'haese | Automatic segmentation of brain structures for radiation therapy planning[END_REF]. Cela rend l'implémentation en routine clinique d'un outil de segmentation des OARs assistée par ordinateur hautement souhaitable.
State of the art

Image segmentation is the problem of partitioning an image in a semantically meaningful way. Subdividing the image into meaningful regions allows a compact and easier representation of the image. The aggregation of the pixels of a given shape is done according to a predefined criterion. This criterion can be based on many factors, such as intensity, colour or texture, pixel continuity, and other higher-level knowledge about the object model. For many applications, segmentation boils down to finding one object in a given image, which involves partitioning the image into only two classes of regions. Image segmentation often remains a preliminary and essential step for further image analysis, object representation or visualization.

Medical image segmentation

Since segmentation plays a central role in recovering meaningful information from images, efficiently extracting all this information and the features of multidimensional images is increasingly important. In their raw form, medical images are represented by arrays of values representing quantities that show the contrast between the different tissue types of the body. Medical image processing and analysis are useful to transform this raw information into a quantifiable symbolic form. Extracting this meaningful quantitative information can help in the diagnosis, as well as in the integration of complementary data from multiple imaging modalities. Therefore, in medical image analysis, segmentation has a high clinical value because it is often the first step of quantitative image analysis.

Nevertheless, medical image segmentation differs from classical image segmentation tasks and generally remains difficult. First, many medical imaging modalities produce very noisy and blurred images because of their intrinsic imaging mechanisms. Second, medical images may be relatively poorly sampled; many voxels may contain more than one tissue type (partial volume effect). In this case, the loss of contrast between two adjacent tissues makes their delineation more difficult. In addition to these effects, some tissues or organs of interest share similar intensity levels with neighbouring regions, leading to an absence of clear object boundaries, which implies that these structures of interest remain very difficult to isolate from their surroundings. Moreover, if the object has a complex shape, this lack of contrast at its boundaries makes the segmentation even more tedious. Finally, in addition to the image information, a thorough knowledge of anatomy and pathology may be important to segment medical images. Medical expertise is therefore needed to better understand and interpret the image, so that segmentation algorithms can meet the clinician's needs.

The initial approaches to segment the brain in MRI mainly focused on classifying the brain into three main classes: white matter (WM), grey matter (GM) and cerebrospinal fluid (CSF) [START_REF] Xuan | Segmentation of magnetic resonance brain image: integrating region growing and edge detection[END_REF]. More recent methods include the segmentation of tumours and adjacent regions, such as necrotic areas [START_REF] Lee | Segmenting brain tumors with conditional random fields and support vector machines[END_REF]. These methods are based only on the signal intensity. However, the segmentation of subcortical structures (namely the OARs) can hardly be achieved on the basis of the signal intensity alone, because of the weakly visible boundaries and the similar intensity values between the different subcortical structures. Consequently, additional information, such as shape, appearance or location priors, is needed to perform the segmentation.

Among the segmentation techniques that have exploited this information, we can cite: atlas-based methods, statistical methods, deformable models and learning-based techniques (machine learning).
Atlas-based methods

Atlas-based methods are widely used to exploit prior knowledge. An "atlas" is the image of a "typical" patient that has been previously segmented and serves as a reference for the patient image to be segmented. This anatomical information is exploited by adapting the atlases to the patient under examination. The general procedure to segment the images of a patient using one or several atlases usually follows the same principle: registration and contour propagation. First, a deformation field that maps the atlas onto the patient image to be segmented is computed using appropriate registration methods [START_REF] Toga | The role of image registration in brain mapping[END_REF]. Second, the computed deformation field is applied to the structures of interest already segmented on the atlases, bringing them onto the original image.

Almost all atlas-based techniques require an image registration during the initial phase. This means that the success of the atlas propagation strongly depends on the registration step. Using a single atlas to propagate segmented structures to a single patient is usually sufficient. However, given the large inter-individual variability, the use of a single atlas can lead to unsatisfactory results. Using more than one atlas improves the quality of the segmentation in these situations. By increasing the number of atlases in the database, the method becomes more representative of the population and therefore more robust when processing patients who present anatomical variations. However, when several atlases are used, the key point is to determine which atlas should be used for a given patient. To do so, some similarity metrics are computed after the registration step in order to select the "closest" atlas among all the others in the database. As an alternative to selecting the atlases closest to the target image, several atlases can be propagated, leading to several segmentation solutions that are fused at the end of the process. The fusion of the solutions may eventually generate artefacts, notably discontinuous organs that are not representative of the anatomy. From a clinical point of view, the need to manually correct automatic contours has been the subject of several recent clinical evaluations [START_REF] Daisne | Atlas-based automatic segmentation of head and neck organs at risk and nodal target volumes: a clinical validation[END_REF].

One of the main limitations of atlas-based methods is that the prior knowledge embedded in the model contours is not exploited. To perform the segmentation, these contours are simply deformed. As a consequence, most of the information contained in the contours, such as shape or appearance, remains implicit and probably under-exploited. Statistical models are an alternative that addresses this issue by making a more explicit use of this information to help image segmentation. Unlike atlases, the images are not registered; instead, the shapes and, sometimes, the appearance of the organ are learned so that they can be identified in a target image.

Statistical models
Statistical models (SM) have been widely used in the field of computer vision and medical image segmentation over the last decade [START_REF] Hu | Appearance-based modeling for segmentation of hippocampus and amygdala using multi-contrast MR imaging[END_REF][START_REF] Olveres | Midbrain volume segmentation using active shape models and LBPs[END_REF][START_REF] Cootes | A unified framework for atlas matching using active appearance models[END_REF][START_REF] Duchesne | Appearance-based segmentation of medial temporal lobe structures[END_REF][START_REF] Bailleul | Segmentation of anatomical structures from 3D brain MRI using automatically-built statistical shape models[END_REF][START_REF] Babalola | 3D brain segmentation using active appearance models and local regressors[END_REF][START_REF] Tu | Brain anatomical structure segmentation by hybrid discriminative/generative models[END_REF][START_REF] Hu | Nonlocal regularization for active appearance model: Application to medial temporal lobe segmentation[END_REF][START_REF] Cootes | Training models of shape from sets of examples[END_REF][START_REF] Cootes | Active shape modelstheir training and application[END_REF][START_REF] Brejl | Object localization and border detection criteria design in edge-based image segmentation: automated learning from examples[END_REF][START_REF] Cootes | Active appearance models[END_REF][START_REF] Van Ginneken | Interactive shape models[END_REF][START_REF] Pitiot | Expert knowledge-guided segmentation system for brain MRI[END_REF][START_REF] Zhao | A novel 3D partitioned active shape model for segmentation of brain MR images[END_REF][START_REF] Koikkalainen | Methods of artificial enlargement of the training set for statistical shape models[END_REF][START_REF] Rao | Hierarchical statistical shape analysis and prediction of sub-cortical brain structures[END_REF][START_REF] Heimann | Statistical shape models for 3D medical image segmentation: A review[END_REF][START_REF] Babalola | Using parts and geometry models to initialise Active Appearance Models for automated segmentation of 3D medical images[END_REF][START_REF] Patenaude | A Bayesian model of shape and appearance for subcortical brain segmentation[END_REF][START_REF] Bagci | Hierarchical scale-based multiobject recognition of 3-D anatomical structures[END_REF][START_REF] Bernard | Improvements on the Feasibility of Active Shape Model-based Subthalamic Nucleus Segmentation[END_REF][START_REF] Adiva | Comparison of Active Contour and Active Shape Approaches for Corpus Callosum Segmentation[END_REF]. Basically, SMs use a prior knowledge of the shape by learning its variability as observed on a suitably annotated database. The search space is constrained by the model thus defined. The basic procedure of a shape and/or texture SM is as follows: 1) the vertices or control points of a structure are modelled as a multivariate Gaussian distribution; 2) the shape and the texture are modelled in terms of a mean and eigenvectors; 3) new contour instances are generated through the modes of variation defined by the eigenvectors, while respecting the constraints of the subspaces of acceptable shapes and textures. Consequently, if the size of the shape to be segmented is larger than the size of the training data, the only acceptable shapes and textures are linear combinations of the initial training data.
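The point-distribution procedure summarized above can be sketched in a few lines of code: aligned landmark shapes are stacked, a mean shape and the eigenvectors of their covariance are computed, and new instances are generated as the mean plus a constrained combination of the leading modes. This is a generic, simplified sketch of a PCA shape model, not the exact formulation of any of the cited works; all names and sizes are illustrative.

```python
import numpy as np

def build_shape_model(shapes):
    """shapes: (n_samples, n_landmarks*dim) matrix of aligned landmark coordinates."""
    mean = shapes.mean(axis=0)
    cov = np.cov(shapes - mean, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)           # ascending eigenvalues
    order = np.argsort(eigvals)[::-1]                # sort modes by explained variance
    return mean, eigvals[order], eigvecs[:, order]

def sample_shape(mean, eigvals, eigvecs, b, n_modes=5, limit=3.0):
    """Generate a new shape instance from mode weights b, clipped to +/- limit*sqrt(lambda)."""
    bound = limit * np.sqrt(eigvals[:n_modes])
    b = np.clip(b[:n_modes], -bound, bound)
    return mean + eigvecs[:, :n_modes] @ b

rng = np.random.default_rng(1)
training_shapes = rng.normal(size=(20, 60))   # 20 toy shapes, 30 2D landmarks each
mean, lam, P = build_shape_model(training_shapes)
new_shape = sample_shape(mean, lam, P, rng.normal(size=5))
```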
Unlike atlas-based segmentation methods, statistical models need a training model. The mean shapes, the textures and their modes of variation that define this model are learned from a manually segmented database. If the number of samples used to build the training model is insufficient, there is a significant risk of over-fitting the segmentation. Moreover, the presence of noise in the images of the training database affects the robustness when segmenting the target images.

Depending on the target object, contour points are searched for in the neighbourhood of the shape while respecting local constraints. Thus, a sufficiently accurate manual initialization must be provided in order to make the model converge towards the target shape. This initialization can be provided either by direct user interaction or by automatic techniques. However, if the initial position is too far from the searched object, in terms of translation, rotation or scale, this can lead to a wrong object identification.

Deformable models

The term "deformable model" (DM) was initially used by Terzopoulos et al. [START_REF] Terzopoulos | Deformable models[END_REF] to refer to curves or surfaces, defined in the image domain, that deform under the influence of internal and external forces. The internal forces are defined from the properties of the curve in order to preserve the smoothness of the contours during the deformation process. The external forces deform the contour according to the image characteristics in its neighbourhood, in order to drive the model towards the structure of interest. Deformable models therefore address the segmentation problem by searching for an object boundary seen as a single, connected structure. These models can be divided into two main categories: explicit parameterizations, based on mesh representations, and implicit parameterizations, i.e. level sets, represented as an isovalue of a scalar function in a higher-dimensional space.

Unlike statistical models, no training or prior knowledge is required for deformable models. They can evolve towards the desired shape, showing a greater flexibility than the other methods. Nevertheless, the definition of the stopping criteria is difficult, and it depends on the characteristics of the problem. Parametric deformable models have been used successfully in a wide range of applications and problems. An important property of this type of representation is its ability to represent boundaries at sub-pixel resolution, which is essential in the segmentation of thin structures. However, they present two main limitations. First, if the variation in size and shape between the initial model and the target object is large, the model must be dynamically re-parameterized to faithfully recover the object boundary. The second limitation is related to the difficulties they have in handling topological changes, such as splitting or merging parts of the model. Geometric models provide an elegant solution to these limitations because, being based on the curve evolution theory, curves and surfaces evolve independently of the parameterization. This allows topological transitions to be handled automatically.

A drawback common to both geometric and parametric models is that the images to which either of these models is applied must have sufficiently sharp edges and homogeneous regions for an explicit modelling. As a consequence, traditional deformable models usually fail to segment in the presence of large intensity inhomogeneities and/or low contrasts.
Machine Learning
Machine learning has been widely used in the field of MRI analysis almost since its inception. These segmentation methods, based on supervised learning, first extract image features carrying information that is often richer than the grey-level information alone. Then they build a classification model based on the image features using supervised learning algorithms.

Among all the possible information that can be extracted in order to segment brain structures in medical images, the most commonly used are grey-level-based, probability-based and spatial features. They represent the simplest kinds of features. Grey-level-based features exploit the grey level of a voxel and the appearance of its neighbourhood. In its simplest representation, square patches around a pixel or a voxel are used in 2D and 3D, respectively, with typical patch sizes ranging from 3 to 9 pixels or voxels. Probability-based features analyse the probability of a voxel to belong to a given structure. The map that contains these probabilities is created from a previously annotated database. In addition to the grey level and the probability, the location of the voxel in the image space can also be used.
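Purely for illustration, the sketch below concatenates a cubic grey-level patch, the prior probability of the voxel and its coordinates into one feature vector, in the spirit of the features described above. It assumes NumPy; the function name, patch size and toy volumes are illustrative and do not reproduce the exact feature set used in this work.

```python
import numpy as np

def voxel_features(volume, prob_map, x, y, z, patch=3):
    """Concatenate a cubic grey-level patch, the prior probability and the
    voxel coordinates into a single feature vector (illustrative only)."""
    r = patch // 2
    cube = volume[x - r:x + r + 1, y - r:y + r + 1, z - r:z + r + 1]
    grey = cube.ravel()                      # grey-level appearance of the neighbourhood
    prior = np.array([prob_map[x, y, z]])    # probability of belonging to the structure
    position = np.array([x, y, z], dtype=float)
    return np.concatenate([grey, prior, position])

vol = np.random.rand(64, 64, 64)
prob = np.random.rand(64, 64, 64)
feat = voxel_features(vol, prob, 32, 32, 32, patch=3)   # 27 + 1 + 3 = 31 values
```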
The objective of many learning algorithms is to search a family of functions in order to identify the member of that family which minimizes a training criterion. Artificial Neural Networks (ANN) and Support Vector Machines (SVM) are among the most popular learning methods, used not only for the segmentation of anatomical brain structures [START_REF] Hult | Grey-level morphology combined with an artificial neural networks approach for multimodal segmentation of the Hippocampus[END_REF][START_REF] Morra | Automatic subcortical segmentation using a contextual model[END_REF][START_REF] Morra | Comparison of AdaBoost and support vector machines for detecting Alzheimer's disease through automated hippocampal segmentation[END_REF][START_REF] Magnotta | Measurement of Brain Structures with Artificial Neural Networks: Two-and Three-dimensional Applications 1[END_REF][START_REF] Pierson | Manual and semiautomated measurement of cerebellar subregions on MR images[END_REF][START_REF] Golland | Detection and analysis of statistical differences in anatomical shape[END_REF][START_REF] Powell | Registration and machine learning-based automated segmentation of subcortical and cerebellar brain structures[END_REF][START_REF] Spinks | Manual and automated measurement of the whole thalamus and mediodorsal nucleus using magnetic resonance imaging[END_REF][START_REF] Akselrod-Ballin | Atlas guided identification of brain structures by combining 3D segmentation and SVM classification[END_REF][START_REF] Moghaddam | Automatic segmentation of brain structures using geometric moment invariants and artificial neural networks[END_REF], but also for tumour classification [START_REF] Zhou | Extraction of brain tumor from MR images using one-class support vector machine[END_REF][START_REF] Bauer | Fully automatic segmentation of brain tumor images using support vector machine classification in combination with hierarchical conditional random field regularization[END_REF][START_REF] Gasmi | Automated segmentation of brain tumor using optimal texture features and support vector machine classifier[END_REF] or automatic diagnosis [START_REF] Glotsos | Automated diagnosis of brain tumours astrocytomas using probabilistic neural network clustering and support vector machines[END_REF].

An ANN is an information processing system comprising a large number of interconnected individual processing components, namely the neurons. Motivated by the way the human brain processes input information, the neurons work together in a distributed manner within each network to learn from the input, process this information and generate a meaningful response. Each neuron n in the network processes the input through the use of its own weight w_n, a bias value b_n, and a transfer function that takes the sum of w_n and b_n. Because of their efficiency in solving optimization problems, ANNs have been widely integrated into segmentation algorithms to delineate subcortical structures [START_REF] Hult | Grey-level morphology combined with an artificial neural networks approach for multimodal segmentation of the Hippocampus[END_REF][START_REF] Magnotta | Measurement of Brain Structures with Artificial Neural Networks: Two-and Three-dimensional Applications 1[END_REF][START_REF] Pierson | Manual and semiautomated measurement of cerebellar subregions on MR images[END_REF][START_REF] Powell | Registration and machine learning-based automated segmentation of subcortical and cerebellar brain structures[END_REF][START_REF] Spinks | Manual and automated measurement of the whole thalamus and mediodorsal nucleus using magnetic resonance imaging[END_REF][START_REF] Moghaddam | Automatic segmentation of brain structures using geometric moment invariants and artificial neural networks[END_REF]. Basically, the main idea behind SVM is to find the largest-margin hyperplane that separates two classes. The minimal distance from the separating hyperplane to either of the two classes is called the margin. Thus, the optimal hyperplane is the one that provides the maximum margin, i.e. the largest separation between the classes. By transforming the objects from their original space into a higher-dimensional feature space [START_REF] Burges | A tutorial on support vector machines for pattern recognition[END_REF], SVM can separate objects that are not linearly separable. Their good generalization capability and their ability to correctly classify non-linearly separable data have led to a growing interest in them for classification problems.

By introducing machine learning methods, the algorithms developed for medical image processing often become more "intelligent" than conventional techniques. Machine learning techniques have shown better performance than other, more traditional approaches for segmenting brain structures [START_REF] Morra | Automatic subcortical segmentation using a contextual model[END_REF][START_REF] Morra | Comparison of AdaBoost and support vector machines for detecting Alzheimer's disease through automated hippocampal segmentation[END_REF][START_REF] Golland | Detection and analysis of statistical differences in anatomical shape[END_REF][START_REF] Powell | Registration and machine learning-based automated segmentation of subcortical and cerebellar brain structures[END_REF]. The recent developments of medical image acquisition techniques have led to an increase in the complexity of image analysis. This brings new challenges, where the manual analysis of a large amount of data is limited. In this context, machine learning techniques seem to us the most suitable to face these new challenges. Moreover, a new field of machine learning has recently emerged with the intention of moving machine learning closer to one of its original goals: artificial intelligence. This field is deep learning. The recent progress in the use of deep networks for image recognition, speech recognition and other applications has shown that they currently provide the best solutions to many of these problems. Therefore, we consider the use of deep learning to solve the problem of brain structure segmentation in radiotherapy.
Contribution
Deep Learning
Deep learning is a new sub-field of machine learning that focuses on learning hierarchical models of data. The study of modern deep learning takes much of its inspiration from the ANN research of the previous decades. Most current learning algorithms correspond to shallow architectures with 1 to 3 levels of abstraction. Inspired by the "deep" architecture of the brain, researchers in the neural network field tried for decades to train deep multi-layer ANNs. Nevertheless, the first successful attempts were only published from 2006 onwards. Despite the remarkable results of ANNs on some tasks [START_REF] Bengio | A neural probabilistic language model[END_REF], other approaches dominated during the 1990s and 2000s [START_REF] Cortes | Support-vector networks[END_REF][START_REF] Burges | A tutorial on support vector machines for pattern recognition[END_REF][START_REF] Scholkopf | Learning with kernels: support vector machines, regularization, optimization, and beyond[END_REF]. One of the main reasons for abandoning ANNs in favour of these approaches is the difficulty of training deep networks. Training deep architectures is a difficult task, and the classical methods that had proven effective on shallow architectures are no longer suitable. Finally, simply adding layers does not necessarily lead to better solutions. On the contrary, as the number of hidden layers increases, it becomes more difficult to obtain a good generalization.

Consequently, until recently, most machine learning techniques exploited shallow architectures, where the networks were usually limited to one or two hidden layers.

However, in 2006, the concept of greedy layer-wise learning was introduced [START_REF] Bengio | Greedy layer-wise training of deep networks[END_REF][START_REF] Hinton | A fast learning algorithm for deep belief nets[END_REF][START_REF] Hinton | Reducing the dimensionality of data with neural networks[END_REF]. This new concept benefits from a semi-supervised training procedure. Unsupervised learning is used in a first step to initialize the parameters of the layers, one layer at a time, and a fine-tuning of the whole system is then performed through a supervised task. Since then, deep learning has emerged as a new research field of machine learning, with a strong impact on a wide range of research areas [START_REF] Hinton | A fast learning algorithm for deep belief nets[END_REF][START_REF] Bengio | Deep learning of representations for unsupervised and transfer learning[END_REF].

One of the advantages of deep learning over shallow ANNs is that complex functions can often be estimated with the same accuracy using a deeper network with far fewer units than a typical network with two "large" hidden layers. Furthermore, with fewer degrees of freedom, deep learning requires smaller datasets for training. Another, probably more convincing, factor is that typical classification approaches generally have to be preceded by a feature selection step, where the most discriminative features are favoured for a given problem. Deep learning approaches, for their part, are able to automatically learn features from the data. This specificity has largely contributed to the improvement in terms of accuracy.

Among the various deep learning techniques available, we use auto-encoders (AE). In its simplest representation, an AE consists of two elements: an encoder h(•) and a decoder g(•). While the encoder transforms the input into some hidden representation, the decoder transforms the hidden representation back into a reconstructed version of the input x. An AE is thus trained to minimize the discrepancy between the data and its reconstruction. Nevertheless, if no restriction other than the error minimization is imposed, the AE may potentially learn nothing more than the identity function. One solution to avoid this is to add a stochastic process in the transformation from the input to its reconstruction; this is the denoising auto-encoder (DAE) [START_REF] Vincent | Extracting and composing robust features with denoising autoencoders[END_REF][START_REF] Maillet | Steerable Playlist Generation by Learning Song Similarity from Radio Station Playlists[END_REF][START_REF] Vincent | Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion[END_REF][START_REF] Glorot | Domain adaptation for large-scale sentiment classification: A deep learning approach[END_REF][START_REF] Vincent | A connection between score matching and denoising autoencoders[END_REF][START_REF] Mesnil | Unsupervised and Transfer Learning Challenge: a Deep Learning Approach[END_REF].

In general, a DAE is implemented as a neural network with one hidden layer, trained to reconstruct a data point x from its corrupted version x̃. Therefore, an AE is converted into a DAE by simply adding a stochastic corruption step that modifies the input. For instance, in [START_REF] Vincent | Extracting and composing robust features with denoising autoencoders[END_REF], the stochastic corruption process consists in randomly setting some of the inputs to zero. Several DAEs can be stacked to form a deep network by feeding the hidden representation of the DAE of the lower layer as input to the next layer [START_REF] Vincent | Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion[END_REF]. This is the Stacked Denoising Auto-Encoder (SDAE).

The training of the SDAE is composed of two steps: unsupervised and supervised learning. The weights between the layers of the network are first learned through an unsupervised pre-training step. The unsupervised pre-training of the proposed architecture is carried out one layer at a time. Each layer is trained as a DAE, by minimizing the reconstruction error of its input. The DAE of the upper layer then uses the output of the lower-level DAE as its input. Once the first k layers are trained, layer k+1 can be trained, because the latent representation of the layer below can then be computed. Once all the weights of the network have been computed, the network goes through a second, supervised training step, called fine-tuning, where the prediction error is minimized on a supervised task.
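The following is a didactic sketch of this two-stage idea written with plain NumPy: a small denoising auto-encoder class with tied weights is trained layer by layer, each layer consuming the codes of the previous one. It is only meant to make the greedy pre-training procedure concrete; the class name, layer sizes, learning rate and corruption level are illustrative, they do not correspond to the actual SDAE configuration of the thesis, and the supervised fine-tuning stage is omitted.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

class DenoisingAE:
    """One-hidden-layer denoising auto-encoder with tied weights, trained by
    plain batch gradient descent on the squared reconstruction error."""

    def __init__(self, n_in, n_hidden, rng):
        self.W = rng.normal(0.0, 0.01, (n_in, n_hidden))
        self.b = np.zeros(n_hidden)   # hidden bias
        self.c = np.zeros(n_in)       # reconstruction bias
        self.rng = rng

    def encode(self, x):
        return sigmoid(x @ self.W + self.b)

    def train(self, X, epochs=50, lr=0.1, corruption=0.3):
        for _ in range(epochs):
            mask = self.rng.random(X.shape) > corruption      # randomly zero some inputs
            x_tilde = X * mask
            h = self.encode(x_tilde)
            x_rec = sigmoid(h @ self.W.T + self.c)
            delta_out = (x_rec - X) * x_rec * (1.0 - x_rec)   # error at the output
            delta_hid = (delta_out @ self.W) * h * (1.0 - h)  # backpropagated to the code
            grad_W = (x_tilde.T @ delta_hid + delta_out.T @ h) / len(X)
            self.W -= lr * grad_W
            self.b -= lr * delta_hid.mean(axis=0)
            self.c -= lr * delta_out.mean(axis=0)

# Greedy layer-wise pre-training: each DAE is trained on the codes of the previous one.
rng = np.random.default_rng(0)
X = rng.random((200, 31))                 # toy feature vectors
sizes = [31, 20, 10]
layers, code = [], X
for n_in, n_hid in zip(sizes[:-1], sizes[1:]):
    dae = DenoisingAE(n_in, n_hid, rng)
    dae.train(code)
    layers.append(dae)
    code = dae.encode(code)               # latent representation fed to the next layer
# 'code' now holds the deep representation; supervised fine-tuning would follow.
```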
Features used for segmentation

Whatever the effectiveness of the machine learning strategy applied, the choice of relevant features is crucial for classification problems. Recent research on the segmentation of brain structures with machine learning techniques tends to focus on the use of several learning algorithms rather than on adding more discriminative features to the system. The traditional features introduced above have often been used to segment brain structures with considerable success. However, the use of alternative features can: i) improve the classification performance, and ii) reduce, in some cases, the number of features used to describe the texture information of a given region. Apart from the application of the SDAE to the OAR segmentation problem, one of the main contributions of this work is the use of features that had not yet been used for the segmentation of these brain structures.

Among the set of OARs involved in RT, some present a more homogeneous texture and a more limited shape variation than others. In this first group, we can include the brainstem, the eyes and the lenses. Conversely, other OARs have a more heterogeneous texture and larger inter-individual variations in terms of size or location. This second group is composed of the optic nerves, the chiasm, the pituitary gland and the pituitary stalk. Because of the differences between the characteristics of the two groups, some of the proposed features depend on the organ to be segmented and are not suited to all the organs studied in this work. While the segmentation of some organs exploits a geodesic distance map and descriptors based on 3D local binary patterns to obtain better results (group A), the segmentation of the other OARs uses texture and contextual analysis (group B). The image gradient is explored in this group. Among all the features that can be extracted from texture analysis, we use the following: mean, variance, skewness, kurtosis, energy and entropy. In addition, the discrete wavelet decomposition also forms part of the feature vector.
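As a small illustration of the first-order texture descriptors just listed, the sketch below computes the mean, variance, skewness, kurtosis, energy and entropy of a voxel neighbourhood with NumPy. The histogram size and the toy patch are arbitrary choices made for the example; the gradient, local-binary-pattern, geodesic-distance and wavelet features are not shown.

```python
import numpy as np

def texture_statistics(region, n_bins=32):
    """First-order texture descriptors of a voxel neighbourhood:
    mean, variance, skewness, kurtosis, energy and entropy (illustrative)."""
    v = region.ravel().astype(float)
    mu, var = v.mean(), v.var()
    std = np.sqrt(var) + 1e-12
    skew = np.mean(((v - mu) / std) ** 3)
    kurt = np.mean(((v - mu) / std) ** 4) - 3.0
    hist, _ = np.histogram(v, bins=n_bins)
    p = hist / max(hist.sum(), 1)
    energy = np.sum(p ** 2)
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    return np.array([mu, var, skew, kurt, energy, entropy])

patch = np.random.rand(5, 5, 5)
print(texture_statistics(patch))
```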
Training

A pre-processing step is applied to all the patients before extracting the features. All the images are resampled to a resolution of 1 × 1 × 1 mm³. All the T1 MRI images are spatially aligned in such a way that the line between the anterior commissure and the posterior commissure (AC-PC) is horizontal in the sagittal plane, and the inter-hemispheric fissure is aligned on the two other axes. This procedure therefore also represents the initialization step for the segmentation of a new patient. Finally, the images are normalized.

A probability map and a search mask are created during the training phase. The regions of interest manually contoured over the whole training dataset are summed into one volume to create a probability map for each OAR. This map thus contains continuous voxel-wise values in the interval [0,1], indicating how frequently an organ appears in the dataset. This value indicates the probability that a given voxel belongs to a structure. The probability map is also used to reduce the number of samples that are fed into the classifier. From this map, a region of interest (ROI) is generated. The pruning criterion is based on the probability of a voxel to belong to any of the structures of interest. Thus, every voxel with a probability greater than zero is taken into account to create the search mask, for each structure, which is then used to prune the voxels in the feature extraction step. To ensure that the OARs of new patients lie inside this common mask, a safety margin is generated by applying a morphological dilation.
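A minimal sketch of these two training-time products is given below, assuming NumPy and SciPy: the probability map is the voxel-wise frequency of the manual contours, and the search mask contains every voxel with a non-zero prior, enlarged by a morphological dilation as a safety margin. The margin size and the toy masks are illustrative only.

```python
import numpy as np
from scipy import ndimage

def probability_map(label_volumes):
    """Voxel-wise frequency of an organ over the manually contoured training set.
    label_volumes: list of binary masks of identical shape."""
    stack = np.stack(label_volumes).astype(float)
    return stack.mean(axis=0)                  # values in [0, 1]

def search_mask(prob_map, margin_voxels=2):
    """Every voxel with a non-zero prior, plus a morphological safety margin."""
    mask = prob_map > 0
    structure = ndimage.generate_binary_structure(3, 1)
    return ndimage.binary_dilation(mask, structure, iterations=margin_voxels)

masks = [np.random.rand(32, 32, 32) > 0.8 for _ in range(14)]   # toy training contours
prob = probability_map(masks)
roi = search_mask(prob)
```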
Classification
The classification is performed one class at a time. This means that a binary classifier is used for each of the structures. The pre-processing of the image to be segmented is the same as the one used during training. For the feature extraction, the probability map and the search ROI created during training are also used. The feature values for classification are scaled consistently with the scaling of the values used during the training phase.

Materials and methods

All the code used in this thesis was implemented using the following platforms: MATLAB (The MathWorks Inc., Natick, MA, 2000) and Microsoft Visual Studio (MSVC) 2010.

The MRI scans of 15 patients treated with Leksell Gamma Knife radiosurgery were used in this work. In total, four experts participated in the manual contouring sessions of the OARs. These manual contours were used to create the reference contours. The reference contours were obtained in this thesis using the concept of probability map computation. The probability maps are thresholded at a variable level in order to create the mask. The threshold was set to 50% or 75%, depending on the number of experts involved (3 or 4, depending on the OAR considered).

Typical validation techniques to assess the performance of a classifier rely on splitting the data into two groups: training and testing. Given the limited number of scans available in this thesis, we used leave-one-out cross-validation (LOOCV). This technique consists in using a single patient for testing and the remaining ones for training. This process is repeated as many times as there are available patients, namely 15. Thus, at each iteration, 14 scans are used for training and 1 for classification.
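The folding scheme itself is easy to write down; the sketch below simply enumerates the 15 leave-one-out folds. The fit_classifier and evaluate helpers in the comments are hypothetical placeholders, not functions of the actual prototype.

```python
def leave_one_out(n_patients):
    """Yield (train_indices, test_index) pairs, one fold per patient."""
    for test in range(n_patients):
        train = [i for i in range(n_patients) if i != test]
        yield train, test

scores = []
for train_ids, test_id in leave_one_out(15):
    # model = fit_classifier(features[train_ids], labels[train_ids])      # hypothetical
    # scores.append(evaluate(model, features[test_id], labels[test_id]))  # hypothetical
    pass
```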
Several criteria can be used to evaluate the quality of an image segmentation, and different error metrics are defined accordingly. These segmentation evaluation criteria have been classified by [START_REF] Fenster | Evaluation of segmentation algorithms for medical imaging[END_REF] into: accuracy, precision, repeatability and efficiency. The metrics are estimated from the contour and from the size/volume of the segmented object. Each metric reports a different piece of information and must be considered in an appropriate context. Although volume-based measures, such as the Dice Similarity Coefficient (DSC), are widely used to compare the similarity between volumes, they are rather insensitive to differences at the boundaries when these differences have a small impact on the overall volume. Thus, distance-based measures, such as the Hausdorff distance, are also used to evaluate the quality of a segmentation. In addition, the sensitivity and the specificity are also evaluated. Finally, to evaluate the efficiency, the time required to run the algorithm is also measured and analysed. For comparison with reference algorithms from the literature, another classifier, based on SVM, was investigated. Furthermore, the automatic segmentations generated by our deep learning system are compared to the manual segmentations obtained by the experts. From the collected metric values, a statistical analysis is performed in order to show that the volume and surface differences between the SVM-based system and our SDAE-based classification system are significant. On the other hand, we also performed a statistical analysis between the results of the manual segmentations and the contours generated by our system. In this case, we wish to show that, although the results of some manual observers were better than the results provided by our approach, the differences are not significant.
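For reference, the two main metrics mentioned above can be computed from binary masks as sketched below with NumPy and SciPy. The Hausdorff distance here is a voxel-based approximation obtained from Euclidean distance transforms, and the toy masks and voxel spacing are illustrative; this is not the evaluation code used in the thesis.

```python
import numpy as np
from scipy import ndimage

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum() + 1e-12)

def hausdorff(a, b, spacing=(1.0, 1.0, 1.0)):
    """Symmetric Hausdorff distance approximated from distance transforms."""
    a, b = a.astype(bool), b.astype(bool)
    dist_to_b = ndimage.distance_transform_edt(~b, sampling=spacing)
    dist_to_a = ndimage.distance_transform_edt(~a, sampling=spacing)
    return max(dist_to_b[a].max(), dist_to_a[b].max())

auto = np.zeros((32, 32, 32), bool); auto[10:20, 10:20, 10:20] = True
ref = np.zeros((32, 32, 32), bool); ref[11:21, 11:21, 11:21] = True
print(dice(auto, ref), hausdorff(auto, ref))
```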
Results and discussion

According to the intrinsic characteristics of some OARs, the results were separated into two groups. The results of the proposed system were compared with a machine learning approach widely used for classification: SVM. In addition to the usual spatial and intensity-based features, we added new features. These features are generally organ dependent, and their evaluation was therefore carried out accordingly, in two groups A and B.

The results provided in this work show that the proposed classification system outperforms all the configurations of the SVM or SDAE classifiers that use the usual features. Adding the new features in each group increased the volume similarity while reducing the Hausdorff distance. For all the OARs, the classification schemes proposed for groups A and B obtained the best results for the different similarity, surface and volume measures. The sensitivity and the specificity are also improved by the use of the proposed classification system. First, the sensitivity values were higher in the SDAE-based configurations than in the SVM-based configurations. Second, the inclusion of the new features in the classification system improved the sensitivity values with respect to the other SDAE-based configurations. This trend was identified for all the OARs of both groups. The specificity values obtained by the proposed systems in both groups were, in about half of the cases, among the best ranked.

The statistical analysis of the automatic segmentations showed that the results obtained by the proposed system are significantly better than those of the other groups.

Regarding the comparison with the manual annotations, the segmentation error we obtained is comparable to the one observed between observers when the contours are delineated without time constraints. In these comparisons, we can observe that the segmentation results generated by the proposed approach are distributed with the same variability as those of the experts in most cases. The statistical analysis of these classification results compared with the manual contours shows that the differences are generally not statistically significant. Moreover, in some cases where the differences are significant, our automatic classifier provides a better result than the manual contouring. We can thus conclude that the automatic contours generated by the proposed classification system are similar to the manual annotations.

The results also demonstrate that the proposed deep-learning-based system outperformed all previous works when the whole set of analysed OARs is considered. Although it was not possible in this work to use the same datasets as those used in previous studies, the higher performance of our approach, as indicated by the results, suggests its superiority for segmenting these structures. The results show that, by using the SDAE as classifier, the segmentation time was significantly reduced compared with other classical machine learning methods, such as SVM. This is particularly remarkable considering that most of the works referenced in this thesis for segmenting the OARs are based on atlas techniques, and therefore depend on registration. This registration step makes the segmentation time consuming in comparison with the proposed approach.

The current implementation of the proposed system is not computationally optimized, and the bottleneck of the process remains the feature extraction step. Its processing time nevertheless varies between 1 and 6 seconds for each of the OARs. Although it is not a very computationally expensive step, it represents more than 95% of the total segmentation time. Since the feature extraction does not require complex programming operations, its parallelization is easily affordable. This could noticeably reduce the whole segmentation process to less than one second for a whole organ.
One of the strengths of deep learning methods lies in their ability [...]

[...] abilities of biological neural systems must follow from highly parallel processes operating on representations that are distributed over many neurons. Thus, one motivation of ANN-based systems is to capture this type of highly parallel computation based on distributed representations.
y = f\left( \sum_{i=0}^{m} x_i \cdot w_i + b \right) \qquad (A.2)
where x_i represents the i-th unit input, w_i is the weight value for input i, b is the bias term and f is the transfer or activation function. Lastly, y represents the output value of the neuron. As seen from the artificial neuron model and its equation (A.2), the major unknown variable is its transfer function. Figure A.3 shows some of the most common activation functions employed in ANNs. In each case, the x-axis represents the value of the net input whilst the y-axis is the output of the neuron. Among these activation function types, sigmoid functions are widely employed in ANNs due to their remarkable computational and mathematical properties. Additionally, most biological neurons are sigmoid units, in the sense that their frequency response to input has a region of maximum sensitivity somewhere between a threshold and a saturation point. The mathematical formulation of the sigmoid activation function is described below:
f(a) = \mathrm{sigm}(a) = \frac{1}{1 + \exp(-a)} \qquad (A.3)
where a denotes the pre-activation function defined in equation A.1. Since a neural network is built out of interconnected neurons, the function of an entire neural network is simply the computation of the output of all the neurons. Training the network involves presenting the network with some sample data and modifying weights to better approximate an activation function to obtain the desired output. Even though a precise definition of learning is ambitious to formulate, a learning process in an ANN context can be viewed as the issue of updating the artificial network architecture and connection weights so that a network can efficiently perform a specific task. A single neuron, however, is not very useful due to its limited mapping ability. Regardless of which activation function is used, the neuron is only able to represent an oriented ridge-like function, being able to only handle linearly separable or linearly independent problems. Further extensions of single neuron based networks concern models in which many neurons are interconnected and organized into layers, building blocks of a larger, much more practical structures. Neurons in the same layer are fully connected to the neurons in the previous layer, except for the first layer, because this layer is not formed by neurons but by the vector x (i) that will be the input to the network. Neural networks can be built following multiple and diverse architectures. According to the direction of connections between layers ANN can be grouped into two major categories: (i) feed-forward networks, in which no loops exist in the graph, and (ii) feedback networks, also known as recurrent, where loops are present due to feedback connections. Different network architecture leads to different learning algorithms. The most common choice is a n l -layered network, where the first layer represents the input layer, layer n l is the output layer, and each layer l is densely connected to layer l + 1. We will discuss the former, since no other network topology will be analyzed in this dissertation.
Multilayer feed-forward (MLF) neural network represents one of the most popular multilayer ANN. In a feed forward neural network, neurons are only connected forward. Each layer of the neural network contains connections to the next layer, but there are no connections back. This means the signal flow is from input to output units, strictly in a feed-forward direction. Typically, the network consists of a set of sensory units that constitute the input layer, one or more hidden layers of computation nodes, and an output layer of computation nodes. In its common use, most neural networks will have one hidden layer, and it's very uncommon for a neural network to have more than two hidden layers. The input signal propagates through the network in a forward direction, on a layer by layer basis. In this context, to compute the output of the network, activations in primary layers are computed first, up to reach the last layer, L n l . In figure A.4 a simple MLF with 3 inputs, 1 output, and 1 hidden layer containing 3 neuron units is shown. The way in which this model learns to predict the label y (i) associated to the input x (i) is by calculating the function
h_{W,b}(x) = a^{(n)} = f\left(z^{(n)}\right) \qquad (A.4)
where n is the number of layers, b is a matrix formed by n-1 vectors storing the bias terms of the s neurons in each layer, and W is a vector of n-1 matrices, each of which is formed by s vectors, each one representing the weights of one of the neurons in one of the layers. To achieve the learning process, the training set is fed into the function in equation A.4. Calculating the value of h_{W,b}(x) is called a feed-forward pass. Therefore, to train the network, the first thing to do is to initialize the weights W and the bias terms b. This should be done using random values near zero. Otherwise, all the neurons could end up firing the same activations and not converging to the solution.
Let us consider the network shown in figure A.4 as an example. In this setting, the number of layers n_l is equal to 3. Each layer l is denoted as L_l, so the input layer is represented by L_1, and L_3 is the output layer of our network. Let W^{(l)}_{ij} denote the weight associated with the connection between unit j in layer l and unit i in layer l+1. In addition, b^{(l)}_i is used to represent the bias associated with unit i in layer l+1. The output value, also known as the activation, of a unit i in layer l is denoted by a^{(l)}_i. Therefore, for the first layer, the activation a^{(1)}_i is simply the i-th input. Thus, given a fixed setting [...]

a^{(l+1)} = f\left(z^{(l+1)}\right) \qquad (A.16)
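Under the notation above, a forward pass simply applies each layer's affine map followed by the activation. The sketch below does this with NumPy for a small 3-3-1 network like the one of figure A.4; the initialization and the input values are arbitrary, and the code is an illustration of the computation rather than the implementation used in this work.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def feedforward(x, weights, biases):
    """Compute h_{W,b}(x) for a fully connected feed-forward network.
    weights[l] has shape (units in layer l+1, units in layer l)."""
    a = x
    for W, b in zip(weights, biases):
        z = W @ a + b            # pre-activation z^(l+1)
        a = sigmoid(z)           # activation a^(l+1) = f(z^(l+1))
    return a

rng = np.random.default_rng(0)
sizes = [3, 3, 1]                                   # a 3-3-1 network
weights = [rng.normal(0, 0.01, (m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(m) for m in sizes[1:]]
output = feedforward(rng.random(3), weights, biases)
```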
Once we have produced a feed-forward pass, we need to calculate the cost function. We define the cost function of a single training example (x, y) as
J(W, b; x, y) = \frac{1}{2} \left\| y - h_{W,b}(x) \right\|^2 \qquad (A.17)
that is, half of the squared distance from the prediction to the ground truth. For a whole training set (x^{(1)}, y^{(1)}), ..., (x^{(m)}, y^{(m)}) we will use

J(W, b) = \left[ \frac{1}{m} \sum_{i=1}^{m} J\left(W, b; x^{(i)}, y^{(i)}\right) \right] + \frac{\lambda}{2} \sum_{l} \left\| W^{(l)} \right\|^2

where m is the number of examples and λ is the weight decay parameter. This parameter λ helps to prevent overfitting by penalizing the cost when the weights grow too much. Now that we have a function that measures the cost of all predictions with a particular set of weights, we need a way to update those weights so that, in the next iteration, the cost is reduced and the training may converge to a minimum, hopefully the global one. This update value is:

\nabla_{W^{(l)}} J(W, b) = \left[ \frac{1}{m} \sum_{i=1}^{m} \nabla_{W^{(l)}} J\left(W, b; x^{(i)}, y^{(i)}\right) \right] + \lambda W^{(l)}

Therefore, the first step is to calculate \nabla_{W^{(l)}} J(W, b; x^{(i)}, y^{(i)}) and \nabla_{b^{(l)}} J(W, b; x^{(i)}, y^{(i)}) for each example independently. This step is done with the backpropagation algorithm.
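A minimal sketch of this batch update for one layer is given below, assuming the per-example gradients have already been obtained by backpropagation (here replaced by stand-in arrays). The function name, the learning rate alpha and the decay lambda are illustrative.

```python
import numpy as np

def update_parameters(W, b, grad_W_examples, grad_b_examples, lam, alpha):
    """One batch update for a single layer: average the per-example gradients
    (obtained by backpropagation), add the weight decay term, and take a step."""
    m = len(grad_W_examples)
    nabla_W = sum(grad_W_examples) / m + lam * W     # (1/m) * sum + lambda * W
    nabla_b = sum(grad_b_examples) / m               # the bias term is not decayed
    return W - alpha * nabla_W, b - alpha * nabla_b

W = np.zeros((3, 3)); b = np.zeros(3)
gWs = [np.ones((3, 3)) for _ in range(4)]            # stand-ins for backprop outputs
gbs = [np.ones(3) for _ in range(4)]
W, b = update_parameters(W, b, gWs, gbs, lam=1e-4, alpha=0.1)
```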
[...] where w is normal to the hyperplane, |b|/‖w‖ is the perpendicular distance from the hyperplane to the origin, and ‖w‖ is the Euclidean norm of w. For the linearly separable case, the support vector algorithm looks for the separating hyperplane with the largest margin, which can be formulated as follows: [...] Since the problem for SVM is convex, the KKT conditions are necessary and sufficient for w*, b* and α* to be a solution [START_REF] Fletcher | Practical methods of optimization[END_REF]. Hence, solving the SVM problem is equivalent to finding a solution to the KKT conditions. The first KKT condition (Eq. B.11) defines the optimal hyperplane as a linear combination of the vectors in the training set:
w^* = \sum_i \alpha_i^* y_i x_i    (B.16)
On the other hand, the second KKT condition (Eq. B.12) requires that the \alpha_i coefficients of the training instances satisfy:
\sum_{i=1}^{n} \alpha_i^* y_i = 0, \qquad \alpha_i^* \ge 0    (B.17)
As an application of the KKT conditions, the decision function that can be used to classify future test cases is defined as:
f(x) = w^T x + b = \sum_i \alpha_i y_i \, x_i^T x + b    (B.18)
where the sign of the decision function determines the predicted classification of x.
The most important conclusions are that, first, this function f(\cdot) can be expressed solely in terms of inner products x_i^T x_j, which can later be replaced with kernel functions k(x_i, x_j) to move to higher-dimensional, non-linear feature spaces. Second, only the support vectors are needed to express the solution w.
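As an illustration of equation B.18 (not code from the thesis), the decision function can be evaluated from the support vectors alone; the toy values and function names below are assumptions.

```python
import numpy as np

def svm_decision(x, support_vectors, alphas, labels, b, kernel=None):
    # f(x) = sum_i alpha_i * y_i * k(x_i, x) + b; only support vectors contribute
    if kernel is None:
        kernel = lambda xi, xj: xi @ xj          # linear kernel <x_i, x>
    score = sum(a * y * kernel(sv, x)
                for a, y, sv in zip(alphas, labels, support_vectors))
    return score + b

# toy example with two support vectors
svs = [np.array([1.0, 1.0]), np.array([-1.0, -1.0])]
f = svm_decision(np.array([0.5, 0.2]), svs, alphas=[0.3, 0.3], labels=[+1, -1], b=0.0)
predicted_class = 1 if f >= 0 else -1
```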
B.0.2.3 The Non-Separable Case
However, most real data are not linearly separable and, even when the data are linearly separable, the SVM may overfit the training data in its search for the hyperplane that completely separates all the instances of both classes. Sometimes, even if a curved decision boundary is possible, exactly separating the data is not desirable: if the data contain noise and outliers, a smooth decision boundary that ignores a few data points is better than one that loops around the outliers. Therefore, exact separation is relaxed by introducing a penalty parameter C, whose value depends on the problem and dataset characteristics. Small C values allow constraints to be easily ignored, i.e. a large margin, while large C values make constraints hard to ignore, i.e. a narrow margin.
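The effect of C can be observed directly by counting support vectors at different settings; this is an illustrative scikit-learn sketch (synthetic data, arbitrary values), not part of the thesis experiments.

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

# two overlapping Gaussian blobs: not exactly separable
X, y = make_blobs(n_samples=100, centers=2, cluster_std=2.0, random_state=0)

for C in (0.01, 1.0, 100.0):
    clf = SVC(kernel="linear", C=C).fit(X, y)
    # small C: violations are cheap (wide, soft margin, many support vectors)
    # large C: violations are expensive (narrow margin, fewer support vectors)
    print(f"C={C:>6}: {clf.n_support_.sum()} support vectors")
```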
As in the linear case, the problem is switched to a Lagrangian formulation, leading to

L(w, b, \xi, \alpha, \mu) = \frac{1}{2}\|w\|^2 + C \sum_i \xi_i - \sum_{i=1}^{N} \alpha_i \bigl[ y_i (w^T x_i + b) - 1 + \xi_i \bigr] - \sum_{i=1}^{N} \mu_i \xi_i

As explained in the previous section, several kernels can be used to map the input features into higher-dimensional spaces, according to the nature of the data. This is done via some mapping \Phi(x); a separating hyperplane with maximum margin is then constructed in the feature space (Figure B.4). As shown in Figure B.4, the linear decision function in the feature space corresponds to a non-linear decision boundary in the original input space. Typical kernel choices are [Scholkopf and Smola, Learning with Kernels]:
• Linear kernel: K(x, y) = \langle x, y \rangle
• Polynomial kernel: K(x, y) = (\langle x, y \rangle)^2
• RBF kernel: K(x, y) = \exp(-\gamma \|x - y\|^2)
• Sigmoid kernel: K(x, y) = \tanh(\gamma \langle x, y \rangle - \theta)
• Histogram intersection kernel: K(x, y) = \sum_i \min(x_i, y_i)

Each kernel function listed above has its own properties and a unique response to different kinds of data. Employing a sigmoid kernel function in an SVM model is equivalent to using a two-layer perceptron neural network [Burges, A tutorial on support vector machines for pattern recognition]. If the RBF kernel is used instead, the model behaves approximately like a radial basis function neural network, where the feature space has infinite dimension. Therefore, selecting a proper kernel function is required to perform optimal classification with SVM models, and this selection is, or should be, based on the requirements of the classification task.
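For reference, the kernels listed above can be written directly as small functions; this NumPy sketch is illustrative rather than the implementation used in the thesis, and the histogram intersection kernel is written in its usual sum-of-minima form.

```python
import numpy as np

def linear_kernel(x, y):
    return x @ y                                  # <x, y>

def polynomial_kernel(x, y, degree=2, c=0.0):
    return (x @ y + c) ** degree                  # (<x, y> + c)^d

def rbf_kernel(x, y, gamma=0.5):
    return np.exp(-gamma * np.sum((x - y) ** 2))  # exp(-gamma ||x - y||^2)

def sigmoid_kernel(x, y, gamma=0.5, theta=0.0):
    return np.tanh(gamma * (x @ y) - theta)       # tanh(gamma <x, y> - theta)

def histogram_intersection_kernel(x, y):
    return np.sum(np.minimum(x, y))               # sum_i min(x_i, y_i)
```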
B.1 SVM Parameter Setting
The first choice to make when working with SVM is the kernel to be used. Among the several kernels proposed to map features into a higher dimension, the Radial Basis Function (RBF) kernel is one of the most widely used for separating data with SVM classifiers in complex classification settings. Previous works have found that the RBF kernel generally provides better classification accuracy than many other kernel functions [Abdi et al.]. This kernel non-linearly maps samples into a higher-dimensional space, which means that it can handle cases where the relation between class labels and attributes is non-linear. A second reason to use this kernel is the number of hyperparameters, which influences the complexity of model selection and is lower than for other non-linear kernels, such as the polynomial kernel. Consider two samples x_i = [x_{i1}, x_{i2}, ..., x_{id}]^T and x_j = [x_{j1}, x_{j2}, ..., x_{jd}]^T. The RBF kernel is then defined by

K(x_i, x_j) = \exp\bigl(-\gamma \|x_i - x_j\|^2\bigr), \qquad \gamma > 0    (B.37)

where \gamma is the width of the Gaussian.
There are two parameters that can be tuned in the RBF kernel and that depend on the input data: C and γ. While C controls the cost of misclassification on the training data, γ is the kernel parameter that handles the non-linear classification. A large C value gives low bias and high variance, because the misclassification cost is heavily penalized, i.e. a hard margin. Conversely, a small C value makes the cost of misclassification low, i.e. a soft margin, giving higher bias and lower variance. To "raise" the points used in the RBF kernel, γ controls the shape of the "peaks" where the points are raised (Fig. B.5). A large γ gives a pointed bump in the higher dimensions, while a small γ gives a softer, broader bump; this translates into low bias and high variance for large γ values and higher bias and low variance for lower γ values. Thus, an underestimated γ produces an almost linear behaviour of the exponential, and the higher-dimensional projection starts to lose its non-linear power, whereas an overestimated γ leads to a lack of regularization, making the decision boundary highly sensitive to noise in the training data. Therefore, some kind of model selection, i.e. a parameter search, must be done for these two parameters. The goal of this search is to identify good C and γ values so that the classifier can accurately predict unknown data, i.e. testing data. Since performing a full grid search may become time-consuming, a coarse grid search is often conducted first; after identifying the best region of the grid, a finer grid search on that region can be performed.
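The coarse-to-fine search described above could be implemented, for instance, with scikit-learn; the thesis pipeline itself is connected to MATLAB (cf. Figure 5.1), and the parameter grids below are illustrative values, not those used in this work.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, n_features=20, random_state=0)

# coarse grid over exponentially spaced C and gamma values
coarse = GridSearchCV(SVC(kernel="rbf"),
                      {"C": 2.0 ** np.arange(-5, 16, 2),
                       "gamma": 2.0 ** np.arange(-15, 4, 2)},
                      cv=5).fit(X, y)

# finer grid centred on the best coarse region
C0, g0 = coarse.best_params_["C"], coarse.best_params_["gamma"]
fine = GridSearchCV(SVC(kernel="rbf"),
                    {"C": C0 * 2.0 ** np.linspace(-1, 1, 5),
                     "gamma": g0 * 2.0 ** np.linspace(-1, 1, 5)},
                    cv=5).fit(X, y)
print(fine.best_params_, fine.best_score_)
```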
List of Figures
Figure 2 . 1 :
21 Figure 2.1: Healthy brain(left) compared to brain tumor (in blue,right).
Figure 2 . 2 :
22 Figure 2.2: A patient before and after of having being treated with rdiation therapy.
Figure 2 . 3 :
23 Figure 2.3: Conventional RT and CyberKnife SRS treatment plans for a patient who received 40 Gy in 15 fractions to FLAIR for the first course followed an SRS boost to T1 Enhancement at a total dose of 24 Gy delivered in 3 fractions. Shown are the (A) axial, (B) sagittal, and (C) coronal views of the EBRT treatment plans and the (D) axial, (E) sagittal, and (F) coronal views of the CyberKnife SRS treatment plans.
Figure 2 . 4 :
24 Figure 2.4: A patient being positioned for SRS treatment (Gamma-Knife).
Figure 2 . 5 :
25 Figure 2.5: Gamma Knife radiation helmet.
Figure 2 . 6 :
26 Figure 2.6: Flowchart of a common radiotherapy treatment.
Figure 2 . 7 :
27 Figure 2.7: Transversal, coronal and sagittal dose distribution and DVH information. Graphs: PTV (1), left eye (2), right eye (3), right optic nerve (4), left optic nerve (5), chiasma(6), brainstem(7), spinal cord(8).
Figure 2 . 9 :
29 Figure 2.9: Organs at Risk commonly involved in brain tumor radiation treatment.
2. 4 .
4 The role of Structural MRI in brain tumor radiation treatment 19
Figure 2 . 10 :
210 Figure 2.10: MRI modalities commonly employed in the RTP. From left to right: T1, T1-Gadolinium, T2 and FLAIR modalities.
Figure 2 . 11 :
211 Figure 2.11: The selection of directions of MRI slices.
Figure 3 . 1 :
31 Figure 3.1: Image segmentation example. Original image (left) is segmented into four class regions (center ), and into two class regions (right).form, medical images are represented by arrays of numbers depicting quantities that show contrast between different types of body tissue. Voxel values may vary depending on the image modality, type of tissue or some acquisition parameters. Processing and analyzing medical images are useful to transform this raw information into a quantifiable symbolic form. The extraction of this meaningful quantitative information can aid in diagnosis, as well as in integrating complementary data from multiple imaging modalities. Therefore, in medical image analysis, segmentation has a great clinical value since it is often the first step in quantitative image analysis. For instance, segmentation of medical images aims at identifying the target anatomy or pathology and delineating the boundary of structures of interest for computer aided diagnosis (CAD) purpose or for planning therapy. Image segmentation plays, therefore, an important role in numerous medical applications[START_REF] Pham | Current methods in medical image segmentation 1[END_REF].
Figure 3 . 2 :
32 Figure 3.2: Partial volume effect caused by effects of finite voxel size when imaging a circle. The green area in the left image has value 10 and the white area has value 0. Imaging this circle with 9 voxels results in the right figure.
Figure 3 . 3 :
33 Figure 3.3: Typical atlas-based segmentation workflow where multiple atlases are employed.
Figure 3 . 4 :
34 Figure 3.4: An example of constructing Point Distribution Models. (a) An MR brain image, transaxial slice, with 114 landmark points of deep neuroanatomical structures superimposed. (b) A 114-point shape model of 10 brain structures. (c) Effect of simultaneously varying the model's parameters corresponding to the first two largest eigenvalues (on a bi-dimensional grid)
Figure 3 . 5 :
35 Figure 3.5: Segmenting the corpus callosum from an MR midbrain sagittal image using a deformable Fourier model. Top left: MR image (146 x 106). Top right: positive magnitude of the Laplacian of the Gaussian (γ= 2.2) Bottom left: initial contour (six harmonics). Bottom right: final contour on the corpus callosum of the brain.
Figure 3 . 6 :
36 Figure 3.6: Intensity patches commonly used in 2D. Patch sizes are 3x3, 5x5 and 7x7 from left to right.
Figure 3 . 7 :
37 Figure 3.7: Intensity configurations commonly used in 3D. In blue is painted the center voxel under examination and in green its neighboring voxels.
Figure 3 . 8 :
38 Figure 3.8: Brainstem probability map created from the training set.
Figure 3 . 9 :
39 Figure 3.9: Cartesian and spherical coordinates.
Figure 3 . 10 :
310 Figure 3.10: Effect of the kernel transformation. Data is not linearly separable in (a). Mapping features into a higher dimensionality (b) may make the classification possible.
Table 3.2: Summary of experimental set-up of referenced segmentation works, Part I (segmentation methods: state of the art). Footnotes: 1 Open Access Series of Imaging Studies; 2 Alzheimer's Disease Neuroimaging Initiative; 3 International Consortium for Brain Mapping; 4 Internet Brain Segmentation Repository; 5 Baltimore Longitudinal Study of Aging.
Figure 4 . 1 :
41 Figure 4.1: Example of different representations: a) Cartesian coordinates and b) polar coordinates are used to represent the data.
Figure 4 . 2 :
42 Figure 4.2: Deep architecture of the brain.
Figure 4 .
4 Figure 4.3: A typical workflow for a convolutional neural network.
Figure 4 . 4 :
44 Figure 4.4: Auto-Encoder.
Figure 4 . 5 :
45 Figure 4.5: Reconstruction example of a handwritten digit input by employing neural encoding.
Figure 4 . 6 :
46 Figure 4.6: The denoising autoencoder architecture.
Figure 4 . 7 :
47 Figure 4.7: Stacked Denoising Auto-encoder. After training a first level denoising autoencoder (Fig. 4.6) its learnt encoding function f θ is used on clean input (left). The resulting representation is used to train a second level denoising autoencoder (middle) to learn a second level encoding function f
Figure 4 . 8 :
48 Figure 4.8: Fine-tuning of a deep network for classification. After training a stack of encoders as explained in the previous figure, an output layer is added on top of the stack. The parameters of the whole system are finetuned to minimize the error in predicting the supervised target (e.g., class), by performing gradient descent on a supervised cost [159].
Figure 4.9: A MRI slice of the brain showing partially the head in the eye region (a). While the horizontal gradient component in the x direction measuring horizontal change in intensity is displayed in (b) the vertical gradient component in the y direction measuring vertical change in intensity is shown in (c).
Figure 4 . 10 :
410 Figure 4.10: Gradient orientation values of the previous image in figure 4.9,a. The arrows indicate the direction of the gradient at each pixel.
Figure 4 . 11 :
411 Figure 4.11: Contextual feature is defined by a number of surrounding regions (green squares) of the voxel under examination (red dot).
Figure 4 . 12 :
412 Figure 4.12: First-order statistical features (F-OSF) example. Axial slice of a brain with several first-order statistical features computed with a with radius = 3 around each voxel. First and second levels of high-pass components from wavelets decomposition ((h) and (i)).
d(a, b) = \min_{\Gamma \in P_{a,b}} \int_0^1 \sqrt{ \|\Gamma'(s)\|^2 + \gamma^2 (\nabla I \cdot u)^2 } \, ds    (4.26)

with P_{a,b} the set of all paths between the points a and b, and \Gamma(s): [0, 1] \rightarrow \mathbb{R}^2 indicating one such path, parameterized by s \in [0, 1]. Figure 4.13 shows an example of how to compute the GDTM of an image given a binary mask.
Figure 4 . 13 :
413 Figure 4.13: Geodesic distance transform map: a) axial MR view of the brainstem, b) mask obtained from the probability brainstem map (in white), c) binary mask used to obtain the GDTM, and d) output GDTM.
Figure 4 . 14 :
414 Figure 4.14: Merging the 64 possible patterns into 10 groups. Number of different patterns for each group is indicated in brackets.
Table 4 . 2 :
42 Definition of the 10 groups of patterns.
Figure 4 .
4 Figure 4.15: AC-PC example on MRI image(right).
Figure 4 . 16 :
416 Figure 4.16: Framework for training.
Figure 4 . 17 :
417 Figure 4.17: Probability map (bottom-left) and common mask (bottom-right) creation process. Example showing a 2D axial slice of the optic chiasm.
Figure 4 . 18 :
418 Figure 4.18: Decision boundaries for a banana shaped dataset generated by SDAE with different network architectures.
Figure 4 . 19 :
419 Figure 4.19: Framework for classification.
4. 5 . 3 Scaling
53 As stated in Section 4.5.3, features extracted for classification are scaled in concordance with scaling values used during the training phase. Using different scaling values will negatively affect the segmentation performance.
Figure 5 . 1 :
51 Figure 5.1: Workflow of the connection between MSVS and MATLAB.
Figure 5 . 2 :
52 Figure 5.2: Some examples of images contained in the database employed in this work. While in some cases the tumor is inside the brainstem and may change the shape and intensity properties of the brainstem (top-row), in other brain cancer cases, tumors do not affect the brainstem or other OARs properties.
Figure 5 . 3 :
53 Figure 5.3: Intensity profiles of some OARs for a randomly selected patient.
Figure 5 . 4 :
54 Figure 5.4: Pie charts representing mean manual segmentation times for OARs.
106 Chapter 5 .
1065 Materials and Methods
Figure 5 . 6 :
56 Figure 5.6: ROC space sub-division to evaluate our classifier performance.
HD(X, Y) = \max\bigl( h(X, Y), h(Y, X) \bigr)    (5.7)

where h(X, Y) = \max_{x \in X} \min_{y \in Y} \| x - y \|    (5.8)

and \|\cdot\| is some underlying norm on the points of X and Y, such as the L_2 (Euclidean) norm.
Figure 5 . 7 :
57 Figure 5.7: A schematic figure explaining the concept of Hausdorff distance with two segmentation proposals, X and Y, for a certain structure.
Figure 6 . 1 :
61 Figure 6.1: Scatter plots of samples from the same subject showing different features sets representations for the optic chiasm. Red crosses and blue circles indicate optic chiasm and non optic chiasm samples, respectively. While the upper row plots samples non-normalized, the row on the bottom represent the distribution of normalized features.
Figure 6 . 2 :
62 Figure 6.2: Parameter setting for SVM with different C and lambda values for the brainstem case.
Figure 6 . 3 :
63 Figure 6.3: Deep network architecture constructed by stacking denoising autoencoders in the proposed approach.
units: 50 -25 -10 Hidden units: 100 -50 -25 Hidden units: 200 -100 -50 Hidden units: 400 -200 -100 Hidden units: 50 -25 -10 -5 Hidden units: 100 -50 -25 -10 Hidden units: 200 -100 -50 -25 Hidden units: 400 -200 -100 -50
Figure 6 . 4 :
64 Figure 6.4: Evolution of batch errors during training for different configurations of the deep architecture. Epochs refers to the number of passes through the data.
a patient with different balance relations Balance relation between negative and positive samples
Figure 6 . 5 :
65 Figure 6.5: DSC values for a given patient with different balance relations between the number of positives and negative samples used to train the classifier.
Figure 6 . 6 :
66 Figure 6.6: Segmentation DSC results for the automatic contours with different settings for organs of group A.
Figure 6 . 9 : 134 Chapter 6 .Figure 6 . 10 :
691346610 Figure 6.9: ROC sub-division analysis for the four automatic approaches for organs of group A.
Figure 6 . 14 :
614 Figure 6.14: Visual examples of manual brainstem delineation and their comparison with reference and automatic segmentations.
6. 2 . Results 139 OFigure 6 . 15 :
2139615 Figure 6.15: Segmentation DSC results for the automatic contours with different settings for organs of group B.
Figure 6 . 16 :
616 Figure 6.16: Segmentation HD results for the automatic contours with different settings for organs of group B. be of 11.47 (± 8.12), 10.11 (± 3.89), 4.67 (± 1.45), 4.29 (± 1.97) and 4.95 (± 1.02) mm for SVM 1 , and 11.18 (± 9.22), 7.46 (± 3.23), 5.24 (± 1.99), 4.20 (± 1.38) and 5.86 (± 1.31) mm, respectively, for SVM AE-F V . Employing SDAE as classifier instead of SVM in a classical features setting decreased mean HD in most cases. Incorporation of either augmented or textural features in the SDAE based classifier improved HD values with respect to classical features.While in some organs mean HD values were lower for augmented features based classifiers, for some other organs textural features set achieved the lowest mean HD values. Nevertheless, the combination of both features sets into the AE-FV set led to the lowest mean HD values across all the structures. Mean HD values obtained with the proposed features set were 3.51 (± 0.87), 3.67 (± 0.67), 3.34 (± 1.09), 2.78 (± 0.76) and 3.29 (± 1.19), for left and right optic nerve, pituitary gland, pituitary stalk and chiasm, respectively.Paired repeated measures ANOVAs conducted on HD values (Table6.10) indicates that including proposed features in the SVM based classifier did not produce segmentations with significant differences with respect to classical configurations (p > 0.05). Employing SDAE as classifier with the classical features set did not report differences statistically significant in four out five structures. Only segmentations of the right optic nerve (p = 0.0098) showed significant different between SVM and SDAE when employing the classical features set. Nevertheless, differences between the two classifiers, i.e. SVM and SDAE, were significant when employing proposed features over all the structures (p < 0.05). Regarding the use of proposed features against classical
Figure 6 . 18 :
618 Figure 6.18: ROC sub-division analysis for the six automatic approaches for organs of group A.
Figure 6 . 19 :
619 Figure 6.19: Segmentation results produced by the proposed classification system when segmenting the right optic nerve (left), pituitary gland (middle) and chiasm (right), and comparison with the other automatic configurations.
Figure 6 . 21 :
621 Figure 6.21: Multi-group comparison of DSC results of manual and our proposed approach for the chiasm.
6. 2 . Results 151 OFigure 6 . 22 :
2151622 Figure 6.22: Hausdorff distance results of manual and our proposed approach for OARs of group B.
Figure 6 . 24 :
624 Figure 6.24: Volume differences results of manual and our proposed approach for OARs of group B.
Figure 6 . 26 :
626 Figure 6.26: Segmentation results produced by the proposed classification system and comparison with the manual annotations.
Figure 6 . 27 :
627 Figure 6.27: Best and worst optic nerves segmentations generated by the proposed deep learning approach. While best segmentations are shown on the top, worst segmentations cases are shown on the bottom.
Figure 6 . 28 :
628 Figure 6.28: Axial slice of brainstem segmentation with tumors causing properties changes in the inner texture for two patients (left). Corresponding binary masks (middle) to generate the geodesic distance transform map (right) are also shown.
Figure A. 1 :
1 Figure A.1: Comparative schemes of biological and artificial neural system.
Figure A. 2 :
2 Figure A.2: Appearance of a biological(a) and an artificial(b) neuron.
Figure A. 3 :
3 Figure A.3: Common activation functions. A.0.1.3 Multilayer feed-forward networks
Figure A. 4 :
4 Figure A.4: A simple MLP with 3 inputs, 1 output, and 1 hidden layer containing 3 hidden units.
Figure B. 1 :
1 Figure B.1: The max-margin approach favored by Support Vector Machines.
Figure B. 2 :
2 Figure B.2: Linear separating hyperplanes for the binary separable case. Circled exampled that lie on the hyperplane are called support vectors.
The derived KKT conditions are defined as

\partial L(w^*, b^*, \xi^*, \alpha^*, \mu^*) / \partial w_v = w_v - \sum_i \alpha_i y_i x_{iv} = 0    (B.24)
\partial L(w^*, b^*, \xi^*, \alpha^*, \mu^*) / \partial b = \sum_i \alpha_i y_i = 0    (B.25)
\partial L(w^*, b^*, \xi^*, \alpha^*, \mu^*) / \partial \xi_i = C - \alpha_i - \mu_i = 0    (B.26)
y_i (x_i \cdot w + b) - 1 + \xi_i \ge 0    (B.27)
\xi_i \ge 0    (B.28)
\alpha_i \ge 0    (B.29)
\mu_i \ge 0    (B.30)
\alpha_i \bigl[ y_i (x_i \cdot w + b) - 1 + \xi_i \bigr] = 0    (B.31)
\mu_i \xi_i = 0    (B.32)

Conveniently converting to the dual problem and using the KKT equations, the SVM problem can then be solved efficiently, and it becomes readily kernelized. The dual formulation is then defined as

\text{maximize } L_D = \sum_{i=1}^{N} \alpha_i - \frac{1}{2} \sum_{ij} \alpha_i \alpha_j y_i y_j \, x_i \cdot x_j    (B.33)

B.0.2.5 Kernel selection and parameters tuning
Figure B. 5 :
5 Figure B.5: Decision boundaries for a banana shaped dataset generated by SVM with a RBF kernel for different C and γ values.
2. 1
1 Healthy brain(left) compared to brain tumor (in blue,right). . 6 2.2 A patient before and after of having being treated with rdiation therapy. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7 2.3 Conventional RT and CyberKnife SRS treatment plans for a patient who received 40 Gy in 15 fractions to FLAIR for the first course followed an SRS boost to T1 Enhancement at a total dose of 24 Gy delivered in 3 fractions. Shown are the (A) axial, (B) sagittal, and (C) coronal views of the EBRT treatment plans and the (D) axial, (E) sagittal, and (F) coronal views of the CyberKnife SRS treatment plans. . . . . . . . . . . . . . . 8 2.4 A patient being positioned for SRS treatment (Gamma-Knife). 10 2.5 Gamma Knife radiation helmet. . . . . . . . . . . . . . . . . . 10 2.6 Flowchart of a common radiotherapy treatment. . . . . . . . . 11 2.7 Transversal, coronal and sagittal dose distribution and DVH information. Graphs: PTV (1), left eye (2), right eye (3), right optic nerve (4), left optic nerve (5), chiasma (6), brainstem (7), spinal cord (8). . . . . . . . . . . . . . . . . . . . . . . . . . . 13 2.8 The principle of therapeutic ratio. Grey curve represents the TCP, and red curve the probability of complications. The total clinical dose is usually delivered in 2Gy fractions in EBRT. . 14 2.9 Organs at Risk commonly involved in brain tumor radiation treatment. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15 2.10 MRI modalities commonly employed in the RTP.From left to right: T1, T1-Gadolinium, T2 and FLAIR modalities. . . . . . 20 2.11 The selection of directions of MRI slices. . . . . . . . . . . . . 21 3.1 Image segmentation example. Original image (left) is segmented into four class regions (center ), and into two class regions (right). . . . . . . . . . . . . . . . . . . . . . . . . . . . 24 3.2 Partial volume effect caused by effects of finite voxel size when imaging a circle. The green area in the left image has value 10 and the white area has value 0. Imaging this circle with 9 voxels results in the right figure. . . . . . . . . . . . . . . . . 25 3.3 Typical atlas-based segmentation workflow where multiple atlases are employed. . . . . . . . . . . . . . . . . . . . . . . . . 28
Bailleul et al. [61] Bernard et al. [112] Olveres et al. [58] Pitiot et al. [104] Rao et al. [107] Tu et al. [63] Zhao et al. [105] Babalola et al. [62] Babalola et al. [109] Brejl et al. [101] Cootes et al. [59] Duchesne et al. [60] Hu et al. [48] Hu et al. [64]
Method Ref Structures Image Modality
Single Kwak et al. [53] Hippocampus MR T1
Atlas-based Wu et al. [66] Multi-structure MR T1
Aljabar et al. [67] Multi-structure MR T1
Artaechevarria et al. [44] Multi-structure MR
Asman et al. [69] Multi-structure MR
Bondiau et al. [5] Brainstem MR T1,T2
Cardoso et al. [52] Hippocampus MR T1
Collins et al. [45] Hippocampus, amygdala MR T1
Multiple Coupe et al. [46] Multi-structure MR T1
Atlas-based Heckemann et al. [65] Multi-structure MR T1
Khan et al. [49] Hippocampus MR T1
Kim et al. [50] Hippocampus MR 7T
Lotjonen et al. [68] Multi-structure MR T1
Panda et al. [81] Optic nerves, eye globes CT
Wang et al. [54] Hippocampus MR
Zarpalas et al. [55] Hippocampus MR T1
Multi-structure MR
Subthalamic nucleus MR T1
Active Shape models Mid Brain Multi-structure Multi-structure MR T1, SWI MR T1 MR
Multi-structure MR T1
Multi-structure MR
Multi-structure MR T1
Multi-structure MR T1
Active Corpus callosum, cerebellum MR
Appearance Multi-structure MR
models Medial temporal lobe MR T1
Hippocampus, amygdala MR T1, T2
Medial temporal lobe MR T1
Parametric deformable models Lee et al. [120] Mcinerney et al. [119] Mcintosh et al. [56] Szekely et al. [70] Brainstem,cerebellum Corpus callosum,cerebellum Corpus callosum Multi-structure MR MR MR MR
Bekes et al. [124] Eyeballs,lens,nerves CT
Duncan et al. [123] Hippocampus MR T1
Ghanei et al. [40] Hippocampus MR
Geometric Leventon et al. [57] Corpus callosum MR
deformable Shen et al. [41] Hippocampus MR T1
models Tsai et al. [71] Multi-structure MR
Wang et al. [122] Multi-structure MR
Yang et al. [72] Multi-structure MR
Zhao et al. [51] Hippocampus MR
Hult et al. [42] Hippocampus MR T1,T2
Machine Learning. ANN Magnotta et al. [73] Moghaddam et al. [128] Pierson et al. [74] Powell et al. [76] Multi-structure Putamen,caudate, thalamus Cerebellar subregions Multi-structure MR T1,T2 MR T1 MR T1,T2 MR T1,T2,PD
Spinks et al. [126] Thalamus,mediodorsal nucleus MR T1,T2,PD
Golland et al. [75] Hippocampus,amygdala,corpus cal- MR
Machine losum
Learning. Morra et al. [43] Hippocampus MR T1
SVM Morra et al. [47] Multi-structure MR T1
Powell et al. [76] Multi-structure MR T1,T2,PD
Table 3 .
3 1: Summary of subcortical structures segmentation methods.
Table 3 .
3 3: Summary of experimental set up of referenced segmentation works. Part II
3.8. Discussion
Table 3 .
3
4: Summary of benefits, assumptions and limitations of different segmentation methods for brain structures.
Table 4 .
4
Features Tumor Type
Size Homogeneity Shape
1 Small Yes Circular Benign
2 Medium Yes Irregular Malignant
3 Medium No Irregular Malignant
4 Large Yes Circular Benign
5 Small No Irregular Malignant
6 Large No Irregular Malignant
7 Medium No Circular Malignant
8 Medium Yes Circular Benign
9 Small Yes Irregular Benign
10 Small No Circular Malignant
1: Brain tumor classification table. Some tumor properties are used as features to train the classifier.
Table 4 .
4
Features vector elements
#1 #2 #3 #4 #5 #6 #7 #8
Training (Original) 178 205 189 35 12 48 255 241
Training (Scaled) 0.6980 0.8039 0.7412 0.1373 0.0471 0.1882 1.0000 0.9451
Testing (Original) 201 198 55 33 45 124 89 174
Testing (Erroneously Scaled) 1.0000 0.9821 0.1310 0 0.0714 0.5417 0.3333 0.8393
Testing (Correctly Scaled) 0.7882 0.7765 0.2157 0.1294 0.1765 0.4863 0.3490 0.6824
3: Scale example showing a bad and a good example of features scaling
.2.
Image characteristics
Intenstiy Volume
Mean Max Min Size (cm 3 )
Brainstem 280.62 (± 277.23) 745.67 (± 632.45) 36.56 (± 34.50) 25.79 (± 2.85)
Eye (Right) 117.74 (± 67.04) 542.39 (± 305.95) 3.31 (± 2.78) 5.41 (± 0.73)
Eye (Left) 118.68 (± 78.49) 539.92 (± 292.17) 3.69 (± 3.24) 5.43 (± 0.78)
Lens (Right) 438.38 (± 272.19) 619.93 (± 381.16) 233.71 (± 140.01) 0.15 (± 0.04)
Lens (Left) 438.56 (± 289.95) 640.31 (± 434.34) 257.43 (± 159.29) 0.14 (± 0.06)
Optic nerve (Right) 480.81 (± 286.01) 959.14 (± 539.85) 86.14 (± 91.01) 0.81 (± 0.18)
Optic nerve (Left) 484.89 (± 294.52) 994.79 (± 626.46) 94.57 (± 125.21) 0.82 (± 0.25)
Optic chiasm 497.25 (± 324.11) 734.53 (± 523.55) 262.50 (± 168.85) 0.23 (± 0.05)
Pituitary Gland 748.85 (± 478.79) 1210.15 (± 736.49) 313.21 (± 271.75) 0.53 (± 0.14)
Pituitary Stalk 568.45 (± 414.06) 939.71 (± 665.71) 295.93 (± 237.83) 0.08 (± 0.02)
Table 5 .
5 2: Intensity and volume characteristics of images contained in the dataset used for this work.
.3.
Manual segmentation time (minutes)
Brainstem 20 12 (± 10 48 )
Eyes 6 51 (± 1 42 )
Lenses 2 17 (± 0 51 )
Optic nerves 7 34 (± 2 53 )
Optic chiasm 1 52 (± 0 38 )
Pituitary Gland 3 8 (± 0 55 )
Pituitary Stalk 2 41 (± 0 49 )
Table 5 . 3
53
: Mean manual segmentation times per observer and organ.
Table 6 .
6 2: Features sets employed for the different groups.
Features set name Features included Vector size
Group A
Intensity of voxel under examination
Intensity of voxel neighborhood (3D)
Classical Intensity of 8 voxels along maximum gradient direction 39
Probability voxel value
Spherical Coordinates
Classical (except 3D voxel neighborhood)
Enhanced Geodesic Distance Transform Map 3D-Local Binary Texture Pattern 19
Gradient value of voxel
Group B
Intensity of voxel under examination
Intensity of voxel neighborhood (3D)
Classical Intensity of 8 voxels along maximum gradient direction
Probability voxel value
Spherical Coordinates
Classical
Augmented Gradient Patch in 2D (Horizontal and vertical magnitudes and orientation)
Contextual features
Classical
Mean
Variance
Textural Entropy Energy
Kurtosis
Skewness
Wavelet patch decomposition
Classical
AE-FV Augmented
Textural
6.1.1.3 Features Extraction
MR T1 sequence was the only image modality used. Intensity information
around neighboring region of the voxel under examination was extracted by
employing three-dimensional patches in groups A and B. However, patch size in group A was 3x3x3 whilst in group B it was 5x5x5. The reason for this difference is that OARs included in group A present a more homogenized texture than those included in group B. Furthermore, we experimented with both sizes in OARs of group A, and no significant improvement was found.
Table 6 .
6 3: Summary of employed SDAE parameters.Mini-batch learning was followed during both unsupervised pre-training of DAEs and supervised fine-tuning of the entire network. Batch sizes were set in both configurations to 200,500,1000 and 2000, from the top to the bottom layers, respectively. Table6.3 summarizes parameter values employed in the proposed SDAE.
SDAE parameters
Network Structure Group A 100 -50 -25 -10 Group B 400-200 -100 -50
Number of classes 2
Corruption Level 0.5
Batch sizes (per layer) 200 -500 -1000 -2000
Number of epochs 500
Learning rate 0.1
Activation function (Unsupervised learning) Sigmoid
Activation function (Supervised learning) Sigmoid
Output Logistic
Table 6 .
6 4: Paired ANOVA tests for the DSC between the automatic approaches to segment OARs from group A.Figure 6.7: HD results for the automatic contours with different settings for organs of group A. of 8 mm. For the lenses, however, maximum HD values were below 5.5 mm in all configurations.
Paired ANOVA (DSC)
Left Eye Right Eye
SVM 1 SVM 2 SDAE 1 SDAE 2 SVM 1 SVM 2 SDAE 1 SDAE 2
SVM 1 1 0.3345 0.0265 0.0396 1 0.9640 0.1094 0.1955
SVM 2 - 1 0.1626 0.2061 - 1 0.0773 0.1574
SDAE 1 - - 1 0.9818 - - 1 0.7560
SDAE 2 - - - 1 - - - 1
Left Lens Right Lens
SVM 1 SVM 2 SDAE 1 SDAE 2 SVM 1 SVM 2 SDAE 1 SDAE 2
SVM 1 1 0.5351 0.2543 0.0736 1 0.9402 0.1148 0.0748
SVM 2 - 1 0.5315 0.2210 - 1 0.1277 0.0837
SDAE 1 - - 1 0.6442 - - 1 0.8636
SDAE 2 - - - 1 - - - 1
Brainstem
SVM 1 SVM 2 SDAE 1 SDAE 2
SVM 1 1 0.5173 0.0275 5.7583x10 -5
SVM 2 - 1 0.3872 0.0323
SDAE 1 - - 1 0.0181
SDAE 2 - - - 1
Hausdoff distances. Figure
6
.7 presents the values of Hausdorff distances obtained for the four configurations. SVM based systems achieved the highest overall mean HD values among the four groups, with values of 5.96 (± 1.11) and 5.23 mm (± 1.02) for SVM 1 and SVM 2 , respectively. On the other hand, these values decreased when employing SDAE as classifier, with mean HD of 4.29 (± 1.09) and 4.07 mm(± 0.98), for SDAE 1 and SDAE 2 , respectively. Having a look to the HD distributions on individual organs on figure 6.7, it can be observed that both SVM settings achieved the highest mean HD values across all the OARs. Although the addition of proposed features into the SVM framework improved mean HD values, it was not sufficient to outperform SDAE based classifiers. Concerning the use of SDAE, mean HD achieved by both settings were very similar when segmenting both eyes and lenses. Across these structures, mean HD values for SDAE 1 were 5.organs, overall maximum distances were around 10-14 mm when employing SVM in the classification scheme. On the other hand, these maximum values decreased to almost half in SDAE settings, not typically exceeding the barrier
Table 6 .
6 1 and SDAE 2 . Analyzing structures individually, mean relative volume differences obtained by SVM 1 , in absolute values, were: 15.43 (± 8.40), 36.77 (± 28.93), 26.33 (± 15.03), 54.60 (± 39.52) and 43.74% (± 30.95) for the brainstem, left eye, right eye, left lens and right lens, respectively. When adding the proposed features into the SVM-based scheme these values became: 7.76 (± 4.99), 14.54 (± 8.51), 20.04 (± 21.02), 55.32 (± 38.45) and 42.79% (± 28.34). In the same order, SDAE 1 obtained the following relative volume differences: 3.97 (± 2.03), 8.17 (± 6.09), 10.72 (± 6.15), 29.52 (± 26.25) and 20.12% (± 12.52). At last, reported differences for the proposed scheme (SDAE 2 ) were: 3.01 (± 1.23), 10.02 (± 6.01), 10.07 (± 7.74), 28.05 (± 24.14) and 17.57% (± 11.69). 6: Paired ANOVA tests for volume differences between the automatic approaches to segment OARs from group A.
Relative Volume differences
120
SVM 1
SVM 2
Relative Volume differences (%) 20 40 60 80 100 SDAE 1 SDAE 2
0 Brainstem Eye L Eye R Lens L Lens R
Figure 6.8: Vol diff results for the automatic contours with different settings
for organs of group A.
Differences between automatic segmentations in terms of volume were sta-
Table 6 .
6
7: Sensitivity and specificity mean values for the four automatic configurations across the OARs of group A.
, SDAE Augmented , SDAE T extural and SDAE AE-F V , respectively. Looking at each structure individually, it can be observed that including the set of proposed features into the SVM system decreased mean HD values with respect to the classical features set when segmenting both optic nerves. For the rest of the organs, however, inclusion of proposed features did not particularly improve HD values. Mean HD values for left optic nerve, right optic nerve, pituitary gland, pituitary stalk and chiasm, were reported to
80 (± 5.47), 4.74 (± 4.83), 4.69 (± 4.70) and 3.32 141 O. Ner. (L) O. Ner. (R) Pit. gland Pit. stalk Chiasm Hausdorff distances SVM 1 SVM AE-FV SDAE 1 (± 0.96) mm for SDAE 1 6.2. Results 0 2 4 16 18 6 8 10 12 14 SDAE Augmented SDAE Textural Hausdorff distance (mm) SDAE AE-FV
Figure 6.17: Relative volume differences results for the automatic contours with different settings for organs of group B. (± 11.62), 31.14 (± 23.82), 29.13 (± 20.41) and 32.37 (± 27.58). When incorporating augmented features into the features set, mean rVD were: 29.13 (± 18.08), 18.67 (± 12.42), 28.95 (± 23.42), 23.38 (± 8.82) and 20.85 (± 14.77). If we employed textural features instead, mean values of rVD were: 19.68 (± 9.56), 15.68 (± 11.06), 31.89 (± 18.29), 24.46 (± 14.26) and 22.14 (± 15.34).
6.2. Results 143
Relative Volume differences
160 SVM 1 SVM AE-FV
Relative Volume differences (%) 40 60 80 100 120 140 SDAE 1 SDAE Augmented SDAE Textural SDAE AE-FV
20
0 O. Ner. (L) O. Ner. (R) Pit. gland Pit. stalk Chiasm
Finally, our proposed system achieved the following mean rVD values: 16.85
(± 13.39), 16.27 (± 11.09), 18.09 (± 11.29), 22.51 (± 7.55) and 12.48 (±
7.69).
1 mean rVD were 72.58 (± 22.86), 72.14 (± 42.59), 53.37
(± 48.89), 23.44 (± 15.16) and 83.24 % (± 84.49). For SVM including the
proposed AE-FV set: 52.86(± 15.46), 41.48 (± 14.69), 71.49 (± 51.68), 38.10 (± 32.55) and 79.28 (± 43.64). First of SDAE configurations, which employed classical features, obtained the following mean values: 22.06 (± 13.92), 15.10
Table 6 .
6 [START_REF] Combs | Stereotactic radiosurgery (SRS)[END_REF]: Sensitivity and specificity mean values for the six automatic configurations across the OARs of group B.
Chapter 6. Experiments and Results
Table 6 .
6 Figure 6.20: DSC results of manual and our proposed approach for the OARs of group B. 13: P-values of the ANOVA for Dice similarity coefficient results of the OARs of group B.
6.2. Results
Table 6 .
6 and 0.85. By 14: P-values of the ANOVA for Hausdorff distances results of the OARs of group B.
ANOVA analysis (HD)
Within-subjects ANOVA Paired ANOVA
All Obs 1 Obs 2 Obs 3
Optic nerve L 0.0005 SDAE AE-F V 0.0001 0.0938 0.3926
Optic nerve R 0.0014 SDAE AE-F V 0.2015 0.6217 0.0261
Pituitary gland 0.0002 SDAE AE-F V 0.0001 0.0077 0.2768
Pituitary stalk 0.0040 SDAE AE-F V 0.0004 0.0659 0.0418
Chiasm 0.0064 SDAE AE-F V 0.0142 0.5359 0.8895
Table 6 .
6 15: P-values of the ANOVA for volume differences results of the OARs of group B.
ANOVA analysis (Vol Diff )
Within-subjects ANOVA Paired ANOVA
All Obs 1 Obs 2 Obs 3
Optic nerve L 0.0004 SDAE AE-F V 0.0001 0.0013 0.0763
Optic nerve R 0.0057 SDAE AE-F V 0.0199 0.0069 0.0519
Pituitary gland 0.3519 SDAE AE-F V 0.4769 0.1794 0.8961
Pituitary stalk 0.0006 SDAE AE-F V 0.0024 0.1782 0.1295
Chiasm 0.5287 SDAE AE-F V 0.9985 0.8655 0.1827
Table 6 .
6 16: Segmentation times.
It required a pre-registration step which took 50 min.
120-180 min. 1 1 min 2 5 min. 3 30 min. 1 20 min. - - 21 sec. 4 7-8 min 5 0.1793 6 1 sec. - 12 sec. 4 7-8 min 5 0.0378 sec. 6 1 sec. 0.0212 6 7-8 min 5 0.0748 sec. 6 1 sec. - 10 sec. 4 7-8 min 5 - 0.1315 sec. 6 1 sec. - - 7-8 min 5 20 min. - 0.2572 6
- - - - 96.67 - - - 96.89 91.43 98.16 - - 98.36 98.94 98.27 75.22 82.91 82.69 93.50 - - 93.27 - 86.11 99.06 - - 76.86 - - 89.56
- - - - 76.17 - - - 79.40 90.56 97.71 - - 70.33 90.92 97.81 87.23 23.68 84.22 65.20 - - 30.36 - 83.94 76.79 - - 37.91 - - 81.89
MR Atlas 0.94 4.83 3.98 Statistical (PAM) 0.88 6.02 6.80 Statistical (BAM) 0.89 6.37 7.80 Expectation-minimization 0.83 7.74 21.10 MR Atlas --- MR Atlas 0.86 -- CT Atlas 0.86 8.00 - CT Atlas 0.84 3.4 - MR Atlas 0.85 --14.8 MR Deep Learning 0.92 5.87 3.10 CT Geometrical --- MR Atlas 0.84 -- CT Atlas 0.59 5.9 - MR Atlas 0.82 -26.26 MR Deep Learning 0.89 5.15 10.08 CT Geometrical --- MR Deep Learning 0.76 2.17 22.81 MR Atlas 0.3 --36.8 MR Deep Learning 0.76 3.33 18.10 CT Geometrical MR Atlas CT Atlas MR Atlas CT Atlas MR Deep Learning MR -0.37 0.57 0.42 0.8 0.83 --5.80 3.10 -3.30 ----50.5 -12.48 CT Geometrical --- MR Atlas 0.52 -- CT Atlas 0.77 3.75 - MR Atlas 0.38 -31.3 CT & MR Atlas 0.79 4.20 - CT Atlas 0.77 3.33 - MR Deep Learning 0.79 3.59 16.56 1 Set of brain structures 2 It required a pre-registration step which took 20 min. 5 Set of 6 OARs 6 It requires between 1-6 seconds to extract features, depending on the structure.
method Bekes [124] Deeley [6] Hoang [192] Isambert [78] Our method Bekes [124] Our method Isambert [78] Our method Bekes [124] Deeley [6] Hoang [192] Isambert [78] Noble [79] Our method Bekes [124] Deeley [6] Harrigan [193] Isambert [78] Noble [79] Panda [81] Our method
Eyes Lenses Pituitary gland Optic chiasm Optic nerves
3
It required a pre-registration step which took 3 min. 4
Table 6 .
6 17: Summary of segmentation results of related works.
6.3. Discussion
Table 6 .
6 18: Experimental setting-up of related works.
patients N.A.
Hoang [192] 100 patients 2 experts ( Expert 1 contoured 43 images. Expert 2 contoured 57 images.
Isambert [78] 11 patients 2 experts The two experts made the contours together.
Noble [79] 4 Images (Model training) 10 Images (Parameter training) 10 Images (Testing) 1 observer A student made the contours. Then, corrected by 2 experts.
Panda [81] 30 patients 1 expert
to transfer human knowledge to the machine. These machines "learn" from a set of data. Thus, for example, in the absence of visible boundaries, the classifier uses the experts' knowledge transferred to the system to carry out this task.
Acknowledgments
Chapter 8
Own Publications
Chapter 9
French Summary
Nowadays, automatic segmentation techniques are rarely used in clinical routine. When they are, they rely on prior image registration steps. These techniques exploit anatomical information annotated beforehand by experts on a "typical patient". These annotated data are commonly called an "atlas" and are deformed to conform to the patient's morphology in order to extract the contours by matching the regions of interest. The quality of the obtained contours depends directly on the quality of the registration algorithm. Nevertheless, these registration techniques incorporate regularization models of the deformation field whose parameters remain difficult to tune and whose quality is difficult to assess. The integration of delineation assistance tools therefore remains an important challenge for improving clinical practice.
The main objective of this thesis is to provide medical specialists (radiation oncologists, neurosurgeons, radiologists) with automatic tools to segment the organs at risk of patients treated for brain tumors by radiosurgery or radiotherapy. To achieve this objective, the main contributions of this thesis are presented along two main axes. First, we consider one of the latest topics in artificial intelligence to solve the segmentation problem, namely deep learning. This set of techniques presents advantages over classical statistical learning methods (machine learning). The second axis is dedicated to the study of the image features used for segmentation (mainly textures and contextual information of MR images). These features, absent from classical machine learning methods for organ-at-risk segmentation, lead to significant improvements in segmentation performance. We therefore propose the inclusion of these features in a deep neural network algorithm to segment the organs at risk of the brain.
In this work we demonstrate the possibility of using such a classification system based on deep learning techniques for this particular problem. Finally, the developed methodology leads to increased performance in terms of both accuracy and efficiency.
Appendix A
Artificial Neural Networks
A.0.1 Artifical Neural Networks
Artificial Neural Networks (ANN) based methods provide a robust approach for approximating real-valued, discrete-valued or vector-valued targeted functions. They are massively parallel computing systems consisting of an extremely large number of simple processors with many interconnections between them. For certain types of problems, ANN are among the most effective learning methods employed during the last decades [REF?].
A.0.1.1 Biological motivation
The observation that biological learning systems are built of very complex networks of interconnected neurons inspired the study and development of ANN based systems. Thus, in a fuzzy analogy, ANN are composed by a densely interconnected set of simple units. Each of these units takes a number of real-valued inputs and produces a single real-valued output. Inputs and outputs at each neuron may represent the outputs and inputs of other units, respectively.
To develop the basis of this similarity, let us consider some facts from neurobiology. A neuron is a special biological cell that has the ability to process information. It is estimated that the human brain contains a densely interconnected network of nearly 10^{11} neurons, each connected, on average, to 10^3 to 10^4 other neurons. Neuron activity is commonly inhibited or excited through connections to other neurons, and neurons communicate through very short trains of pulses, typically lasting milliseconds. Although the fastest neuron switching times are estimated to be on the order of 10^{-3} seconds, this is much slower than computer switching times, which are on the order of 10^{-10} seconds. However, complex decisions are performed by humans surprisingly quickly; for instance, visually recognizing a familiar person, such as one's mother or father, requires approximately 10^{-1} seconds. Taking biological switching times into account, this implies that the sequence of neurons excited during this 10^{-1} second interval cannot be longer than a few hundred serial stages, an observation that led many researchers in the early days of ANN to speculate that the information processing of biological systems must rely on highly parallel operations distributed over many neurons.

Given a fixed setting of the parameters W and b, the neural network defines a hypothesis h_{W,b}(x) (equation A.4), whose output is a real number. In particular, the computation of our neural network is represented by:
a_1^{(2)} = f\bigl(W_{11}^{(1)} x_1 + W_{12}^{(1)} x_2 + W_{13}^{(1)} x_3 + b_1^{(1)}\bigr)
a_2^{(2)} = f\bigl(W_{21}^{(1)} x_1 + W_{22}^{(1)} x_2 + W_{23}^{(1)} x_3 + b_2^{(1)}\bigr)
a_3^{(2)} = f\bigl(W_{31}^{(1)} x_1 + W_{32}^{(1)} x_2 + W_{33}^{(1)} x_3 + b_3^{(1)}\bigr)
h_{W,b}(x) = a_1^{(3)} = f\bigl(W_{11}^{(2)} a_1^{(2)} + W_{12}^{(2)} a_2^{(2)} + W_{13}^{(2)} a_3^{(2)} + b_1^{(2)}\bigr)

Thus, in this network we have W^{(1)} \in \mathbb{R}^{3 \times 3} and W^{(2)} \in \mathbb{R}^{1 \times 3}.
If we now let z_i^{(l)} denote the total weighted sum of the inputs to unit i in layer l, including the bias term (for instance, z_i^{(2)} = \sum_{j=1}^{3} W_{ij}^{(1)} x_j + b_i^{(1)}), the activation of unit i in layer l can be written in a more compact notation as a_i^{(l)} = f(z_i^{(l)}).
Extending the activation function f(\cdot) to apply to vectors element-wise (i.e., f([z_1, z_2, z_3]) = [f(z_1), f(z_2), f(z_3)]) allows equations (A.5-A.8) to be reformulated as z^{(2)} = W^{(1)} x + b^{(1)} (A.11), followed by a^{(2)} = f(z^{(2)}), and so on for the subsequent layers.
More generally, in order to calculate the activations of layer l+1, a^{(l+1)}, we need to compute, for each layer l, starting with l = 1 and knowing that a^{(1)} = x,

z^{(l+1)} = W^{(l)} a^{(l)} + b^{(l)}, \qquad a^{(l+1)} = f(z^{(l+1)})
Backpropagation is a supervised learning method often used to train feed-forward neural networks. It computes the factors by which each weight should be updated in order to minimize the error between the prediction and the ground truth, given a set of weights W and bias terms b. It proceeds as follows:
• Perform a feed-forward pass, that is, calculate the final activations a^{(n)}, where n is the number of layers. This gives the vector of predictions produced by the current parameters. Moreover, store all the intermediate z^{(l)} and a^{(l)} of each layer l for later use.
• For each final activation a_i^{(n)}, compute an error term \delta_i^{(n)} = -(y_i - a_i^{(n)}) \, f'(z_i^{(n)}). This factor indicates how different the prediction of the model is from the ground truth.
• Propagate the penalization term to the previous layers by calculating, for each node i in layer l (except the first layer, because the input does not need to be corrected), \delta_i^{(l)} = \bigl( \sum_j W_{ji}^{(l)} \delta_j^{(l+1)} \bigr) f'(z_i^{(l)}).
• Finally, compute the partial derivatives \partial J / \partial W_{ij}^{(l)} = a_j^{(l)} \delta_i^{(l+1)} and \partial J / \partial b_i^{(l)} = \delta_i^{(l+1)}.
Now we can calculate \nabla W and \nabla b with the formulas of the previous section (equations A.19 and A.20, respectively). These partial derivatives are then used to update the old weights with some optimization technique such as gradient descent, conjugate gradient or the L-BFGS algorithm [Liu and Nocedal, On the limited memory BFGS method for large scale optimization].
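A minimal NumPy sketch of the procedure above is given below; it assumes a sigmoid network and plain gradient descent, computes the gradients of the single-example squared-error cost (without the weight-decay term, which would simply add \lambda W^{(l)}), and is not the thesis implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def backprop(x, y, W, b):
    """Gradients of the squared-error cost for one example (x, y)."""
    # feed-forward pass, storing activations a^(l)
    activations, a = [x], x
    for W_l, b_l in zip(W, b):
        a = sigmoid(W_l @ a + b_l)
        activations.append(a)
    # output-layer error term: delta^(n) = -(y - a^(n)) * f'(z^(n)), with f'(z) = a(1 - a)
    delta = -(y - activations[-1]) * activations[-1] * (1.0 - activations[-1])
    grads_W, grads_b = [None] * len(W), [None] * len(b)
    for l in reversed(range(len(W))):
        grads_W[l] = np.outer(delta, activations[l])   # dJ/dW^(l) = delta^(l+1) a^(l)^T
        grads_b[l] = delta                              # dJ/db^(l) = delta^(l+1)
        if l > 0:
            # propagate the error: delta^(l) = (W^(l)^T delta^(l+1)) * f'(z^(l))
            delta = (W[l].T @ delta) * activations[l] * (1.0 - activations[l])
    return grads_W, grads_b

# one gradient-descent update with learning rate 0.1 on the 3-3-1 toy network
rng = np.random.default_rng(0)
W = [rng.normal(0, 0.01, (3, 3)), rng.normal(0, 0.01, (1, 3))]
b = [np.zeros(3), np.zeros(1)]
gW, gb = backprop(np.array([0.2, 0.5, 0.1]), np.array([1.0]), W, b)
W = [W_l - 0.1 * g for W_l, g in zip(W, gW)]
b = [b_l - 0.1 * g for b_l, g in zip(b, gb)]
```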
Support Vector Machines
B.0.2 Support Vector Machines
Another widely employed machine learning system, which also represents a state-of-the-art classifier, is the Support Vector Machine (SVM). It was originally proposed for binary classification by Cortes and Vapnik [Cortes and Vapnik, Support-vector networks; Vapnik, Statistical learning theory]. In contrast with other machine learning approaches such as artificial neural networks, which aim at reducing the empirical risk, SVM implements structural risk minimization (SRM), which minimizes an upper bound of the generalization error.
Support vector machines and their variants and extensions, often called kernel-based methods, have been studied extensively and applied to a wide spectrum of pattern classification and function approximation problems. The main idea behind SVM is to find, among all possible hyperplanes, the largest-margin hyperplane that separates two classes. The minimal distance from the separating hyperplane to the closest training example is called the margin, so the optimal hyperplane is the one providing the maximal margin, which represents the largest separation between the classes: it is the hyperplane such that the distance to the closest point of each of the two groups is as large as possible. The training samples that lie on the margin are referred to as support vectors and are, conceptually, the most difficult data points to classify. Support vectors therefore define the location of the separating hyperplane, being located at the boundary of their respective classes (see Figures B.1 and B.2). In the binary classification setting, let (x_1, y_1), ..., (x_n, y_n) be the training dataset, where the x_i are the feature vectors representing the instances and y_i \in \{-1, +1\} denote their labels. Support vector learning is the problem of finding a hyperplane that separates the positive examples from the negative examples with the largest margin; the margin of the hyperplane is defined as the shortest distance between the positive and negative instances that are closest to it. The intuition behind searching for a hyperplane with a large margin is that it should be more resistant to noise than a hyperplane with a smaller margin. Supposing that all the training data satisfy the constraints

x_i \cdot w + b \ge +1 \quad \text{for } y_i = +1    (B.1)
x_i \cdot w + b \le -1 \quad \text{for } y_i = -1    (B.2)

which can be combined into y_i (x_i \cdot w + b) - 1 \ge 0 \;\; \forall i (B.3), the training points for which the equality in B.1 or B.2 holds lie on the hyperplanes x_i \cdot w + b = \pm 1,
with normal w and perpendicular distance from the origin |1 - b| / \|w\| for the first case and |-1 - b| / \|w\| for the second case. Hence, the shortest distance from the separating hyperplane to the closest positive and negative examples is 1 / \|w\|, and the margin is simply twice this distance, 2 / \|w\|. Thus, the maximum-margin hyperplane that separates the two classes can be constructed by solving the following primal optimization problem,

\min_{w, b} \; \frac{1}{2} \|w\|^2

subject to the constraints given by Eq. B.3. In other words, the margin is maximized subject to the constraints that all training cases fall on either side of the support hyperplanes. The cases that lie on these hyperplanes are called support vectors, since they support the hyperplanes and hence determine the solution to the problem. The primal problem can be solved by a quadratic program. However, it is not ready to be kernelized, because it does not depend only on inner products between data vectors.
A switch to the Lagrangian formulation of the primal problem is made at this point for two main reasons. First, the constraints are easier to handle. Second, in this reformulation the training data only appear as dot products between vectors, an essential property for generalizing the procedure to the non-linear case. Hence, positive Lagrange multipliers \alpha_i, i = 1, ..., l, one for each of the inequality constraints in B.3, are introduced. The generalized Lagrangian function is then defined as

L(w, b, \alpha) = \frac{1}{2} \|w\|^2 - \sum_{i=1}^{l} \alpha_i \bigl[ y_i (x_i \cdot w + b) - 1 \bigr]    (B.7)
The goal is to minimize (B.7) with respect to w and b, and simultaneously to require that the derivatives of L(w, b, \alpha) with respect to all the \alpha_i vanish, subject to the constraints \alpha_i \ge 0.
B.0.2.1 Duality
Optimization problems can be converted to their dual form by differentiating the Lagrangian with regards to the original variables, solving the obtained results for those variables if possible, and substituting the resulting expression(s) back into the Lagrangian, thereby eliminating the variables.
Minimizing Eq. B.7 is a convex quadratic programming problem, because the objective function is itself convex and the points satisfying the constraints form a convex set. In these cases, and only then, minimization and maximization can be interchanged, allowing one to equivalently solve what is known as the "dual" problem. The dual problem consists of maximizing L(w, b, \alpha) subject to the constraints that the gradient of L(w, b, \alpha) with respect to w and b vanishes, and subject to \alpha_i \ge 0. Forcing the gradient of L(w, b, \alpha) with respect to w and b to vanish gives the conditions

w = \sum_i \alpha_i y_i x_i    (B.8)
\sum_i \alpha_i y_i = 0    (B.9)
Inserting these back into the Lagrangian formulation (Eq. B.7), the dual problem becomes

L_D = \sum_i \alpha_i - \frac{1}{2} \sum_{i,j} \alpha_i \alpha_j y_i y_j \, x_i \cdot x_j
which is subject to the constraints of B.9. The hyperplane whose weight vector w^* = \sum_{i=1}^{n} y_i \alpha_i x_i solves this quadratic optimization problem is the maximal-margin hyperplane, with geometric margin \lambda = 1 / \|w\|. The theory of duality guarantees that for convex problems the dual problem is concave, with a unique solution that corresponds to the unique solution of the primal problem. The important point of dualization is that the dual problem depends on the x_i only through the inner products x_i \cdot x_j. A clear advantage is that the dual problem therefore lends itself to kernelization, via the substitution x_i \cdot x_j \rightarrow k(x_i, x_j), while the primal problem does not.
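As a short worked check (standard algebra, not reproduced from the thesis text), substituting the conditions B.8 and B.9 back into the Lagrangian B.7 recovers the dual objective:

\begin{aligned}
L(w, b, \alpha) &= \tfrac{1}{2}\|w\|^2 - \sum_i \alpha_i\bigl[y_i(x_i \cdot w + b) - 1\bigr] \\
\text{with } w &= \sum_i \alpha_i y_i x_i, \qquad \sum_i \alpha_i y_i = 0, \\
\Rightarrow\; L_D &= \sum_i \alpha_i - \tfrac{1}{2}\sum_{i,j} \alpha_i \alpha_j y_i y_j \, x_i \cdot x_j .
\end{aligned}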
B.0.2.2 The Karush-Kuhn-Tucker Conditions
The Karush-Kuhn-Tucker (KKT) conditions [Karush; Kuhn and Tucker] establish the requirements that need to be satisfied by an optimal solution to a general optimization problem. Given the primal problem in B.7, the KKT conditions state that the solutions w^*, b^* and \alpha^* should satisfy the following conditions (where i runs from 1 to the number of training points and v from 1 to the dimension d of the data).
Moreover, maximal-margin hyperplanes that exactly separate the training data present a high sensitivity to outliers. To address these problems, the concept of a "soft margin" was introduced in SVM by Cortes and Vapnik [Cortes and Vapnik, Support-vector networks]. The basic idea is to relax the constraints in B.1 and B.2, only when necessary, via the introduction of a further cost in the primal objective function (Eq. B.6). This is done by introducing positive "slack variables" \xi_i in the constraints, which then become

x_i \cdot w + b \ge +1 - \xi_i \quad \text{for } y_i = +1,
x_i \cdot w + b \le -1 + \xi_i \quad \text{for } y_i = -1, \qquad \xi_i \ge 0.

The solution is again given by

w = \sum_{i=1}^{N_S} \alpha_i y_i x_i

where N_S is the number of support vectors. This solution is practically the same as in the linear case, but with an extra constraint on the multipliers \alpha_i, which now have an upper bound of C.
B.0.2.4 Non-linear Support Vector Machines
The power of SVMs can be fully realized when linear SVMs are extended to allow more general decision surfaces. One of the benefits of Lagrangian reformulation of the SVM problem is that the training data appears as a dot product between vectors (Section X). This advantage can be exploited by using the kernel trick, which allows SVM to form non-linear boundaries.
In the dual problem in B.33, the dot product can be replaced with the new kernel function K, giving

L_D = \sum_{i=1}^{N} \alpha_i - \frac{1}{2} \sum_{i,j} \alpha_i \alpha_j y_i y_j \, K(x_i, x_j)
subject to the conditions defined in B.34.
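To illustrate the substitution x_i \cdot x_j \rightarrow k(x_i, x_j) in practice, scikit-learn's SVC accepts a precomputed Gram matrix; this short sketch is illustrative only (synthetic data, arbitrary \gamma) and is unrelated to the software used in the thesis.

```python
import numpy as np
from sklearn.datasets import make_moons
from sklearn.svm import SVC

X, y = make_moons(n_samples=200, noise=0.2, random_state=0)

def rbf_gram(A, B, gamma=1.0):
    # K[i, j] = exp(-gamma ||a_i - b_j||^2)
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * d2)

K_train = rbf_gram(X, X)
clf = SVC(kernel="precomputed", C=1.0).fit(K_train, y)

# prediction on new points needs the kernel between test and training data
X_new = np.array([[0.0, 0.5]])
print(clf.predict(rbf_gram(X_new, X)))
```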
01759303 | en | [
"spi.gproc"
] | 2024/03/05 22:32:10 | 2017 | https://theses.hal.science/tel-01759303/file/TH2017PESC1018.pdf | Prof. Gérald Assoc Pourcelly
List of Figures

Fig. 1-1. Schematic representation of a membrane as a fragment of an electromembrane system (adapted from [4]).
Fig. 1-3. Microphotographs of the MF-4SK membrane and the MF-4SK/PANI composite sample turned to the polymerizing solutions (adapted from [22]).
Fig. 1-4. Microphotographs of the membrane MA-40: a) initial; b) profiled (adapted from [4]).
Fig. 1-6. Total current density and partial current density of H+ ions through the MK-40 membrane and its modifications (adapted from [34]).
Fig. 1-7. Schematic diagram illustrating the principle of electrodialysis (adapted from [40]).
Fig. 1-8. Construction of a sheet flow stack design (adapted from [40]).
Fig. 1-9. The RED system (adapted from [7]).
Fig. 1-10. Schematic diagram of the EDM process (adapted from [44]).
Fig. 1-11. Electrodialytic production of acids and bases from the corresponding salts with bipolar membranes (adapted from [40]).
Fig. 1-12. Structure and operating principle of a PEMFC (adapted from [49]).
Fig. 1-13. Concentration profile in an electromembrane system.
Fig. 1-14. Scheme of gravitational convection (adapted from [66]).
Fig. 1-21. Chronopotentiograms for heterogeneous (MA-41) and homogeneous (AMX) anion-exchange membranes in a 0.1 M NaCl solution.
Fig. 1-24. Schematic illustration of scale formation schemes.
Fig. 1-25. Visualization of membrane fouling by photo imaging, optical microscopy, SEM, CLSM and AFM.
Fig. 1-29. Dead-end filtration and cross-flow filtration.
Fig. 1-30. Scheme of EDR and the presence of foulants.
Fig. 3-2. Principal scheme of the experimental setup for the electrodialysis measurements.
List of tables
Introduction
This work is devoted to the study of the effect of ion-exchange membranes surface properties on the formation of scale of Ca 2+ and Mg 2+ compounds on the membrane surface during electrodialysis of aqueous solutions.
Electrodialysis is one of the highly developed technologies used for desalination and concentration of electrolyte solutions and for ion separation. It is based on ion selective migration through ion-exchange membranes under the action of applied electric field as the driving force.
Low energy consumption and high current efficiency are the advantages of the electrodialysis process. However, concentration polarization and membrane scaling limit its various applications. Concentration polarization, i.e. the appearance of concentration gradients under the action of an electric current, is a phenomenon inherent in electrodialysis, which allows desalination and concentration and which arises as a result of the difference between the ion transport numbers in the solution and in the membrane phase. The decrease of electrolyte concentration at the membrane surface allows solution desalination, but also leads to an increasing potential drop. When the concentration at the interface approaches zero, the current density approaches a certain value, the limiting current density. The ion transport by electrodiffusion becomes saturated, and new phenomena and additional transport mechanisms develop, which are named coupled effects of concentration polarization. Water splitting results in the generation of H+ and OH- ions, which participate in the transfer of the electric current. However, this leads to a decrease in the current efficiency towards the transfer of salt ions. Besides, the energy consumption increases and variations in the pH of the desalted and concentrated solutions are produced. The latter may result in conditions where precipitation of sparingly soluble salts (scaling) occurs. The scale formed on the surface or inside the ion-conductive pathways (pores) of the membrane reduces the working area of the membrane and causes additional resistance to the flow of solution and to mass transfer.
Electroconvection, the current-induced convection that arises at overlimiting currents from the action of the electric force upon the electric space charge, on the contrary enhances the mass transfer and hinders the water splitting process.
The main known methods for decreasing concentration polarization and membrane scaling are solution pretreatment (chemical treatment and/or filtration with microfiltration and ultrafiltration membranes) and electrodialysis reversal. In electrodialysis reversal, the electrode polarity and the flow direction are changed at the same time: the desalination chambers become concentration chambers, and vice versa. The losses in current efficiency, capacity and final product during the polarity changeover are the disadvantages of this method. However, even when solution pretreatment and/or electrodialysis reversal is applied, a certain amount of sparingly soluble ions remains in the solutions to be treated, and scale formation may still occur. The main idea of this work is to use other "intrinsic" possibilities to reduce scale formation. We try to modify the membrane surface so as to obtain a membrane more resistant to scale formation than the pristine membrane.
Several effects that mitigate scaling are applied: homogenization, smoothing and hydrophobization of the surface, which facilitate electroconvection and reduce concentration polarization and water splitting. A smoother membrane surface reduces the risk of nucleation of scale crystals on the membrane surface. A low-cost heterogeneous membrane (an MK-40 cation-exchange membrane made in Russia) is taken as the substrate. The modification is made by casting onto the MK-40 surface a very thin layer of Nafion® material, which does not noticeably increase the membrane cost. Thus, by combining surface modification with the use of special electric regimes, we hope to contribute to the development of methods for preventing or mitigating membrane scaling.
In this context, the main goal of the present study is to investigate the effect of the surface properties of cation-exchange membranes on the development of the coupled phenomena of concentration polarization and scale formation in solutions containing Ca2+ and Mg2+ cations during conventional electrodialysis processes.
The doctoral thesis includes five chapters. The first chapter is devoted to the literature review concerning ion-exchange membranes, methods of modification and applications of ion-exchange membranes, concentration polarization and its coupled effects in electromembrane systems, scaling of ion-exchange membranes and methods for studying and mitigating scale formation. In the second chapter, we study the effect of the electrolyte nature and of the membrane surface properties on the development of electroconvection.
The third chapter is devoted to the effect of homogenization and hydrophobization of a cation-exchange membrane on its scaling in the presence of divalent ions during electrodialysis. The fourth chapter aims to study the influence of electroconvection, pH adjustment and pulsed electric field application on membrane scaling mitigation in electrodialysis. In the fifth part, a method for the preparation of an anisotropic composite membrane by the formation of a polyaniline layer within a cation-exchange membrane is proposed. The electrochemical behavior of the composite membrane and the process of scale formation in the presence of divalent ions are studied.
In each chapter, the state of the art of each subject is presented, and the experimental methods that have enabled us to achieve our objectives are described.
Chapter 1
Literature review
Ion-exchange membranes
Ion-exchange membranes (IEMs) are polymers (usually in the form of a film) that contain positively or negatively charged groups fixed to the polymer matrix (Fig. 1-1). The high density of these charged groups inside the macromolecule creates a space charge, which is compensated by an equivalent number of mobile charges of the opposite sign, the counter-ions.
The latter, in the vicinity of the fixed charged groups, create an ionic atmosphere and ensure the electroneutrality of the polymer. The membrane also contains a small number of mobile ions that have the same charge sign as the fixed ions, which are called co-ions [1][2][3].
Fig. 1-1. Schematic representation of a membrane as a fragment of an electromembrane system: R- - fixed ions; K, A - counter-ions and co-ions in the membrane and the electrolyte solution; polymer matrix chains; bridges of the cross-linking polymer agent connecting the main polymer matrix chains; incorporations of an inert polymer imparting thermal stability, mechanical strength or elasticity to the membrane (adapted from [4]).
When a membrane contacts a dilute electrolyte solution, the co-ions are almost completely excluded from the membrane phase and make only a slight contribution to the current transfer. This effect is called "Donnan exclusion" [1]. Application of an electric field to the membrane causes the movement of counter-ions, or electromigration. Therefore, an ideal membrane swollen by water or electrolyte solution is a polyelectrolyte with unipolar conductivity.
1.1.1 The structure of ion-exchange membranes
IEMs are based on polymeric materials with aliphatic, cyclic or aromatic backbones. The chains of the polymer matrix are most often hydrocarbon or perfluorinated chains (Fig. 1-2). The perfluorinated chains are more chemically and thermally stable [5]. The following groups are used as fixed ions in cation-exchange membranes (CEMs): -SO3-, -COO-, -PO3- [6,7]. The charge of these groups is compensated by positively charged counter-ions. In anion-exchange membranes (AEMs), positive charges are fixed to the matrix: -NH3+, -RNH2+, -R3N+ [7].
The charge of these groups is compensated by negatively charged counter-ions. Heterogeneous membranes are produced from ground ion-exchange resins dispersed in a binder (polyethylene, polyvinyl chloride and a number of other additives) and are often reinforced by fibers to give the membrane mechanical strength. The desire to achieve a more even distribution of charges and better electrochemical and separation properties of membranes led to the creation of homogeneous membranes. Fixed functional groups are introduced directly into the polymer film of these membranes [6].
Bipolar membrane (BM) is a special type of electromembrane material, which makes it possible to realize an important process of electrochemical production of acids and bases from the corresponding salts. These membranes are a bilayer system consisting of cation-and anion-exchange layers. The membrane generates oppositely directed flows of H + and OH - ions by applying an electric field due to the water splitting reaction at the bipolar boundary inside the membrane [8].
Characterization of ion-exchange membranes
The properties of a membrane are divided into two categories: equilibrium or static properties, which are studied without an electric current (i = 0), and transport or dynamic properties, which are studied by passing an electric current (i ≠ 0) or under conditions of diffusion transport [3,9]. The main characteristics of IEMs, including equilibrium, electrotransport and polarization properties, are presented in Table 1-1.
Table 1-1. Main characteristics of ion-exchange membranes

Physico-chemical static properties (i = 0): exchange capacity Q (meq g-1), water content, density, thickness, mechanical strength.
Transport properties (i ≠ 0 or Δc ≠ 0): conductivity, diffusion permeability, counter-ion transport numbers, electro-osmotic permeability.
Polarization properties (i ≠ 0): I-V curve.

The physico-chemical and mechanical properties of membranes [8], which depend on the synthesis and on the accompanying technological operations (pressing, fiber reinforcement, etc.), are evaluated by measuring the density, the geometric dimensions in the dry and swollen state, and the tensile strength. These properties also involve the exchange capacity, which characterizes the number of charged groups fixed to the polymer matrix per unit membrane volume or mass. The amount of water absorbed by the membrane depends on the exchange capacity and determines its chemical behavior under process conditions. The moisture capacity of the membranes is an informative quantitative characteristic of their hydrophilic properties.
Measurement of the membrane potential served as a starting point for the creation of membrane electrochemistry, since the thermodynamic bases for this method and the experimental procedure were developed much earlier than the appearance of synthetic membranes and were used for the study of collodion and biological membranes. The diffusion permeability [8] of charged membranes is a property that reduces the selectivity of ion transport across the membrane during electrodialysis and reverse osmosis. The quantitative characteristic of diffusion permeability is the integral permeability coefficient P_m.
The conductivity is one of the most important characteristics that determine the practical usefulness of IEMs. The value of conductivity provides rich information about the membrane properties, since it depends on the structure, the nature and concentration of equilibrium solutions, the synthesis conditions and the polymer composition of the membrane material.
The method can be used both for testing membranes in different states (for example, after synthesis or after treatment to assess the degradation degree), and for systematic studies.
The membrane permselectivity characterizes the ability of selective counter-ion transport through the membrane. This property is the most important, as it determines the application of membranes in technology. Quantitatively, the permselectivity is associated with the ion transport number in the membrane. The counter-ion transport number, the measure of permselectivity of IEMs, is the basic parameter determining the efficiency of electromembrane separation processes. It is related to the ratio of the concentration of ion-exchange groups in the membrane (fixed ion concentration) to the concentration of the outer solution [8]. The ionic transport number is the quotient of the current carried by an ionic component and the total current. The counter-ion transport number in modern commercial membranes in relatively dilute solutions (up to 1 M) is very close to 1 (>0.97 according to [10]) for homogeneous as well as for heterogeneous membranes.
The electro-osmotic permeability of membranes is a property associated with the transfer through the membrane of a certain amount of water under the electric current [1,11,12]. The strength of the electric field causes electromigration of charged ions. These hydrated ions move under the action of the electric field together with the water of hydration. The water is transported by both cations and anions. Therefore, the measured volume of the transferred fluid represents the difference between the flow of water moving toward the cathode and the anode.
Voltammetry is the method to study electrochemical behavior of IEMs. The I-V curve provides information about limiting current density (LCD) values and coupled effects of concentration polarization such as water splitting phenomenon and current-induced convection (gravitational convection and electroconvection). All this extensive information can be obtained in the electromembrane system (EMS).
The measured quantities strongly depend on the used methods and experimental conditions (electrolyte nature, its concentration, temperature, stirring).
Methods of ion-exchange membrane modification
Modification of IEMs is a method of changing membrane structure in their bulk and/or on their surface in order to give the material new functional properties. The basic methods of membrane modification are as follows:
- chemical modification;
- mechanical modification;
- electrochemical deposition.
Chemical modification
Chemical modification of an IEM refers to chemical changes of the polymer matrix obtained by treating it with various substances. For example, the introduction of fixed charged groups into the membrane matrix changes the membrane properties essentially. Recently, films of polysulfone and polyether ether ketone have been used as the basis for new promising IEMs, by treating them with chlorosulfonic and sulfuric acid, respectively [13,14]. It is possible to obtain a set of ion-exchange, heat-resistant and sufficiently strong membranes with good electrical conductivity and selectivity by changing the degree of sulfonation. Sata et al. [15] developed methods for introducing controlled amounts of surface-active polyelectrolytes into a charged polymer matrix. The result of these studies was the creation of electrodialysis membranes having charge-selective properties and resistant to fouling by surfactants. The charge-selective properties allow the successful use of electrodialysis (ED) for the treatment of multicomponent solutions. Enhancement of the electrical conductivity of IEMs may be achieved by introducing polypyrrole [16], viologen [17] or polyaniline (PANI) [18] into the sulfo-cationite film.
Chemical modification of industrial membranes such as Nafion®, consisting in changing the hydrophilic properties of the membranes, is achieved in two ways:
1) the regulation of cluster formation near the fixed charged centers in the side chain is carried out at the stage of alkaline hydrolysis of sulfonyl fluoride groups by varying the concentrations and nature of the strong base (necessary for saponification) [19]. Depending on the hydration of the cation during the transition from LiOH to KOH or CsOH it is possible to obtain more or less water-saturated ion-dipole associates [-SO 3 ... H 2 O ... Me + ] and vary the swelling of the cluster zones;
2) carboxyl groups are introduced into the sulfo-cationite film from the receiving side of the membrane facing the anode [20], or the membrane surface is treated by amination [21] (to weaken the electro-osmotic transfer of water into the desalination chamber).
Berezina et al. [22] suggested an express method for the synthesis of surface-modified membrane composites based on the MF-4SK membrane (a Russian analogue of Nafion®).
The composite MF-4SK/PANI membrane was obtained using the chemical synthesis of PANI through the successive diffusion of the monomer and of the polymerization initiator (ammonium persulfate) in water (Figs. 1-3a,b). The membrane modification by PANI results in a 2-3 fold decrease in the membrane surface roughness (Figs. 1-3c,d). This is probably due to the lining of the modified side by polyaniline chains, which leads to surface smoothing.
Fig. 1-3. The pristine MF-4SK membrane and the MF-4SK/PANI composite sample turned to the polymerizing solutions (a, b), and the surface relief of the samples (c, d) (adapted from [22]).
Thermogravimetric analysis showed a higher thermal stability (590 °C) for the MF-4SK/PANI membrane as compared to the pristine membrane (405-550 °C). The water transport electroosmotically driven by Na+ ions is 3 times higher than that driven by H+ ions, due to the difference in the transport mechanism of these ions in the electric field. The sodium ion is transported via the migration mechanism with its hydrate shell and entrains additional water molecules in dilute solutions, while the hydrogen ion moves through the membrane mainly via the Grotthuss mechanism [23]. The transport number of water in the MF-4SK membrane is 2-3 times higher in comparison with the MF-4SK/PANI composite. PANI acts as a barrier layer for water molecules. The modification by PANI causes a 50-70 % decrease in the membrane diffusion permeability. An asymmetry of the diffusion permeability (about 10 %) was also found for the modified membrane. The diffusion permeability proved to be lower when the composite was oriented with its modified side turned towards the flow of electrolyte.
Nevertheless, the modification by PANI results in a decrease of the membrane conductivity by 85 % as compared to the pristine membrane.
Ivanov et al. [24] explained this fact by the granular PANI morphology and existence of the so-called redox heterogeneity of PANI with the intermediate oxidation degrees. This means
the presence of well conducting regions separated by less conducting oxidized fragments in the whole polyaniline layer due to the block structure of this polymer.
In addition, polymerization of PANI on the surface of CEMs leads to a higher membrane selectivity for monovalent cations in comparison with bivalent cations [25]. Such monovalent selectivity can be useful in electrodialysis metathesis and selectrodialysis [26,27] for concentration of divalent ions in product stream.
Mechanical modification
The methods of mechanical modification include, first of all, the production of BMs, when two oppositely charged membranes are pressed together [8,28]. This leads to the production of a bilayer membrane, which exhibits special properties in the electric field depending on the orientation of the layers to the cathode or anode.
Mixed types of IEMs are obtained mechanically when heterogeneous membranes are formed from a mixture of cation- and anion-exchange resins and polyethylene [8]. As a result, a mosaic ion-exchange membrane is formed, containing inclusions of negatively and positively charged fixed ions randomly distributed in a neutral polymer matrix. Such membranes have ampholytic properties, and the electrical interactions of the heteropolar groups lead to the formation of additional pores, the proportion of which is determined by the ratio of the resins in the composition. Investigation of the properties of these membranes showed that they are capable of passing large organic ions, which is important, for example, in the ED demineralization of sugar solutions containing impurities of coloring substances. The use of mixed-type IEMs in ED instead of AEMs helps to reduce membrane fouling and to increase the lifetime of the ED apparatus.
Profiled membranes (Fig. 1-4) are obtained by hot pressing in a certain mode in order to develop the membrane surface [29,30]. Scientists at KubSU (Russia) developed a new technology which involves pressing IEMs preliminarily transformed into their swollen state [31]. The approach suggested by the authors makes it possible to increase the electrical conductivity of the membranes and the fraction of active surface due to the destruction of the polyethylene film on the surface, which is formed in the course of hot pressing [32]. The use of profiled membranes in electrodialyzers provides higher transport numbers of the salt counter-ions [31]; the rate of mass transfer, compared with usual smooth membranes, increases by a factor of four [33] due to the solution turbulization at the protrusions of the membrane surface and the growth of electroconvection (EC) in the overlimiting current mode. Mechanical homogenization of heterogeneous CEMs was proposed in Refs. [34][35][36]. Fig. 1-5 shows SEM images of the surface and cross section of the pristine heterogeneous MK-40 membrane and its modifications. An essential growth of the LCD for the MK-40*/Nf 20 membrane in comparison with the other membranes (Fig. 1-6) was found [34]. There is a strong correlation between the rate of overlimiting transfer and the degree of surface hydrophobicity of the studied membranes: the more hydrophobic the surface, the more intensive the overlimiting transfer. The main cause of the strong increase in the overlimiting current is the enhancement of Na+ ion transport from the solution bulk to the membrane surface by EC [34].
Besides, the partial current of H+ ions produced in the water splitting reaction was significantly lower in the case of the modified membrane in comparison with the pristine one (Fig. 1-6); according to [36], the mass transfer intensified by EC increases by up to 3.5 times.
Electrochemical deposition
Electrochemical methods include techniques for modifying the membrane surface under external electric field. In this case, a modifying additive is introduced into the solution that surrounds the membrane from the receiving side (which includes the main stream of the electrolyte ions). It was found that the introduction of an anion of dodecyl sulfate in a dilute solution of sodium sulfate in a system with an AEM leads to the electrosorption of this anion at the interface between the membrane and the solution immediately after the current is turned on [37]. As the electric current passes, the ions accumulate in the surface layer of the membrane and create a front of electrodiffusion through the entire volume of the membrane.
After that, the modifying anions pass through the membrane. During this process it is easy to follow the changes in the membrane state and properties during the transition from the surface-modified state (a thin layer of oriented surface-active anions on the surface) to a membrane completely saturated with the modifying ions. The change in transport properties indicates a weakening of salt and ion transport due to additional obstacles. The membranes acquire new functional properties: asymmetry, charge selectivity or reduction of water transport under an electric field.
Sivaraman et al. [38] used acrylic acid grafted fluorinated ethylene propylene copolymer membrane after sulfonation for electrochemical deposition of PANI on to the membrane. The deposition of PANI started from the surface of the membrane and grew up inside the membrane. The amount of PANI deposited depended upon the polymerization time. The electrodialysis experiments showed improved permselectivity for divalent cations.
Vázquez-Rodríguez et al. [39] studied the effect of synthesis conditions on the transport properties of commercial CMX membranes modified electrochemically by polypyrrole (Ppy).
The quantity of Ppy in CEM, compactness of Ppy, and the charges on Ppy (negative or positive) were controlled effectively by changing the electrosynthesis conditions. The presence of Ppy in the membranes decreased the transport number of Na + and Mg 2+ , and the reduction in transport number was more significant for Mg 2+ .
These examples show that the membrane modification creates a sufficiently thin surface layer corresponding to small degrees of membrane saturation with the modifying component. In this case the surface has barrier functions, and a new interface arises in the membrane phase.
Application of ion-exchange membranes
ED and related processes such as electrodialysis with bipolar membranes (EDBM) and fuel cells are all based on IEMs and use an electrochemical potential as the driving force, but the process design and the applications can be rather different. In a conventional ED stack, cation- and anion-exchange membranes are arranged alternately between two electrodes and are separated by spacer gaskets that form the flow chambers (Figs. 1-7 and 1-8). The spacers not only separate the membranes; they also contain the manifold for the distribution of the two different flow streams in the stack and provide the proper mixing of the solutions in the chambers.
The main difference between the sheet flow and the tortuous path flow spacer is that in the sheet flow spacer the chambers are vertically arranged and the process path is relatively short.
The flow velocity of the feed is between 2 and 4 cm s -1 and the pressure loss in the stack is correspondingly low, i.e. between 0.2 and 0.4 bar. In the tortuous path flow stack, the membrane spacers are horizontally arranged and have a long serpentine cut-out which defines a long narrow channel for the fluid path. The feed flow velocity in the stack is relatively high, i.e. between 6 to 12 cm s -1 which provides a better control of concentration polarization and higher limiting current densities, but the pressure loss in the feed flow channels is quite high, i.e. between 1 and 2 bar. Fig. 12345678. Schematic drawing illustrating the construction of a sheet flow stack design (adapted from [40]).
Besides the presented stack designs, which are used in industry, there is a number of lab-cells used for studying membrane properties (see section 1.3.3.3).
Reverse electrodialysis
The production of energy by mixing fresh and sea water through IEMs is a process referred to as reverse electrodialysis (RED). A RED system includes salt and fresh water chambers, IEMs, electrodes and electrode chambers with a suitable redox couple [7]. In the RED process the fresh water chamber is placed between the salt water chambers (Fig. 1-9).
Fig. 1-9. Schematic drawing illustrating the RED system (adapted from [7]).
The chambers are separated by a CEM on the one side and an AEM on the other side. Due to the concentration difference between the salt and fresh water chambers, the cations and anions diffuse into the fresh water chambers [7]. The ionic diffusion flux generates an electrochemical potential. The electrodes receive the ions and convert them into an electrical current through an oxidation or reduction reaction. The electrical current generated is captured directly by an external load. The membrane can generate a potential difference of around 30 mV, with a high internal resistance, in a system mimicking sea water and river water [42].
The practical application of RED process is limited by the presence of multivalent ions in the seawater and river water sources. Therefore, IEMs with high selectivity for monovalent/multivalent ions are very beneficial for RED applications.
Electrodialysis metathesis
Electrodialysis metathesis (EDM) is used to maximize water recovery and produce concentrated salts from brines obtained as reject or secondary product in other membrane processes, e.g. reverse osmosis [43]. EDM is modified from the conventional ED. In the EDM system, four alternating IEMs form a repeating quad of four chambers and a substitution solution is added to provide the exchangeable ions for the metathesis reaction (Fig. 1-10) [44].
In the reaction, the feed solution, represented by MX, exchanges cations and anions with the substitution solution, M′X′, to form the new product salts M′X and MX′. For this purpose, two of the chambers contain the feed and a substitution solution, and are called diluate 1 (D1) and diluate 2 (D2) chambers, respectively. The other two chambers each contain one of the newly formed non-precipitating product salts, and are called concentrate 1 (C1) and concentrate 2 (C2) chambers (Fig. 1-10). Depending on the intended product solution in the two concentrate chambers, an appropriate substitution salt solution is fed to the D2 chamber to initiate the double decomposition reaction with the feed solution from the D1 chamber. Salts formed in the C1 chamber are predominantly rich in anions and poor in cations from the D1 chamber.
Salts formed in the C2 chamber are predominantly rich in cations and poor in anions from the D1 chamber. When sodium chloride is added as the substitution solution, the C1 chamber contains mixed sodium salts, and the C2 chamber contains mixed chloride salts [44].
The membrane selective to the transfer of singly charged ions can be used for mitigation of the scaling and fouling in the concentrate chamber when applying electrodialysis metathesis.
The electrodialysis stack is designed so that there are two concentrate chambers: singly charged cations (such as Na+) and multicharged anions (such as SO4^2-) accumulate in one chamber, while multicharged cations (such as Ca2+) and singly charged anions (Cl-) accumulate in the other one.
Fig. 1-10. Schematic diagram of the EDM process (adapted from [44]).
Bipolar membrane electrodialysis
The conventional ED can be combined with BMs and utilized to produce acids and bases from the corresponding salts [45]. In this process, CEMs and AEMs are installed together with BMs in alternating series in an electrodialysis stack (Fig. 1-11). A typical repeating unit of an electrodialysis stack with BMs is composed of three chambers, two monopolar membranes and a BM. The chamber between the monopolar membranes contains a salt solution, and the two chambers between the monopolar and the bipolar membranes contain, respectively, a base and an acid solution.
When an electrical potential gradient is applied across a repeating unit, protons and hydroxide ions generated in the membrane bipolar junction produce an acid and a base, respectively. The process design is closely related to that of the conventional ED using the sheet flow stack concept. However, because of the significantly higher voltage drop across a BM, only 50 to 100 repeating cell units are placed between two electrodes in a stack.
Fig. 1-11. Schematic drawing illustrating the principle of electrodialytic production of acids and bases from the corresponding salts with bipolar membranes (adapted from [40]).
The utilization of EDBM to produce acids and bases from the corresponding salts is economically very attractive and has a multitude of interesting potential applications in the chemical industry as well as in biotechnology and water treatment processes.
Fuel cells
Fuel cells are electrochemical devices with high energy conversion efficiency, minimized pollutant emission and other advanced features. Proton exchange membrane fuel cells (PEMFC) are considered a key technology for addressing oil depletion and greenhouse gas emissions [46]. High temperature PEMFCs have been proposed to solve the problems of catalyst poisoning by CO and of fuel cell electrode flooding, as well as to improve fuel cell efficiency, reduce the amount of noble metal catalyst and avoid reactant humidification [47].
Power generation in PEMFCs relies on electrochemical reactions involving fuels such as hydrogen, methanol and ethanol. For a hydrogen fuel cell, the reactions can be written as follows:
2H_2 \rightarrow 4H^+ + 4e^-    (anode)    (1.1)
O_2 + 4H^+ + 4e^- \rightarrow 2H_2O    (cathode)    (1.2)
2H_2 + O_2 \rightarrow 2H_2O    (cell reaction)    (1.3)
The working principle of the fuel cell is schematically illustrated in Fig. 1-12. The hydrogen ions at the anode move to the cathode through the electrolyte membrane, which produces the electrical current with water as a by-product [48].
Fig. 1-12. Schematic diagram illustrating the structure of a PEMFC and its principle of operation (adapted from [49]).
The key component of the PEMFCs is the polymer electrolyte membrane (PEM), which functions as an electrolyte to transfer the protons from the anode to the cathode, and provides a barrier to the passage of the electrons and fuel. Therefore, several researchers have focused on the development of reliable, high performance PEMs with high proton conductivity, low fuel permeability, high oxidative stability, good mechanical stability and low cost of fabrication of membrane electrolyte and assembly.
1.3 Concentration polarization and coupled effects of concentration polarization in electromembrane system
Concentration polarization in electromembrane system
Let us consider the concentration profile of an electrolyte in an IEM and in the adjoining diffusion layers of an EMS (Fig. 1-13). Within the membrane, we consider the virtual electrolyte solution, which is in local equilibrium with a thin layer of membrane [50]. As there is equilibrium at the membrane boundaries, the real concentration in the solution and the virtual concentration in the membrane are identical there, and the concentration profile is continuous (Fig. 1-13). When a direct current passes normal to the IEM, the counter-ions from the solution pass through the IEM by migration and diffusion. Because the counter-ion transport numbers in the membrane, \bar{t}_i, are almost twice as large as those in the solution, t_i, the electrolyte concentration decreases near the membrane in one (desalination) chamber, C_1, and increases in the neighboring (concentration) chamber, C_2 (Fig. 1-13). The phenomenon of concentration variation under the effect of an electric current is called concentration polarization (CP).
The ion transport in solution or membrane is described by the extended Nernst-Planck equation with a convective term:
j_i = -D_i \left( \nabla C_i + z_i C_i \frac{F}{RT} \nabla \varphi \right) + C_i \mathbf{V}    (1.4)
Here j_i, D_i, C_i and z_i are the flux density, diffusion coefficient, concentration and charge number of ionic species i, respectively; \varphi is the electric potential; F is the Faraday constant;
R is the gas constant; T is the temperature; V is the fluid velocity vector. The first term in the right-hand side of Eq. (1.4) represents diffusion, the second, migration, and the third, convection. The current density, i , is the sum of flux densities of all ionic species taking into account their charges:
i = F \sum_i z_i j_i    (1.5)
With a gradual increase in the electric current under these conditions, the salt concentration near the membrane surface in the desalination chamber, C_1, decreases and tends to zero. The current density is then determined by the bulk solution concentration, C_0, and takes a critical value, called the limiting current density, i_lim, when C_1 \rightarrow 0:

i_{lim} = \frac{F D C_0}{\delta \left( \bar{t}_i - t_i \right)}    (1.6)
Here D is the electrolyte diffusion coefficient and \delta is the diffusion layer thickness. Eq. (1.6) was first obtained by Peers in 1956 [51]. It can be seen that i_lim also depends on the difference in the ion transport numbers in the membrane and solution phases and on the thickness of the diffusion layer.
Consequently, the polymer selectivity, the nature of the counter-ions and the convection of the solution will be reflected in the value of the limiting current.
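As an illustration of Eq. (1.6), the short sketch below evaluates the limiting current density for a dilute NaCl solution; all numerical values (diffusion coefficient, transport numbers, diffusion layer thickness, concentration) are typical order-of-magnitude assumptions and are not taken from this thesis:

```python
# Limiting current density, Eq. (1.6): i_lim = F * D * C0 / (delta * (t_mem - t_sol))
F = 96485.0        # Faraday constant, C mol^-1
D = 1.6e-9         # NaCl electrolyte diffusion coefficient, m^2 s^-1 (assumed)
C0 = 10.0          # bulk concentration, mol m^-3 (0.01 M, assumed)
delta = 250e-6     # diffusion boundary layer thickness, m (assumed)
t_mem = 1.0        # counter-ion transport number in an ideally selective membrane
t_sol = 0.39       # Na+ transport number in solution (assumed)

i_lim = F * D * C0 / (delta * (t_mem - t_sol))
print(f"i_lim = {i_lim:.1f} A m^-2  ({i_lim / 10:.2f} mA cm^-2)")
```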
Thus, following the classical electrochemical interpretation presented above, the formation of concentration gradients results in a saturation of the current density caused by the vanishing interface concentration. When the current density tends to its limiting value, i_lim, the potential drop over a membrane surrounded by two diffusion boundary layers tends to infinity.
However, in a real EMS the limiting current density can be exceeded several times over, owing to the emergence, near the membrane surface, of a complex of effects caused by the concerted action of the flowing current and of the concentration variations in the system. These effects may be united under the term "coupled effects of concentration polarization".
1.3.2 Coupled effect of concentration polarization in electromembrane system
Water splitting
Two effects explaining the overlimiting mass transfer are connected with the water splitting reaction at the membrane/solution interface. The emergence of additional charge carriers, the H+ and OH- ions generated during water splitting in membrane systems [52], was for a long time considered the principal and, frequently, the sole reason for the overlimiting mass transfer [53].
The water splitting reaction cannot occur at such a rate in the bulk solution; it rather takes place within a thin layer of the membrane. Indeed, there are several specific effects explaining the high rate of water splitting at the membrane interface. Simons [54] proposed that the H+ and OH- ions are generated in the course of protonation and deprotonation reactions involving water and the fixed charged groups acting as catalytic centers. For CEMs these reactions can be written as:
AH + H_2O \;\overset{k_1}{\underset{k_{-1}}{\rightleftharpoons}}\; A^- + H_3O^+    (1.7)

A^- + H_2O \;\overset{k_2}{\underset{k_{-2}}{\rightleftharpoons}}\; AH + OH^-    (1.8)
Here AH is a neutral acid; k is the chemical reaction rate constant.
For AEMs, similar reactions occur:
B + H_2O \;\overset{k_3}{\underset{k_{-3}}{\rightleftharpoons}}\; BH^+ + OH^-    (1.9)

BH^+ + H_2O \;\overset{k_4}{\underset{k_{-4}}{\rightleftharpoons}}\; B + H_3O^+    (1.10)
Here B is a neutral base, such as tertiary amine.
These proton-transfer reactions take place in a thin reaction layer of the membrane, about 2 nm thick, adjoining the depleted solution. In the case of a CEM, the protons produced by reaction (1.7) leave the reaction layer for the membrane bulk under the action of the applied current, while the hydroxyl anions produced by reaction (1.8) migrate into the adjacent depleted solution. In BMs, the protonation and deprotonation reactions take place at the junction of the cation- and anion-exchange layers [55][56][57].
The backward recombination reactions in (1.7)-(1.10) are normally rapid, with rate constants k_{-n} (n = 1, 2, 3, 4) of the order of 10^10 M^-1 s^-1 [58], while the forward dissociation reactions depend strongly on the nature of the functional groups. The dissociation rate constants at equilibrium can be expressed in the following form, e.g. for reactions (1.7) and (1.8):
k_1 = K_a \, k_{-1}; \qquad k_2 = \frac{K_w}{K_a} \, k_{-2}    (1.11)

where K_a is the acid-dissociation equilibrium constant and K_w is the ionic product of water. Following Simons [59], it is possible to rank the ion-exchange groups (fixed to a membrane matrix) in order of increasing water splitting rate as follows [55,57]:

-N^+(CH_3)_3 < -SO_3^- < -PO_3H^- < =NH, -NH_2 < \equiv N < -COO^- < -PO_3^{2-}    (1.12)
k_lim, s^-1:     0        3\cdot10^{-3}     3\cdot10^{-2}     10^{-1}        1        10        10^2

Here k_lim is the rate constant of the limiting reaction step.
At the same time, the water splitting reaction gives rise to another mechanism of overlimiting transfer: the effect of exaltation of the limiting current [60]. Yu.I. Kharkats [61] was the first to explore the effect of exaltation of the limiting current in relation to an EMS. The emergence of H+ and OH- ions in the vicinity of the membrane surface perturbs the electric field and is capable of increasing (exalting) the transfer of the salt counter-ions. For example, the OH- ions generated at the interface between the depleted diffusion layer and the CEM surface attract the cations from the bulk solution towards the membrane interface. Taking into account the effect of exaltation, the flux density of salt counter-ions, j_1, is described by:
j_1 = \frac{2 D_1 C_0}{\delta} + \frac{D_1}{D_W} J_W    (1.13)
Here index 1 refers to the counter-ion; D_W and J_W are the diffusion coefficient and the flux density of the water splitting products generated near the surface of the membrane (in the case of a CEM, these are the OH- ions).
The increase in the flux of salt counter-ions due to the effect of exaltation of the limiting current in an EMS is relatively small. For example, it amounts to about 0.2 j_1^lim when the flux of OH- ions reaches a value equal to j_1^lim. In practice, the increment of the salt counter-ion flux is much higher [62,63]; therefore, it cannot be explained by the effect of exaltation of the limiting current alone.
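A quick numerical check of this estimate, using tabulated diffusion coefficients of Na+ and OH- as assumed inputs (a sketch, not data from this work):

```python
# Exaltation increment from Eq. (1.13): delta_j1 = (D1 / D_W) * J_W
D_Na = 1.33e-9     # diffusion coefficient of Na+, m^2 s^-1 (assumed tabulated value)
D_OH = 5.27e-9     # diffusion coefficient of OH-, m^2 s^-1 (assumed tabulated value)

j1_lim = 1.0                       # take the limiting salt counter-ion flux as the unit
J_W = j1_lim                       # OH- flux equal to the limiting salt flux
delta_j1 = (D_Na / D_OH) * J_W     # extra counter-ion flux due to exaltation

# Prints roughly 0.25 * j1_lim, of the same order as the ~0.2 j1_lim quoted above.
print(f"exaltation increment = {delta_j1:.2f} * j1_lim")
```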
Gravitational convection
Gravitational convection arises due to a non-uniform distribution of the solution density, which causes an Archimedean volume force bringing the liquid into motion [64,65] (Fig. 1-14). The solution is more diluted near the membrane than in the bulk. Moreover, its temperature is elevated due to the Joule heat released within a layer of high electric resistance. As a result, near the surface, the Archimedean buoyant force acts upon a parcel of fluid vertically upward, while in the bulk the body force applied to a parcel of fluid acts in the opposite direction. This couple of forces produces a vortex motion of the fluid in the space near the membrane. Consider a solution layer situated between two solid plates. When the plates are vertical and the density gradient in the solution is horizontal, gravitational convection arises without a threshold [64]. When the density gradient is vertical, convection arises only when the Rayleigh number, Ra, exceeds a critical value [64,65]:
Ra = \frac{g \, \Delta\rho \, X_0^3}{\rho \, \nu \, D}    (1.14)
Here g is the free-fall acceleration; \nu is the kinematic viscosity; X_0 is the characteristic distance over which the variation of the solution density, \rho, takes place; \Delta\rho is the variation of \rho within X_0.
If the region of solution density variation spreads from one plate to the other, X_0 is equal to the distance between the plates. Otherwise, X_0 is smaller than the distance between the plates, being closer to the thickness of the diffusion boundary layer (DBL) near a plate [67,68].
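For orientation, Eq. (1.14) can be evaluated for a thin depleted layer in a dilute aqueous solution; every numerical value below is an illustrative assumption:

```python
# Solutal Rayleigh number, Eq. (1.14): Ra = g * drho * X0**3 / (rho * nu * D)
g = 9.81           # free-fall acceleration, m s^-2
drho = 0.4         # density variation across the layer, kg m^-3 (assumed)
rho = 998.0        # solution density, kg m^-3
nu = 1.0e-6        # kinematic viscosity of water, m^2 s^-1
D = 1.6e-9         # electrolyte diffusion coefficient, m^2 s^-1
X0 = 250e-6        # characteristic layer thickness, m (assumed)

Ra = g * drho * X0 ** 3 / (rho * nu * D)
print(f"Ra = {Ra:.0f}")   # ~40 here, below the classical critical value (~1700) for a horizontal layer
```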
Electroconvection
EC is the main mechanism of mass transfer enhancement through IEMs in the ED treatment of dilute solutions in overlimiting regimes [65,69-71]. EC is an entrainment of fluid molecules by the ions forming a space charge region (SCR) at the surface of an IEM under the electric field [72]. Theoretically [73-75], there are two main mechanisms of EC:
- bulk EC, caused by the action of an electric field upon the residual space charge of a (quasi) electroneutral strong electrolyte with a non-uniform concentration distribution;
- EC induced by electro-osmotic slip in the SCR forming at the interface between the depleted solution and the ion-selective membrane surface.
Dukhin and Mishchuk proposed to distinguish two principal kinds of current-induced electro-osmosis:
Electro-osmosis of the 1 st kind occurs as the electrolyte slip caused by the action of the imposed tangential electric field upon the diffuse part of the electric double layer (EDL), which remains (quasi) equilibrium;
Electro-osmosis of the 2 nd kind occurs as the electrolyte slip caused by the action of the imposed tangential electric field upon the extended non-equilibrium SCR of the EDL.
The bulk and the current-induced electro-osmosis may be (quasi) equilibrium or non-equilibrium. At small currents/voltages, when the EDL at the interface depleted solution/IEM surface is (quasi) equilibrium (the structure of its diffuse part preserves the Boltzmann ion distribution, shifted by the action of the external field [START_REF] Levich | To the theory of non-equilibrium double layer[END_REF]), there is a (quasi) equilibrium EC.
In contrast to the (quasi) equilibrium EC [75], a non-equilibrium EC develops in the overlimiting current regimes. In this case, the electro-osmosis of the 1st kind transforms into the electro-osmosis of the 2nd kind [START_REF] Mishchuk | Electro-osmosis of second kind near heterogeneous ion-exchange membranes[END_REF][START_REF] Mishchuk | Electroosmosis of the second kind[END_REF]. A feature of this kind of EC is the presence of an extended SCR (see the concentration profiles adapted from [START_REF] Mishchuk | Concentration polarization of interface and non-linear electrokinetic phenomena[END_REF], where the drop of concentrations near the left membrane side corresponds to the appearance of the SCR).
As mentioned above, a tangential electric field is required for the development of electro-osmosis of the 1st and 2nd kind. The tangential electric field can be due to the electrical and/or geometrical heterogeneity of the membrane surface [74,[START_REF] Maletzki | Ion transfer across electrodialysis membranes in the overlimiting current range: stationary voltage current characteristics and current noise power spectra under different conditions of free convection[END_REF], as well as to a non-uniform lateral concentration distribution [START_REF] Urtenov | Basic mathematical model of overlimiting transfer enhanced by electroconvection in flow-through electrodialysis membrane cells[END_REF]. Also, electro-osmotic slip can appear as a result of a spontaneous perturbation of an initially homogeneous electric field [73]. In the case of forced flow of a solution in the electrodialysis chambers, the reason for the non-uniform concentration distribution is the partial desalination of the solution, which increases as the solution moves along the chamber. It causes an increase in the solution resistance along the longitudinal coordinate and, as a consequence, the electric current lines condense near the entrance to the chamber [START_REF] Urtenov | Basic mathematical model of overlimiting transfer enhanced by electroconvection in flow-through electrodialysis membrane cells[END_REF].
EC induced by electro-osmotic slip can arise both in no-threshold and threshold modes. In the first case, there is a stable EC due to the action of a stable tangential electric field upon the SCR, the main part of which is made up of a quasi-equilibrium EDL. This type of EC develops in the membrane system at relatively low currents/voltages (corresponding to the linear part of the I-V curve and to the part of the inclined plateau where the curve is smooth).
Usually, the contribution of a stable EC to mass transfer is negligible. In the second case, a hydrodynamically unstable EC develops. The reason for the instability, when a certain voltage threshold is reached, is the appearance of a positive feedback between the fluctuation of the local tangential force and the electro-osmotic slip velocity [73,74,[START_REF] Andersen | Confinement effects on electroconvective instability[END_REF]. The tangential force is directed from the depleted solution region at the membrane surface to the region where the solution is enriched due to the rotation of the vortex, which brings relatively concentrated solution from the bulk. If the force grows for some reason, it leads to an increase in the rotation speed of the vortex and an increase in the electrolyte concentration at the surface, which in turn causes an increase in the tangential electric force.
1.3.3 Methods for studying the concentration polarization in an electromembrane system
Voltammetry [39,[START_REF] García-Gabaldón | Evaluation of two ion-exchange membranes for the transport of tin in the presence of hydrochloric acid[END_REF][START_REF] Tiana | Thin-film voltammetry and its analytical applications: A review[END_REF], chronopotentiometry [START_REF] Mareev | Chronopotentiometry of ion-exchange membranes in the overlimiting current range. Transition time for a finite-length diffusion layer: modeling and experiment[END_REF][START_REF] Scarazzato | Evaluation of the transport properties of copper ions through a heterogeneous ion-exchange membrane in etidronic acid solutions by chronopotentiometry[END_REF], chronoamperometry [START_REF] Rao | A distributed dynamic model for chronoamperometry, chronopotentiometry and gas starvation studies in PEM fuel cell cathode[END_REF][START_REF] Grzeszczuk | Influence of electrodeposition potential on composition and ion exchange of polypyrrole films in aqueous hexafluoroaluminate featured by EQCM molar mass to charge factors[END_REF][START_REF] Moya | Chronoamperometric response of ion-exchange membrane systems[END_REF] impedance [START_REF] Zh | Polarization-dependent mass transport parameters for orr in perfluorosulfonic acid ionomer membranes: an EIS study using microelectrodes[END_REF][START_REF] Moya | The differential capacitance of the electric double layer in the diffusion boundary layer of ion-exchange membrane systems[END_REF][START_REF] Roghmans | Electrochemical impedance spectroscopy fingerprints the ion selectivity of microgel functionalized ion-exchange membranes[END_REF], optical methods [START_REF] Shaposhnik | Analytical model of laminar flow electrodialysis with ion-exchange membranes[END_REF][START_REF] Vasil'eva | Limiting current density in electromembrane systems with weak electrolytes[END_REF][START_REF] Vasil'eva | The membranesolution interface under high-performance current regimes of electrodialysis by means of laser interferometry[END_REF], as well as methods for pH measurement [START_REF] Martí-Calatayud | Ion transport through homogeneous and heterogeneous ion-exchange membranes in single salt and multicomponent electrolyte solutions[END_REF][START_REF] Zabolotskiy | Ion transport and electrochemical stability of strongly basic anion-exchange membranes under high current electrodialysis conditions[END_REF] and ion transport numbers [START_REF] Vásquez-Garzón | Transport properties of tartrate ions through an anion-exchange membrane[END_REF][START_REF] Mir | Sharp decline in counter-ion transport number of electrodialysis ion exchange membrane on moderate increase in temperature[END_REF][START_REF] Kristensen | Counter-ion transport number and membrane potential in working membrane systems[END_REF][START_REF] Kristensen | Counter-ion transport number and membrane potential in working membrane systems[END_REF] are used for IEM concentration polarization. Hereafter we give a short review of these methods with focusing attention on those methods, which will be used in our research.
Optical methods (laser interferometry, schlieren-method, etc.) are especially convenient for studying the CP in membrane systems. Even an insignificant change in concentration causes a sufficiently large change in the refractive index of the solution [START_REF] Plambeck | Electroanalytical chemistry: Basic principles and applications[END_REF] and leads to a shift in the interference fringes. The most informative of these methods is laser interferometry based on recording an interference pattern between a wave reflected from the object under study and a reference wave. If an optical disturbance arises in the region of the reflected wave from the object, then, where the path difference reaches a value that is a multiple of the wavelength, a dark band appears. The method of laser interferometry allows one to determine the local electrolyte concentrations in desalination and concentration chambers of complex geometry.
The minimum distance from the surface of the membrane on which measurements are possible is several micrometers (≈10 μm).
The most common mode of voltammetry and chronopotentiometry is the galvanostatic one [START_REF] Plambeck | Electroanalytical chemistry: Basic principles and applications[END_REF]. In the galvanostatic method, a constant current is passed through the experimental cell until steady values of the potential are reached. I-V curves obtained with a linear current sweep are called galvanodynamic curves. These methods are the most common tools for studying the electrochemical characteristics of membrane systems. As the IEMs are conductors of the second kind, they must be located between two polarizing electrodes to ensure that the current flows through the EMS. There are quite a few modifications of the measuring cells for studying the CP in EMS. The most commonly used is the four-electrode cell (Fig. 1-17). In this setup, the cathode and anode are located parallel to the membranes and limit the membrane package. The current is applied between the working (anode) and the counter (cathode) electrodes. The potential drop (PD) across the membrane under study, 𝜑, is measured using sense and reference Ag/AgCl electrodes. These electrodes are placed in Luggin capillaries. The Luggin tips are installed at both sides of the membrane under study in its geometric center at a distance of about 0.5 mm from the surface. The PD between the sense and reference Ag/AgCl electrodes in the electrolyte solution (without the membrane) is zero.
To prevent the ingress of electrolysis products from the near-electrode chambers into the pre-membrane chambers, they are separated by auxiliary CEM and AEM. In this case, an AEM separates the membrane under study from the anode chamber, and a CEM separates it from the cathode chamber. A sharp increase in the PD in the second region of the I-V curve indicates that the salt concentration at the membrane/solution interface becomes small compared to the salt concentration in the bulk solution. This region is called the "plateau" or "limiting current" region. It corresponds to the maximum ion transfer rate, controlled by diffusion through the diffusion layer. In membrane systems, there is always some current increase in this region, which is mainly due to a decrease in the thickness of the diffusion layer caused by the development of various types of coupled convection. Another reason for the current increase in the second region can be a non-simultaneous achievement of the limiting state due to the heterogeneity of the membrane surface.
At the beginning, the limiting current is reached in the areas where the thickness of the diffusion layer is large; with a further increase in the PD, the limiting state extends to regions with smaller thickness.
The third region corresponds to a higher growth rate of the current density, which is usually associated with the development of the water splitting reaction and/or the appearance of additional current carriers. The inflection point i_tr in the plateau region corresponds to the transition from the concentration polarization development trend of this region to the faster increase in current in the overlimiting region.
A characteristic point of the I-V curve is the intersection of the tangents drawn to the ohmic and plateau regions. The intersection point of the tangents, i_lim^exp, characterizes another transition state, when the linear section of increasing CP is replaced by an inclined plateau corresponding to the spread of the limiting state along the membrane surface [START_REF] Zabolotsky | Ion transport in membranes[END_REF] and the development of coupled convection. Thus, i_lim^exp characterizes the state when the limiting current density is reached at least on the conductive areas of the membrane surface.
The accuracy of determining the point i_tr of the I-V curve can be increased [START_REF] Choi | Direct measurements of concentration distribution within the boundary layer of an ion-exchange membrane[END_REF] if one takes into account that the extremum of the first derivative of this curve corresponds to the inflection point.
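A minimal numerical sketch of this derivative criterion is given below; the I-V curve used here is synthetic (an assumed analytical shape, not measured data), and the plateau window is an arbitrary choice.

```python
import numpy as np

# Locate the inflection point i_tr of an I-V curve as the extremum (minimum)
# of its first derivative within the plateau region. Synthetic curve for illustration.
dphi = np.linspace(0.0, 2.0, 401)   # potential drop, V (assumed)
i = 2.0 * np.tanh(2.0 * dphi) + 0.8 * dphi**2 / (1.0 + np.exp(-8.0 * (dphi - 1.2)))  # mA/cm^2

di_dphi = np.gradient(i, dphi)            # first derivative of the I-V curve
mask = (dphi > 0.4) & (dphi < 1.6)        # assumed plateau window
k = np.argmin(di_dphi[mask])              # extremum of the derivative inside the plateau
phi_tr, i_tr = dphi[mask][k], i[mask][k]
print(f"inflection point: PD = {phi_tr:.2f} V, i_tr = {i_tr:.2f} mA cm^-2")
```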
Chronopotentiometry
Chronopotentiometry is one of the most informative methods for studying the bulk and surface properties of IEMs. It consists in measuring the PD in the membrane system as a function of time at a constant current density. This method was first applied to electrode systems in electrochemistry and subsequently, practically without changes, was used for the membrane study. The main advantage of chronopotentiometry in comparison with other dynamic methods (impedance, voltammetry) is a direct study of the contribution of different regions (layers, heterogeneities) of the membrane system to the overall PD.
The typical chronopotentiogram (ChP) for a system with a homogeneous IEM, obtained in an overlimiting current regime, is shown in Fig. 1-19. The ChP can be divided into several parts. At time zero the experiment is started but no current is applied yet; since the solutions on either side of the membrane are identical, the voltage drop remains zero. When the fixed current is switched on, an instantaneous increase in PD occurs (Fig. 1-19). This increase is due to the initial ohmic resistance of the system composed of the solution and the membrane between the tips of the voltage measuring capillaries.
After the increase in voltage due to the ohmic resistance, part b commences, corresponding to the slow growth of the PD up to the inflection point 2. It is caused by a drop in the electrolyte concentration near the membrane on the side of the desalination chamber. In this case, the transport of electrolyte from the bulk solution to the membrane surface has an electrodiffusion character. At a certain time this is followed by a strong increase in PD (part c). The point at which this increase occurs is the transition time, τ (point 2). After the inflection point 2, convective transfer plays an important role in the delivery of ions. It should be noted that the inflection point occurs on the ChP only at i > i_lim^exp; otherwise there is no pronounced change in the mechanism of electrolyte delivery associated with the onset of convective transfer. After point 3, the membrane system reaches a stationary state, in which no noticeable changes in the PD occur. The PD in part e is equal to the ohmic PD of the polarized membrane system after the current is turned off. The last segment of the curve describes the relaxation of the membrane system, consisting in the resorption of the concentration gradients formed during the flow of electric current.
An important characteristic of non-stationary ion transport is the transition time, τ [START_REF] Krol | Chronopotentiometry and overlimiting ion transport through monopolar ion exchange membranes[END_REF]. In mathematical models that do not take convective transport into account, this moment corresponds to the drop of the concentration at the membrane/solution boundary to zero, with the PD becoming infinite. In real systems, at this point, the mechanism of ion delivery to the membrane surface changes. There are some difficulties in determining the transition time from the ChP. Krol and co-authors [START_REF] Krol | Chronopotentiometry and overlimiting ion transport through monopolar ion exchange membranes[END_REF] proposed a method for determining τ from the intersection point of the tangents to the regions of slow and rapid growth of the PD in part c.
Another method found in the literature is to determine the transition time as the inflection point of the ChP (point 2 in Fig. 1-19). In models that neglect convective transport, the theoretical transition time is given by the Sand equation:
$$ \tau = \frac{\pi D}{4} \left( \frac{C_i z_i F}{T_i - t_i} \right)^{2} \frac{1}{i^{2}} \qquad (1.15) $$
Here i is the applied current density; C_i is the bulk electrolyte concentration; z_i is the charge number of the counter-ion; T_i and t_i are the counter-ion transport numbers in the membrane and in the solution, respectively. The transition time reaches a minimum for an ideally selective membrane (T_i = 1) and increases with increasing co-ion transfer through the membrane.
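For orientation, the sketch below evaluates Eq. (1.15) for an assumed set of parameters typical of a dilute NaCl solution and a highly selective CEM; all values are illustrative assumptions.

```python
import math

# Illustrative theoretical transition time, Eq. (1.15); all inputs are assumptions.
F = 96485.0     # C/mol, Faraday constant
D = 1.6e-9      # m^2/s, NaCl electrolyte diffusion coefficient
C = 20.0        # mol/m^3 (0.02 M), bulk concentration
z = 1           # charge number of the counter-ion
T_mem = 0.97    # counter-ion transport number in the membrane, T_i (assumed)
t_sol = 0.40    # counter-ion transport number in solution, t_i (Na+ in NaCl)
i = 50.0        # A/m^2, applied (overlimiting) current density, assumed

tau = (math.pi * D / 4.0) * (C * z * F / (T_mem - t_sol)) ** 2 / i ** 2   # Eq. (1.15)
print(f"tau = {tau:.1f} s")   # a few seconds for these conditions
```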
The connection between the transition time and the heterogeneity of the membrane surface is also of great interest. Krol and co-authors [START_REF] Krol | Chronopotentiometry and overlimiting ion transport through monopolar ion exchange membranes[END_REF] experimentally determined the transition time for systems with heterogeneous membranes. This value turned out to be smaller than the theoretical transition time (Eq. 1.15). The authors explained this by the fact that the local current density at the conducting regions of the heterogeneous membrane is above the average over the surface.
The difference in the surface and internal structure of different types of membranes is reflected in their electrochemical properties, which can be seen in ChPs. If the surface is practically homogeneous, as in the case of AMX, the current lines are uniformly distributed over the surface (Fig. 1-20a), and the electrolyte concentration in the solution near the membrane boundary is approximately the same at all points of the surface. The electric current lines condense on the conductive areas of a heterogeneous membrane (Fig. 1-20b). The local current density is substantially higher than the average over the surface, and the boundary concentration here is much lower than the average. In this case, the electrolyte concentration near the conducting regions decreases more rapidly and the state corresponding to the transition time occurs in the system earlier. The different surface structure of the membranes also affects the shape of the ChP (ChPs recorded at 2 mA cm⁻²; adapted from [START_REF] Belova | Effect of Anion-exchange Membrane Surface Properties on Mechanisms of Overlimiting Mass Transfer[END_REF]).
The PD begins to grow much earlier in the case of the heterogeneous membrane than in the case of the homogeneous surface. When the boundary concentration at the conducting regions becomes sufficiently small, the delivery of the electrolyte begins to proceed also in the tangential direction. As a result, the PD across the heterogeneous membrane grows faster at first after the current is switched on in comparison with the homogeneous membrane, and more slowly near the inflection point, which explains the smoothed shape of the curve. Thus, chronopotentiometry makes it possible to characterize the electrical heterogeneity of the membrane surface.
Experimental cells for voltammetry and chronopotentiometry
The main demands for the lab-cells used for studying membrane properties are the data reproducibility and the opportunity to calculate the thickness of the diffusion boundary layer.
A cell with a rotating membrane disk (RMD) [START_REF] Zabolotskiy | Ion transport and electrochemical stability of strongly basic anion-exchange membranes under high current electrodialysis conditions[END_REF] and cells with laminar solution flow [START_REF] Volodina | Ion transfer across ion-exchange membranes with homogeneous and heterogeneous surfaces[END_REF][START_REF] Belova | Effect of Anion-exchange Membrane Surface Properties on Mechanisms of Overlimiting Mass Transfer[END_REF] are convenient for experimental and theoretical studies.
The central element of the RMD cell is a rotating glass tube (2). An IEM (1) is attached at the end portion of the tube. Inside the tube there are inlet (4) and outlet (5) solution capillaries and a polarizing platinum electrode (6), as well as one of the Luggin capillaries (7).
The rotating glass tube with its polarizing platinum electrode and membrane forms the anode chamber. The bottom half-cell (3), which is the cathodic chamber, contains the second platinum polarizing electrode (6) and the second Luggin capillary (7). (Fragment of the figure caption: … reference electrodes, 11 – pulley; adapted from [START_REF] Zabolotskiy | Ion transport and electrochemical stability of strongly basic anion-exchange membranes under high current electrodialysis conditions[END_REF].)
The Luggin capillaries are located on opposite sides of the membrane on an axis passing through the centre of the membrane disc. Measurement of the current-voltage characteristics of the membrane system is carried out in galvanostatic mode, gradually increasing the current density with the galvanostat (8). The potential drop across the membrane is recorded on a millivoltmeter (9) with silver chloride electrodes (10). The rotation speed is varied from 100 to 500 rpm and measured using an optical-mechanical transducer coupled to the digital display unit. In this range of speeds a laminar fluid flow regime is observed.
The diagram of the measuring cell with a laminar flow is presented in Fig. 1-23 [START_REF] Volodina | Ion transfer across ion-exchange membranes with homogeneous and heterogeneous surfaces[END_REF]. The chambers are formed by membranes (1) as well as by plastic (2) and elastic (3,4) gaskets with a square aperture of 2 × 2 cm (5). The thickness of the plastic gaskets is 5 mm. The thickness of elastic gasket (3) and ( 4) is 0.5 and 0.95 mm, respectively. Hence, the distance between the neighbouring membranes, h, is about 7 mm. The chambers adjoining the membrane under study (A*) are separated from the electrode chambers by a CEM (C) from the side of the platinum plane cathode (6) and by an AEM (A) from the side of the platinum plane anode (7).
Two Luggin capillaries (8) are used for sampling the solution from the layers adjoining the studied membrane on both sides. The face planes of the Luggin capillaries, of 0.8 mm diameter, are placed at the center of the membrane at an angle of 45° at a distance of about 0.5 mm from the membrane surface. The flow rate of this solution does not exceed 5 % of the solution flow rate through the desalination and concentration chambers. Two Ag/AgCl electrodes (9) used to measure the potential difference are placed in the Luggin capillaries. Each chamber is supplied with solution through the connecting pipes (10) built into the plastic gaskets. Special comb-shaped guides for the solution (11), installed after the inlet and before the outlet connecting pipes, provide laminar, uniform flow of the solution between the membranes in the cell chambers. The cell is positioned in such a way that the plane of the membranes is vertical. The solution flows through the cell from the bottom to the top.
In the case of the cell with the rotating disk the solution viscosity, ν, and the angular velocity of the disk rotation, ω, determine the DBL thickness:
$$ \delta = 1.61\, D^{1/3}\, \nu^{1/6}\, \omega^{-1/2} \qquad (1.16) $$
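As a quick check of the order of magnitude given by Eq. (1.16), the sketch below computes δ for an assumed rotation speed; the diffusion coefficient and viscosity are typical values for a dilute aqueous solution.

```python
import math

# Illustrative DBL thickness at a rotating membrane disk, Eq. (1.16); inputs are assumptions.
D = 1.6e-9       # m^2/s, electrolyte diffusion coefficient
nu = 1.0e-6      # m^2/s, kinematic viscosity
rpm = 300.0      # disk rotation speed, within the 100-500 rpm range mentioned above
omega = 2.0 * math.pi * rpm / 60.0   # rad/s, angular velocity

delta = 1.61 * D ** (1.0 / 3.0) * nu ** (1.0 / 6.0) * omega ** (-0.5)
print(f"delta = {delta * 1e6:.0f} um")   # a few tens of micrometers
```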
In the case of the cell with a laminar solution flow the thickness of DBL can be approximately calculated using the following equation deduced from the convection-diffusion model proposed by Lévêque [START_REF] Lévêque | Les lois de la transmission de chaleur par convection[END_REF]:
$$ \delta = 1.02\, h \left( \frac{L D}{h^{2} V} \right)^{1/3}, \qquad 10^{-4} < \frac{L D}{V h^{2}} < 0.02 \qquad (1.17) $$
Here L is the length of the desalination path; h is the intermembrane distance in the desalination chamber; V is the flow velocity of the solution through the desalination chamber.
A more accurate calculation may be made using the following equation [START_REF] Zabolotsky | Ion transport in membranes[END_REF]:
$$ \mathrm{Sh}_{av} = 1.85 \left( \mathrm{Re}\,\mathrm{Sc}\, \frac{2h}{L} \right)^{1/3} - 0.4, \qquad \frac{L D}{V h^{2}} \le 0.02 \qquad (1.18) $$
where Sh_av is the Sherwood number, and
$$ \mathrm{Re} = \frac{2 h V}{\nu}, \qquad \mathrm{Sc} = \frac{\nu}{D} \qquad (1.19) $$
are the Reynolds and Schmidt numbers. The Sherwood number may be presented in the form
$$ \mathrm{Sh}_{av} = \frac{2h}{\delta} \qquad (1.20) $$
which allows the calculation of δ after calculating Sh_av using Eq. (1.18).
Eq. (1.18) also allows calculation of the LCD when applying the Peers equation [START_REF] Nikonenko | Desalination at overlimiting currents: State-of-the-art and perspectives[END_REF]:
$$ i_{\lim} = \frac{F D C^{0}}{\delta\, (T_i - t_i)} \qquad (1.21) $$
Substituting Eqs (1.18) and (1.20) in Eq. (1.21), we find:
$$ i_{\lim} = \frac{F D C^{0}}{h\, (T_i - t_i)} \left[ 1.47 \left( \frac{h^{2} V}{L D} \right)^{1/3} - 0.2 \right] \qquad (1.22) $$
Volodina et al. [START_REF] Volodina | Ion transfer across ion-exchange membranes with homogeneous and heterogeneous surfaces[END_REF] have shown that, when using the lab-cell presented in Fig. 1-23 with a homogeneous membrane and negligible coupled effects of CP such as EC, gravitational convection and water splitting, the experimental values of i_lim and those calculated using Eq. (1.22) are in good agreement.
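The short sketch below chains Eqs. (1.18)-(1.20) and (1.22) for the laminar flow cell described above; the geometrical values loosely follow the cell description (2 cm aperture, about 7 mm intermembrane distance), while the flow velocity, concentration and transport numbers are illustrative assumptions.

```python
# Illustrative DBL thickness and limiting current density for the laminar flow cell,
# Eqs. (1.18)-(1.20) and (1.22). All input values are assumptions.
F = 96485.0      # C/mol
D = 1.6e-9       # m^2/s, NaCl diffusion coefficient
C0 = 20.0        # mol/m^3 (0.02 M), inlet concentration (assumed)
h = 7.0e-3       # m, intermembrane distance (cell described above)
L = 2.0e-2       # m, length of the desalination path (2 cm aperture)
V = 4.0e-3       # m/s, average flow velocity (assumed)
nu = 1.0e-6      # m^2/s, kinematic viscosity
T_mem, t_sol = 0.97, 0.40   # counter-ion transport numbers in membrane and solution (assumed)

Re = 2.0 * h * V / nu                                         # Eq. (1.19)
Sc = nu / D
Sh_av = 1.85 * (Re * Sc * 2.0 * h / L) ** (1.0 / 3.0) - 0.4   # Eq. (1.18)
delta = 2.0 * h / Sh_av                                       # Eq. (1.20)
i_lim = (F * D * C0 / (h * (T_mem - t_sol))
         * (1.47 * (h ** 2 * V / (L * D)) ** (1.0 / 3.0) - 0.2))   # Eq. (1.22)

print(f"delta = {delta * 1e6:.0f} um, i_lim = {i_lim:.1f} A/m^2")
```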
Fouling
Membrane fouling in separation processes results in a loss of performance of a membrane due to the deposition of suspended or dissolved substances on its external surfaces, at its pore openings, or within its pores [START_REF] Koros | Terminology for membranes and membrane processes[END_REF]. This process is one of the key problems for modern chemical, agricultural, food and pharmaceutical processing and for water treatment. Fouling of IEMs can be divided into four categories: colloidal fouling, organic fouling, biofouling and scaling [START_REF] Mikhaylin | Fouling on ion-exchange membranes: Classification, characterization and strategies of prevention and control[END_REF]. In our study, we focus on scaling phenomena.
Scale forming mechanisms
Scaling is a complex phenomenon including both crystallization and transport mechanisms.
Thermodynamically, crystallization becomes possible when the activity of ions in solution is above their saturation limit and the solution is supersaturated. In addition, the kinetics of precipitation is a key determinant of the severity of scaling. When the supersaturation exceeds a critical value, nucleation of crystal particles occurs, and then these particles grow. A higher concentration of nucleation sites accelerates the crystallization kinetics. Heterogeneous nucleation, when "foreign" particles act as a "scaffold" for crystal growth, takes place more quickly. Some of the most typical nucleation sites are small inclusions or scratches in the surface on which the crystal grows.
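A minimal sketch of the corresponding supersaturation check for calcite is given below; the solubility product is a commonly tabulated value, while the surface concentrations are arbitrary assumptions and activity coefficients are neglected for simplicity.

```python
import math

# Illustrative supersaturation check for calcite (CaCO3); inputs are assumptions.
Ksp_calcite = 3.3e-9    # (mol/L)^2, solubility product of calcite at 25 C (tabulated value)
c_Ca = 2.0e-3           # mol/L, assumed Ca2+ concentration at the membrane surface
c_CO3 = 5.0e-6          # mol/L, assumed CO3^2- concentration (depends strongly on pH)

IAP = c_Ca * c_CO3                    # ion activity product (concentrations as a crude proxy)
SI = math.log10(IAP / Ksp_calcite)    # saturation index: SI > 0 means supersaturation
state = "supersaturated" if SI > 0 else "undersaturated"
print(f"IAP = {IAP:.2e}, SI = {SI:+.2f} -> {state}")
```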
If the solution region, where the product of ion concentrations exceeds the solubility product, is near the membrane/solution interface, the scale formation occurs on the membrane surface (surface crystallization). In the case where this region is far from the surface, the scale crystals are formed in the bulk of the processed solution (bulk crystallization) [START_REF] Lee | Effect of operating conditions on CaSO 4 scale formation mechanism in nanofiltration for water softening[END_REF][START_REF] Lee | Scale formation in NF/RO: mechanism and control[END_REF]. A schematic representation of these crystallization processes is shown in Fig. 1-24. Scaling in membrane systems is a combination of these two extreme mechanisms and is affected by membrane morphology and process conditions.
Surface crystallization results in flux decline and surface blockage. Crystals formed in the bulk may deposit on membrane surfaces as sediments/particles and form a cake layer that also leads to flux decline. In addition, the supersaturated solution itself creates conditions for scale growth and agglomeration: owing to random collisions of ions with particles, secondary crystallization occurs on the surface of the foreign bodies present in the bulk phase [28,30].
Fig. 1-24. Schematic illustration of scale formation schemes (adapted from [START_REF] Antony | Scale formation and control in high pressure membrane water treatment systems: A review[END_REF]).
Simultaneous bulk and surface crystallization may also occur under high-recovery operating conditions. The Mg²⁺, Ca²⁺, HCO₃⁻ and SO₄²⁻ ions are the main cause of the scaling phenomenon in ED treatment [START_REF] Mikhaylin | Fouling on ion-exchange membranes: Classification, characterization and strategies of prevention and control[END_REF][START_REF] Choi | Electrodialysis for desalination of brackish groundwater in coastal areas of Korea[END_REF][START_REF] Van Geluwe | Van der Bruggen Evaluation of electrodialysis for scaling prevention of nanofiltration membranes at high water recoveries[END_REF]. The membrane structure, the regime of ED treatment and the solution temperature also affect the precipitation [START_REF] Franklin | Prevention and control of membrane fouling: practical implications and examining recent innovations[END_REF]. As the temperature decreases, the solubility decreases and the solution becomes supersaturated, which leads to crystallization of the salt.
The pH of the treated solution is also one of the important factors for scale formation. OH⁻ ions cause the formation of such precipitates as Mg(OH)₂ and Ca(OH)₂ [START_REF] Gence | pH dependence of electrokinetic behavior of dolomite and magnesite in aqueous electrolyte solutions[END_REF]:
Mg²⁺(aq) + 2OH⁻(aq) → Mg(OH)₂(s)    (1.23)
Ca²⁺(aq) + 2OH⁻(aq) → Ca(OH)₂(s)    (1.24)
A higher pH shifts the carbonic acid equilibrium towards carbonate ions, which participate in scaling:
Mg²⁺(aq) + CO₃²⁻(aq) → MgCO₃(s)    (1.25)
Ca²⁺(aq) + CO₃²⁻(aq) → CaCO₃(s)    (1.26)
An additional factor affecting the scale formation is the ratio of the scaling ions due to the competition in their migration from the dilute chamber towards the concentrate and due to the cross-effects of different scaling ions on nucleation and crystal growth [START_REF] Cifuentes-Araya | Multistep mineral fouling growth on a cation-exchange membrane ruled by gradual sieving effects of magnesium and carbonate ions and its delay by pulsed modes of electrodialysis[END_REF][START_REF] Mikhaylin | Intensification of demineralization process and decrease in scaling by application of pulsed electric field with short pulse/pause conditions[END_REF].
Asraf-Snir et al. [START_REF] Asraf-Snir | Gypsum scaling on anion exchange membranes during donnan exchange[END_REF][START_REF] Asraf-Snir | Gypsum scaling on anion exchange membranes in electrodialysis[END_REF] investigated the effect of the IEM surface structure on gypsum scale formation. They found that in the case of the heterogeneous anion-exchange MA-40 membrane, scale formation occurs mainly on the surface of the conductive areas, which are ion-exchange resin beads incorporated in the polyethylene supporting matrix. The amount of deposit is essentially lower on the surface of a homogeneous anion-exchange AMV membrane [START_REF] Asraf-Snir | Gypsum scaling on anion exchange membranes during donnan exchange[END_REF][START_REF] Asraf-Snir | Gypsum scaling on anion exchange membranes in electrodialysis[END_REF]. Moreover, the scale forms not only on the heterogeneous membrane surface, but also within its macropores. The membrane thickness, the scaling content (SC) and the membrane electrical conductivity are characteristics that can be determined by direct methods [START_REF] Cifuentes-Araya | Multistep mineral fouling growth on a cation-exchange membrane ruled by gradual sieving effects of magnesium and carbonate ions and its delay by pulsed modes of electrodialysis[END_REF]. SC can be calculated as follows:
$$ SC = \frac{W_w - W_c}{W_c} \times 100\,\% \qquad (1.27) $$
Here W_w and W_c are the dry weights of the working membrane after ED and of the control membrane, respectively.
There is a wide range of methods in the literature to measure the membrane electrical conductivity [9]. Belaid et al. [START_REF] Belaid | Conductivité membranaire: interpretation et exploitation selon le modèle à solution interstitielle hétérogène[END_REF] and Lteif et al. [START_REF] Lteif | Auclair Conductivité électrique membranaire: étude de l'effet de la concentration, de la nature de l'électrolyte et de la structure membranaire[END_REF] developed a method using a clip-cell with two platinum electrodes to measure the conductance of the solution with the membrane, G_{m+s}, and without the membrane, G_s (Fig. 1-26, adapted from [START_REF] Lteif | Auclair Conductivité électrique membranaire: étude de l'effet de la concentration, de la nature de l'électrolyte et de la structure membranaire[END_REF]). As the membrane conductance depends on temperature, a thermostat maintains a constant temperature during the experiment. The electrical resistance of the membrane can be calculated using Eq. (1.28).
$$ R_m = \frac{1}{G_m} = \frac{1}{G_{m+s}} - \frac{1}{G_s} \qquad (1.28) $$
Here R_m is the transverse electric resistance of the membrane and G_m is the conductance of the membrane. Finally, the membrane electrical conductivity, κ_m, can be calculated as [START_REF] Lteif | Auclair Conductivité électrique membranaire: étude de l'effet de la concentration, de la nature de l'électrolyte et de la structure membranaire[END_REF]:
$$ \kappa_m = \frac{L}{R_m S} \qquad (1.29) $$
Here L is the membrane thickness; S is the electrode area.
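A minimal sketch of this differential calculation (Eqs. 1.28-1.29) is shown below; the conductance readings, membrane thickness and electrode area are assumed values chosen only to illustrate the order of magnitude.

```python
# Illustrative membrane conductivity from clip-cell conductances, Eqs. (1.28)-(1.29).
# The measured conductances and cell geometry below are assumed values.
G_ms = 9.0e-3     # S, conductance of solution + membrane (assumed measurement)
G_s = 10.0e-3     # S, conductance of the solution alone (assumed measurement)
L_m = 0.5e-3      # m, membrane thickness
S = 1.0e-4        # m^2 (1 cm^2), electrode area

R_m = 1.0 / G_ms - 1.0 / G_s      # Eq. (1.28), transverse membrane resistance, Ohm
kappa_m = L_m / (R_m * S)         # Eq. (1.29), membrane conductivity, S/m
print(f"R_m = {R_m:.2f} Ohm, kappa_m = {kappa_m:.2f} S/m")
```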
Generally, scaling leads to an increase in the membrane resistance through the formation of a surface scaling layer or internal membrane scaling and through the deposition of scale crystals on the membrane ion-exchange groups [START_REF] Araya-Farias | Electrodialysis of calcium and carbonate highconcentration solutions and impact on membrane fouling[END_REF]. The disadvantage of this method is that, during immersion of the scaled membranes into the working solution (usually NaCl), the scale can become detached from the membrane, leading to an underestimation of the results.
1.4.2.2.b Voltammetry and chronopotentiometry
Voltammetry and chronopotentiometry are privileged methods to study the electrochemical behavior of IEMs in the presence of scaling [70,[START_REF] Martí-Calatayud | Ion transport through homogeneous and heterogeneous ion-exchange membranes in single salt and multicomponent electrolyte solutions[END_REF]. The formation of scale on the membrane surface or inside the membrane leads to changes in the shape of the I-V curve: for example, significant changes in the slope of the initial linear region, in i_lim and in the plateau length of the scaled membrane in comparison with the pristine membrane [START_REF] Lee | Analysis of fouling potential in the electrodialysis process in the presence of an anionic surfactant foulant[END_REF].
As was mentioned above, the electrical resistance of the scaled membrane increases in comparison with the pristine one, which changes the slope of the quasi-ohmic region of the I-V curve of the scaled membrane. The extended plateau is related to the formation of precipitates at the membrane surface. The decrease in i_lim for the scaled membrane is attributed to the presence of areas of different conductance on the IEM surface. This method provides important information for operating ED processes so as to minimize scaling effects. Its disadvantages are that it takes a lot of time and usually requires additional knowledge to interpret the results. Also, during the sample preparation and I-V curve registration, weakly bound scale can be modified and/or detached.
Martí-Calatayud et al. [START_REF] Martí-Calatayud | Ion transport through homogeneous and heterogeneous ion-exchange membranes in single salt and multicomponent electrolyte solutions[END_REF] studied the process of scale formation on a homogeneous Nafion 117 membrane and on a heterogeneous HDX membrane. In the case of Nafion 117 they observed a second increase of the PD. They supposed that this increase relates to the formation of a precipitate layer at the anodic side of the membrane, which was visually confirmed at the end of the experiments. In the case of the HDX membrane the PD increases immediately after the current is applied. This different response may indicate the presence, in the heterogeneous membrane, of some pores more prone to the formation of precipitates, where this phenomenon starts from the beginning of the current pulse. Kang et al. [START_REF] Kang | Effects of inorganic substances on water splitting in ion-exchange membranes: I. Electrochemical characteristics of ion-exchange membranes coated with iron hydroxide/oxide and silica sol[END_REF] reported a decrease of the fraction of the conducting region by 10 to 25 % for different AEMs with deposited silica compounds, and by 7 to 27 % for the Nafion 117 membrane with embedded Fe³⁺ hydroxide and oxide.
1.4.2.2.c Contact angles
The membrane surface hydrophobicity can be investigated by contact angle measurements [68,[START_REF] Nikonenko | Intensive current transfer in membrane systems: Modelling, mechanisms and application in electrodialysis[END_REF]. Usually, the contact angles, θ, of different membranes are measured with a goniometer with a measurement range of 0-180°. The measurement procedure consists in registering the contact angle between a drop of liquid (distilled water) and the membrane surface [START_REF] Hamilton | A technique for the characterization of hydrophilic solid surfaces[END_REF]. A drop of liquid may be applied onto a dry membrane or onto a swollen membrane (sessile drop technique) mopped with a filter paper to remove the excess water from the surface (Fig. 1-28a). Alternatively, the membrane can be immersed into a liquid and a drop of another liquid (immersed method) [START_REF] Hamilton | A technique for the characterization of hydrophilic solid surfaces[END_REF] or an air bubble (captive bubble method) [START_REF] Yuan | Contact angle and wetting properties[END_REF] can be applied onto the membrane surface (Fig. 1-28b, adapted from [START_REF] Hamilton | A technique for the characterization of hydrophilic solid surfaces[END_REF][START_REF] Zhang | Membrane Characterization by the Contact Angle Technique: II. Characterization of UF-Membranes and Comparison between the Captive Bubble and Sessile Drop as Methods to obtain Water Contact Angles[END_REF]).
The studies of Bukhovets et al. [START_REF] Bukhovets | Fouling of anion-exchange membranes in electrodialysis of aromatic amino acid solution[END_REF] revealed an increase of the AEM hydrophobicity when the membrane was fouled by phenylalanine. Similar increases in the AEM surface hydrophobicity were reported by Guo et al. [START_REF] Guo | Analysis of anion exchange membrane fouling mechanism caused by anion polyacrylamide in electrodialysis[END_REF], who studied polyacrylamide fouling, and by Lee et al. [START_REF] Lee | Fouling of an anion exchange membrane in the electrodialysis desalination process in the presence of organic foulants[END_REF], who studied humate, SDBS and bovine serum albumin fouling. The simplicity and rapidity of the contact angle method make it very attractive for analyzing scaling phenomena. However, the scale can be partially removed during the mopping procedure aimed at removing the excess water from the surface. Furthermore, IEMs lose their inner water in air and become dry rather quickly, which demands certain skills from the researcher to accomplish the analysis quickly enough.
However, as far as we know, there are no studies where the contact angle is measured on the membrane surface containing salt precipitates.
X-ray diffraction analysis
X-ray diffraction (XRD) is a method for the identification of the atomic and molecular structure of a crystal. An XRD diffractometer consists of a source of monochromatic X-rays, which are focused on a sample at some angle θ. The intensity of the diffracted X-rays is analyzed by a detector placed at 2θ from the source path. Diffraction happens when the X-rays strike the crystalline surface and are partially scattered by atoms from different layers of the crystalline lattice separated by a certain interlayer distance d. If the X-ray beams diffracted by two different layers are in phase, constructive interference occurs and the diffraction results in a peak on the diffractogram. However, if the beams are out of phase, destructive interference occurs and there is no peak. Hence, only crystalline solids are detected by XRD; its limitation is that amorphous materials remain undetected [START_REF] Guinier | X-ray diffraction in crystals, imperfect crystals, and amorphous bodies[END_REF].
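The in-phase condition corresponds to Bragg's law, nλ = 2d sin θ. The sketch below uses it to estimate where the main reflections of two typical scale phases would appear for Cu Kα radiation; the d-spacings are commonly tabulated values and should be checked against a reference database.

```python
import math

# Peak positions from Bragg's law (n*lambda = 2*d*sin(theta)), first order (n = 1).
# d-spacings are commonly tabulated values for two typical scale phases (to be verified).
wavelength = 1.5406e-10   # m, Cu K-alpha radiation
d_spacings = {"calcite (104)": 3.035e-10, "brucite (001)": 4.77e-10}   # m

for phase, d in d_spacings.items():
    theta = math.asin(wavelength / (2.0 * d))
    print(f"{phase}: 2*theta = {math.degrees(2.0 * theta):.1f} deg")
```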
The study of XRD spectra makes it possible to reveal the crystalline structures of the precipitates formed on IEMs treated by ED. The only pretreatment of a scaled membrane prior to XRD is drying. Casademont et al. [START_REF] Casademont | Bilayered Self-Oriented Membrane Fouling and Impact of Magnesium on CaCO 3 Formation during Consecutive Electrodialysis Treatments[END_REF] reported the transformation of calcium carbonate and of Ca and Mg hydroxides during consecutive ED treatments. More detailed XRD studies of the scaling formation mechanisms and of the effects of non-stationary electric fields were conducted by Cifuentes-Araya et al. [START_REF] Cifuentes-Araya | Multistep mineral fouling growth on a cation-exchange membrane ruled by gradual sieving effects of magnesium and carbonate ions and its delay by pulsed modes of electrodialysis[END_REF]. These authors reported the formation of a multilayer scale composed of calcite, brucite and aragonite crystals during consecutive ED treatments. Since the scaling agents are Ca²⁺, Mg²⁺ and other multiply charged ions, the modification of the membrane surface by a layer selective to the transfer of singly charged ions is an option. It is known that the deposition of organic substances on the membrane surface can be reduced when the membrane is covered with a thin ion-conductive layer bearing a fixed charge opposite to the charge of the supporting membrane matrix [START_REF] Sata | Studies on anion exchange membranes having permselectivity for specific anions in electrodialysis -effect of hydrophilicity of anion exchange membranes on permselectivity of anions[END_REF]. As a result, the resistance of the composite membrane towards multiply charged counter-ions, which cause scaling, increases. A similar effect is obtained by modification of the membrane surface with high-molecular surface-active substances [START_REF] Grebenyuk | Surface modification of anion-exchange electrodialysis membranes to enhance anti-fouling characteristics[END_REF].
The above modification reduces the transfer of multicharged ions from the diluate to the concentrate chamber, which mitigates or prevents scaling in the concentrate chamber. However, in the diluate chamber the situation may be even worse: the increased local concentration of multicharged ions at the depleted membrane surface, together with a possible increase in pH due to water splitting, may provoke the formation of hydroxides of multicharged ions able to precipitate.
In practice, the casting of a conductive perfluorinated resin solution of Nafion® or of its Russian analogue MF-4SK on the surface of a CEM leads to a 1.5-fold growth of the limiting current density in comparison with the unmodified membrane [34][35][36]. This effect is explained by a better distribution of the current lines after the surface modification. Besides, the membrane surface becomes relatively more hydrophobic, which is beneficial for EC [34][35][36][START_REF] Nebavskaya | Impact of ion exchange membrane surface charge and hydrophobicity on electroconvection at underlimiting and overlimiting currents[END_REF].
The positive effect of applying a homogeneous film to the membrane surface is supported by the theoretical studies of Rubinstein et al. [START_REF] Rubinstein | Ion-exchange funneling in thin-film coating modification of heterogeneous electrodialysis membranes[END_REF]. The current lines are distorted in the solution near an electrically heterogeneous surface: they are accumulated at the conductive areas, causing higher CP [START_REF] Rubinstein | Ion-exchange funneling in thin-film coating modification of heterogeneous electrodialysis membranes[END_REF], which should lead to increased scale formation. It is shown [START_REF] Rubinstein | Ion-exchange funneling in thin-film coating modification of heterogeneous electrodialysis membranes[END_REF] that the application of a homogeneous conductive layer on a heterogeneous surface diminishes the funnel-shaped distribution of current lines in the solution near the membrane surface. Even a very thin and weakly charged conductive layer results in a homogenization of the current line distribution over the membrane surface and in the near-surface solution. As a consequence, the CP at the membrane surface decreases and the LCD increases [START_REF] Rubinstein | Ion-exchange funneling in thin-film coating modification of heterogeneous electrodialysis membranes[END_REF].
Cleaning agents
Cleaning agents are used to remove the scale from the surface of IEMs or to prevent scale formation during ED treatment. He et al. [START_REF] He | Effects of antiscalants to mitigate membrane scaling by direct contact membrane distillation[END_REF] showed that the application of antiscalants helps to prevent membrane scaling by calcite and gypsum. The effects of antiscalants were investigated during the direct contact membrane distillation process implemented with porous hydrophobic polypropylene hollow fibers having a porous fluorosilicone coating on the outside surface of the fibers. Antiscalant K752, a polyacrylic acid and sodium polyacrylate based compound, is more effective in inhibiting CaSO₄ scaling than the other antiscalants tested, i.e., K797, GHR, GLF and GSI. However, a great amount of scale deposits was observed on the membrane surface after washing with de-ionized water [START_REF] Wang | Cation-exchange membrane fouling and cleaning in bipolar membrane electrodialysis of industrial glutamate production wastewater[END_REF]. Hydraulic cleaning was limited by the strong adhesion force between the scalants and the membrane surface. Acid cleaning may improve the removal efficiency as a result of the neutralization and/or hydrolysis reactions between hydrochloric acid and the mineral deposits, which dissolve the scalants (calcium or magnesium hydroxides or their carbonates) into the acid solution [START_REF] Wang | Cation-exchange membrane fouling and cleaning in bipolar membrane electrodialysis of industrial glutamate production wastewater[END_REF].
Pretreatment: Pressure-driven membrane processes
Microfiltration, ultrafiltration, nanofiltration and reverse osmosis are successfully used as a pretreatment step prior to ED [START_REF] Chen | Green Production of Ultrahigh-Basicity Polyaluminum Salts with Maximum Atomic Economy by Ultrafiltration and Electrodialysis with Bipolar Membranes[END_REF][START_REF] Mikhaylin | Hybrid bipolar membrane electrodialysis/ultrafiltration technology assisted by pulsed electric field for casein production[END_REF]. A pressure gradient is used as the driving force and allows the separation of particles according to their size or molecular weight.
There are two types of filtration: dead-end filtration and cross-flow filtration.
Mechanical action
Ultrasound has been demonstrated to be an effective method to remove fouling materials from the membrane surface [START_REF] Wang | Cation-exchange membrane fouling and cleaning in bipolar membrane electrodialysis of industrial glutamate production wastewater[END_REF]. The physical phenomenon involved in ultrasonic cleaning is cavitation, which can be defined as the formation, growth, and implosive collapse of bubbles in a liquid medium subjected to a large negative pressure. The cavitational collapse produces a number of phenomena that result in the breakup and dislodgement of the fouling layer from the membrane surface. After 120 s of ultrasonic cleaning, 19.9 % of oxygen and 71.5 % of calcium were removed compared with the untreated membrane [START_REF] Wang | Cation-exchange membrane fouling and cleaning in bipolar membrane electrodialysis of industrial glutamate production wastewater[END_REF].
The combination of acid cleaning with ultrasound showed that after 60 s of ultrasound treatment in a 1 % HCl solution, the scale deposits observed on the treated CEM surface were almost completely removed. The physical and chemical membrane parameters indicated that the performance of the cleaned membrane was restored [START_REF] Wang | Cation-exchange membrane fouling and cleaning in bipolar membrane electrodialysis of industrial glutamate production wastewater[END_REF]. Concentration polarization causes limitations in ED. Water splitting occurs at currents higher than the limiting current. The resulting pH changes can lead to scale formation on the IEM surface. In order to obtain the maximum ion flux per unit membrane area, it is desirable to operate at the highest possible current density. Grossman et al. [START_REF] Grossman | Experimental study of the effects of hydrodynamics and membrane fouling in electrodialysis[END_REF] increased the limiting current by changing the hydrodynamic conditions. They showed that changing the flow rate and using spacer materials promoting turbulence mitigate scaling on the IEM surface. The spacer design is important in terms of increasing the current efficiency. The impact of air sparging on the ED process was shown in Ref. [START_REF] Balster | Towards spacer free electrodialysis[END_REF]. The authors compared different gas/liquid ratios in combination with various spacer configurations and with the situation without a feed spacer. The mass transfer increase was most effective in a spacer-free cell, where the mass transfer increased linearly with increasing gas/liquid ratio. Also, the spacer-free cell combined the highest mass transfer increase with the lowest increase in cell resistance. This approach allows increasing the solution turbulence, which may also be advantageous in reducing membrane scale.
1.4.3.5.b Electrodialysis reversal
In the 1950s, Ionics started experiments with systems which showed a radical improvement in scale control [START_REF] Katz | The electrodialysis reversal process[END_REF]. They proposed to use the electrodialysis reversal (EDR) process, which results in the prevention of calcium carbonate and calcium sulfate scale formation. Japanese researchers have also contributed considerably to controlling the scaling process; they used ED treatment for the production of sodium chloride from sea-water [57,[START_REF] Wang | Membrane and desalination technologies[END_REF]. The key feature of EDR is the simultaneous change of the electrode polarity and of the flow direction [57]. The electric field is alternately reversed to dissolve the salt scale deposited on the membranes. The desalination channels become concentration channels, and vice versa, the concentration channels become desalination channels (Fig. 1-30). The disadvantage of this method is the productivity loss and the waste of finished product when the electrode polarity is reversed.
Fig. 1-30. Scheme of EDR and presence of foulants (adapted from [START_REF] Mikhaylin | Fouling on ion-exchange membranes: Classification, characterization and strategies of prevention and control[END_REF]).
1.4.3.5.c Pulsed electric field
Pulsed electric field (PEF) modes have been proposed as another effective way of combating the consequences of CP, such as scale formation at the IEM interfaces [START_REF] Cifuentes-Araya | Multistep mineral fouling growth on a cation-exchange membrane ruled by gradual sieving effects of magnesium and carbonate ions and its delay by pulsed modes of electrodialysis[END_REF]. In this mode, pulse lapses (T_on) of current or voltage alternate with pause lapses (T_off) of a certain duration.
During the pause lapse, ions are transported from the bulk solution to the membrane surface by diffusion and convection, so the effect of CP diminishes before the next pulse lapse is applied [START_REF] Nikonenko | Intensive current transfer in membrane systems: Modelling, mechanisms and application in electrodialysis[END_REF][START_REF] Mishchuk | Intensification of electrodialysis by applying a non-stationary electric field[END_REF]. The rate of ED desalination is higher when applying a non-stationary regime with short current or voltage pulses [START_REF] Cifuentes-Araya | Multistep mineral fouling growth on a cation-exchange membrane ruled by gradual sieving effects of magnesium and carbonate ions and its delay by pulsed modes of electrodialysis[END_REF][START_REF] Mishchuk | Intensification of electrodialysis by applying a non-stationary electric field[END_REF]. The duration of the pulse/pause lapses is very important for the intensification of ED. Sistat et al. [START_REF] Uzdenova | Effect of electroconvection during pulsed electric field electrodialysis. Numerical experiments[END_REF] compared the desalination rate of sodium chloride during ED under constant applied voltage and under PEF modes. They showed that the average current density and the desalination rate increase with increasing frequency of the pulse lapses from 1 Hz to 100 Hz.
Also, the duration of pulse/pause lapses affects essentially the amount and the composition of the salt deposits [START_REF] Cifuentes-Araya | Multistep mineral fouling growth on a cation-exchange membrane ruled by gradual sieving effects of magnesium and carbonate ions and its delay by pulsed modes of electrodialysis[END_REF][START_REF] Cifuentes-Araya | Impact of pulsed electric field on electrodialysis process performance and membrane fouling during consecutive demineralization of a model salt solution containing a high magnesium/calcium ratio[END_REF].
During pause lapses the concentration depolarization occurs. As a result the electric resistance of depleted solution reduces and the salt deposits dissolve, at least partially. EC plays an important role in this process. EC appears as a result of the action of electric field on the space electric charge at the membrane surface. The transfer of solution by EC intensifies the exchange between the near-surface solution layer and the bulk solution [START_REF] Rubinstein | Electro-convective versus electroosmotic instability in concentration polarization[END_REF][START_REF] Kwak | Microscale electrodialysis: concentration profiling and vortex visualization[END_REF][START_REF] Nikonenko | Competition between diffusion and electroconvection at an ion-selective surface in intensive current regimes[END_REF][START_REF] Davidson | On the dynamical regimes of pattern-accelerated electroconvection[END_REF] and contributes to reducing scaling [START_REF] Mikhaylin | How physico-chemical and surface properties of cation-exchange membrane affect membrane scaling and electroconvective vortices: Influence on performance of electrodialysis with pulsed electric field[END_REF]. The duration of the current or the voltage pulses should be considerably shorter than the calculated characteristic time to build up the polarization layer. Subsequent studies conducted by Mikhaylin et al. revealed that short PEF modes (1 s -3 s) allow better mitigation of concentration polarization and scaling phenomena [START_REF] Cifuentes-Araya | Multistep mineral fouling growth on a cation-exchange membrane ruled by gradual sieving effects of magnesium and carbonate ions and its delay by pulsed modes of electrodialysis[END_REF].
Indeed, the application of the optimal pulse/pause mode of 2 s/0.5 s allows a 40 % decrease of the CEM scaling and complete inhibition of AEM scaling (just traces of AEM scaling were detected) in comparison with a continuous current mode [START_REF] Cifuentes-Araya | Multistep mineral fouling growth on a cation-exchange membrane ruled by gradual sieving effects of magnesium and carbonate ions and its delay by pulsed modes of electrodialysis[END_REF].
Moreover, Uzdenova et al. [START_REF] Uzdenova | Effect of electroconvection during pulsed electric field electrodialysis. Numerical experiments[END_REF] have shown that EC vortexes persist after pulse lapses. The reason is that electro-osmotic fluid flows remain in the membrane system during pause lapse.
These flows are due to the non-uniform distribution of the counter-ion concentration along the longitudinal coordinate inside the electric double layer. The longitudinal diffusion of the counter-ions inside the space charge region generates an electric current, which in turn causes an electro-osmotic slip of the liquid. Thus, a non-uniform distribution of the space electric force arises, which feeds the vortexes.
1.4.3.5.d Overlimiting current regime
Recently it was shown that the use of overlimiting currents could improve ED performance [START_REF] Nikonenko | Desalination at overlimiting currents: State-of-the-art and perspectives[END_REF]. This is possible if the overlimiting regime leads to the development of electroconvection.
Electroconvective vortices significantly reduce the scaling on the surface of IEMs. This effect is due to the increased convective transfer, which enhances the hydrodynamic action on the scale and decreases water splitting [START_REF] Mikhaylin | How physico-chemical and surface properties of cation-exchange membrane affect membrane scaling and electroconvective vortices: Influence on performance of electrodialysis with pulsed electric field[END_REF][START_REF] Belova | Role of water splitting in development of electroconvection in ion-exchange membrane systems[END_REF]. The latter results in smaller pH changes at the membrane surface and, as a consequence, in the mitigation of scaling. The effect of EC increases for IEMs with a relatively hydrophobic surface [START_REF] Pismenskaya | Evolution with time of hydrophobicity and microrelief of a cation-exchange membrane surface and its impact on overlimiting mass transfer[END_REF] and in the case where PEF is applied [START_REF] Cifuentes-Araya | Multistep mineral fouling growth on a cation-exchange membrane ruled by gradual sieving effects of magnesium and carbonate ions and its delay by pulsed modes of electrodialysis[END_REF].
Accordingly, the scaling is mitigated in these cases [Mishchuk; Mikhaylin].
Nevertheless, another coupled effect of concentration polarization can both mitigate and intensify scale formation. Casademont et al. [Casademont] studied in detail the influence of pH on scaling on the IEM surfaces. It was reported that the membrane scaling formed by minerals of Ca 2+ and Mg 2+ on the AEM surface mostly takes place at neutral pH values, whereas the CEM surface was scaled at basic pH. In addition to the pH factor, the type of IEM (anion-exchange or cation-exchange) can play an important role in scaling formation. Generation of OH - ions by the CEM provides conditions for scaling formation, whereas H + ions generated by the AEM may create a "proton barrier" preventing precipitation of minerals.
Conclusions
In the literature review presented in this chapter, we have considered the structure and main properties of ion-exchange membranes, the methods of their characterization, the approaches to their surface and volume modification, and their main applications in ion separation and energy production. Analysis of the literature shows that there are two main drawbacks of electrodialysis which hinder its wider implementation: concentration polarization and scale formation. Both result in an increase of the resistance of the membrane system and cause higher energy losses. On the other hand, concentration polarization initiates coupled phenomena, which can produce positive or negative effects on the economics of the electrodialysis process. Current-induced convection, mainly electroconvection in dilute solutions, allows enhancement of mass transfer: electroconvective vortexes contribute to the delivery of salt ions from the bulk solution to the membrane interface, as well as to the evacuation of the depleted solution from the interface to the bulk solution. However, water splitting at the depleted membrane interface is a generally undesirable effect. Generation of H + and OH - ions leads to a decrease in current efficiency towards the transfer of salt ions.
Besides, it results in pH changes of the solution, which increase the risk of membrane fouling/scaling. The situation is complicated by the fact that these two effects influence each other. Since electroconvection contributes to the transfer of "fresh" solution to the membrane surface, it diminishes the water splitting rate and mitigates the scale formation on the ion-exchange membrane surface in the overlimiting current regimes.
The development of electroconvection is essentially affected by the nature of the electrolyte solution. The understanding of this effect is important when trying to use electroconvection for optimizing the electrodialysis treatment of natural multicomponent solutions prone to scale formation. This effect was studied under stationary conditions (using voltammetry) by Choi and Moon. They have shown that the intensity of electroconvection increases with the degree of counter-ion hydration. However, it remains unclear how the ability of counter-ions to set a volume of water in motion, as well as the membrane surface properties, influence the mechanism of electroconvection development under non-stationary conditions (chronopotentiometry).
In practice, there are several approaches for scaling mitigation. Pretreatment and electrodialysis reversal are sufficiently mature methods. The pretreatment step is used to remove the particles causing scale formation from the feed solution, according to their size or weight, by microfiltration, ultrafiltration and nanofiltration. The key feature of electrodialysis reversal is the simultaneous change of the electrode polarity and the flow direction. The disadvantage of this method is the loss in current efficiency, capacity and final product during the polarity-switching operation.
Another approach is the modification of the membrane surface by a layer that is selective to the transfer of singly charged ions. The fluxes of scaling agents, which are Ca 2+, Mg 2+, SO 4 2- and other multiply charged ions, are reduced, so the concentration of these ions remains low in the concentrate chamber and the scale does not form there. Recently it has been shown that polymerization of polyaniline on the surface of a cation-exchange membrane results in the appearance of selectivity for the transfer of singly charged ions and in a decrease of the water transport number. Modification of the membrane with polyaniline leads to a smoother surface, which should reduce the risk of nucleation of scale crystals on the membrane surface.
However, the formation of an internal bipolar boundary between the cation- and anion-exchange layers leads to catalytic water splitting, which can stimulate the rate of scale formation on the ion-exchange membrane. In this connection, it would be interesting to study how polyaniline modification of heterogeneous membranes affects the scaling process.
Pulsed electric field modes are proposed as another effective way of combating the undesirable consequences of concentration polarization and scale formation at ion-exchange membrane interfaces. It was shown that the duration of pulse and pause lapses of high frequency essentially affects the amount and the composition of the salt deposits. During pause lapses, the concentration depolarization occurs. As a result, the electric resistance of the depleted solution decreases and the salt deposits dissolve, at least partially. It was also shown that the application of low-frequency pulsed electric field modes (with a period of about 1 min, Ruiz) allows mitigation of organic fouling. However, the influence of a low-frequency pulsed electric field on mineral scale formation on the membrane surface has not been studied.
There are only a few publications (Mikhaylin, Bazinet et al.) where the phenomenon of electroconvection is used for fighting scale formation. The literature analysis above shows that the beneficial effect of electroconvection may be increased by applying surface modification of commercial ion-exchange membranes. Thus, it is found in the publications by Belova, Belashova, Pismenskaya et al. that the casting of a thin homogeneous ion-conductive perfluorinated film of Nafion ® or its Russian analogue MF-4SK on the surface of a heterogeneous cation-exchange membrane leads to a growth of the limiting current density by 1.5 times in comparison with the pristine membrane. Besides, when applying the modified membrane, one can obtain an essential growth of the overlimiting mass transfer and a decrease in the water splitting rate. This effect is explained by a better distribution of the current lines and a more hydrophobic surface after the membrane modification, which is beneficial for electroconvection. Since intensification of electroconvection contributes to scaling mitigation, we can expect that the use of ion-exchange membranes modified with a conductive perfluorinated film will essentially decrease the rate of salt precipitation or totally prevent it. Besides, we can expect that the effect of electroconvection on scaling mitigation will be higher in the case where pulsed electric field modes are applied.
The literature analysis allows us to formulate the following problems, which are worth investigating and towards which our research will be directed.
1) The effect of the electrolyte nature and membrane surface properties on electroconvection development under non-stationary conditions (chronopotentiometry), in particular at times shorter than the transition time, which is very important for pulsed electric field modes.
2) The effect of homogenization and hydrophobization of a cation-exchange membrane with a Nafion ® film on its scaling in the presence of divalent ions during electrodialysis.
3) The process of water splitting at the depleted membrane interface and the search for possibilities to use water splitting for pH adjustment in the desalination and concentration chambers, without external addition of HCl, in order to prevent scale formation.
4) The effect of a low-frequency pulsed electric field on scale formation on the surface of the pristine membrane and of the membrane modified with a Nafion ® film.
5) The development of a method for the preparation of an anisotropic composite membrane by formation of a polyaniline layer within a cation-exchange membrane. Investigation of the electrochemical behavior of the composite membrane and the process of scale formation on its surface in the presence of divalent ions.
Presentation of the article
In the previous chapter, we have presented general information about ion-exchange membranes, their main applications and the limitations of the electrodialysis process. It has been shown that the application of overlimiting currents in electrodialysis can increase the mass transfer in the electromembrane system due to intensification of electroconvection. As the electroconvective vortexes deliver the "fresh" solution from the bulk to the membrane interface, the water splitting is reduced and the scale formation on the membrane surface can be prevented or mitigated.
Choi and Moon [1] showed that the intensity of electroconvection depends on the nature of the electrolyte, namely, on the degree of counter-ion hydration: the higher the counter-ion hydration number, the more intense the electroconvection. These results were obtained at the steady state of the membrane system using voltammetry. However, the influence of the counter-ion nature on the dynamics of electroconvection development under non-stationary conditions has not been studied.
Nevertheless, non-stationary conditions are widely used in electrodialysis, namely in pulsed electric field modes. It is important to know how fast electroconvection develops, at which current density/voltage the effect of electroconvection on mass transfer becomes significant, and how these parameters depend on the electrolyte nature.
On the other hand, it is known that the effect of electroconvection increases when a heterogeneous membrane is surface-modified with a thin homogeneous ion-conductive perfluorinated film of Nafion ® or its Russian analogue MF-4SK in NaCl solutions. Hence, this type of membrane modification seems promising for mass transfer intensification and for mitigation of scaling, taking into account the ability of electroconvection to reduce the rate and amount of scale formation, shown by Mikhaylin et al. [2]. However, electroconvection in systems with this type of membrane in solutions containing scaling agents has not been studied yet.
The main task of the study presented in this chapter is to determine the effect of the electrolyte nature and membrane surface properties on the dynamics of electroconvection development. By means of chronopotentiometry and voltammetry, the influence of different counter-ions (Na +, Ca 2+ and Mg 2+) is investigated in the article when using a commercial heterogeneous cation-exchange MK-40 membrane and its modification with an MF-4SK layer. Investigations are carried out in a four-compartment flow-through electrodialysis cell.
This study allows us to conclude that the potential drop across the membranes at a fixed value of i/i_lim decreases in the order Na + > Ca 2+ > Mg 2+ in both stationary and non-stationary regimes. This order agrees with that found by Choi and Moon [1] in the case of the steady state.
Additionally, we have established that at a fixed i/i_lim ratio, electroconvection develops faster in the case of Mg 2+ and Ca 2+; the time at which electroconvection becomes significant is greater in the case of Na +. In the case of Mg 2+ and Ca 2+ at a constant overlimiting current this time is about 3-7 s, which is less than the transition time. In the case of Na + this time is close to the transition time. The latter is the time necessary to attain a concentration at the membrane surface close to zero and varies in the range of 4-23 s.
In all studied cases, at a fixed i/i_lim ratio, the potential drop is lower and the electroconvection is stronger in the case of the modified membrane. Thus, in the case of Mg 2+ and Ca 2+ ions, the electroconvection is more intense and the oscillations appear at times shorter than the transition time, which gives us reason to assume that the "fight" against salt precipitation will be successful.
Effect of counterion hydration
Introduction
Overlimiting current regimes are increasingly being used in electrodialysis of dilute solutions [3]. A positive economic effect [4] under such conditions is achieved via stimulation of mass transfer by electroconvection [4-7], which not only increases the useful transport of salt ions [8], but also prevents membrane scaling [2].
Several electroconvection (EC) mechanisms are discussed in the literature [9]. Globally, electroconvection, i.e., fluid transport driven by an electric force, occurs when the force acts on both the space charge in the region of a microscopic electric double layer (EDL) (the electro-osmosis phenomenon) and the residual space charge in the stoichiometrically electroneutral macroscopic bulk solution [9-11]. On the other hand, the state of the EDL can be either (quasi-)equilibrium at a current density i below the limiting current density i_lim, or nonequilibrium when i ≥ i_lim. Accordingly, there are equilibrium and nonequilibrium electroconvection modes [9]. When i < i_lim, the EDL structure remains the same as for i = 0, with the EDL thickness being inversely proportional to the square root of the counter-ion concentration at the interface of the EDL with the electrically neutral solution, C_1^s [12]. At i ≥ i_lim, when C_1^s is on the order of 10^-6 mol/L, the quasi-equilibrium EDL, having a thickness of about a hundred nanometers, adds a nonequilibrium extended space charge region (SCR) with a thickness of several microns [13,14]. Dukhin and Mishchuk [15,16] proposed to call electro-osmosis at i < i_lim (equilibrium EC) electro-osmosis of the first kind, and that at i ≥ i_lim electro-osmosis of the second kind. With underlimiting currents in the case of a homogeneous, smooth, ideally ion-selective surface, EC is stable and its influence on mass transfer is negligible. According to published data [6,15-17], intense EC is developed near an ion-selective surface (ion-exchange beads, membranes, etc.) if the following two conditions are observed: there are an extended SCR and a tangential electric force. The tangential force emerges if the surface has electrical or geometrical heterogeneity: the surface resistance varies along the longitudinal coordinate or the surface has convexities or cavities. However, as shown theoretically by Urtenov et al. [18], EC in the channels of a forced-flow electrodialyzer develops even in the case of smooth homogeneous membranes. At currents i ≥ i_lim, the mechanism of the phenomenon is similar to the Dukhin-Mishchuk mechanism. However, the cause of the tangential bulk electric force is nonuniformity of the concentration field, rather than electric or geometric irregularity of the surface. The heterogeneity of the concentration field is due to the fact that the solution is desalted as it flows between the membranes. This mechanism gives rise to stable vortices in the surface layer of the depleted solution. The vortices facilitate the arrival of fresh solution at the membrane surface and the removal of the depleted solution from the surface into the bulk, a factor that explains the increase in the counter-ion transport velocity and the excess of the current density over its limiting value. The Dukhin-Mishchuk regime pertains to the potential range corresponding to the sloping plateau of the current-voltage curve, in which there are no noticeable oscillations. As the potential drop increases to a certain threshold, this EC mechanism is replaced by the Rubinstein-Zaltzman convective instability mechanism [19]. In this case, EC vortices become significantly larger and more unstable over time, thereby causing current oscillations at a constant potential drop (or potential drop oscillations at a fixed value of current).
Existing theoretical models [9,18-22] suggest that the electric body force responsible for the transfer of water is proportional to the product of the field strength and the space charge density.
The expression for this force does not contain parameters that reflect the interaction of ions and water molecules. In particular, this expression does not account for the influence of the nature of the space-charge-forming ions. However, the experimental data obtained by Choi et al. [1] indicate that current-voltage curve parameters, such as the plateau length, substantially depend on the nature of the electrolyte. Analysis of the current-voltage curves led the cited authors [23] to the conclusion that the electroconvection intensity increases with an increase in the Stokes radius of the counter-ion. This is an important conclusion, because it gives the opportunity to predict the behavior of membrane systems in different electrolytes under intensive current regimes.
In particular, this information can help to evaluate the risk of scaling on the surface and in the bulk of the membrane during treatment of solutions containing hardness ions (Ca 2+ , Mg 2+ , etc.).
However, the concepts of the electroconvection emergence mechanisms, which can differ depending on the ability of the counter-ion to engage a volume of water in movement, have been poorly developed. In particular, it is unclear how the nature of the counter-ion will affect the dynamics of mass transfer and electroconvection over time. In this study, the main tool of experimental research was the technique of chronopotentiometry. We show that in the presence of strongly hydrated Ca 2+ and Mg 2+ ions, electroconvection is not only more intense under steady-state conditions, but also sets in at shorter times after switching on the direct current.
Experimental
2.2.1 Protocol
Investigations were carried out in a four-compartment, flow-through electrodialysis (ED) cell, in which the desalination channel (DC) was formed by an AMX anion-exchange membrane (Astom, Tokuyama Soda, Japan) and an MK-40 MOD cation-exchange membrane. The procedures for measuring current-voltage characteristics and chronopotentiograms are detailed in [23]. A specific feature of the cell design is that it provides a laminar flow regime in the intermembrane space. In this case, it is possible to theoretically calculate the limiting current density in the cell and the effective thickness of the diffusion layer using the Leveque equation (see Eq. (2.5) in the Results and Discussion section).
Experiments were carried out with three different solutions (NaCl, CaCl 2, and MgCl 2) of the same concentration (0.02 M), which were alternately pumped through the desalination channel. A 0.02 M NaCl solution was circulated through the buffer and electrode compartments. The flow velocity of the solution through all the compartments was the same, V = 0.4 cm s -1; the intermembrane distance was h = 6.5 mm; and the polarized membrane area was S = 4 cm 2. Luggin capillaries, connected with Ag/AgCl electrodes, were brought to the geometric centers of the polarizable sites of the test membrane and spaced at about 0.8 mm from its surface.
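As a quick plausibility check of the laminar-flow statement above, the Reynolds number of the desalination channel can be estimated from the quoted flow velocity and intermembrane distance. The kinematic viscosity of water and the use of 2h as the hydraulic diameter of a wide slit are assumptions of this illustrative sketch, not values taken from the thesis.

V = 0.4          # cm/s, linear flow velocity through the compartments
h = 0.65         # cm, intermembrane distance
nu = 0.01        # cm^2/s, kinematic viscosity of water near 20 C (assumed)

Re = V * (2 * h) / nu    # hydraulic diameter of a wide slit approximated as 2h
print(f"Re ~ {Re:.0f} (far below ~2000, consistent with laminar flow)")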
Ion-exchange membranes
The MK-40 MOD membrane was prepared by forming, on the surface of a heterogeneous MK-40 membrane (Shchekinoazot, Russia), a thin (about 15 μm thick) homogeneous film by casting an LF-4SK solution kindly provided by S.V. Timofeev (Plastpolymer). LF-4SK is a solution of a tetrafluoroethylene copolymer with sulfonated perfluorovinyl ether in isopropyl alcohol. Solvent evaporation gives a film identical to the MF-4SK membrane. The procedure for preparing the modifying film and the main characteristics of the resulting bilayer membrane are presented in [8]. From these data it follows that the exchange capacity, diffusion permeability, and conductivity of the original (MK-40) and modified (MK-40 MOD) membranes are identical within the measurement errors. What has changed are the chemical composition and the structure of the surface. Most (80 %) of the surface of the swollen MK-40 membrane is coated with polyethylene, used as a filler binder. Particles of KU-2 ion-exchange resin, which provide the selective conductivity of the membrane, form crests on the polyethylene surface about 5-6 μm in height. The surface of the modified membrane does not contain nonconductive areas (Ŝ = 100 %), but it is not perfectly smooth (Fig. 2-1b). There are projections and depressions on the surface, with the height difference between them, determined with a Zeiss AxioCam MRc5 light microscope, being about 2 μm; the distance between adjacent crests is 5-10 μm.
Coating the MK-40 membrane with the MF-4SK film leads to an increase in the contact angle (measured for the swollen state as described in [24]) from 55 ° to 64 ° (±3 °).
Materials
The solutions were prepared using distilled water (400 kΩ cm) and NaCl, CaCl 2 and MgCl 2 salts; the ion hydration numbers were borrowed from [25]. To determine D_i, D, r_St, and t_i at 20 °C, the temperature dependence of the equivalent conductivity at infinite dilution was used [25].
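For illustration, the quantities mentioned above (D_i, D, r_St and t_i) can be estimated from limiting conductivities with the Nernst-Einstein and Stokes-Einstein relations. The short Python sketch below uses handbook λ0 values at 25 °C as assumed inputs (the thesis uses 20 °C data from [25]), so the outputs are only indicative.

import math

R, F, kB = 8.314, 96485.0, 1.380649e-23      # J/(mol K), C/mol, J/K
T, eta = 298.15, 0.89e-3                     # K, Pa s (water near 25 C, assumed)

# limiting equivalent conductivities, S cm^2/equiv, and ion charges (assumed handbook values)
lam0 = {'Na+': (50.1, 1), 'Ca2+': (59.5, 2), 'Mg2+': (53.1, 2), 'Cl-': (76.3, 1)}

def ion_props(name):
    """Ion diffusion coefficient (Nernst-Einstein) and Stokes radius (Stokes-Einstein)."""
    lam_eq, z = lam0[name]
    D = lam_eq * 1e-4 * R * T / (abs(z) * F**2)    # m^2/s
    r_St = kB * T / (6 * math.pi * eta * D)        # m
    return D, r_St

for salt, (cat, n_cl) in {'NaCl': ('Na+', 1), 'CaCl2': ('Ca2+', 2), 'MgCl2': ('Mg2+', 2)}.items():
    lam_c, z_c = lam0[cat]
    lam_a, _ = lam0['Cl-']
    t_plus = z_c * lam_c / (z_c * lam_c + n_cl * lam_a)   # cation transport number in solution
    D_c, r_c = ion_props(cat)
    print(f"{salt}: t+ = {t_plus:.3f}, D({cat}) = {D_c * 1e4:.2e} cm2/s, r_St = {r_c * 1e9:.3f} nm")

The printed Stokes radii (about 0.18, 0.31 and 0.35 nm for Na+, Ca2+ and Mg2+) reproduce the ordering discussed in the chapter.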
Water molecules in the primary shell of Mg 2+ are more strongly bonded to the ion than those of Ca 2+ [26], a difference that is manifested in a lower value of the average length of the Mg-O bond compared to Ca-O. These bond lengths are 2.07 and 2.45 Å, respectively [27]. The length of the hydrogen bond between oxygen atoms in the primary and secondary shells of the Mg 2+ or Ca 2+ ions is 2.74 or 2.88 Å, respectively [27], values that also indicate a greater strength of water binding by magnesium ions. These data, as well as the hydration numbers and Stokes radii of the cations (Table 2-1), show that Mg 2+ engages the largest volume of water in its movement, followed by Ca 2+ and then Na +.

To compare chronopotentiograms of different membrane systems, it is convenient to use the difference Δφ' between the total potential drop Δφ and the ohmic potential drop Δφ_ohm in the unpolarized membrane system, instead of the total potential drop per se [28]:
\Delta\varphi' = \Delta\varphi - \Delta\varphi_{ohm}    (2.1)
The value of Δφ_ohm corresponds to the resistance of the unpolarized membrane system. In terms of the simple mathematical model proposed by Sand [30], assuming local electroneutrality and the absence of convection, the counter-ion concentration at the membrane surface, C_1^s, depends upon the time t elapsed since switching on the direct current of density i as follows:
C_1^s = C_1 - \frac{2 i (T_1 - t_1)}{z_1 F} \sqrt{\frac{t}{\pi D}}    (2.2)
where C_1 is the counter-ion concentration (in mol L -1) in the bulk of the stirred solution; T_1 and t_1 are the counter-ion transport numbers in the membrane and in the solution, respectively; D is the electrolyte diffusion coefficient; z_1 is the counter-ion charge number; and F is the Faraday constant. When the near-surface concentration becomes sufficiently low, additional (coupled) mechanisms of mass transfer develop: water splitting, gravitational convection [31], and electroconvection. As shown in our earlier study [7],
electroconvection, whose mechanism is briefly described in the Introduction, gives the largest contribution in this system. Electroconvection provides mixing of the depleted solution at the membrane surface, which slows the PD growth over time. Slowing down the growth of the potential drop causes the appearance of an inflection point in the chronopotentiogram. The time corresponding to the inflection point is called the transition time, τ. The quantity τ is an important parameter because it defines the time required to achieve a nearly zero value of the near-surface electrolyte concentration and a change of the transport mechanism. The equation for calculating τ was derived by Sand [30] from Eq. (2.2), in which C_1^s is set to zero [32,33]:
\tau = \frac{\pi D}{4} \left( \frac{z_1 F C_1}{(T_1 - t_1)\, i} \right)^2    (2.3)

Equation (2.3) holds if the electrolyte transport at the membrane surface is determined by electrodiffusion. To eliminate the effect of the finite thickness of the diffusion layer, the current density at which this equation is applicable should be greater than the limiting value by a factor of at least 1.5 [34].
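A minimal numerical illustration of Eq. (2.3) follows; the diffusion coefficient and transport numbers are assumed handbook-like values for 0.02 M NaCl near 20 °C and T_1 = 1 assumes an ideally permselective membrane, so the resulting times (seconds to tens of seconds) are indicative only.

import math

F = 96485.0          # C/mol
z1 = 1
C1 = 0.02e-3         # mol/cm^3  (0.02 mol/L)
D = 1.48e-5          # cm^2/s, NaCl electrolyte diffusion coefficient (assumed)
T1, t1 = 1.0, 0.39   # transport numbers in the membrane and the solution (assumed)

def tau_sand(i):
    """Sand transition time, s, for a current density i in A/cm^2 (Eq. 2.3)."""
    return math.pi * D / 4.0 * (z1 * F * C1 / ((T1 - t1) * i))**2

for i_mA in (2.0, 3.0, 4.0):
    print(f"i = {i_mA} mA/cm^2 -> tau = {tau_sand(i_mA * 1e-3):.1f} s")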
The EC intensity increases with the potential drop, so the system goes over time into a (quasi-)stationary state, in which the value of Δφ' changes only slightly with time.
The stationary value of the limiting current density, i_lim, is defined as the maximum value of i achieved during electrodiffusion transport in the absence of associated effects. According to the Pierce equation [5],

i_{lim} = \frac{z_1 C_1 F D}{\delta\, (T_1 - t_1)}    (2.4)

where δ is the thickness of the Nernst diffusion boundary layer.
In the case of a channel formed by smooth membranes with a laminar flow of solution, i_lim can be calculated by the Leveque equation obtained in terms of the convection-diffusion model [35]:

i_{lim}^{theor} = \frac{z_1 C_1 F D}{h\, (T_1 - t_1)} \left[ 1.47 \left( \frac{h^2 V}{L D} \right)^{1/3} - 0.2 \right]    (2.5)

Here, L is the length of the desalination path (2 cm), h is the intermembrane distance, and V is the flow velocity of the solution.
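The sketch below evaluates Eq. (2.5), together with the diffusion layer thickness obtained by combining Eqs. (2.4) and (2.5), for the cell parameters quoted in the Experimental section. The diffusion coefficients and solution transport numbers are assumed values and T_1 = 1, so the numbers are estimates rather than the values reported in the chapter.

F = 96485.0
h, V, L = 0.65, 0.4, 2.0          # cm, cm/s, cm (from the Experimental section)
C1 = 0.02e-3                      # mol/cm^3
T1 = 1.0                          # assumed ideal permselectivity

cases = {                         # z1, D (cm^2/s), t1 in solution (assumed values)
    'NaCl':  (1, 1.48e-5, 0.39),
    'CaCl2': (2, 1.18e-5, 0.436),
    'MgCl2': (2, 1.10e-5, 0.406),
}

for salt, (z1, D, t1) in cases.items():
    bracket = 1.47 * (h**2 * V / (L * D))**(1/3) - 0.2
    i_lim = z1 * C1 * F * D / (h * (T1 - t1)) * bracket    # Eq. (2.5)
    delta = h / bracket                                    # from Eqs. (2.4) and (2.5)
    print(f"{salt}: i_lim ~ {i_lim * 1e3:.2f} mA/cm^2, delta ~ {delta * 1e4:.0f} um")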
Substituting Eq. (2.4) into Eq. (2.2) gives
C_1^s = C_1 \left( 1 - \frac{2 i}{i_{lim}\, \delta} \sqrt{\frac{D t}{\pi}} \right)    (2.6)
Since the potential drop across the membrane upon electrodiffusion transport is largely determined by the value of C_1^s, the form of the initial portion of the CP for different electrolytes should be almost identical if the i/i_lim ratio and the transition time τ take the same values for the two systems. In the case of NaCl, there is a monotonic increase in the potential drop, which is qualitatively consistent with the Sand theory. In the case of MgCl 2 or CaCl 2, there is a local PD maximum at a time t = t_m < τ. The minimum current density at which the local maximum is detectable is 0.8 i_lim, with t_m being about 19 s for both electrolytes (Fig. 2-3). Note that the transition state is not yet achieved at this underlimiting current. As has been already noted, t_m rapidly decreases with an increase in the applied current (Figs. 2-3 to 2-5). However, the value of i·t_m^0.5 is almost independent of i (Fig. 2-6). This behavior can be explained if we take into account Eq. (2.6), which suggests that C^s depends upon i·t^0.5.
The onset of the first PD oscillation is presumably due to reaching a certain threshold value of C^s at the membrane/solution interface. Calculation of the surface-average value C_s^av using Eq. (2.6) for i/i_lim = 0.8 and t = 19 s (conditions corresponding to the maximum in Fig. 2-3) yields C_s^av = 0.011 mol L -1. The resulting value is too high to allow the formation of the extended SCR. However, Rubinstein and Zaltzman [9] showed that for the development of electroconvection at underlimiting currents, it is the electrochemical potential gradient along the membrane surface that is important, and not so much the average concentration C_s^av. This gradient may take relatively large values for the test membrane, since the membrane surface is electrically and geometrically irregular. In particular, the presence of crests and troughs can lead to a significantly large value of the tangential electric force acting on the equilibrium space charge along the "slope" of a crest, which is necessary for EC development [6]. However, PD oscillations in the case of NaCl were not observed at times shorter than τ (Figs. 2-3 to 2-5).
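A short check of the i·t^0.5 argument made above: according to Eq. (2.6), any two (i, t) pairs having the same i·√t give the same near-surface concentration. The parameter values in the sketch below (D, δ, i_lim) are assumed for illustration and are not the chapter's fitted values.

import math

C1 = 0.02            # mol/L
D = 1.1e-5           # cm^2/s   (assumed, MgCl2-like)
delta = 0.023        # cm       (assumed diffusion layer thickness)
i_lim = 3.2e-3       # A/cm^2   (assumed)

def C1s(i, t):
    """Near-surface concentration from Eq. (2.6); depends on i and t only via i*sqrt(t)."""
    return C1 * (1 - 2 * i / (i_lim * delta) * math.sqrt(D * t / math.pi))

# two (i, t) pairs with the same i*sqrt(t): 0.8*i_lim at 19 s vs 1.6*i_lim at 4.75 s
print(C1s(0.8 * i_lim, 19.0), C1s(1.6 * i_lim, 4.75))   # identical values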
Thus, it can be stated that a necessary condition for the appearance of such oscillations at underlimiting currents is not only the presence of the tangential body force, but also the presence of a sufficiently large hydration shell, as in the case of Ca 2+ and Mg 2+ ions.

Current-Voltage Characteristics

The "plateau" length was found from the value of the potential drop at the point of intersection of the tangents to the inclined "plateau" and to the segment of the rapid current rise [1]. The thickness of the diffusion layer (δ) was evaluated using Eq. (2.5) and the Pierce equation (2.4). From these data it follows that the "plateau" length decreases and the slope of the "plateau" increases in the order Na + < Ca 2+ < Mg 2+, that is, with an increase in the water-structuring capacity of the cations and, hence, in their capacity to engage large volumes of water in their movement. These results are in good agreement with those obtained by Choi et al. [1], who associated the revealed relationship with the enhancement of electroconvection with increasing Stokes radius of the counter-ion.
In [8,23] we have shown that the application of a homogeneous hydrophobic cation-exchange film onto the surface of a heterogeneous cation-exchange membrane promotes the development of electroconvection and enhances mass transfer.
The effect of surface properties of ion-exchange membranes on their electrochemical behavior at overlimiting currents, depending on the degree of development of electroconvection, was also studied by other investigators. Zhil'tsova et al. [36] visualized convective instability zones due to oscillating EC vortices near two membranes, using laser interferometry.
Commercial MK-40 and modified membranes were studied; the latter was similar to MK-40 MOD , but it was prepared with the use of a Nafion film, not the MF-4SK. It was shown that at a fixed current density, the size of the EC instability zone near the MK-40 MOD is greater and the potential drop across the membrane is smaller compared with those in the case of MK-40.
The cause of the EC intensification at the surface of the modified membrane is an easier slip of the liquid along the hydrophobic surface [37]. The second reason behind the growth of EC near the modified membrane may be optimization of the "funnel" effect. This effect, described by Rubinstein and coworkers [38], consists in the fact that the electric current lines at the surface of an electrically inhomogeneous membrane are concentrated in the conductive areas.
This concentration leads to two outcomes. One is that local current density through the conductive areas is substantially higher than the surface-average density, thereby causing a significant reduction in salt concentration at the surface and a greater potential drop compared with those at the homogeneous surface. The other is the stimulation of electroconvection by the tangential electric force formed at the boundary between the conducting and nonconducting areas when the current lines are curved. More intense electroconvection reduces both the resistance of the system and the potential drop. In the case of MK-40, in which the proportion of the conductive surface is very small, the first effect is dominant, and the potential difference across this membrane at a low excess of the limiting current is higher than that across the homogeneous membrane. In the case of MK-40 MOD , in contrast, the second effect makes a greater contribution; the PD across a membrane of this type is substantially smaller than that across the unmodified membrane.
The scenario of development of early oscillations in the equilibrium EC mode, when the concentration polarization of the membrane is relatively low and the extended SCR is not formed, was described in the recent theoretical paper by Rubinstein and Zaltzman [9]. They associated periodic oscillations of current at a fixed potential drop with the evolution of EC vortices at the membrane surface in a system with an unforced flow of the solution. A surface part corresponding to the maximum size of a vortex was considered. The larger the vortex size, the more effectively it stirs the solution at the surface. This decreases the resistance of the solution adjacent to the membrane and increases the current density. At the same time, the evolution of the vortex structure involves the formation of another, adjacent vortex rotating in the opposite direction. As it develops, this second vortex increases in size and captures part of the space originally occupied by the first vortex. As a result, the size of the first vortex decreases. After some time, when the two vortices have similar sizes and together occupy the space of the first vortex at its maximum size, their contribution to the mixing of the near-membrane solution becomes minimal. The resistance in this state becomes maximal, and the current density is reduced to its minimum. Then, the initial vortex disappears and a new vortex that rotates in the opposite direction appears instead. The current density reaches a maximum value again. Note that the oscillation pattern and period (several seconds) calculated in [9] are similar to those observed in our experiment (Fig. 2-4a).
According to Rubinstein and Zaltzman [9], the threshold current/PD value for equilibrium electroconvection may be well below the threshold required for the onset of nonequilibrium EC, which appears in the presence of the extended SCR. This fact, as well as the similarity of the oscillation parameters predicted by Rubinstein and Zaltzman and found in the experiment, suggests that the potential oscillations observed at low potential drops in the initial portions of the CPs are due to instability of equilibrium EC [9]. Electroconvection of this type occurs in all the test systems. However, the size of the vortices that emerge in the case of NaCl is so small that their appearance does not affect the form of the initial portion of the CP (Figs. 2-3 to 2-5). Indirect evidence for their presence is the excess of the experimental τ values over τ_Sand (Fig. 2-5a). In fact, Ca 2+ and, especially, Mg 2+ ions structure water much more strongly than do Na + ions (Table 2-1).
Therefore, the EC vortices formed in their presence stir the solution much more strongly, a difference that is manifested in the recorded oscillations.
The mechanism of the CP oscillations observed at large times t > τ, when the system reaches a quasi-stationary state, differs from the oscillation mechanism at t < τ. As mentioned above, additional current transport mechanisms that ensure the achievement of the steady state at a finite potential drop develop in the vicinity of the transition time. Under our experimental conditions, electroconvection plays the determining role. At t ≈ τ (i > i_lim), the near-surface electrolyte concentration reaches a threshold value sufficient for the development of a nonequilibrium extended SCR at the membrane surface. In this case, unstable EC appears in accordance with the Rubinstein-Zaltzman mechanism. After reaching the quasi-stationary state, vortex clusters consisting of several opposing vortices are continuously formed at the membrane surface [18,39]. Dominating are the vortices whose rotation direction at their outer boundary is concerted with the direction of the forced fluid flow. Inside the cluster there is the depleted solution, which is being vigorously stirred [5,22]. Under the influence of the forced flow, the clusters drift from the entrance of the desalination channel to its exit.
When a cluster gets in between the Luggin capillaries (which are used to measure the potential difference), the resistance in the gap between the capillaries increases and the PD takes a relatively high value. In between two clusters, the solution has a higher concentration [39], so when the intercluster gap occurs between the capillaries, the PD value is relatively low. The motion of the clusters is responsible for oscillations with a relatively low frequency, about 0.25 Hz in the case of NaCl; this means that a cluster passes between the capillaries every 4 seconds. At the same time, smaller vortices within the clusters are responsible for high-frequency oscillations with a smaller amplitude. In the case of Ca 2+ ions, the PD oscillations that can be attributed to the emergence of EC by the Rubinstein-Zaltzman mechanism are detected at times (potential drops) slightly exceeding τ_Sand. In the system with MgCl 2, these oscillations appear near τ_Sand and the oscillation amplitude increases (Fig. 2-4a).
It is noteworthy that the thickness of the equilibrium EDL in the case of singly charged counter-ions (Na +) is greater than that in the case of doubly charged (Ca 2+, Mg 2+) ions. This factor can diminish the effect of the bulkier hydration shell of the doubly charged ions on electroconvection. Indeed, at current densities close to i_lim (Fig. 2-7) and at short chronopotentiometry times (Fig. 2-4), when the equilibrium EC dominates, the behavior of the membrane in the CaCl 2 solution is closer to that in the NaCl solution rather than in the MgCl 2 solution.
However, at i > i_lim and t > τ, when EC becomes nonequilibrium, the factor of the larger hydration shell is dominant, resulting in a lower value of the potential drop in the system with CaCl 2 compared to NaCl at a given i/i_lim ratio.
Conclusions
The influence of the nature of the counter-ions (Na +, Ca 2+, Mg 2+) on the chronopotentiograms and the current-voltage characteristics of the MK-40 MOD membrane, prepared from a commercial MK-40 membrane by applying an MF-4SK thin film on its surface, has been studied. The resulting bilayer membrane is characterized not only by electrical heterogeneity, but also by a significant waviness and a comparatively high hydrophobicity of the surface, properties that promote electroconvection.
It has been shown that at a fixed i/i_lim ratio, the potential drop across the membrane varies in the order Na + > Ca 2+ > Mg 2+, as can be explained by the influence of electroconvection, which is the most intense in the case of Mg 2+ and minimal in the case of Na +. The resulting electroconvection intensity series is consistent with the data obtained previously by Choi et al. [1] and is supposed to be due to the effect of the degree of hydration of the counter-ion: the greater the degree of ion hydration (maximal for Mg 2+ and minimal for Na +), the more intense is the electroconvection.
A characteristic feature of the nonstationary transport of the strongly hydrated counter-ions Ca 2+ and Mg 2+ is that the chronopotentiogram exhibits potential drop oscillations a few seconds after switching on the electric current (i ≥ 0.8 i_lim). The time of appearance of the first local maximum is approximately half of the transition time, and the value of the reduced potential difference at the maximum is 20-30 mV. The pattern and period of oscillations, as well as the values of potential drop at which the oscillations are detected, suggest that these oscillations are due to equilibrium electroconvection that develops according to the mechanism of electro-osmosis of the first kind.
The oscillations observed at higher potential differences are due to the instability of the Rubinstein-Zaltzman nonequilibrium electroconvection. Assembling of nascent vortices in clusters and their drift by the action of forced convection from the entrance of the desalination channel to its exit cause low-frequency oscillations in the quasi-stationary portions of the chronopotentiograms. The period and amplitude of the oscillations increase in the following order: Na + < Ca 2+ < Mg 2+ , that is, with an increase in the hydration of counter-ions.
Acknowledgments
This work was supported by the Russian Science Foundation, project no. 14-19-00401 (measurement and general analysis of voltammograms and chronopotentiogram, electroconvection mechanism); the Russian Foundation for Basic Research, project no. 13-08-96518 (surface homogenization effect); and project no. 15-58-16005-NCNIL (chronopotentiometry at small potential drops).
Chapter 3
Effect of homogenization and hydrophobization of a cation-exchange membrane surface on its scaling in the presence of calcium and magnesium chlorides during electrodialysis
Presentation of the article
The main task of this chapter is to study the influence of homogenization and hydrophobization of a cation-exchange membrane on its scaling in the presence of divalent ions during electrodialysis.
In the previous chapter, it has been shown that the casting of a thin homogeneous ionconductive perfluorinated MF-4SK film on the surface of a heterogeneous cation-exchange membrane leads to the growth of the limiting current density and electroconvection intensification in comparison with the pristine membrane, in particular, in the presence of divalent ions. This effect is explained by a better distribution of the current lines in the boundary solution and a more hydrophobic surface after the membrane modification, which is beneficial for electroconvection.
As the effect of electroconvection contributes to the scaling mitigation, we can expect that the use of the ion-exchange membranes modified with a perfluorinated film (which is Nafion ® in this study) will essentially decrease the rate or totally prevent salt precipitation.
The main experimental techniques are chronopotentiometry and voltammetry. This study allows us to conclude that the current density is higher and water splitting is lower for the MK-40 MOD membrane, due to a stronger electroconvection in the depleted solution at the surface of this membrane. The formation of scaling is observed only in the case of the MK-40/0.04 M MgCl 2 system, and only within a limited range of current densities. Thus, the surface of the MK-40 MOD membrane (relatively more hydrophobic and less electrically heterogeneous) provides a better distribution of current lines, enhancing the ion transport. Electroconvection vortexes mix the depleted solution and help to keep the product of ion concentrations at the membrane surface below the solubility product. In conditions where water splitting becomes dominating at the auxiliary anion-exchange membrane, the pH in the desalination chamber decreases. As a consequence of these two effects, hydroxide deposition does not occur.
Abstract
Scaling was detected only on the pristine MK-40 membrane surface facing the desalination chamber. At higher currents, stronger electroconvection at this membrane and higher water splitting at the AMX-SB membrane (the latter providing a lower pH in the desalination chamber) prevented scaling. No scaling was found on the modified membrane at any current. This is due to the Nafion ® film, which is relatively more hydrophobic than the pristine MK-40 and which provides a "better" distribution of current lines near the surface, thus enhancing electroconvection and decreasing water splitting.
Introduction
Electrodialysis (ED) with ion-exchange membranes (IEMs) is one of the efficient methods for water desalination and solution concentration [1,2]. It is reasonable to use this method in combination with reverse osmosis (RO) in brackish water desalination [3,4]. Ultrapure water and ultra-concentrated electrolyte solutions can be obtained by means of this hybrid process.
Conventional ED can also be coupled with bipolar membrane ED to generate bases and acids from salts [5], or even to recover proteins from tofu whey [6]. Two other important ED applications have been developed: casein production in the milk industry [7,8] and tartaric stabilization of wines [9].
In ED, due to the different ion transport numbers in the membrane and in the solution, concentration polarization (CP) occurs [10]. This phenomenon can result in salt precipitation on the membrane surfaces facing the concentration and/or desalination chambers [7,10,11]. The scaling layer reduces the effective surface area of the membrane, which causes additional resistance to the flow and to mass transfer. The scaling phenomenon can be aggravated by membrane fouling in the presence of organic matter (proteins, sugars, polyphenols, tannins, ...) [12,13].
CaCO 3 and CaSO 4 are generally the main salts which form a precipitate on the membrane surfaces during ED of seawater [14]. A fouling layer consisting of CaCO 3, Mg(OH) 2, and Ca(OH) 2 forms on cation-exchange membrane (CEM) interfaces during electromembrane processes used in the dairy industry [15]. In the 1950s, Ionics started experiments with systems which showed a radical improvement in scaling control [16]. They proposed to use the ED reversal process (EDR), which results in the prevention of calcium carbonate and calcium sulfate scale formation. It was found that EDR may be successfully applied to the production of sodium chloride from seawater [10,17]. The key feature of EDR is the simultaneous change of the electrode polarity and the flow direction [1,10,16]. The desalination chambers become concentration chambers, and vice versa. The disadvantage of this method is the loss in current efficiency, capacity and final product during the polarity-switching operation.
Pulsed electric field (PEF) modes are proposed as another effective way for combating the consequences of CP, such as the scaling formation at IEM interfaces [18,19]. In this regime the pulses of current or voltage alternate with pauses. Karlin et al. [20], Sistat et al. [21],
Mikhaylin et al. [7] showed that the duration of pulse and pause lapses essentially affects the amount and the composition of the salt deposits. During pause lapses, the concentration depolarization occurs. As a result, the electric resistance of the depleted solution reduces [22] and the salt deposits dissolve, at least partially. Electroconvection (EC) plays an important role in this process. EC appears as a result of the action of the electric field on the space electric charge at the membrane surface. The transfer of solution by EC intensifies the exchange between the near-surface solution layer and the bulk solution [23-26] and contributes to reducing scaling [27].
A modification of the membrane surface by a layer selective to the transfer of singly charged ions is another option, which can be used for mitigation of scaling and fouling. It is known that the deposition of organic substances on the membrane surface can be reduced when the membrane is covered with a thin ion-conductive layer bearing a fixed charge opposite to the charge of the supporting membrane matrix [28]. As a result, the resistance of the composite membrane towards multiply charged counter-ions, which cause scaling, increases. A similar effect is obtained by modification of the membrane surface with high-molecular surface-active substances [29].
Asraf-Snir et al. [30,31] investigated the effect of IEM surface structure on gypsum scale formation. They found that in the case of heterogeneous anion-exchange MA-40 membrane, the scale formation occurs mainly on the surface of conductive areas, which are ion-exchange resin beads incorporated in the polyethylene supporting matrix. The amount of deposit is essentially lower on the surface of a homogeneous anion-exchange AMV membrane [30,31].
Moreover, the scale forms not only on the heterogeneous membrane surface, but also within its macropores. Based on the latter results [30,31], it can be expected that coating a heterogeneous membrane surface with a homogeneous conductive film could improve its resistance towards fouling.
In practice, the casting of a conductive perfluorinated resin solution of Nafion ® or its Russian analogue MF-4SK on the surface of a cation-exchange membrane leads to a growth of the limiting current density by 1.5 times in comparison with the unmodified membrane [32-34]. This effect is explained by a better distribution of the current lines after the surface modification.
Besides, the membrane surface becomes relatively more hydrophobic, which is beneficial for EC [34,35].
The positive effect of applying a homogeneous film to the membrane surface is supported by the theoretical studies of Rubinstein et al. [36]. The current lines are distorted in the solution near an electrically heterogeneous surface: they are accumulated at the conductive areas, causing higher CP [36], which should lead to increased scale formation. It is shown [36] that the application of a homogeneous conductive layer on a heterogeneous surface diminishes the effect of formation of a funnel-shaped distribution of current lines in the solution near the membrane surface. Even a very thin and weakly charged conductive layer results in homogenization of the current line distribution over the membrane surface and in the near-surface solution. As a consequence, the CP at the membrane surface decreases and the limiting current density increases [36].
In the present paper, we study the effect of surface modification of a heterogeneous cation-exchange MK-40 membrane with a thin film of Nafion ® on scale formation. From the literature reviewed above, it can be expected that the application of a thin homogeneous and relatively hydrophobic layer on the surface of a heterogeneous membrane would produce a double effect: it will reduce CP due to homogenization of the current density distribution over the membrane surface, and it will stimulate EC. As a consequence, the rate of salt precipitation should be lower than that at the pristine heterogeneous membrane.
Despite the extensive literature in this field, the effect of modification of heterogeneous membranes by a thin conductive homogeneous layer on scaling process has not been studied yet. We will explore this effect in calcium and magnesium chloride solutions of different concentrations.
Experimental
Ion-exchange membranes
Two cation-exchange membranes were used in this study. The MK-40 membrane (Shchekinoazot, Russia) is a heterogeneous membrane containing sulfonic acid fixed groups.
It is produced by hot pressing from the mixture of dispersed cation-exchange resin KU-2 and polyethylene as a binder filler. About 80 % of the MK-40 surface is covered by polyethylene [37] (Fig. 3-1a). The size of ion-exchange KU-2 resin particles is 10 to 50 μm. These particles form "hills" of 5 to 6 μm height on the membrane surface mainly covered by polyethylene.
The height of the hills was determined by focusing the optical microscope consecutively on the tops and bottoms of the membrane surface relief (Fig. 3-1b). The MK-40 MOD membrane (Fig. 3-1c) is obtained by coating the MK-40 membrane surface with a homogeneous film obtained from a Nafion ® perfluorinated resin solution after evaporation of the solvent in an oven at 80 °C [33]. The film thickness in the dry state is 12-15 μm (Fig. 3-1d). The copolymer film covering the surface of the MK-40 MOD membrane does not include non-conductive macroscopic regions, but it is not perfectly smooth. The difference in height between peaks and valleys for this membrane is about 2 μm. The bulk and surface properties of the MK-40 and the MK-40 MOD membranes were previously described elsewhere [33]. After modification, the ion-exchange capacity, electrical conductivity and diffusion permeability vary only slightly, within the margin of experimental error. In contrast, the conductive surface fraction of MK-40 MOD becomes 100 % instead of 20 % for the pristine membrane, and the contact angle increases from 55 º to 64 º (± 3 º) [33,34]. The contact angle characterizes the hydrophobic/hydrophilic balance of an ion-exchange membrane surface. It depends on the fixed ions, which attract water molecules, and on the matrix polymer material not containing fixed charged groups. Generally, the morphology of ion-exchange membranes (or of the resins constituting the main part of them) may be presented as a system of hydrophilic conducting channels confined within a hydrophobic polymer phase. The MK-40 membrane is composed of particles of KU-2 (a polystyrene-based cation-exchange resin) and polyethylene. Polyethylene covers about 80 % of the MK-40; however, its contact angle (about 92 º) is essentially lower than that of Teflon. The contact angle of polystyrene is 86 º; however, the exchange capacity (5.0 meq/g dry) and water content (43-53 %) of KU-2 [39] are much higher than the respective parameters of Nafion (0.83 meq g -1 dry and 22 % [40]). Thus, the surface areas occupied by the particles of KU-2 on the MK-40 are quite hydrophilic. As a result, the surface of MK-40 is more hydrophilic than that of Nafion.
Protocol
A flow-through four-chamber ED cell (Fig. 3-2) was used. A flow-pass cell with a pH electrode (7) and a pH meter pHM120 MeterLab (8) are used for pH measurements. The current was set and the potential difference was measured using a potentiostat-galvanostat PARSTAT 2273 (9); the data were registered using a PC (10).
The potential drop (PD), Δφ, was measured between the tips of two Luggin capillaries installed on both sides of the membrane under study, in its geometric center, at a distance of about 0.5 mm from the surface. The procedure for obtaining chronopotentiograms (ChPs) and current-voltage (I-V) curves is described elsewhere [34,37].
The membrane behavior was studied in 0.02 M CaCl 2, 0.04 M CaCl 2, 0.02 M MgCl 2, and 0.04 M MgCl 2 solutions. These solutions were pumped through the desalination chamber at a flow rate of 30 mL min -1. All other chambers were fed with a NaCl solution of the same concentration as the solution feeding the desalination chamber (Fig. 3-2). The effective membrane area was 4 cm 2. For preparation of the solutions, we used NaCl (R.P. NORMAPUR TM, VWR International), CaCl 2·2H 2O and MgCl 2·6H 2O (LABOSI, Fisher Scientific s.a.), and demineralized water (4 MΩ, Millipore Elix 3).
Results and discussion
Theoretical value of the limiting current density
The theoretical value of the limiting current density, i_lim^th, was found using the Leveque equation obtained in the framework of the convection-diffusion model [41]:

i_{lim}^{th} = \frac{z_i c_i^0 F D}{h\, (T_i - t_i)} \left[ 1.47 \left( \frac{h^2 V}{L D} \right)^{1/3} - 0.2 \right]    (3.1)
Here z_i and c_i^0 are the counter-ion charge and its concentration in the feed solution, respectively; F is the Faraday constant; T_i and t_i are the counter-ion effective transport number in the membrane and its transport number in the solution, respectively; h is the intermembrane distance; V is the flow velocity of the solution through the desalination chamber; L is the length of the desalination path; D is the electrolyte diffusion coefficient.
In the studied cases V = 0.38 cm s -1, L = 2 cm, t_Ca2+ = 0.436, t_Mg2+ = 0.406, D_CaCl2 = 1.18×10 -5 cm 2 s -1, D_MgCl2 = 1.10×10 -5 cm 2 s -1. The electrolyte diffusion coefficients and transport numbers are found from the limiting molar conductivities of the individual ions at 20 °C [42], see Appendix. The MK-40 and MK-40 MOD membranes, for which the limiting current density is calculated, are assumed to be perfectly permselective, T_i = 1. Indeed, according to Larchet et al. [43], the MK-40 membrane has a high exchange capacity (1.64 meq g -1 dry membrane); the effective transport number of Na + in this membrane bathed in a 1 M NaCl solution is equal to 0.98. Since we use essentially more dilute solutions (the maximum concentration is 0.04 M), and the permselectivity increases with decreasing solution concentration, the assumption T_i = 1 seems reasonable. The same assumption (T_i = 1) is accepted for the anion-exchange Neosepta AMX membrane, which is a homogeneous membrane with a relatively high exchange capacity (1.4-1.7 meq/g [44]). The values of i_lim^th calculated using Eq. (3.1) are presented in Table 3-1.
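A hedged numerical sketch of Eq. (3.1) with the parameters listed above is given below. The intermembrane distance h is not stated in this excerpt, so h = 0.65 cm (the value of the Chapter 2 cell) is an assumption; the printed values therefore only illustrate the calculation and need not coincide with Table 3-1.

F = 96485.0
V, L, h = 0.38, 2.0, 0.65        # cm/s, cm, cm (h assumed)
T = 1.0                          # assumed ideal permselectivity

cases = {                        # z, c0 (mol/cm^3), D (cm^2/s), t_i
    '0.02 M CaCl2': (2, 0.02e-3, 1.18e-5, 0.436),
    '0.04 M CaCl2': (2, 0.04e-3, 1.18e-5, 0.436),
    '0.02 M MgCl2': (2, 0.02e-3, 1.10e-5, 0.406),
    '0.04 M MgCl2': (2, 0.04e-3, 1.10e-5, 0.406),
}

for name, (z, c0, D, t) in cases.items():
    i_lim = z * c0 * F * D / (h * (T - t)) * (1.47 * (h**2 * V / (L * D))**(1/3) - 0.2)
    print(f"{name}: i_lim_th ~ {i_lim * 1e3:.1f} mA/cm^2")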
Generally, the application of Eq. (3.1) shows a good agreement of i_lim^th with the experimental values, i_lim, in the case of a homogeneous membrane with a flat surface [37]. At higher current densities, the growth of the PD does not occur (the case of i = 2.5 i_lim^th is shown in Fig. 3-4a); instead, PD oscillations of higher amplitude are observed.
It is noteworthy that in all cases the oscillations are higher and the PD is lower in the system with the MK-40 MOD membrane. There is no growth of the steady-state PD over this membrane in any of the solutions. The oscillations in the obtained ChPs can be explained by electroconvective instability, which occurs above a certain threshold value of Δφ' [47]. The greater the current density, the more intense the electroconvection oscillations. These oscillations are higher in the case of the MK-40 MOD membrane, showing that EC is stronger near the modified membrane.
EC is also stronger in the MgCl 2 solution than in the CaCl 2 solution at the same i/i_lim^th ratio, all other conditions being equal. This follows from the fact that at the same i/i_lim^th value, the PD is higher in the CaCl 2 solution than in the MgCl 2 solution. This effect was discovered by Choi et al.
[48] and confirmed by Gil et al. [49]. It was shown [48,49] that the value of Δφ' increases in the series Mg 2+ < Ca 2+ < Na +. The EC intensification was related to the increasing Stokes radius of the counter-ions: the Stokes radius (and the hydration number) of Mg 2+ (0.346 nm [50]) is higher than that of Ca 2+ (0.308 nm [50]).
More intensive EC at the surface of the MK-40 MOD membrane is explained by an easier slip of the liquid along the hydrophobic surface [51]. The relative hydrophobicity of the MK-40 MOD surface is higher than that of MK-40: the contact angle of MK-40 MOD is equal to 64 º, while that of MK-40 is equal to 55 º [33]. Besides, the growth of EC near the modified membrane may be explained by a "better" distribution of the electric current lines disturbed by the "funnel" effect [36]. The electric current lines condense on the conductive areas of the electrically nonuniform membrane surface. This produces two effects:
- on the one hand, a higher local current density through the conductive areas of a heterogeneous membrane results in a significant decrease of the salt concentration at these areas and, as a consequence, in a higher PD (higher concentration polarization) in comparison with a homogeneous membrane at a given current;
- on the other hand, distortion of the current lines at the borders of conductive and non-conductive areas leads to the appearance of a tangential electric force stimulating EC, which enhances ion transport.
Hence, there is an optimum value of electrical surface heterogeneity where the ion transport through the membrane attains its maximum under a given PD. A theoretical analysis based on the coupled 2D Nernst-Planck-Poisson-Navier-Stokes equations was recently made by Davidson et al. [52]. They found that when a fixed pattern period size and a fixed voltage (≈0.5 V) are chosen, a maximum current density is attained for a membrane whose surface contains 40 % of conductive and 60 % of non-conductive areas. This finding agrees qualitatively with our results: the surface of MK-40 membrane contains about 20 % of conductive areas [37], hence, the factor of high CP seems dominant over EC. The conductive permselective film applied on the MK-40 reduces heterogeneity in the current and concentration longitudinal distribution, hence, mitigates CP, while the tangential electric force remains sufficiently high to generate intensive EC.
pH changes
The pH difference (ΔpH = pH_out − pH_in) between the outlet and inlet solution measured as a function of time for the desalination chamber is shown in Fig. 3-5. The ΔpH value depends on the difference in the water splitting (WS) rates at the CEM and AEM. The H⁺ ions formed at the CEM go into the membrane, while the OH⁻ ions generated at this membrane move towards the bulk solution. The OH⁻ ions formed at the AEM go into the membrane, while the H⁺ ions move towards the bulk solution [53]. Thus, ΔpH depends on the difference between the H⁺ and OH⁻ ion fluxes directed from the corresponding membrane surface into the bulk solution. When the flux of H⁺ ions is higher than that of OH⁻, the solution in the desalination chamber becomes acidic; otherwise, it becomes basic. As the same auxiliary membrane (AMX-SB) is used in all the experiments, the ΔpH value allows one to compare the WS rate at the different CEM samples under study.
The obtained data show that the pH of the desalinated solution does not change or becomes acidic for both MgCl₂ and CaCl₂ solutions (Fig. 3-5). Therefore, the generation rate of H⁺ ions going towards the bulk solution from the AEM is greater than or equal to the generation rate of OH⁻ ions moving towards the bulk solution from the CEM. The fixed groups of the studied CEMs are sulfonic acid (–SO₃⁻) groups; these groups are characterized by a weak catalytic activity toward WS [54,55]. Taking this fact into account, the difference in WS rate can be explained by the impact of two factors. First, the MK-40MOD membrane provides a more uniform distribution of current lines near the membrane surface, which leads to reduced CP. Second, more intensive EC at the MK-40MOD surface promotes better mixing of the solution and a higher near-surface concentration of salt counter-ions.
Water splitting at the membrane interface becomes noticeable when the concentration of salt ions at the membrane surface is comparable with the concentration of the water ions, H⁺ and OH⁻, i.e. lower than about 10⁻⁵ M [56]. The current density relating to this event is close to i_lim^th. At i ≥ i_lim^th, the salt concentration at the interface is sufficiently low for the water ions to compete in current transfer. However, the rate of WS is a function of the catalytic activity of the functional groups. The –SO₃⁻ groups of the CEMs used in the study have low catalytic activity, as mentioned above, while the catalytic activity of the secondary and tertiary amino groups present in the AMX-SB is essentially higher [54,57]. As a consequence, the rate of WS is generally higher at the AMX-SB when the current density exceeds its limiting value.
Scaling formation
The increase with time of the quasi-steady-state PD in the case of the MK-40/MgCl₂ system, observed in a certain overlimiting current range, can be related to the formation of an Mg(OH)₂ deposit by the reaction
MgCl₂ + 2OH⁻ → Mg(OH)₂ + 2Cl⁻    (3.2)
with participation of a part of the OH -ions generated at the CEM surface.
A similar reaction with the formation of Ca(OH)₂ is possible. However, Mg(OH)₂ is much less soluble than Ca(OH)₂: the solubility product, K_sp, at T = 25 °C is equal to 6×10⁻⁶ mol³ L⁻³ for Ca(OH)₂ [59], while for Mg(OH)₂ it is equal to 1.8×10⁻¹¹ mol³ L⁻³ [59].
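A rough estimate, assuming ideal solutions (activity coefficients and ion pairing neglected) and a 0.04 M divalent-cation concentration, shows how different these two solubility limits are: the OH⁻ concentration at which precipitation starts is [OH⁻] = (K_sp/[M²⁺])^(1/2).

```python
import math

K_SP = {"Ca(OH)2": 6.0e-6, "Mg(OH)2": 1.8e-11}   # mol3 L-3, values quoted above (25 C)
C_M = 0.04                                       # mol L-1, divalent cation concentration

for salt, ksp in K_SP.items():
    c_oh = math.sqrt(ksp / C_M)       # OH- concentration at saturation
    ph = 14 + math.log10(c_oh)        # assumes Kw = 1e-14 (25 C)
    print(f"{salt}: [OH-] = {c_oh:.2e} M, precipitation above pH ~ {ph:.1f}")
```

Thus a local pH of only about 9.3 at the CEM surface is enough to start precipitating Mg(OH)₂, whereas Ca(OH)₂ would require a strongly alkaline (pH ≈ 12) near-surface solution, which is consistent with the absence of a Ca(OH)₂ deposit noted below.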
Scale formation of Mg(OH)₂ or Ca(OH)₂ depends on the balance of the water dissociation rates at the CEM and AEM. When the water dissociation rate at the CEM is small and that at the AEM is high, the diluate solution is acidified and hydroxide scale does not form. No deposit of Ca(OH)₂ was observed at any current density (Fig. 3-6c).
If the WS rate at both membranes is the same (the case of the MK-40/MgCl₂ system at i = 1.4 i_lim^th, Fig. 3-5a), deposit formation is possible. Effectively, in this case, after 5 h of current flow we observe a deposit of crystals on the MK-40 surface facing the desalination chamber (Fig. 3-6b).
It is noteworthy that the deposit forms on the surface of the ion-exchange particles, which are the conductive parts of the membrane surface and where the WS reaction occurs. There is no deposit on the surface covered by polyethylene. This result is in good agreement with the results obtained by Oren et al. [30,31], who observed the formation of gypsum at the surface of ion-exchange resin particles embedded in a heterogeneous anion-exchange MA-40 membrane and facing the concentration compartment in dialysis [30] and electrodialysis [31] processes.

Voltammetry

As can be seen in Fig. 3-7, there are three regions on the I-V curves, which are generally distinguished in the literature [47,60]. An initial linear region is followed by a plateau of the limiting current density, and then by a region of a higher growth of current density. The moderate growth of current density in the plateau region is associated with stable EC, while the higher growth in the final region of the curve is associated with unstable EC and/or with WS [47,60,61]. In all studied cases, the plateau length is greater and the plateau slope is lower in the case of Ca²⁺, in comparison with the case of Mg²⁺. This is in good agreement with the aforementioned results reported by Choi et al. [48] and Gil et al. [49].
The comparison of the I-V curves obtained for two different concentrations shows that the experimental limiting current density, i_lim, is higher than the theoretical value, i_lim^th, in the case of the 0.02 M concentration, and i_lim is lower than i_lim^th in the case of the 0.04 M concentration, for both membranes. More generally, the plateau of the limiting current lies higher in the case of the 0.02 M concentration than in the case of the 0.04 M concentration. This behavior is explained by a stronger stable electroconvection in more dilute solutions. Indeed, in the region of currents/voltages not far from the limiting current state, there are no oscillations of PD in the chronopotentiograms, as can be seen in Fig. 3-4 for i = 1.4 i_lim^th. Stable electroconvection in this current region was also observed by studying the development of concentration profiles with time using laser interferometry [62,25]. In this current region, EC strongly depends on the parameters of the (quasi-)equilibrium electrical double layer (EDL) [35,63]. In 0.02 M solutions, the thickness of the EDL is higher than in 0.04 M solutions; the higher the EDL thickness, the more intensive the EC. In the final region of the I-V curve, where its slope is higher and there are oscillations in the ChPs (Fig. 3-4, i = 2.5 i_lim^th), EC is unstable. Its intensity is mainly determined by the extended space charge region, which is independent of the equilibrium EDL parameters [35,63]. Since EC depends only slightly on the equilibrium EDL, the difference between the I-V curves obtained for the 0.02 M and 0.04 M solutions is not so great. In this current region, the balance of surface hydrophobicity/hydrophilicity plays an important role: EC increases with increasing relative surface hydrophobicity [34,35]. In all studied cases, the current density was higher and water splitting was lower for the MK-40MOD membrane than for the pristine MK-40 one.
Conclusions
Acknowledgments
This investigation was carried out within the framework of the joint French-Russian PHC Kolmogorov 2017 project of the French-Russian International Associated Laboratory "Ion-exchange membranes and related processes", with the financial support of Minobrnauki (Ref. N° RFMEFI58617X0053), Russia, and CNRS, France (project N° 38200SF).
Appendix
The value of the electrolyte diffusion coefficient is found using the equation
D = \frac{(z_+ - z_-)\, D_+ D_-}{z_+ D_+ - z_- D_-} \qquad (3.3)
where z₊ and z₋ are the charge numbers of the cation and anion, respectively; D₊ and D₋ are the diffusion coefficients of the cation and anion, respectively. The latter are calculated using the following equation
D_i = \frac{RT\, \lambda_i^0}{z_i^2 F^2} \qquad (3.4)
where R is the gas constant, T is the temperature, and λ_i⁰ is the limiting molar conductivity of the individual ion i at 20 °C.
The transport numbers of Ca²⁺ and Mg²⁺ in solution are found using the equation

t_i = \frac{z_i D_i}{z_i D_i - z_A D_A} \qquad (3.5)
where subscript A relates to the anion in the solution.
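For illustration, Eqs. (3.3)-(3.5) can be evaluated with a short script. The limiting molar conductivities below are illustrative handbook values at 25 °C (the chapter itself uses 20 °C data from [42]), so the printed numbers are slightly higher than the values quoted in the text, while the calculation scheme is identical.

```python
R = 8.314        # J mol-1 K-1
F = 96485.0      # C mol-1
T = 298.15       # K; the chapter uses 20 C (293.15 K) data

# Illustrative limiting molar conductivities at 25 C, S cm2 mol-1 (assumed values)
LAMBDA0 = {"Ca2+": 119.0, "Mg2+": 106.1, "Cl-": 76.3}
Z = {"Ca2+": 2, "Mg2+": 2, "Cl-": -1}

def d_ion(ion):
    """Eq. (3.4): ionic diffusion coefficient, cm2 s-1."""
    return R * T * LAMBDA0[ion] / (Z[ion] ** 2 * F ** 2)

def d_salt(cation, anion="Cl-"):
    """Eq. (3.3): electrolyte diffusion coefficient, cm2 s-1."""
    dp, dm = d_ion(cation), d_ion(anion)
    zp, zm = Z[cation], Z[anion]
    return (zp - zm) * dp * dm / (zp * dp - zm * dm)

def t_cation(cation, anion="Cl-"):
    """Eq. (3.5): counter-ion transport number in solution."""
    dp, dm = d_ion(cation), d_ion(anion)
    zp, zm = Z[cation], Z[anion]
    return zp * dp / (zp * dp - zm * dm)

for cat in ("Ca2+", "Mg2+"):
    print(f"{cat}: D_salt = {d_salt(cat):.2e} cm2/s, t+ = {t_cation(cat):.3f}")
# At 25 C this gives ~1.34e-5 / 0.438 (Ca) and ~1.25e-5 / 0.410 (Mg);
# the 20 C values used in the chapter are 1.18e-5 / 0.436 and 1.10e-5 / 0.406.
```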
Presentation of the article
In the previous chapter, it has been shown that casting a thin homogeneous ion-conductive perfluorinated Nafion® film on the surface of a heterogeneous MK-40 membrane leads to intensification of electroconvection, reduction of water splitting and prevention of scaling in individual MgCl₂ and CaCl₂ solutions. When water splitting at the auxiliary AMX membrane is dominant (higher than that at the CEM under study), the desalinated solution is acidified; as a result, scale does not form on the membrane surface in the acidic medium.
According to the literature review, the use of pulsed electric field modes of high frequency allows a reduction of scale formation. However, the effect of a pulsed electric field of low frequency has not yet been studied.
The main task of this chapter is to study the influence of electroconvection, pH adjustment and pulsed electric field application on the kinetics of membrane scaling. Scale formation on the surface of a heterogeneous cation-exchange MK-40 membrane and of its modification, MK-40MOD, is studied at current densities equal to 1.0, 1.5 and 2.0 times the experimental limiting current density, i_lim^exp.
The use of a pulsed electric field mode with a sufficiently long relaxation (pause) time allows an essential reduction of membrane scaling. During a pause, the concentration profiles tend to return to the state corresponding to i = 0, which decreases the ion concentration product involving the OH⁻ ions. Besides, the water splitting rate in this mode is lower; hence, the role of the auxiliary anion-exchange membrane, which supplies H⁺ ions into the diluate chamber, increases and the pH in this chamber is more acidic.
Mitigation of membrane scaling in electrodialysis by electroconvection enhancement, pH adjustment and pulsed electric field application
Introduction
Electrodialysis (ED) is widely used in the treatment of natural and waste waters, solutions for the food industry, such as dairy products and wine [1][2][3], and nutrient recovery [4]. Zero liquid discharge systems are actively developing [5] due to the toughening of ecological requirements for discharge of slightly concentrated solutions. The optimal technological scheme is when the concentrate obtained after water treatment by nanofiltration and reverse osmosis is further concentrated using ED [5][6][7]. The presence of Mg 2+ , Ca 2+ , CO 3 2-, SO 4 2-and other multivalent ions in solutions treated by ED leads to the scale formation on the surface from both sides of ion-exchange membranes (IEMs) facing concentrate and diluate compartments [8][9][10][11]. This scale causes an increase in the electrical and hydraulic resistance of the membrane stack causing a growth of energy consumption [8]. Interruption of industrial process in order to regenerate or replace the scaled membranes leads to additional costs.
Therefore, the membrane scaling is one of the key obstacles to wider ED implementation.
The scaling on the IEM relates to the deviation from solubility equilibrium. Deposition of particles on a membrane is mainly due to the presence of Mg 2+ , Ca 2+ , Ba 2+ cations and CO 3 2and SO 4 2-anions in the processed solutions [12,13]. If the solution region where the product of ion concentrations exceeds the solubility product is near the membrane/solution interface, the scale formation occurs on the membrane surface. In the case where this region is far from the surface, the scale crystals are formed in the bulk of the processed solution [11]. Water splitting at the depleted interface of a cation-exchange membrane (CEM) produces OH -ions, which increase pH of the near-surface solution and causes formation of such precipitates as Mg(OH) 2 and CaCO 3 in the diluate compartment. Note that while electrodialysis metathesis and selectrodialysis [14][15][16] allow one to exclude the simultaneous presence of divalent anions and divalent cations in concentrate compartments of ED stack (hence, to reduce the risk of scaling there), these ions are still present in the diluate compartments. And even when only divalent cations, such as Ca 2+ or Mg 2+ are in the feed solution, scale formation is yet possible on the diluate side of the CEM [17].
The rate of scale formation, its structure and its ability to remain on the surface of an IEM are largely determined by the ratio of calcium and magnesium ions in the processed solutions [18,19]. The presence of Mg²⁺ ions inhibits the scale growth of CaCO₃ [18]. An alkaline medium contributes to the formation of scale due to the formation of hydroxides having a very low solubility product. On the contrary, in an acidic medium, CaCO₃ scale can dissolve. Water splitting at the membrane/depleted solution interface leads to an increase of the pH in the near-surface solution of the CEM and, as a consequence, to the growth of scale (Mg(OH)₂, CaCO₃ and other compounds [17,20]) on its surface facing the desalination compartment. At the same time, the "proton barrier" [21] created by the anion-exchange membrane (AEM) partially prevents the scaling of salts on the AEM side facing the desalination compartment [17,21].
The electrical and geometric heterogeneity of the membrane surface also influences the scale formation on the IEM. Some types of geometric heterogeneity or surface roughness (which is understood as the deviation from flatness) may provoke scaling. Membranes with a rough surface are more exposed to scaling as compared to the membranes with a smooth surface [22]. IEMs are conventionally classified as homogeneous and heterogeneous. Homogeneous membranes are generally fabricated using the "paste method" [23]. The fine powder of styrene, divinylbenzene and poly(vinyl chloride) is coated onto the cloth of a reinforcing material and heated to prepare the base membrane. Then the ion-exchange groups are introduced by appropriate after-treatments [23]. The micro-and mesopores in the range from several nm to several tens of nm are obtained in such membranes [24]. Heterogeneous membranes are fabricated from the powder of ion-exchange resins and polyethylene as filler.
After hot pressing, polyethylene forms continuous network providing mechanical strength; ion-exchange resin particles having micro-and mesopores (with the size in the range from 5 to 50 μm) form the ion pathways. Generally, homogeneous IEMs have very little or no macropores, while heterogeneous IEMs contain considerable amount (of the order of 10 % by vol.) of macropores, which are mainly localized in the contacts between resin particles and polyethylene. A series of studies [8,25,26] showed that the scaling occurs more intensively in the case of heterogeneous membranes compared to the homogeneous ones. In the case of homogeneous membranes, the scale is mainly localized on their surface [8], whereas in the case of heterogeneous membranes it is formed not only on the surface, but also inside the macropores [8,27]. When heterogeneous membranes, such as Russian MA-40 or MK-40 membranes, are used in ED, the scale is localized on the surface of ion-exchange resin particles incorporated into polyethylene, which serves as a filler-binder [8,28]. The partial blocking of the ion conductive pathways on the membrane surface leads to an increase in the local current density across the conducting surface areas. Rubinstein et al. [29] theoretically studied the "funnel effect": distortion of the electric current lines due to their accumulation in the ion conductive pathways on the membrane surface. This effect leads to increasing concentration polarization (CP) at the ion-conductive surface area. An equation connecting the limiting current density with the parameters of membrane surface heterogeneity (the fraction and the size of ion-conductive regions) had been obtained by Batrunas et al. [30] and verified experimentally by Zabolotsky et al. [31].
It is established [10,17] that the development of electroconvection (EC) significantly reduces the scaling on the surface of IEMs. This effect is due to increasing convective transfer, which enhances the hydrodynamic action on the scale and decreases water splitting [17,32]. The latter results in lower pH changes at the membrane surface and, as a consequence, mitigation of scaling. The effect of EC increases for IEMs with a relatively hydrophobic surface [17,33], and in the case where a pulsed electric field (PEF) is applied [10,34]. Accordingly, the scaling is mitigated in these cases [10,17].
Thus, the review above shows that, despite the great attention paid to the problem of scale formation in the literature, there is no satisfactory solution to the problem. In this paper we study the impact of membrane surface properties and of PEF on the rate of scale formation in a feed solution imitating thrice-concentrated milk. In comparison with our previous study [17], where individual calcium and magnesium chlorides were used, a more concentrated solution is chosen now in order to amplify the scaling phenomenon. The amount of scale formed on the membrane surface is characterized by the value of the potential drop across the membrane [17,35]. In addition to using a specially prepared membrane, MK-40MOD, which has shown its effectiveness in the mitigation of scaling [17], we apply PEF with the period of pulses in the range from 2 to 30 minutes, which is essentially greater than in other studies [9,10].
We show that electroconvection, together with optimization of pH and application of PEF, produces an essential effect on the rate of scale formation on the membranes in the desalination compartment. For the first time, the pH value of the solution in the desalination compartment is adjusted by the choice of the membranes and of the current density.

Experimental

The membrane under study is the heterogeneous cation-exchange MK-40 membrane. The greater part of its surface is covered by polyethylene [28,36], used as the binder filler for the membrane reinforcement. Particles of KU-2 cation-exchange resin, which provide the selective conductivity of the membrane, protrude over the polyethylene surface and form "hills" of about 6 μm in height [33]. The thickness of the swollen MK-40 membrane in the Mg²⁺ form is about 515 μm.
The MK-40 MOD membrane is obtained by casting a Nafion ® water-isopropyl-alcohol solution onto the MK-40 membrane as substrate. After evaporation of the solvent, a thin perfluorinated cation-exchange homogeneous layer is formed on the MK-40 membrane surface (Fig. 4-1c).
More details may be found in [36,37]. The thickness of the Nafion® film in the Mg²⁺ form is 25 μm when the membrane is swollen (obtained with a micrometer from the difference in thickness between the pristine and modified membranes). The thickness of the film is about 15 μm in the dry state, as shown in Fig. 4-1 (the SEM images are obtained for dried membranes). The conductive surface fraction of the MK-40MOD membrane is 100 %; however, the film is not perfectly smooth. The difference in height between peaks and valleys for this membrane is about 2 μm. The contact angle of the MK-40 membrane is 55°, that of the MK-40MOD is 15 % greater [36]. The surface of the MK-40MOD membrane is thus less hydrophilic than that of the MK-40 membrane.
A heterogeneous anion-exchange commercial MA-41 membrane (Shchekinoazot, Russia) (Fig. 4-1d) is used as the auxiliary membrane in chronopotentiometric and voltammetry measurements. The structure of this membrane is similar to that of the MK-40 membrane; the difference is that the fixed groups of this membrane are quaternary ammonium groups with a noticeable presence of tertiary and secondary amines. This membrane is characterized by stable properties and a moderate water splitting rate. Its stability is important for long-term experiments.
The pair of the MK-40 membrane and its modification is chosen for the following reasons. The MK-40 membrane is inexpensive and its properties are stable when it is used in ED processes; however, its performance is not as high as that of homogeneous commercial IEMs. In particular, under the same current density, the concentration polarization of this membrane (evaluated by the voltage) is higher in comparison with homogeneous membranes. On the other hand, as was shown earlier [36,38,39], modification of the MK-40 with a thin perfluorinated cation-exchange layer, such as Nafion®, results in a decrease of the concentration polarization/voltage and of water splitting. The MK-40MOD membrane remains cost-effective, as the thickness of the expensive Nafion® layer is small. The main cause of the better performance of the modified membrane, according to [36,38,39], is its lower electrical and geometric surface heterogeneity (which decreases the non-uniformity of the current density distribution) and stronger electroconvection. The fact that the MK-40MOD membrane has these improved properties allows us to anticipate its good scaling resistance.
Electrodialysis cell and experimental setup
The flow-through four-compartment electrodialysis cell used in this study and similar to that applied in our earlier studies [17,40] is shown in Fig. 4-2. The desalination compartment is formed by a cation-exchange membrane (M * ) and an auxiliary anion-exchange membrane.
The MK-40 or MK-40MOD membrane is used as the M* membrane, and the MA-41 membrane as the auxiliary one. The intermembrane distance in this compartment, h, is 6.4 mm, and the membrane area available to electric current passage is 2×2 cm². The study is performed with a mixed salt solution used for feeding the central desalination compartment (compartment 2 in Fig. 4-2). This solution is composed of Na₂CO₃ (1000 mg L⁻¹), KCl (800 mg L⁻¹), CaCl₂·2H₂O (4116 mg L⁻¹) and MgCl₂·6H₂O (2440 mg L⁻¹). In this solution, the Mg/Ca molar concentration ratio is 2/5 and the total concentration of CaCl₂ and MgCl₂ is 0.04 M.
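As a quick cross-check of the recipe, the mass concentrations can be converted to molarities; a minimal sketch using standard molar masses is given below (the computed CaCl₂ + MgCl₂ total reproduces the 0.04 M quoted above).

```python
# Feed solution recipe, mg L-1, as listed above
RECIPE = {"Na2CO3": 1000, "KCl": 800, "CaCl2.2H2O": 4116, "MgCl2.6H2O": 2440}
MOLAR_MASS = {"Na2CO3": 105.99, "KCl": 74.55,
              "CaCl2.2H2O": 147.01, "MgCl2.6H2O": 203.30}   # g mol-1

molar = {s: RECIPE[s] / 1000.0 / MOLAR_MASS[s] for s in RECIPE}   # mol L-1
for salt, c in molar.items():
    print(f"{salt}: {c * 1000:.1f} mM")
print(f"CaCl2 + MgCl2 total: {molar['CaCl2.2H2O'] + molar['MgCl2.6H2O']:.3f} M")  # ~0.040 M
```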
The pH of the solution is adjusted to 6.5 by adding HCl. The mineral composition of this solution corresponds approximately to that of thrice-concentrated milk. This composition is chosen because dairy products (such as milk whey) are often subjected to ED processing (demineralization). The main components are also present in seawater and in other natural and waste waters. The component concentrations are such that the rate of scale formation on the membrane is readily measurable. The current is supplied by a KEITHLEY 220 current source (11), the potential difference is measured using a HEWLETT PACKARD 34401A multimeter (12); the data are registered using a PC (13).
The auxiliary compartment (compartment 3 in Fig. 4-2) formed by an auxiliary MK-40 membrane and the tested (M * ) membrane, and two electrode compartments (compartments 4 in Fig. 4-2) are fed with a 0.04 M NaCl solution. The anode and the cathode are platinum polarizing electrodes. All three compartments are fed in parallel each with a rate of 30 mL min -1 . The concentration of this solution is taken the same as the total concentration of the solution feeding the desalination compartment in order to minimize the diffusion transport of auxiliary solution components into compartment 2. The electrically driven transport of cations from the anode compartment into the desalination compartment is restrained by the MA-41
anion-exchange membrane. The volume of the auxiliary solution circulating through compartments 3 and 4 was 8 L. The volume of the solution circulating through desalination compartment 2 was 5 L; its flow rate was also 30 mL min⁻¹. The volumes of both solutions were such that during one experimental run (maximum 5 hours of current flow) the decrease/increase in component concentrations was negligible. Indeed, the current was 42 mA; during 5 hours, the decrease/increase of a component in the circulating solution according to the Faraday law does not exceed 10 mmol, while more than 400 mmol (in terms of a singly charged anion, such as Cl⁻) were initially present in 5 L of the feeding solution.
Before the chronopotentiometric measurements, the membranes were equilibrated for 24 h with the solutions to be used subsequently. Two electric current modes are used in the chronopotentiometric measurements: a constant current mode and four pulsed electric field modes. In all cases the total working time is the same (75 min or 105 min, depending on the run). In the PEF mode the phase of constant current application (pulse lapse, T_on) alternates with a phase of zero current (pause lapse, T_off). The following PEF regimes are used: T_on/T_off = 15 min/15 min, 15 min/7.5 min, 5 min/5 min and 1 min/1 min. It was impossible to apply lower values of T_on and T_off because of limitations of the available software.
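For bookkeeping, the sketch below compares the studied PEF regimes in terms of duty cycle, number of cycles and charge passed, assuming a total elapsed run time of 75 min and the same pulse current density of 10.5 mA cm⁻² as in the constant-current runs; both figures are assumptions used only for illustration.

```python
# PEF regimes (pulse, pause) in minutes, as listed above
REGIMES = [(15, 15), (15, 7.5), (5, 5), (1, 1)]

I_PULSE = 10.5    # mA cm-2, pulse current density (assumed equal to the constant-current runs)
T_TOTAL = 75.0    # min, total elapsed time of one run (assumption)

for t_on, t_off in REGIMES:
    duty = t_on / (t_on + t_off)                    # fraction of time under current
    n_cycles = T_TOTAL / (t_on + t_off)             # number of pulse/pause cycles
    charge = I_PULSE * duty * T_TOTAL * 60 / 1000   # C cm-2 passed during the run
    print(f"Ton/Toff = {t_on}/{t_off} min: duty = {duty:.2f}, "
          f"cycles = {n_cycles:.1f}, charge = {charge:.1f} C cm-2")
```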
Analysis methods
Scanning electron microscopy and X-ray elemental analysis
Visualization of the membrane structure is made by a scanning electron microscope (Merlin, Carl Zeiss Microscopy GmbH) equipped with an energy dispersive spectrometer. The energy dispersive spectrometer conditions are 6 kV accelerating voltage with a 9.9-mm working distance. The dried membrane samples are coated with a thin layer of platinum in order to make them electrically conductive and to improve the quality of the microscopy photographs.
The membrane sample preparation for the X-ray analysis is the same as for SEM.
pH
The pH value of the diluate solution is measured using a flow cell (Fig. 4-2) with a pHM120 MeterLab pH meter equipped with a pH electrode.

Results and discussion

Fig. 4-3 shows the current-voltage curves of the membranes under study; three characteristic regions can be distinguished on each curve. An initial linear region at low currents is followed by a plateau of the limiting current density, i_lim. When approaching i_lim, the electrolyte interfacial concentration becomes nearly zero, which initiates the coupled effects of membrane CP: current-induced convection (generally, electroconvection and gravitational convection) and water splitting [17]. EC arises due to the action of the electric field on the electric space charge in the depleted boundary solution. Gravitational convection, which develops due to the non-uniform distribution of the solution density, is more likely to occur in relatively concentrated solutions (0.1 mol L⁻¹ or more) and wide compartments [41]. Comparison of the parameters of the I-V curves shows that the plateau length is smaller while the limiting current density, i_lim^exp, and the plateau slope are higher in the case of the MK-40MOD membrane in comparison with the MK-40. Similar results were obtained earlier [17,36,38,39]. Belashova et al. [39] observed a nearly 1.5-fold increase of the limiting current density across a MK-40 membrane after covering its surface with a 20 μm Nafion® film in a 0.02 M NaCl solution.
Close values of plateau length and plateau slope were found for the MK-40 and MK-40 MOD membranes in 0.02 M and 0.04 M individual solutions of MgCl 2 and 0.04 M CaCl 2 [17].
Pismenskaya et al. [36,38] found similar parameters of the I-V curves in NaCl solutions of different concentrations, and additionally established that the water splitting rate is essentially lower at the MK-40MOD membrane in comparison with the MK-40. All these results were explained by stronger electroconvection near the MK-40MOD membrane [17,36,39]. Indeed, according to [42-46], an earlier onset of unstable non-equilibrium electroconvection leads to a shorter plateau region; a higher i_lim^exp and plateau slope relate to a higher stable equilibrium and non-equilibrium electroconvection, which develop in the corresponding current range. A more intensive unstable non-equilibrium electroconvection gives rise to a steeper slope of the current density curve following the inclined plateau. Thus, the comparison of the parameters of the I-V curves shows that EC near the MK-40MOD membrane is stronger than that near the MK-40 one.
There are two factors caused by the electrical heterogeneity of the membrane surface which affect the mass transfer rate. The first one is the funnel effect [29], leading to a higher concentration polarization / higher potential drop at a given current density. The second one is the occurrence of a tangential electric field, which enhances EC and hence reduces CP. In the case of the MK-40 membrane, the conductive surface area is quite low (about 20 % [28]) and the size of a conductive spot, an ion-exchange resin particle, is small (about 30 μm [39]) compared to the diffusion layer thickness (about 200 μm in the cell under study); hence, the CP due to the funnel effect is considerable. The homogeneous conductive Nafion® film on the MK-40MOD membrane surface essentially reduces the CP caused by the funnel effect. However, the current lines near the Nafion® surface remain sufficiently distorted and the tangential electric field produces effective EC. The distance between the EC vortices near an electrically heterogeneous surface is determined by the length of the repeating fragment of heterogeneity on the surface [47,48] (Fig. 4-4). This length is about 100 μm for the MK-40 membrane used in this study [39]. The electrical conditions near the "gates" for ion passage (in particular, the value of the tangential electric field and the length and thickness of the space charge region near the surface) determine the size of the vortices. Thus, the homogeneous conductive film on the MK-40MOD membrane allows an "optimization" of the electric current lines in the depleted near-surface solution (Fig. 4-4, adapted from [38]): the more uniform distribution of current lines near the modified membrane leads to lower concentration polarization and higher electroconvection.
Theoretically, this type of optimization was studied by Davidson et al. [48] and Zabolotsky et al. [49], who simulated mass transfer near an ion-selective heterogeneous surface involving alternating conductive and non-conductive regions. It was found via simulation based on the Nernst-Planck-Poisson-Navier-Stokes equations that at low fractions of conductive regions the mass transfer is low due to the funnel effect. At very high fractions of conductive regions the mass transfer is also low due to the reduced tangential electric field. Hence, there is an optimum value of the conductive area fraction for which the electric field involves a sufficiently high tangential component causing a strong electroosmotic flow, while the growth of CP caused by the funnel effect remains low. In our case, an effective tangential electric field is formed due to the electrically heterogeneous MK-40 membrane serving as a substrate. Note that Balster et al. [44] and Wessling et al. [50] produced and then studied ion-exchange membranes with a tailored microheterogeneous surface, which showed quite elevated electroconvection.
Korzhova et al. [51] deposited spots of non-conductive hydrophobic fluoropolymer on the homogeneous membrane, which screened 8 or 12% of the membrane surface, and resulted in up to a 1.5-fold growth of the limiting and overlimiting current density and reduced water splitting. In particular, the surface modification allows an earlier onset of unstable electroconvection leading to an essential decrease of the plateau length.
The surface of the MK-40MOD membrane covered with a fluoropolymer film is relatively more hydrophobic than that of the MK-40 membrane: the contact angles of the MK-40 and MK-40MOD membranes are 55° and 63°, respectively. Hydrophobization of the membrane surface facilitates water slipping along the surface, which contributes to enhancing EC [33,51-53].
Chronopotentiograms
Fig. 4-5 shows the ChP of the MK-40MOD membrane and its derivative obtained in the model solution at i = 17.5 mA cm⁻² (i = 2 i_lim^exp). The shape of the curve is typical for IEMs in a binary solution of monovalent ions at i ≥ i_lim [54,55]. An immediate increase of PD is observed after switching on the direct current. It is related to the ohmic potential drop, Δφ_Ohm, over the membrane and the two adjacent solution layers where the concentrations are not yet affected by CP. At i < i_lim the value of PD does not increase significantly during one experimental run. At i ≥ i_lim the electrolyte concentration at the depleted membrane interface gradually decreases with time; the process is governed by electrodiffusion. As a result, a sharp increase in PD is observed on the ChPs. When the electrolyte concentration at the depleted membrane interface becomes sufficiently low, current-induced convection arises along with electrodiffusion. As a consequence, the rate of PD growth slows down and, finally, PD tends to a quasi-steady-state value, Δφ_St. The transition time, τ, related to the inflection point on the ChP, corresponds to the appearance of an additional mechanism of ion transport, namely, the current-induced convection [54,55]. At underlimiting current densities (i < i_lim), the observed PD oscillations can be explained by equilibrium electroconvective instability [56]; however, they may also be due to early stages of crystal growth on the membrane surface. The latter cause might be especially relevant at overlimiting currents, where scale formation becomes visible on the membrane surface. Equilibrium EC instability arises when a tangential gradient of the electrochemical potential occurs near the depleted membrane surface [56]. This gradient can appear due to the electrical and/or geometrical heterogeneity of the membrane surface [52]. PD oscillations at low potential drops (<0.1 V without the ohmic contribution) were observed earlier in NaCl [40,51] and in individual CaCl₂ and MgCl₂ [17,40] solutions under conditions where no scale formation occurs. The value of d is defined as the distance from the membrane surface over which the concentration profile has an unstable, oscillatory character. The curves are plotted using experimental data from Ref. [57].
Scaling formation in constant current mode
It can be seen that the PD across the membrane, in the case of both the MK-40 and MK-40MOD membranes, slowly increases with time after reaching the quasi-steady state. This growth occurs for the MK-40 at 7.9 mA cm⁻² ≤ i ≤ 15.5 mA cm⁻² (i_lim^exp ≤ i ≤ 2 i_lim^exp) and at 8.8 mA cm⁻² ≤ i ≤ 13.3 mA cm⁻² (i_lim^exp ≤ i ≤ 1.5 i_lim^exp) for the MK-40MOD membrane. Marti-Calatayud et al. [35], who studied the chronopotentiometry of IEMs in iron-containing solutions, related the increase of PD with time to precipitate formation. In order to understand where the precipitate is localized, scanning electron microscopy (SEM)
images of the membrane surfaces at the end of a 5-hour experimental run at i = 10.5 mA cm⁻² are obtained (Figs. 4-8a-i). Both surfaces, facing the diluate and the concentrate compartments, are studied.
For all the membranes, no precipitate is found on the concentrate side, where a 0.04 M NaCl solution circulates. Note that this solution passes through the concentrate and the two electrode compartments in parallel and then returns to tank 8, as shown in Fig. 4-2. Since at i < i_lim the concentration of salt ions at the membrane surface is much higher than that of the water ions, the H⁺ and OH⁻ ions cannot compete with the salt ions and there is no water splitting [28]. This statement is supported by the fact that there is no pH variation of the diluate solution at i = 4 mA cm⁻² (Fig. 4-9). Even at i = 9 mA cm⁻² no essential pH variation is observed.
An insignificant alkalinization of the diluate is observed at 7.9 mA cm⁻² ≤ i ≤ 15.5 mA cm⁻² (i_lim^exp ≤ i ≤ 2 i_lim^exp) in the case of the MK-40 membrane and at 8.8 mA cm⁻² ≤ i ≤ 13.3 mA cm⁻² (i_lim^exp ≤ i ≤ 1.5 i_lim^exp) in the case of the MK-40MOD membrane. Accordingly, it can be stated that under these conditions the water splitting rate is higher at the CEM than at the AEM. The explanation follows from the fact that the limiting current density of the MA-41 membrane in the model solution, determined from the experimental I-V curve for this membrane, is 9.3 mA cm⁻², which is about 1.2 times higher than that of the MK-40 membrane (Table 4-1).
The higher value of i_lim for the AEM is due to the larger diffusion coefficient of the dominant anion (Cl⁻) compared to those of the dominant cations (Ca²⁺ and Mg²⁺). Note that in the case of NaCl, i_lim^AEM / i_lim^CEM = 1.5. When the current density is higher than i_lim^CEM, but lower than or only slightly higher than i_lim^AEM, the water splitting rate is higher at the CEM. However, when the current density is sufficiently high, water splitting at the AEM becomes determinant and the solution in the diluate compartment is acidified. This situation occurs at i > 13.3 mA cm⁻² (i > 1.4 i_lim^AEM) in the case of the MK-40MOD membrane and at i > 15.5 mA cm⁻² (i > 1.7 i_lim^AEM)
in the case of MK-40.
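The ratio quoted for NaCl follows directly from the transport numbers of Na⁺ and Cl⁻ in solution. A minimal check, assuming ideally permselective membranes (T = 1), identical diffusion layers on both sides and the commonly tabulated values t(Na⁺) ≈ 0.396 and t(Cl⁻) ≈ 0.604, reads

\frac{i_{\lim}^{AEM}}{i_{\lim}^{CEM}} = \frac{1 - t_{Na^+}}{1 - t_{Cl^-}} \approx \frac{0.604}{0.396} \approx 1.5

For the present mixed solution, where the counter-ions of the CEM are the slower Ca²⁺ and Mg²⁺ ions, the same reasoning gives a smaller excess of i_lim^AEM over i_lim^CEM, consistent with the factor of about 1.2 found experimentally.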
According to the range of catalytic activity of ionogenic groups towards the water splitting reaction [65],
–N(CH₃)₃ < –SO₃H < =NH, –NH₂ < ≡N,
the MK-40 and MK-40MOD membranes (containing the –SO₃⁻ groups) are considered as membranes with a weak catalytic activity. The MA-41 membrane has two types of fixed ion-exchange groups: the strongly basic quaternary ammonium (–N⁺(CH₃)₃) groups and the weakly basic secondary (=NH) and tertiary amine (≡N) groups. The latter have a strong catalytic activity towards water splitting, which explains the higher catalytic effect of the MA-41 membrane (and of most other anion-exchange membranes [66]), as well as the acidification of the solution in the desalination compartment at relatively elevated currents. Note also that Choi and Moon [67] showed that at high current densities the quaternary ammonium groups initially present in an AMX anion-exchange membrane are partially converted into secondary and tertiary amines. Zabolotskiy et al. [58,68] found that this process occurs, in particular, in the case of the MA-41 membrane. These changes lead to an increase of the water splitting rate at the depleted solution/MA-41 membrane interface.
Thus the stronger catalytic activity of ionogenic groups of the MA-41 membrane than those of the MK-40 and MK-40 MOD membranes explains the higher rate of water splitting at this membrane as well as the overall course of curves shown in Fig. 4-9: under a similar degree of CP (which can be estimated by the i/i lim ratio), the water splitting rate is higher at the MA-41 membrane in comparison to the MK-40 and MK-40 MOD membranes.
The water splitting rate at the MK-40MOD membrane is lower than that at the pristine one, in view of the fact that the diluate acidifies more in the case of the MK-40MOD membrane (the case of i = 17.5 mA cm⁻² is shown in Fig. 4-9). This difference in the water splitting rate can be explained by the impact of two factors. First, the modified membrane provides a more uniform distribution of current lines near the membrane surface, which leads to reduced CP.
Second, more intensive EC at the modified surface promotes better mixing of the solution and higher near-surface concentration of salt ions. At higher currents, higher water splitting rate at the MA-41 membrane prevents scaling in the desalination compartment. The scale blocks the ion pathways on the membrane surface, which enhances water splitting.
There are two causes for this effect: 1) the local current density through the conductive area, as well as CP, strongly increases due to the reduction of the conductive surface fraction; 2) the scale can contain compounds which catalyze water splitting. The metal hydroxides, Mg(OH)₂ especially, are known as catalysts of water splitting [65,70,71]. Thus, it can be expected that the water splitting rate increases during an experimental run, which in turn increases the rate of scale formation. The pH data obtained in the PEF mode (Fig. 4-11b) show that the rate of water splitting at the MK-40MOD membrane is essentially lower in this mode. Note that a lower water splitting rate in the PEF mode was also detected by Malek et al. [72] and Mikhaylin et al. [20].
There is a delay in the pH variation at the beginning of the experiment; this delay is well seen in Fig. 4-11d, where it is close to 10 min. There are several reasons for this. First, pH is measured in the flow cell, which is located at a distance of about 60 cm from the desalination compartment outlet; thus it takes about 9 s for the outlet solution to reach the flow cell.
Second, after switching on the current, a certain time (the transition time) should pass before the electrolyte concentration at the membrane surface attains a sufficiently low value at which water splitting begins; the transition time may be determined from the inflection point of the chronopotentiogram and/or evaluated using the Sand equation. This time at a current density of 10.5 mA cm⁻² is approximately equal to 20 s. Finally, this delay may be due to the time needed for the accumulation of the precipitate on the membrane surface. This process can take several minutes. Indeed, initially the water splitting causing precipitation may be low. However, the presence of scale on the membrane surface stimulates water splitting, as explained above.
Hence, the rate of water splitting increases with time, following the increasing amount of precipitate on the membrane surface. Apparently, the main reason for the lower PD and lower water splitting rate in the PEF mode is the fact that during the pause lapse a relaxation of the concentration profile occurs at the membrane surface. This relaxation can be complete or partial, depending on the ratio of the pause duration to the relaxation time; the latter may be approximated by the Sand transition time. Hence, the species concentrations return to their values in the bulk solution, and the product of ion concentrations becomes lower than the solubility product. The reason for the scale formation ceases, and in the case where the precipitation is reversible, the scale may dissolve.
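The order of magnitude of the transition time quoted above (~20 s at 10.5 mA cm⁻²) can be reproduced with the Sand equation in the form commonly used for ion-exchange membranes, τ = (πD/4)·[z₁c₁F/(T̄₁ − t₁)]²/i². The sketch below uses the total divalent-cation concentration of the model solution and CaCl₂-like transport parameters from Chapter 3 as rough inputs; both are simplifying assumptions for this mixed electrolyte.

```python
import math

F = 96485.0   # C mol-1

def sand_transition_time(i, z, c, D, t, T=1.0):
    """Sand transition time, s; i in A cm-2, c in mol cm-3, D in cm2 s-1."""
    return (math.pi * D / 4.0) * (z * c * F / (T - t)) ** 2 / i ** 2

# Rough inputs: 0.04 M divalent cations, CaCl2-like D and t (assumptions)
tau = sand_transition_time(i=10.5e-3, z=2, c=4.0e-5, D=1.18e-5, t=0.44)
print(f"tau ~ {tau:.0f} s")   # of the order of 15-20 s, consistent with the ~20 s quoted above
```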
A larger scattering of the PD can be noted in the case of short PEF periods, when T_on/T_off = 5/5 and even more when T_on/T_off = 1/1 (Fig. 4-11d), in comparison with longer periods (Fig. 4-11c).
This should be due to a lower stability of the scale under short PEF periods. When a short current pulse is applied, only a small amount of precipitate, perhaps only a germ, may be formed. During the pause, the adherence of the precipitate to the surface weakens. After applying a new pulse, the electroconvective flow could wash off the precipitate. The weak adherence of the precipitate may also be due to the fact that the maximum of the concentration product of the scale-forming ions is located not on the membrane surface, but at a distance comparable with the diffusion boundary layer thickness. The simulated concentration profiles in the case of Mg(OH)₂ are shown in Fig. 4-10.
EC is an important factor in scale formation. The generation of an additional liquid flow near the surface enhances solution mixing, which results in a higher salt ion concentration at the depleted interface. This leads to 1) a lower PD, and 2) lower water splitting, since the appearing water ions have greater difficulty in competing with the salt ions in charge transfer [28]. As mentioned above, the MK-40MOD membrane initiates stronger EC, which is due to its relatively higher surface hydrophobicity and its electrically more uniform surface layer providing a "better" distribution of the current lines (Fig. 4-4), in the sense of stimulating EC.
Another factor which contributes to the lower scaling of the MK-40MOD membrane in comparison with the MK-40 membrane is the smooth surface of the former and the rough surface of the latter. Membranes with smooth surfaces are less susceptible to fouling than those with rough surfaces [73,74]. Colloidal particles preferentially accumulate in the valleys of a rough membrane surface [75]. As a result, the valleys become blocked and the formed precipitate acts as a center of crystallization. Our recent study [17] shows that the growth of Mg(OH)₂ crystals starts on the surface of the ion-exchange resin particles embedded in the heterogeneous MK-40 membrane surface. These data are consistent with the results of Asraf-Snir et al. [8,27], who found crystals of CaSO₄ on the surface of resin particles of a heterogeneous MA-40 membrane at the initial stages of scaling, while no scale was detected on the surface occupied by polyethylene. The amount of deposit on the membrane is much larger in the present study, and the resin particles are screened by the scale (Figs. 4-8a-h). Apparently, the scaling initially occurs on the resin particles; further on, the precipitate grows and spreads over the surface covered with polyethylene.
Conclusions
The scale formation on the surface of two cation-exchange membranes, the heterogeneous MK-40 and its surface-modified form MK-40MOD, is studied in a model solution imitating thrice-concentrated milk. The degree of the membrane surface screening by scale can be characterized by the value of the potential drop across the membrane, Δφ, which increases with increasing amount of scale.
The rate of scaling determines the rate of potential drop growth with time observed on the chronopotentiograms in quasi-steady state.
The mitigation of scaling on the MK-40 MOD membrane with modified surface is explained by three effects. The first one is the higher electroconvection at this membrane, which in addition to the increase in mass transfer helps to wash out the scale from the membrane surface. The second one is the reduction of water splitting, which allows adjusting pH in a slightly acid range due to the contribution of the anion-exchange membrane. The third effect is the reduction of the crystallization centers due to homogeneous surface of the MK-40 MOD . These centers are formed on a heterogeneous surface in the areas where the current lines are concentrated.
Another important factor allowing mitigation of scaling is the use of the PEF mode. During the pause lapse, a relaxation of the concentration profile occurs at the membrane surface. The species concentrations return partially or completely to their values in the bulk solution, and the product of ion concentrations becomes lower than the solubility product, which causes a (partial) dissolution of the scale.

Acknowledgments

In electrodialysis metathesis, two concentrate streams are formed: one of them contains singly charged cations together with multicharged anions; in the other one, multicharged cations (such as Ca²⁺) and singly charged anions (Cl⁻) are present. In order to improve this process, monovalent-ion-selective membranes are used.

A modification of the membrane surface by a layer selective to the transfer of singly charged ions is a largely used option. It has been shown in the literature that polymerization of polyaniline (PANI) on the cation-exchange membrane surface results in a decrease of the water transport number and in the appearance of permselectivity towards the transfer of singly charged ions. Thus, membranes modified with PANI are of high interest for metathesis electrodialysis.
Potentially, these membranes may be used as monovalent-cation-selective ones. However, their behavior was not studied from the point of view of possible scaling.
The aim of the study described in this chapter is the preparation of a cation-exchange membrane modified with a thin surface layer of PANI and the investigation of the kinetics of scale formation on this membrane in the same solution as used in the previous chapter (a model of thrice-concentrated milk). The chapter involves two sections. The first one is devoted to the development of the principles of preparation of a PANI-modified membrane in an applied electric field on the basis of a perfluorinated cation-exchange MF-4SK membrane. In the second section, we use the obtained knowledge for preparing a composite PANI membrane using, as the substrate, a cost-effective heterogeneous cation-exchange MK-40 membrane. The kinetics of the scaling process on the obtained membrane is then investigated.
Presentation of the article 1
There are various methods for the preparation of composite materials based on PANI and ion-exchange membranes. They can be divided into two groups: template synthesis of PANI directly in the matrix of the membrane, and introduction of pre-prepared commercial PANI into the membrane at the stage of its formation, for example when casting the film from a suspension. For instance, a powder of PANI particles is blended with liquid PVDF when preparing heterogeneous anion-exchange membranes by casting the obtained solution onto glass plates, followed by either wet or dry phase inversion. In the case of template synthesis, the matrix of the ion-exchange membrane serves as a template, which determines the structure of the resulting PANI phase and its spatial arrangement.
Introduction
Composite materials based on polyaniline (PANI) and perfluorinated membranes are very promising for use in various electrochemical and sensor devices [1]. Various methods for the preparation of these materials were developed. They can be divided into two large groups: template synthesis of polyaniline directly in the matrix of the membrane and introduction of commercial polyaniline [2][3][4][5]. Both groups of methods have their own advantages and disadvantages. When commercial polymer is introduced in the membrane at the stage of its formation, for example, during film casting from a suspension containing polyaniline and an F-4SK solution, it is difficult to have polyaniline uniformly distributed in the membrane.
Moreover, forming a continuous path of charge transfer along the chain of conjugated carbon-carbon bonds requires contact between the polyaniline particles. This can only be achieved by increasing the polyaniline content in the membrane phase, which adversely affects the strength characteristics of the composite material [6]. One of the advantages of this method is that the modifier has optimum conductive properties. Synthesis of polyaniline directly in the phase and/or on the membrane surface seems more promising. The oxidative polymerization of aniline directly in the transport channels of the membrane matrix creates conditions for a continuous path of charge and mass transfer along the polyaniline chain across the membrane. However, an obvious disadvantage of this group of methods is the impossibility of creating synthesis conditions that would provide the maximum electron conductivity of polyaniline [7,8].
There are many variants of the template synthesis of polyaniline. An essential condition of this synthesis is a monomer and oxidant concentration in the phase or near the membrane surface at which polymerization starts. This can be achieved in various ways: by keeping the membrane in solutions containing the monomer and the oxidant in sequence [9]; by sequential diffusion of the polymerizing solutions across the membrane into water; and by counter-diffusion of the monomer and the oxidant across the membrane [10]. To prepare composites with a uniform distribution of PANI in the MF-4SK polymer base matrix, iron(III) chloride or ultraviolet radiation is used [11]. Materials in which polyaniline is distributed mainly on the surface of the composite are obtained by treatment with oxidants such as persulfate, permanganate or dichromate anions, etc., which are co-ions relative to the MF-4SK membrane. Due to their anisotropic structure and asymmetric transport properties, these composites are the most promising for use in various devices and electrochemical processes. The preparation of these materials in a concentration field using high reagent and supporting acid concentrations was described in [5]. It can be assumed that the use of an electric field will accelerate the synthesis of polyaniline.
The green color typical for polyaniline in the form of emeraldine appeared after 20 min of synthesis when potassium dichromate was used and after 30 min in the case of ammonium persulfate. All the composite membrane samples had different color intensities. Fig. 5-2 shows the electronic absorption spectra of the samples, which exhibit absorption maxima typical for the emeraldine form of polyaniline: a maximum around 300 nm corresponding to the π-π* electron transitions in the benzene rings of polyaniline and a broad band at 800 nm corresponding to the localized polarons or radical cations [13]. Visual comparison of the composites obtained in the presence of ammonium persulfate and potassium dichromate at the same current density showed that the intensity of coloring was higher for the latter in all cases. This is indirect evidence for the larger amount of polyaniline in the membrane when potassium dichromate is used as an oxidant that initiates polymerization of aniline, though its redox potential is smaller than that of ammonium persulfate. This assumption is confirmed by the fact that the optical density at the absorption maxima for the MF-4SK/PANI-100-1 composite is higher than for MF-4SK/PANI-100-2. This unexpected result can be explained by the fact that during the preparation of composites, synthesis of polyaniline occurs concurrently with its oxidative destruction [14]. In general, the yield of PANI depends on the rate ratio of these two processes. We can assume that under the action of a stronger oxidizer (ammonium persulfate), the destruction is much faster and the amount of PANI in the membrane ultimately decreases. A comparative study of composites based on carbon nanotubes and polyaniline obtained in the presence of sodium persulfate and potassium dichromate was performed in [15]; it was noted that the yield of PANI was higher in the latter case. The authors explained this by the decomposition of sodium persulfate in water to thiosulfate, liberating molecular oxygen. However, it was also noted that the PANI layer obtained in the presence of sodium persulfate was more uniform and conductive compared with that synthesized in the presence of potassium dichromate.
Polarization behavior of anisotropic MF-4SK/PANI composites
The current-voltage curves (CVCs) were measured in the same cell in which the composites were prepared. The cell (Fig. 5-3) consisted of two near-electrode chambers with platinum polarizing electrodes with an area of 7.1 cm² and two near-membrane chambers. To prevent the transfer of the electrolysis products from the near-electrode to the near-membrane chambers, we used auxiliary anion- and cation-exchange membranes to separate the membrane under study from the anode and cathode chambers, respectively. The given rate of electrolyte circulation in the chambers (14 mL min⁻¹) was maintained using a peristaltic pump. Direct current was applied at a specified scan rate to the polarizing electrodes using an Autolab Pgstat302n potentiostat/galvanostat. The change in the potential difference across the membrane was recorded using Luggin-Haber capillaries connected to the membrane and to the silver/silver chloride electrodes. The silver chloride electrodes were connected to the potentiostat/galvanostat, from which the signal was fed to a computer, which allowed real-time recording of the experimental ΔE value. The measurement of cyclic CVCs at the standard scan rate usually used for measurements (1×10⁻⁴ A s⁻¹) showed that the polarization curves of the composites had a pronounced hysteresis, while the measurement of cyclic CVCs on the original MF-4SK membrane at the same scan rate showed no such effect. The degree of this effect depends on the current density at which the samples were synthesized.
To determine the reason for the hysteresis, we studied the chronopotentiograms of the composites at different current densities (Fig. 5-5). An analysis of the results showed that the time of reaching the constant voltage drop on the composite membrane is at least 17 min.
Therefore, to reduce the hysteresis, it is necessary to decrease the current scan rate during the CVC measurements. An analysis of the experimental CVCs measured at different scan rates of the polarizing current (Fig. 5-6) showed that the polarization behavior of the composite oriented with its non-modified side toward the flow of counter-ions is independent of the current scan rate in the range 0.4×10⁻⁵ to 4.6×10⁻⁵ A s⁻¹. However, when the composite is oriented with its polyaniline layer toward the flow of counter-ions, the shape and parameters of the CVCs depend strongly on this value. The hysteresis of the polarization curve decreases with the scan rate.
The hysteresis of CVCs was not completely eliminated during the experiment because of high time consumption. Thus, at a scan rate of 0.4×10 -5 A s -1 , the total time of measurement of one cyclic CVC was ~7 h. Therefore, in further experiments, we used the range of scan rates that allowed the minimum hysteresis on the CVC.
Thus, in order to obtain reproducible results with minimum hysteresis, the following procedure for measuring the CVCs of composite membranes was suggested. After the synthesis, a direct current with a density of 280 A m⁻² should be passed through the system for 3 h. Thereafter, the CVCs should be measured using current scan rates from 0.9×10⁻⁵ to 4.6×10⁻⁵ A s⁻¹, depending on the sample preparation conditions. The scan rate can be higher (4.6×10⁻⁵ A s⁻¹) for the samples obtained at the minimum current density (100 A m⁻²), but should be lowest (0.9×10⁻⁵ A s⁻¹) for the samples with the densest polyaniline layer, obtained at the maximum current density (300 A m⁻²) and having the most pronounced hysteresis of the CVCs.
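As a rough check of the time budget implied by these scan rates, the minimal Python sketch below estimates the duration of one cyclic CVC as twice the maximum polarizing current divided by the scan rate. The maximum sweep current (about 0.05 A) is an assumed value chosen only so that the estimate reproduces the ~7 h figure quoted above; the intermediate scan rate returned by suggested_scan_rate is likewise an assumption, not a value prescribed in the text.

# Illustrative estimate of cyclic CVC measurement time for a galvanostatic sweep.
# The scan-rate values follow the procedure described above; the maximum sweep
# current is an assumed number used only for illustration.

def cyclic_cvc_duration_hours(i_max_amps: float, scan_rate_amps_per_s: float) -> float:
    """Time for one full cycle 0 -> i_max -> 0 at a constant current scan rate."""
    return 2.0 * i_max_amps / scan_rate_amps_per_s / 3600.0

def suggested_scan_rate(synthesis_current_density_A_m2: float) -> float:
    """Scan-rate choice following the rule of thumb stated in the text:
    faster sweeps for samples made at low current density, slower for dense PANI layers."""
    if synthesis_current_density_A_m2 <= 100:
        return 4.6e-5   # A/s
    elif synthesis_current_density_A_m2 < 300:
        return 1.5e-5   # A/s (intermediate value, an assumption)
    return 0.9e-5       # A/s

if __name__ == "__main__":
    i_max = 0.05  # A, assumed maximum polarizing current of the sweep
    for rate in (0.4e-5, 0.9e-5, 4.6e-5):
        print(f"scan rate {rate:.1e} A/s -> ~{cyclic_cvc_duration_hours(i_max, rate):.1f} h per cycle")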
The developed procedure was used to study the polarization behavior of a series of anisotropic MF-4SK/PANI composites obtained at current densities of 100, 200, and 300 A m⁻². According to the data presented in Fig. 5-7, the CVCs are asymmetric for all the samples depending on the orientation of the modifying polyaniline layer relative to the counter-ion flow. If the counter-ion flow meets the polyaniline layer, the limiting current and the conductivity of the EMS decrease in all cases, while the potentials of the start of the overlimiting state increase compared with those for the original membrane. For the reverse orientation of the membranes, an increase in the current up to 100 A m⁻² did not lead to the limiting state in the EMS, and its conductivity was higher than in the case with the counter-ion flow meeting the modified composite surface. A similar form of polarization curves was observed for the anisotropic composites obtained by sequential diffusion of polymerizing solutions across the membrane into water [5,10]. Thus, for the anisotropic MF-4SK/PANI samples, the general tendency of the polarization behavior is the asymmetry of the CVCs, irrespective of the conditions of their synthesis and the oxidant used. We assumed that the reason for this is the presence of two layers formed in accordance with the asymmetric conditions of synthesis in the modified sample. Polyaniline, having anion-exchange properties [16], is mainly localized in the surface layer of the sulfocationite membrane. This system is similar to the polyethylene terephthalate track membrane described in [17] and modified in aniline plasma under conditions with only one open side of the membrane during plasmochemical polymerization in discharge. In this case, a gradient of thickness of the deposited polyaniline layer appears in the pore channels, and carboxyl cationite-anionite layers with bipolar conductivity are formed. The formation of an internal bipolar boundary between the cation- and anion-exchange layers in a perfluorinated membrane can lead to catalytic dissociation of water. When the membrane is orientated toward the counter-ion flow by its polyaniline layer, the counter-ions cease to participate in current transport because they are partially neutralized by hydroxyl ions generated on the bipolar boundary, which should lead to the appearance of a pseudo-limiting state in the EMS (Fig. 5-8). This is confirmed by a detailed analysis of the region of small displacements of the potential from equilibrium (from 0 to 100 mV) on the CVC. The effect of the appearance of the pseudo-limiting state and of the second inflection on the CVC at low potentials (Fig. 5-7b) is directly related to the emergence of a PANI layer on the membrane surface. The higher the current density used in the PANI synthesis, the larger the second inflection on the CVC and hence the greater the role of the internal bipolar boundary in the composite membrane. For the samples obtained at the minimum current density of 100 A m⁻², this effect is not observed on the polarization curves.
The influence of the internal bipolar contacts on the polarization behavior of the MF-4SK/PANI composite membrane was analyzed in [18]. The appearance of "fast ions" in the system due to the catalytic dissociation of water was confirmed by analyzing the frequency impedance spectrum.
The evaluation of the conductivity of the EMS from the slope of the ohmic region of CVCs showed that the asymmetry of this parameter is observed regardless of the measurement mode of the polarization curve, the orientation of the anisotropic membrane playing the determining role. Table 5-2 shows the calculated angular slopes of the ohmic region of CVCs and the asymmetry coefficients found as their ratio. According to Table 5-2, the EMS conductivity is lower when the modified side of the membrane is directed toward the counter-ion flow. This effect is consistent with the data of [10,19] for the composite membranes obtained during the diffusion of the polymerizing solutions. The different conductivities of the membrane system can be explained by different concentration profiles formed in this bilayer membrane. The concentration of counter-ions on the internal interface is higher when the counter-ion flow meets the nonmodified side of the membrane. Hence, the electric conductivity of the less conductive modified layer will be higher in this case. For the opposite orientation, the resistance of the modified layer and hence of the whole system is higher because of the lower concentration of counter-ions.
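The two quantities reported in Table 5-2 can be extracted from a measured CVC in a straightforward way: the angular slope of the ohmic segment is the least-squares slope of current versus potential drop over the initial quasi-linear region, and the asymmetry coefficient is the ratio of the slopes obtained for the two orientations of the anisotropic membrane. The Python sketch below illustrates this processing; the arrays are synthetic placeholders, not measured data.

# Minimal sketch of how the quantities in Table 5-2 can be obtained from measured CVCs:
# the angular slope of the ohmic region is a least-squares slope of i vs. potential drop,
# and the asymmetry coefficient is the ratio of the slopes for the two membrane orientations.
# The arrays below are synthetic placeholders, not measured data.
import numpy as np

def ohmic_slope(potential_drop_V: np.ndarray, current_A: np.ndarray) -> float:
    """Least-squares slope (A/V) of the initial, quasi-linear part of the CVC."""
    slope, _intercept = np.polyfit(potential_drop_V, current_A, 1)
    return slope

# Synthetic ohmic regions for the two orientations of an anisotropic membrane
dE = np.linspace(0.0, 0.1, 20)                                        # V
i_unmodified_side = 0.80 * dE + np.random.normal(0, 1e-4, dE.size)   # A
i_pani_side       = 0.45 * dE + np.random.normal(0, 1e-4, dE.size)   # A

s1 = ohmic_slope(dE, i_unmodified_side)
s2 = ohmic_slope(dE, i_pani_side)
print(f"slopes: {s1:.3f} and {s2:.3f} A/V, asymmetry coefficient = {s1 / s2:.2f}")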
Table 5-2. Angular slopes of the ohmic segment of the CVCs (A V⁻¹) and the asymmetry coefficients of the studied samples.
The asymmetry of conductivity was described in the literature, for example, for metal-polymer composite membranes [20]. The authors of [17,20,21] noted a relationship between the straightening effect, similar to the p-n transition in semiconductors, and the change in the pore geometry of the membrane after the deposition of a modifier on one of its surfaces. As a result, a bipolar interface is formed between the initial membrane and the deposited layer, which have oppositely charged functional groups in electrolyte solutions. This diode effect, or the effect of EMS conductivity switching, was also found in the present study for composite anisotropic membranes prepared in an external electric field. The asymmetry coefficient of the slope of the ohmic region of the CVC was considerably higher for the samples obtained by using potassium dichromate (Table 5-2).
According to Table 5-2, as the current density increases during the synthesis of composite membranes orientated by their PANI layer toward the counter-ion flow, their conductivity at first decreases and then begins to increase if potassium dichromate is used. This may be due to the appearance of a contribution of the intrinsic conductivity of PANI to the total conductivity of EMS. For the composites obtained in the presence of ammonium persulfate, however, the conductivity only decreased as the current density increased during the synthesis of PANI irrespective of the membrane orientation. One possible explanation for these differences in the polarization behavior of the composites is the fact that for the samples obtained at the same current densities, the amount of PANI in the composites is lower in the case of ammonium persulfate used as an oxidant than in the case of potassium dichromate.
Conclusions
Composite membranes based on MF-4SK and polyaniline with an anisotropic structure and asymmetric current-voltage curves can be obtained under the conditions of an external electric field. An analysis of various regions of the CVCs of the composite membranes showed that, as the current density increases during the synthesis of polyaniline, their conductivity decreases, the hysteresis of the cyclic current-voltage curve and the asymmetry of the CVC parameters increase, and the pseudo-limiting current appears due to the appearance of an internal bipolar boundary. Based on the analysis of the polarization behavior of the samples, we can determine their effective applications in various electrochemical devices. Materials with a moderate asymmetry of the current-voltage curve and a sufficiently high limiting current density are promising for use in electrodialysis because, as is known, modification of perfluorinated membranes with polyaniline increases their selectivity and reduces the diffusion and electroosmotic permeability [22][23][24]. Anisotropic composite membranes with a diode-like effect may be used as membrane switches or relays.
Acknowledgments
This study was financially supported by the Russian Foundation for Basic Research and the administration of the Krasnodar region (project nos. 14_08_31704_mol_a and 13_08_96540_r_yug_a).
Presentation of the article 2
In the previous article, the method for the preparation of an anisotropic composite membrane by formation of a PANI layer within a cation-exchange membrane and the investigation of the electrochemical behavior of the composite have been described.
The membrane thickness values are averaged from ten measurements at different locations on the effective surface of each membrane.
Membrane electrical conductivity
The membrane electrical conductance, G_m, is measured with a specially designed cell coupled to an AKTAKOM multimeter (Model ABM-4084, Russia). A 0.1 M NaCl reference solution is used. According to the procedure developed by Lteif et al. [2], the transverse electric resistance of the membrane, R_m, is calculated according to:
R_m = 1/G_m = 1/G_(m+s) - 1/G_s    (5.1)
Here G_(m+s) and G_s are the conductances of the solution measured with and without the membrane, respectively.
The membrane electrical conductivity, κ, can then be calculated as [2]
κ = L / (R_m S)    (5.2)
Here L is the membrane thickness and S is the electrode area.
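A minimal sketch of this differential calculation (Eqs. (5.1) and (5.2)) is given below in Python; the conductances, thickness and electrode area are assumed placeholder values, not measured ones.

# Sketch of the differential conductivity calculation, Eqs. (5.1)-(5.2).
# The numerical values are placeholders; only the formulas follow the text.

def membrane_resistance(G_m_plus_s: float, G_s: float) -> float:
    """Eq. (5.1): R_m = 1/G_(m+s) - 1/G_s (transverse membrane resistance, Ohm)."""
    return 1.0 / G_m_plus_s - 1.0 / G_s

def membrane_conductivity(R_m: float, thickness_m: float, electrode_area_m2: float) -> float:
    """Eq. (5.2): kappa = L / (R_m * S), in S/m."""
    return thickness_m / (R_m * electrode_area_m2)

if __name__ == "__main__":
    G_ms, G_s = 0.012, 0.015      # S, conductance with and without the membrane (assumed)
    L, S = 480e-6, 1.0e-4         # m and m^2, assumed membrane thickness and electrode area
    R_m = membrane_resistance(G_ms, G_s)
    print(f"R_m = {R_m:.2f} Ohm, kappa = {membrane_conductivity(R_m, L, S) * 1000:.1f} mS/m")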
Contact angle measurements
Contact angles, θ, of the wet membranes in the sodium form are measured by the sessile drop method [3]. The data are treated using the ImageJ software.
Electrodialysis cell and experimental setup
The investigations are carried out in the same flow-through four-chamber electrodialysis cell, which was used earlier [4].
Protocol
The procedures for measuring I-V curve and chronopotentiograms (ChP) are described in [3].
A KEITHLEY 220 current source is used to supply the current between the polarizing electrodes. The potential drop (PD) across the membrane under study is measured using Ag/AgCl electrodes. These electrodes are placed in Luggin capillaries. The Luggin tips are installed at both sides of the membrane under study, in its geometric center, at a distance of about 0.5 mm from the surface. The PD is registered by a HEWLETT PACKARD 34401A multimeter. I-V curves are recorded when the current is swept from 0 to 10.5 mA cm⁻² at a scan rate of 0.8 μA s⁻¹; the PD remains < 3 V. The chronopotentiometric measurements are made in the range of current densities from 0.2 i_lim^exp to 1.5 i_lim^exp.
Voltammetry
There are three regions on the I-V curve of IEMs, which are generally distinguished in the literature [5,6]. An initial linear region is followed by a plateau of limiting current density, and then by a region of a higher growth of current density. When approaching i_lim, the electrolyte interfacial concentration approaches zero, which initiates the coupled effects of membrane CP: current-induced convection (electroconvection and gravitational convection) and water splitting [5]. When the concentration of salt ions at the depleted bipolar boundary becomes sufficiently low, water splitting occurs there, similar to that taking place in bipolar membranes [7,8]. This process produces H+ and OH- ions, which are additional current carriers and whose emergence results in a decreasing membrane resistance. Thus, an inflection point appears on the I-V curve, after which the rate of current density growth increases (Fig. 5-10).
As the current continues to increase, the ion concentration at the external membrane surface decreases to a low value, which causes a new increase in the system resistance. This increase is manifested by the appearance of a second, inclined plateau on the I-V curve. This state relates to the "classical" limiting current density, i_lim^exp, observed in the case of conventional monopolar ion-exchange membranes [5].
The values of both limiting current densities may be determined by the point of intersection of the tangents drawn to the linear parts of the I-V curve on the left and on the right of the region where a sharp change of the rate of current growth with potential drop occurs (Fig. 5-10). There is an apparent contradiction between the behavior of the membrane presented in Fig. 5-12b and Fig. 5-12c. The increase in pH of the diluate solution is higher at i = 8.8 mA cm⁻² (Fig. 5-12c) than at i = 7.0 mA cm⁻² (Fig. 5-12b). However, the growth of PD with time occurs only in the case of i = 7.0 mA cm⁻²; in the cases of i = 8.8 mA cm⁻² and i = 10.5 mA cm⁻², no PD growth is detected. On the contrary, there is even a slight decrease of PD with time in the t > 200 s range. The explanation is that at i ≥ 8.8 mA cm⁻² the precipitate on the MK-40/PANI membrane is in the form of a gel (Fig. 5-13), while at lower current density the precipitate is in crystal form. The possible foulants and fouling modes are presented in Table 5-5.
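The tangent-intersection estimate described above can be automated as in the short Python sketch below: two straight lines are fitted to quasi-linear segments chosen on either side of the slope change, and their crossing point gives the limiting current density and the corresponding potential drop. The sample data are synthetic and serve only to illustrate the procedure.

# Sketch of the tangent-intersection estimate of a limiting current density.
# Two straight lines are fitted to user-chosen quasi-linear segments of the I-V curve
# (before and after the change of slope); their crossing gives i_lim and the
# corresponding potential drop. The arrays are synthetic, not measured data.
import numpy as np

def tangent_intersection(E1, i1, E2, i2):
    """Fit i = a*E + b on each segment and return (E*, i*) of the crossing point."""
    a1, b1 = np.polyfit(E1, i1, 1)
    a2, b2 = np.polyfit(E2, i2, 1)
    E_star = (b2 - b1) / (a1 - a2)
    return E_star, a1 * E_star + b1

# Synthetic example: steep ohmic region followed by a gently sloped "plateau"
E_ohmic = np.linspace(0.02, 0.20, 10); i_ohmic = 40.0 * E_ohmic            # mA/cm^2
E_plateau = np.linspace(0.60, 1.20, 10); i_plateau = 7.0 + 2.0 * E_plateau

E_lim, i_lim = tangent_intersection(E_ohmic, i_ohmic, E_plateau, i_plateau)
print(f"intersection at dE = {E_lim:.2f} V, i_lim = {i_lim:.1f} mA/cm2")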
These values of both limiting current densities are presented in …
… the concentrations of Mg2+ ions and OH- ions are small, while Mg2+ may be slightly in excess. The scale crystals are formed by the Mg2+ and OH- ions in stoichiometric proportion. The low concentration of Mg2+ ions does not allow peptization.
In the case of the MK-40/PANI membrane at i > i_lim^exp, the water splitting rate is relatively high and there is an excess of OH- ions in comparison with the Mg2+ ions. The excess of OH- ions leads to the formation of negatively charged colloidal particles on the membrane surface. The micelle core consists of Mg(OH)2. In accordance with the Fajans-Paneth-Hahn law [13], there is a layer of preferentially adsorbed OH- ions next to the core. The particle is now negatively charged and attracts positively charged ions from the solution (Fig. 5-15).
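A back-of-the-envelope supersaturation check illustrates why hydroxide precipitation requires the strongly alkaline local conditions created by water splitting. In the Python sketch below, the ion product of Mg2+ and OH- is compared with the solubility product of Mg(OH)2; the Ksp value (~5.6×10⁻¹²) is an approximate handbook figure, and the Mg2+ concentration and local pH values are illustrative assumptions, not measurements from this work.

# Back-of-the-envelope check of Mg(OH)2 supersaturation at the depleted membrane surface.
# Ksp(Mg(OH)2) ~ 5.6e-12 is a commonly quoted handbook value (assumption, not from the text);
# the Mg2+ concentration and the local pH are illustrative inputs.

KSP_MG_OH_2 = 5.6e-12   # (mol/L)^3, approximate literature value

def ion_product(c_mg_mol_L: float, pH: float) -> float:
    c_oh = 10.0 ** (pH - 14.0)          # mol/L, from the water ion product at 25 C
    return c_mg_mol_L * c_oh ** 2

for pH in (7.0, 9.0, 10.5):
    ip = ion_product(0.02, pH)          # 0.02 mol/L Mg2+, of the order of the feed solutions used here
    status = "supersaturated" if ip > KSP_MG_OH_2 else "undersaturated"
    print(f"pH {pH:4.1f}: ion product {ip:.1e} -> {status}")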
Conclusions
Processes using ion-exchange membranes now have great potential in modern industry for concentration, demineralization or modification of various products and by-products. In recent decades, many researchers have been interested in developing new ion-exchange membranes because of their academic and industrial value. However, these processes are subject to numerous limitations, such as concentration polarization, fouling and scaling, which considerably reduce the efficiency of the process and increase its cost. In order to mitigate, or even avoid, the formation of these precipitates, it is necessary to elucidate the processes and mechanisms of mineral precipitation. To do so, it is necessary to undertake a thorough study of the composition of the deposits, the kinetics of scaling and its dependence on the membrane surface properties, as well as the influence of this process on the performance of ion-exchange membranes. We have shown that the scaling rate is significantly lower in the pulsed electric field mode, where an unstable, periodically crumbling scale occurs. The general tendency is that the scaling rate decreases with decreasing Ton, while Toff remains equal to or somewhat less than Ton.
Perspectives
This thesis research work revealed the feasibility of using pulsed electric field modes to reduce the rate of scale formation. However, the duration of the experimental runs in which pulsed electric fields were applied (about 3 h) does not allow us to say with certainty whether the scale would remain stable and negligible during a long electrodialysis operation or would increase with time. The question remains how the system will behave over a time comparable with the service time of membranes on an industrial scale. Thus, it is of interest to carry out an experimental study of electrodialysis in pulsed electric field modes of long duration (several tens of hours), using a cell similar to those used in industrial stacks.
Further investigations should be accomplished as well to study the effect of short period pulsed electric field (high frequency) in galvanostatic and potentiostatic regimes on the scale mitigation. As it was shown by Mishchuk et al., potentiostatic pulsed electric field regime should lead to a more significant effect in reduction of concentration polarization, intensification of electrodialysis and, eventually, mitigation of scaling. Moreover, other waveforms of pulsed electric field can be studied to enhance solution mixing by electroconvective vortices. It is known that in electroplating of metals, alloys, composites and semiconductors, this approach is widely used for improving the deposition. The waveforms can be quite complicated involving cathodic pulse followed by a period without current (or an anodic pulse), direct current with superimposed modulations, duplex pulses, pulse-on-pulse, pulse reverse current and others.
The membranes modified by PANI are selective to monovalent ions. Further investigations should be undertaken to verify the feasibility of using these membranes in processes where a barrier for multivalent ion transport is useful, such as electrodialysis metathesis and reverse electrodialysis.
Résumé
Abstract
Scaling on the surface and in the bulk of ion-exchange membranes is a considerable obstacle to electrodialysis. The scale reduces the effective surface area of the membrane and leads to additional resistance to mass transfer and solution flow. Three cation-exchange membranes are used in this study: a heterogeneous commercial MK-40 membrane and two of its modifications. The MK-40/Nafion membrane is obtained by mechanically coating the MK-40 membrane surface with a homogeneous ion-conductive Nafion® film. The MK-40/PANI membrane is obtained by polyaniline synthesis on the membrane surface. The solutions used in the study are 0.02 and 0.04 mol/L CaCl2 and MgCl2 solutions, as well as a solution imitating the mineral composition of milk concentrated three times. The visualization of the membrane surface is made using optical and scanning electron microscopy. The elemental analysis of the scale on the membrane surface is performed by X-ray analysis. The hydrophobic-hydrophilic balance of the membrane surface is estimated by contact angle measurements. To characterize the cation transport through the membranes and the water splitting rate, chronopotentiometry and voltammetry are used; pH measurement of the diluate solution is conducted at the same time.
It is shown that the relatively high hydrophobicity of the membrane surface and its electrical and geometric heterogeneity create conditions for the development of electroconvection. The electroconvection intensity in the case of MK-40/Nafion is significantly higher, and in the case of MK-40/PANI lower, in comparison with that of the unmodified membrane. Electroconvection vortices cause mixing of the solution at the membrane surface in a layer about 10 µm thick. This effect significantly increases mass transfer in intensive current modes and prevents or reduces the scaling process, as well as reduces the water splitting rate at the membrane surface. The rate of electroconvection essentially depends on the counter-ion hydration degree; it increases with increasing counter-ion Stokes radius. The rate of scale formation on the membrane surface is determined from the slope of the chronopotentiogram. The formation of Mg(OH)2, Ca(OH)2 and CaCO3 scales is observed. It is experimentally established that the scaling rate on the surface of MK-40/Nafion is lower, and on the surface of MK-40/PANI higher, in comparison with the MK-40 membrane. The scaling rate is significantly reduced when the pulsed electric current mode is applied. Such a mode allows the potential drop to be reduced by more than a factor of two and a quasi-steady state to be reached, because an unstable, periodically crumbling scale occurs.
Key words: ion-exchange membrane, scaling, surface properties, electrodialysis, electroconvection
Fig. 1-1. Schematic representation of a membrane as a fragment of an electromembrane system: R - fixed ions; K, A - counter-ions and co-ions in the membrane and electrolyte solution; polymer matrix chains; bridges of polymer agent, cross-linking the main …
Fig. 1-2. Chemical structure of sulfo cation-exchange materials: a) polystyrene sulfonate, and b) perfluorinated membrane (Nafion®).
Fig. 1-3. Microphotographs of cross sections and AFM images of (a, c) the pristine and (b, d) …
Fig. 1-5. SEM image of the surface of MK-40 (a) and MK-40*/Nf 20 (b) membranes and the cross section of MK-40 (c) and MK-40*/Nf 20 (d) membranes (adapted from [34]).
Fig. 1-15. Mechanisms of electroconvection.
Fig. 1-16. Profiles of ion concentration: the distribution of electrolyte concentration near the membrane surface at lower (curves 1, 1') and higher (curves 2, 2', 3) voltage; … is the overall …
Fig. 1-17. Principal scheme of the four-electrode cell.
Fig. 1-18. Typical I-V curve.
Fig. 1-19. Characteristic points of the chronopotentiogram.
Fig. 1-20. Schematic distribution of current lines at the homogeneous (a) and heterogeneous (b) membrane surfaces (adapted from [110]).
Fig. 1-22. Scheme of the RMD cell. 1 - anion-exchange membrane under investigation, 2 - top half-cell with a NaCl solution (anode chamber), 3 - bottom half-cell with a NaCl solution (cathode chamber), 4 - inlet solution capillary, 5 - outlet solution capillary, 6 - Pt polarizing electrodes, 7 - Luggin-Haber capillaries, 8 - galvanostat, 9 - millivoltmeter, 10 - Ag/AgCl …
Fig. 1-23. The diagram of the experimental cell: membranes (1); plastic gaskets (2); elastic gaskets (3, 4); square aperture (5); cathode (6); anode (7); plastic capillaries (8); Ag/AgCl chloride electrode or Luggin capillary (9); connecting pipe (10); stream spreader of a comb shape (11); the membrane studied (A*); a cation-exchange membrane (C); an anion-exchange membrane (A) (adapted from Volodina et al.).
Fig. 1-27. Chronopotentiograms obtained with 0.02 M Fe2(SO4)3 solutions and in the overlimiting range of currents for (a) Nafion 117 and (b) HDX 100 (adapted from [98]).
Fig. 1-28. Examples of contact angle measurement methods: a) in air and b) in liquid.
Fig. 2-1. Image of the surface of swollen MK-40 (a) and MK-40 MOD (b) membranes as obtained with a Zeiss AxioCam MRc5 light microscope.
Fig. 2-2. Chronopotentiogram of the MK-40 MOD membrane in 0.02 M CaCl2 solution at a current density of i = 4.8 mA cm⁻².
Fig. 2-3. Chronopotentiograms of the MK-40 MOD membrane, as measured for different electrolytes at …
Fig. 2-5. Initial portions of chronopotentiograms of the MK-40 MOD membrane in 0.02 M solutions of (a) NaCl and (b) MgCl2 and the same curves shown in the differential form. The data were obtained at …
Fig. 2-6. Plot of values for the first local maximum of reduced CP potential difference (open symbols) and the time taken to reach the maximum (closed symbols) versus the current density normalized to the limiting current density calculated by Eq. (2.5).
Fig. 3-1. SEM images of MK-40 membrane surface in dry (a) and swollen (b) states, the surface (c) and cross section (d) of the MK-40 MOD membrane in the dry state.
Fig. 3-3. Chronopotentiogram of an MK-40 membrane in a 0.04 M CaCl2 solution at a current density of 11.5 mA cm⁻².
Fig. 3-6. SEM images of the surface of dry MK-40 membrane before (a) and after 5 h electrodialysis of 0.04 M MgCl2 (b) and 0.04 M CaCl2 (c) solutions at i = 1.4 i_lim^th.
Fig. 3-7. Current-voltage curves for MK-40 and MK-40 MOD membranes in MgCl2 (a) and CaCl2 (b) solutions of different concentrations.
Fig. 4-1. SEM images of the membrane surfaces in a dry state: MK-40 (a), MK-40 MOD (b), MA-41 (d) and the cross section of the MK-40 MOD membrane (c).
Fig. 4-2. Principal scheme of the experimental setup. ED cell (1) consists of one desalination (2), one auxiliary (3) and two electrode compartments (4), two platinum polarizing electrodes (5) and two Ag/AgCl electrodes inserted in Luggin capillaries (6). A mixed salt solution (7) composed of Na2CO3, KCl, CaCl2 and MgCl2 feeds the desalination compartment; NaCl solution (8) circulates through the auxiliary and two electrode compartments. A flow cell with a pH electrode (9) and a pH meter pHM120 MeterLab (10) are used for pH measurements.
Fig. 4-3. I-V curves of the MK-40 (dark blue line) and MK-40 MOD (grey line) membranes in the model solution. p is the "plateau" length; 1/R is the "plateau" slope; i_lim^exp is the limiting current density determined (for the MK-40 membrane) by the intersection point of tangents.
Fig. 4-4. Schematic distribution of electric current lines near the surface of MK-40 (a) and MK-40 MOD (b) membranes. Due to the low fraction of conductive area and narrow "gates" for ion passage, the funnel effect [29] results in high concentration polarization of MK-40 while electroconvection is low. The increased "gates" and more hydrophobic surface of MK-40 MOD …
Fig. 4-5. Chronopotentiogram of the MK-40 MOD membrane in the model solution and its time derivative at i = 17.5 mA cm⁻². The ohmic potential drop just after the current is switched on, the steady-state potential drop, the reduced potential drop (their difference) and the transition time are indicated.
Fig. 4-6. Chronopotentiograms of the MK-40 and MK-40 MOD membranes measured in the model solution at i = 5.3 mA cm⁻² (i < i_lim^exp) (a) and i = 12.5 mA cm⁻² (i > i_lim^exp) (b) and in a 0.1 M NaCl solution at i = 5.3 mA cm⁻² and i = 12.8 mA cm⁻² (c).
Fig. 4-7. I-V curves (a) and the current density dependences of the size d of the convection instability zone near the MK-40 and MK-40 MOD membranes in a membrane channel (the length is 4.2 cm, the intermembrane distance is 5 mm); a 0.02 M NaCl solution is fed with a velocity of 0.2 mm s⁻¹. The data are obtained using an interferometric setup of the Mach-Zehnder type.
Fig. 4-8. SEM images of the surface of dry MK-40 (a-e), MK-40 MOD (g-h) and MA-41 (f, i) membranes after 5 h continuous electrodialysis of the model solution at i = 10.5 mA cm⁻². (f) and (i) show the cases where the MA-41 membrane is used together with the MK-40 (f) or MK-40 MOD (i) membrane.
Fig. 4-9. Time dependence of the pH of diluate for the MK-40//MA-41 (dark blue line) and MK-40 MOD//MA-41 (grey line) membrane pairs at i = 4 mA cm⁻², 9 mA cm⁻² and 17.5 mA cm⁻².
Fig. 4-10. Concentration profiles obtained using the mathematical model based on the Nernst-Planck-Poisson equations developed by Urtenov et al. [69] (EC is not taken into account); the solid lines show the equivalent fractions of ions (Ci), the dashed line shows the product (CMg(COH)2 × 50) near a cation-exchange membrane (CEM). Arrows show the direction of H+ …
Fig. 5-1. Diagram of the formation of a polyaniline layer on the surface of the MF-4SK membrane under the conditions of an external electric field.
Fig. 5-2. Electronic absorption spectra of MF-4SK-PANI composites obtained under the conditions of an external electric field using (1)-(3) ammonium persulfate and (4) potassium dichromate as an oxidant of aniline: (1) MF-4SK/PANI-100-2, (2) MF-4SK/PANI-200-2, (3) MF-4SK/PANI-300-2, and (4) MF-4SK/PANI-100-1.
Fig. 5-3. Diagram of a unit for measuring the CVCs of the membrane system: (1) measurement cell, (2) near-electrode chambers, (3) platinum polarizing electrodes, (4) near-membrane chambers, (5) membrane under study, (6) Luggin-Haber capillaries, (7) auxiliary membranes, (8) silver/silver chloride electrodes, (9) Autolab Pgstat302n potentiostat/galvanostat, (10) PC, (11) Heidolph Pumpdrive 5101 peristaltic pump, and (12) vessels for solutions.
Fig. 5-4. CVCs of the MF-4SK/PANI-1 composite in 0.05 M HCl measured in sequence; measurement no. (1) 1, (2) 5, (3) 10, (4) 12, and (5) 13.
Fig. 5-5. Chronopotentiograms of the MF-4SK/PANI-1 anisotropic composite measured in 0.05 M HCl at different polarizing currents i, A dm⁻²: (1) 11.3, (2) 7.0, (3) 6.5, (4) 2.8, and (5) 1.3.
Fig. 5-6. CVCs of the MF-4SK/PANI-1 anisotropic composite measured in 0.05 M HCl at (a) high ((1) 1.5×10⁻⁵ A s⁻¹ and (2) 4.6×10⁻⁵ A s⁻¹) and (b) low ((1) 0.4×10⁻⁵ A s⁻¹ and (2) 0.8×10⁻⁵ A s⁻¹) scan rates of the polarizing current.
Fig. 5-7. (a), (c) Integrated and (b), (d) differential CVCs of the MF-4SK/PANI composites in 0.05 M HCl obtained at different current densities using (a), (b) potassium dichromate and (c), (d) ammonium persulfate as an oxidant of aniline. Current densities during the synthesis of the composites, A dm⁻²: (1) 100, (2) 200, and (3) 300.
Fig. 5-8. Diagram of H+ and OH- ion flows that appear in the EMS with the membrane orientated by its modified layer toward the (a) anode and (b) cathode (the modified layer is hatched).
Fig. 5-9. Image of the surface of swollen (a) MK-40 and (b) MK-40/PANI membranes as obtained with a Zeiss AxioCam MRc5 light microscope and an Altami microscope, respectively.
Fig. 5-10. I-V curves for MK-40 and MK-40/PANI membranes in the model solution and the corresponding dependence of the pH of the diluate solution.
Fig. 5-11. Chronopotentiogram of the MK-40 membrane in the model solution and its derivative at …
Fig. 5-12. Chronopotentiograms of MK-40 and MK-40/PANI membranes in the model solution and the corresponding time dependence of the pH of the diluate solution.
Fig. 5-13. The precipitate in the form of a gel on the MK-40/PANI membrane surface at i = 10.5 mA cm⁻².
Fig. 5-14. SEM images of (a) Mg(OH)2, (b) CaCO3 and Ca(OH)2 crystals on the MK-40 membrane surface facing the diluate after 5 h of electrodialysis of the model solution.
Fig. 5-15. The micelle structure.
Fig. 1-4. Microphotographs of the membrane MA-40: a) initial; b) profiled (adapted from [4]).
Fig. 1-6. Total current density and partial current density of H+ ions [i(H+)] through MK-40 and its modifications (adapted from [34]).
Fig. 1-7. Schematic diagram illustrating the principle of electrodialysis (adapted from [40]).
Fig. 1-13. Concentration profile in an EMS. C0 is the electrolyte concentration in the bulk solution; C1 and C2 are the electrolyte concentrations at the boundaries between the diluate and concentrate diffusion layers and the membrane, respectively; the bar over a letter designates that the quantity relates to the membrane phase; δ and d are the thicknesses of the diffusion layer and of the membrane, respectively.
Fig. 1-14. Scheme of gravitational convection (adapted from [66]).
EC can develop according to different mechanisms depending on the applied voltage and other conditions (Fig. 1-15).
… located in the segment where two mechanisms of electrolyte delivery are present: the first one is electrodiffusion, the second one is current-induced convection. The contribution of the convective component gradually increases after passing the inflection point. The increasing convective contribution allows the system to reach a (quasi-)steady state. It follows from this analysis that the position of the inflection point gives a more accurate estimation of the transition time.
Fig. 1-21. Chronopotentiograms for heterogeneous (MA-41) and homogeneous (AMX) anion-exchange membranes in a 0.1 M NaCl solution. The membrane is in a vertical position, the current density is 17.2 mA cm⁻² (adapted from [111]).
Photo imaging, optical microscopy, scanning electron microscopy (SEM), confocal laser scanning microscopy (CLSM) and atomic force microscopy (AFM) are ordinary methods for visualization of membrane fouling (Fig. 1-25). These methods may be used to identify the presence of membrane fouling and to reveal the fouling structure and distribution on the membrane surface or inside the membrane. The majority of membrane fouling investigations are performed with the use of SEM; biofouling is analysed by CLSM.
Fig. 1-25. Visualization of membrane fouling by photo imaging (A, B), optical microscopy (C), SEM (D), CLSM (E), AFM (F) (adapted from [116]).
Fig. 1-26. Schematic representation of the system for measuring membrane conductance.
… Nafion 117 (Du Pont) and heterogeneous HDX 100 cation-exchange membranes in mixed Na2SO4 and Fe2(SO4)3 solutions by means of chronopotentiometric measurements. They showed that the increase with time of the quasi-steady-state PD was related to the formation of precipitate on the depleted membrane surface (Fig. 1-27).
1.4.3 Methods for prevention and control of ion-exchange membrane scaling
1.4.3.1 Ion-exchange membrane modification
Nowadays researchers try to change the membrane surface properties such as surface charge, hydrophobic/hydrophilic balance and roughness.
Fig. 1-29. Dead-end filtration and cross-flow filtration (adapted from [150]).
1.4.3.5 Changing regimes of electrodialysis treatment
1.4.3.5.a Control of hydrodynamic conditions
… number on the development of electroconvection at the surface of heterogeneous cation-exchange membrane modified with an MF-4SK film. Petroleum Chemistry (56) 2016 440-449. V. V. Gil, M. A. Andreeva, N. D. Pismenskaya, V. V. Nikonenko, C. Larchet, and L. Dammak. Kuban State University, ul. Stavropol'skaya 149, Krasnodar, Russia; Institut de Chimie et des Matériaux Paris-Est, UMR 7182 CNRS - Université Paris-Est, 2 Rue Henri Dunant, 94320 Thiais, France.
… NaCl (R.P. NORMAPUR™, VWR International), CaCl2·2H2O, and MgCl2·6H2O (LABOSI, Fisher Scientific s.a.). Values for the diffusion coefficients (Di), Pauling crystal radii (rcr), Stokes radii determined from the Einstein-Stokes relation (rSt), and the hydration numbers of the Na+, Ca2+, and Mg2+ cations and the Cl- anion in aqueous solution are presented in Table 2-1. The electrolyte diffusion coefficients (D) and cation transport numbers (ti) are also listed there. All the transport characteristics are given for infinitely dilute solutions and a temperature of 20 °C, at which the experiments were conducted. The values for the crystal radii and ion hydration …
Fig. 2-2 exemplifies a chronopotentiogram (CP) of one of the test systems (MK-40 MOD in 0.02 M CaCl2 solution), which shows some characteristic sections and points that are used in data processing. Explanations are given in the figure caption.
… coordinates [29]. This quantity is sometimes called the reduced potential difference. When a membrane system reaches a (quasi-)steady state, the potential changes only slightly with time (Fig. 2-2). The steady-state potential values and the corresponding current densities i were used to construct galvanostatic current-voltage curves (CVCs).
Figs. 2-3 and 2-4a show chronopotentiograms of the test membrane systems under comparable conditions: reduced potential differences (see Eq. (2.1)) for different electrolytes are …
… MgCl2 (Fig. 2-4b), which is substantially shorter than the transition time τ = 13 s. The nonmonotonic pattern of the time dependence of the potential drop in the initial portion of the chronopotentiogram develops into potential oscillations with increasing time.
Fig. 2-4. Chronopotentiograms of the MK-40 MOD membrane, as measured for different electrolytes at …
Fig. 2-7 shows reduced current-voltage curves for the test membranes in 0.02 M NaCl, CaCl2, and MgCl2 solutions, as derived from chronopotentiograms in the form of the dependence of the time-averaged potential drop in the steady state, …
Fig. 2-7. Current-voltage curves for the MK-40 MOD membrane in 0.02 M NaCl, CaCl2, and MgCl2 solutions. The reduced potential difference was determined using Eq. (2.1). The value of …
… affecting the shape of the CP in the vicinity of the transition time (Fig. 2-5b). On passing from NaCl to the divalent cations, the frequency of low-frequency oscillations decreases to 0.13 Hz (CaCl2) or 0.08 Hz (MgCl2).
Chronopotentiograms and I-V curves are measured for a commercial MK-40 and a modified MK-40 MOD membrane in MgCl2 and CaCl2 electrolyte solutions of two (0.02 M and 0.04 M) concentrations. The modification is carried out by casting a homogeneous Nafion® film onto the MK-40 membrane surface. Another MK-40 membrane and a homogeneous anion-exchange AMX membrane are used as auxiliary membranes.
… Mg(OH)2 crystals are found on the surface of ion-exchange particles embedded into the MK-40 membrane … Water splitting becomes dominating at the AMX membrane, thus providing a low pH in the overall desalination chamber, hence creating the conditions where the hydroxide deposition does not occur. No salt deposit is detected at any currents in the case of CaCl2 solutions, evidently because of an essentially higher value of the solubility product for Ca(OH)2 in comparison with Mg(OH)2. No scaling is detected either in the case of the MK-40 MOD membrane.
[38]. When comparing the surface of Nafion and MK-40, one can distinguish hydrophobic and hydrophilic areas there. The perfluorinated matrix of Nafion is rather hydrophobic. The contact angle of Teflon, which has the same chemical structure as the Nafion matrix, is 115°. The surface fraction of pore mouths forming hydrophilic areas of the surface can be evaluated as about 20 % (equal to the water content in the membrane). Two main components of the MK-40 membrane are polystyrene (making the matrix in KU-2 cation- …
… was used in the study. The central desalination chamber (3) was formed by an auxiliary anion-exchange Neosepta® AMX-SB membrane (Astom, Japan) (AEM) and the tested (M*) MK-40 or MK-40 MOD membrane. The intermembrane distance in this chamber, h, was 6.5 mm. The experiment was carried out at 20 °C.
Fig. 3-2. Principal scheme of the experimental setup. ED cell (1) consists of one concentration (2), one desalination (3) and two electrode chambers (4), two platinum polarizing electrodes (5) and two Ag/AgCl electrodes inserted in Luggin capillaries (6). A …
Fig. 3-4. Chronopotentiograms of MK-40 (dark blue (darker) lines) and MK-40 MOD (grey (lighter) lines) membranes measured in 0.04 M MgCl2 (a) and 0.04 M CaCl2 (b) solutions at i/i_lim^th = 0.6, 1.4 and 2.5.
Fig. 3-5. Time dependence of the pH difference between the outlet and inlet solution passing through the desalination chamber in the cases of MK-40//AMX-SB (solid lines) and MK-40 MOD//AMX-SB (dashed lines) measured in 0.04 M MgCl2 (a) and 0.04 M CaCl2 (b) solutions.
Fig. 3-6b shows crystals on the ion-exchange particle embedded onto the MK-40 membrane surface. Formation of the Mg(OH)2 deposit is due to the following reaction: Mg2+ + 2OH- → Mg(OH)2↓.
… the solution is significantly acidified (the case of MK-40 MOD/MgCl2, Fig. 3-5a) and no deposit is formed. In this case, a part of the H+ ions generated in the space charge region at the depleted AMX-SB surface is delivered to the MK-40 surface by the electric field. The solution near the MK-40 surface gets more acidic, which prevents the formation of Mg(OH)2 or Ca(OH)2 scaling.
Fig. 3-7 shows the I-V curves obtained in CaCl2 and MgCl2 solutions of different concentrations. The I-V curves are found from the ChPs of the tested membranes measured at different current densities. The value of the potential drop related to a current density is the quasi-steady-state PD averaged in time, …
Fig. 3-7 shows that the experimental limiting current density, i_lim (which can be evaluated by the intersection of the tangents drawn to the first and the second regions of the I-V curve), and the overlimiting current density are always lower for the MK-40 membrane in comparison with the MK-40 MOD membrane. The value of i_lim for the MK-40 MOD membrane is about 1.5 times higher than that for the MK-40 membrane. As discussed above, this membrane behavior is explained by the too low fraction of conductive areas on the MK-40 membrane surface. The Nafion® film on the MK-40 MOD membrane surface makes the distribution of current lines near the surface closer to the optimum, thus decreasing CP and enhancing EC.
Chronopotentiograms and current-voltage curves were measured for commercial MK-40 and modified MK-40 MOD membranes in MgCl 2 and CaCl 2 electrolyte solutions of two (0.02 M and 0.04 M) concentrations. The modification was carried out by casting a homogeneous Nafion ® film onto the MK-40 membrane surface.
40 MOD is investigated by means of chronopotentiometry and voltammetry. The study is made in the range of current densities from 0.25 exp lim i to 2.5 exp lim i , where exp lim i is the experimentally determined limiting current density. The modification is obtained by casting a homogeneous cation-conducting Nafion ® film on the surface of MK-40 membrane. In order to obtain a significant scale on the studied membranes, a mixed solution with relatively high concentration of Ca 2+ , Mg 2+ and hydrocarbonates is used. The mineral composition of this solution models a thrice-concentrated milk. Two electric current regimes are used: constant current and four modes of pulsed electric field. In all cases the total working time (not including the time of pause lapses) is the same: 75 min or 105 min. Scale is found not only on the MK-40 and MK-40 MOD membranes, but also on the auxiliary anion-exchange MA-41 membrane. In all cases, the scale forms on the surfaces facing the diluate chamber of the cell. The MK-40 membrane surface is scaled with CaCO 3 , Ca(OH) 2 and Mg(OH) 2 compounds at . The scale amount on the MK-40 MOD is essentially lower; it forms at and contains only CaCO 3 . A small amount of CaCO 3 is detected on the auxiliary MA-41 membrane, when it is used together with the MK-40 membrane. Negligible precipitation is found on the MA-41 membrane paired with the MK-40 MOD membrane. At (for MK-40) and (for MK-40 MOD ), no scale is observed in the studied membrane systems. It can be assumed that higher electroconvection and higher contribution of water splitting at the MA-41 membrane, which adjusts the pH of the depleted solution in a slightly acid range, act together to prevent scaling.
Journal of Membrane Science (549) 2018 129-140 M.A. Andreeva a,b , V.V. Gil b , N.D. Pismenskaya b , L. Dammak a , N.A. Kononenko b , C. Larchet a , D. Grande a , V.V. Nikonenko b a Institut de Chimie et des Matériaux Paris-Est (ICMPE) UMR 7182 CNRS -Université Paris-Est, 2 Rue Henri Dunant, 94320 Thiais, France. b Kuban State University, 149 Stavropolskaya st., 350040 Krasnodar, Russia.
4.2.1. Membranes
Two cation-exchange membranes, a heterogeneous commercial MK-40 (Shchekinoazot, Russia) and its modification MK-40 MOD (Figs. 4-1a-b), are studied. The MK-40 membrane has a polystyrene-based polymer matrix with fixed sulfonic acid groups. About 80 % of the MK-40 …
4.2.3. Chronopotentiometric and voltammetry measurements
Chronopotentiograms (ChPs) and voltammograms (I-V curves) are obtained using the flow-through electrodialysis cell presented in Fig. 4-2. A KEITHLEY 220 current source is used to supply the current between the polarizing electrodes. The potential drop (PD) across the membrane under study is measured using Ag/AgCl electrodes. These electrodes are placed in the Luggin capillaries. The Luggin tips are installed at both sides of the membrane under study, in its geometric center, at a distance of about 0.5 mm from the surface. The PD is registered by a HEWLETT PACKARD 34401A multimeter. I-V curves are obtained from ChPs by taking (quasi-)steady-state time-averaged values of PD related to a fixed current density in the range up to 18 mA cm⁻² while the PD remains < 4 V. The experiment is carried out at 20 °C.
Fig. 4-3 shows the I-V curves obtained as described above for the MK-40 and MK-40 MOD membranes in the model salt solution. Three general regions can be observed on the I-V …
Fig. 4-6c shows the ChPs obtained in a 0.1 M NaCl solution at i = 5.3 mA cm⁻² and i = 12.8 mA cm⁻². This experiment is made in order to find out the character of oscillations in a …
22 Due to relatively large volume of this solution (8 L), during 5 h of an experimental run, the salt concentration increases only by 6% while the pH of this solution changes from 6.5 to 7.3.However, the scale formation on the diluate side is established for both MK-40 and MK-40 MOD cation-exchange membranes as well as for the MA-41 anion-exchange membrane used to form the desalination compartment (Fig.4-2). The amount of precipitation at all current densities is less on the MK-40 MOD membrane than on the MK-40 membrane. According to Xray elemental analysis of the membrane surface facing diluate solution, the presence of two main scale-forming ions, Ca 2+ and Mg 2+ in the form of CaCO 3 , Сa(OH) 2 and Mg(OH) Figs.4-8a-e) is established on the MK-40 membrane. Only Ca 2+ in the form of CaCO 3 is found on the MK-40 MOD membrane (Figs.4-8g-h). CaCO 3 is detected also on the MA-41 membrane surface facing the desalination compartment (Figs.4-8f, i). In all cases the amount of the scale on the MA-41 is less than that found on the cation-exchange membrane under study. This amount is negligible in the case where the MA-41 membrane is used together with the MK-40 MOD membrane (Fig.4-8i).
Fig. 4-10 shows numerically simulated concentration profiles and the CMg(COH)2 concentration product in the diffusion boundary layer adjacent to a CEM in the case where a MgCl2 solution is fed into the desalination compartment. It can be seen that, due to water splitting at the CEM, OH- ions are generated and their concentration passes through a maximum. As well, the CMg(COH)2 product reaches a maximum within the diffusion layer, as the concentration of Mg2+ decreases when approaching the membrane surface. The simulation shows that this behavior occurs even when the bulk solution (at x=0) is slightly acid. With increasing concentration of H+ ions in the solution bulk, the concentration of OH- ions in the …
of scaling. More intensive EC and more uniform distribution of current density at the surface of MK-40 MOD membrane result in lower CP in comparison with the MK-40 membrane. As the result, water splitting and scaling are lower on the MK-40 MOD membrane surface. This conclusion is valid for all current regimes: under a constant current (Figs. 4-8a-g) and whenapplying PEF (Figs.4-11a-c). The time-averaged value of PD for both MK-40 and MK-40 MOD membranes in the PEF mode is lower than that in the constant current mode in all studied cases, provided that the current density is the same. The least value of averaged PD is found for the minimum period studied (2 min), when T on =T off = 1 min. With growing amount of the scale on the membrane surface, PD across the membrane increases. Thus, one can expect lower amount of the scale in the PEF conditions; it is the lowest in the case of the shortest period used in the study.In the case of MK-40 membrane at i = 10.5 mA cm -2 , pH of the diluate increases with time in the PEF mode; this increase is equal or lower than that found in the constant current mode (Figs.4-11a,c,d). In the case of MK-40 MOD membrane, the behavior of pH in the constant current and PEF modes is quite different. While in the constant current mode, i = 10.5 mA cm -2 , the pH of the diluate grows with time and its acidification occurs only at i > 13.3 mA cm -2 , in the PEF mode at i = 10.5 mA cm -2 the value of pH remains nearly constant (regime T on /T off = 15/7.5) or even decreases with time (regime T on /T off = 15/15)
4-11a-d show, in the case of MK-40, the maximum delay occurs at short period PEF (T on /T off = 1/1 and 5/5), Fig. 11d. According to the analysis above, it should be due to the fact that in these conditions the rate of scale formation is the lowest among the studied cases. Note that in the case of MK-40 MOD , in the T on /T off = 15/7 regime where the diluate is acidifies no Mg(OH) 2 is detected.
Fig. 4-11. Chronopotentiograms and corresponding pH measured for the MK-40 and MK-40 MOD membrane systems in the constant current (a) and PEF (b-d) modes. In both modes, when the current flows, its density is i = 10.5 mA cm⁻². Parameters of the PEF mode are indicated near the curves.
40 and its modification MK-40 MOD , during ED of a solution containing the scale-forming cations, Ca 2+ , Mg 2+ and CO 3 2-is investigated. The membrane modification is carried out by casting a homogeneous Nafion ® film onto the MK-40 membrane surface. Deposition of CaCO 3 is detected on the surface of MK-40, MK-40 MOD and an auxiliary MA-41 membrane, the latter used to form the diluate compartment. Mg(OH) 2 and Ca(OH) 2 are found only on the surface of MK-40 membrane. The amount of scale is less in the case where the desalination compartment is formed by the pair of MK-40 MOD and MA-41 membranes. No precipitation is seen on the concentration side of all membranes where a NaCl solution was circulated. The scale is detected in the range 8.8 mA cm -2 ≤ i ≤ 13.3 mA cm -2 (i lim exp ≤ i ≤ 1.5 i lim exp ) for MK-40 MOD membrane and 7.9 mA cm -2 ≤ i ≤ 15.5 mA cm -2 (i lim exp ≤ i ≤ 2 i lim exp ) for MK-40. In the case, where a sufficiently high current density is applied, the water splitting rate at the MA-41 membrane is higher than that at the CEM. It provides an acidification of the solution throughout the desalination compartment and the reduction or the prevention of the scale formation.
The aim of this work is to examine the possibility of obtaining modified composites based on the perfluorinated sulfocation-exchange MF-4SK membrane and PANI under the conditions of an external electric field and to study their electrochemical behavior. The MF-4SK membrane is modified by synthesizing PANI within the membrane in the nearsurface layer in a four-chamber cell under the external electric field. A 0.01 M aniline solution in 0.05 M HCl is fed into the cell chamber on the anode side. Aniline reacts with strong acids to form anilinium (or phenylammonium) ion (C 6 H 5 -NH 3 + ). A 0.002 M potassium dichromate solution or a 0.005 M (NH 4 ) 2 S 2 O 8 solution in 0.05 M HCl is introduced on the cathode side. A 0.05 M HCl solution is fed in the electrode chambers. The phenylammonium cations, formed as the result of aniline protonation, are transferred, in accordance with the direction of electric current through the cation-exchange membrane. The dichromate or persulfate anions, in turn, migrate to the membrane surface, where the oxidative polymerization of aniline occurs. Thus, a PANI layer forms in the near-surface layer within the membrane surface on the cathode side. Since PANI has anion-exchange properties, and the non-modified part of MF-4SK cation-exchange ones, a two-layer structure with a bipolar boundary is formed. The thickness of the anion-exchange layer is much less than that of the cation-exchange one. This conditions some particular properties of the MF-4SK/PANI membrane. One of them is the asymmetry of I-V curves similar to that of bipolar membranes. Another peculiarity is the presence of two limiting current densities. The first limiting current, called pseudo-limiting current, exp lim pseudo i , is due to the formation of depleted (in relation to mobile ions) layer at the bipolar junction / boundary within the membrane. The second one, exp lim i , which is higher than exp lim pseudo i and close to the limiting current of the non-modified membrane is due to saturation of the diffusion across the depleted diffusion layer in solution. Effect of surface modification of perfluorinated membranes with polyaniline on their polarization behavior Russian Journal of Electrochemistry (51) 2015 538-545 N. V. Loza, S. V. Dolgopolov, N. A. Kononenko, M. A. Andreeva and Yu. S. Korshikova Kuban State University, ul. Stavropol'skaya 149, Krasnodar, Russia
Fig. 5-1. Diagram of the formation of a polyaniline layer on the surface of the MF-4SK membrane under the conditions of an external electric field.
Fig. 5-2. Electronic absorption spectra of MF-4SK-PANI composites obtained under the conditions of an external electric field using (1)-(3) ammonium persulfate and (4) potassium dichromate as an oxidant of aniline: (1) MF-4SK/PANI-100-2, (2) MF-4SK/PANI-200-2, (3) MF-4SK/PANI-300-2, and (4) MF-4SK/PANI-100-1.
Fig. 5-3. Diagram of a unit for measuring the CVCs of the membrane system: (1) measurement cell, (2) near-electrode chambers, (3) platinum polarizing electrodes, (4) near-membrane chambers, (5) membrane under study, (6) Luggin-Haber capillaries, (7) auxiliary membranes, (8) silver/silver chloride electrodes, (9) Autolab pgstat302n potentiostat/galvanostat, (10) PC, (11) Heidolph pumpdrive 5101 peristaltic pump, and (12) vessels for solutions.
Fig. 5-4.
Fig. 5-5. Chronopotentiograms of the MF-4SK/PANI-1 anisotropic composite measured in 0.05 M HCl at different polarizing currents i, A dm -2: (1) 11.3, (2) 7.0, (3) 6.5, (4) 2.8, and (5) 1.3.
Fig. 5-6. CVCs of the MF-4SK/PANI-1 anisotropic composite measured in 0.05 M HCl at (a) high ((1) 1.5×10 -5 A s -1 and (2) 4.6×10 -5 A s -1) and (b) low ((1) 0.4×10 -5 A s -1 and (2) …).
Fig. 5-7. (a), (c) Integrated and (b), (d) differential CVCs of the MF-4SK/PANI composites in 0.05 M HCl obtained at different current densities using (a), (b) potassium dichromate and (c), (d) ammonium persulfate as an oxidant of aniline. Current densities during the synthesis of the composites, A dm -2: (1) 100, (2) 200, and (3) 300.
Fig. 5-8. Diagram of H + and OH - ion flows that appear in the EMS with the membrane orientated by its modified layer toward the (a) anode and (b) cathode (the modified layer is hatched).
The electrochemical behavior of this type of composite membrane has been described above. The specificity of such membranes is the formation of an internal bipolar boundary between the cation- and anion-exchange layers. The anion-exchange layer formed by PANI presents a barrier for the transfer of multicharged cations, which is interesting for use in metathesis electrodialysis. However, the bipolar boundary leads to catalytic water splitting that can stimulate scale formation on the depleted surface of the modified membrane. On the other hand, modification of the membrane with PANI leads to a smoother surface, which should reduce the risk of nucleation of scale crystals on the membrane surface. The main goal of the article presented in this section of the chapter is to study how PANI modification of heterogeneous membranes affects the scaling process. Scale formation on the surface of a heterogeneous cation-exchange MK-40 membrane and its modification MK-40/PANI is investigated by means of chronopotentiometry and voltammetry in a mixed solution with a relatively high concentration of Ca 2+, Mg 2+ and hydrocarbonates. The study is made in the range of current densities from 0.2 i_lim^exp to 1.5 i_lim^exp, where i_lim^exp is the experimentally determined (second) limiting current density. This study allows us to conclude that the modification of the MK-40 membrane does not change its thickness: the thickness of the MK-40/PANI membrane is equal to that of the pristine membrane. The conductivity of MK-40/PANI is about 16 % lower than that of the MK-40 membrane. The contact angle of MK-40 and MK-40/PANI is nearly the same (and close to 55 º). The surface of the modified membrane is smoother than that of the pristine one. The bipolar junction formed by the PANI layer in the MK-40/PANI membrane causes an essential increase in water splitting rate and a reduction in electroconvection. As a result, a precipitate in the gel form is detected in the range i ≥ 0.8 i_lim^exp for the MK-40/PANI membrane. However, at current densities i < 0.8 i_lim^exp no scaling is observed. Thus, it is possible to recommend the MK-40/PANI composite membrane for monovalent-ion-selective electrodialysis at underlimiting current densities.
Effect of surface modification of MK-40 membrane with polyaniline on scale formation
In preparation
M.A. Andreeva a,b, N.V. Loza b, N.D. Pismenskaya b, L. Dammak a, N.A. Kononenko b, C. Larchet a, V.V. Nikonenko b
a Institut de Chimie et des Matériaux Paris-Est (ICMPE) UMR 7182 CNRS - Université Paris-Est, 2 Rue Henri Dunant, 94320 Thiais, France.
b Kuban State University, 149 Stavropolskaya st., 350040 Krasnodar, Russia.
Two ion-exchange membranes (IEMs) are used in this study, an MK-40 (Fig. 5-9a) and its modification MK-40/PANI (Fig. 5-9b). The heterogeneous cation-exchange MK-40 membrane (Shchekinoazot, Russia) contains sulfonic acid fixed groups. The MK-40/PANI membrane is obtained by synthesizing polyaniline (PANI) within the MK-40 membrane near-surface layer in an external electric field according to the method described in Ref. [1]. A four-chamber flow-through cell is used. A 0.01 M aniline solution in 0.05 M HCl and a 0.002 M potassium dichromate solution in 0.05 M HCl are fed into the cell chambers next to the membrane on the anode and cathode sides, respectively. A 0.05 M HCl solution is circulated through the electrode chambers. The synthesis is carried out at i = 40 mA cm -2 for 90 min.
Fig. 5-9. Image of the surface of swollen (a) MK-40 and (b) MK-40/PANI membranes as obtained with a Zeiss AxioCam MRc5 light and Altami microscope, respectively.
The cell comprises the tested MK-40 or MK-40/PANI membrane and two auxiliary heterogeneous membranes: an anion-exchange MA-41 membrane (Shchekinoazot, Russia) and a MK-40 membrane. The intermembrane distance in the cell compartments is 6.4 mm, the membrane area exposed to the current flow is 4 cm 2 . The anode and the cathode are platinum polarizing electrodes. The experimental setup involves two closed loops containing a model salt solution and a 0.04 M NaCl electrolyte solution. The model solution circulates across the central desalination compartment; the 0.04 M NaCl solution, through an auxiliary and two electrode compartments in parallel; the solution flow rate through each compartment is 30 mL min -1 . The model salt solution is composed of Na 2 CO 3 (1000 mg L -1 ), KCl (800 mg L -1 ), CaCl 2 *2H 2 O (4116 mg L -1 ) and MgCl 2 *6H 2 O (2440 mg L -1 ). In this solution, the Mg/Ca molar concentrations ratio is 2/5 and the total concentration of CaCl 2 and MgCl 2 is 0.04M. The pH of the solution is adjusted to 6.5 by adding HCl. The mineral composition of this solution corresponds approximately to that of trice concentrated milk. Each closed loop is connected to a separated external plastic reservoir, allowing continuous recirculation.
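As a quick arithmetic cross-check of this recipe, the mass concentrations given above can be converted into molar concentrations. The sketch below is illustrative only; the molar masses of the hydrated salts are rounded values assumed here, not taken from the thesis.

# Rough check of the model salt solution composition (mass concentrations in mg/L from the text).
# Molar masses (g/mol) are approximate, assumed values.
recipe_mg_per_L = {"Na2CO3": 1000, "KCl": 800, "CaCl2*2H2O": 4116, "MgCl2*6H2O": 2440}
molar_mass = {"Na2CO3": 105.99, "KCl": 74.55, "CaCl2*2H2O": 147.01, "MgCl2*6H2O": 203.30}

molarity = {salt: recipe_mg_per_L[salt] / 1000.0 / molar_mass[salt]  # mol/L
            for salt in recipe_mg_per_L}
for salt, c in molarity.items():
    print(f"{salt}: {c * 1000:.1f} mmol/L")

total_Ca_Mg = molarity["CaCl2*2H2O"] + molarity["MgCl2*6H2O"]
print(f"CaCl2 + MgCl2 total: {total_Ca_Mg:.3f} mol/L (text states 0.04 M)")
print(f"Mg/Ca molar ratio: {molarity['MgCl2*6H2O'] / molarity['CaCl2*2H2O']:.2f} (text states 2/5)")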
The study is made in the range of current densities from 0.2 i_lim^exp to 1.5 i_lim^exp, where i_lim^exp is the experimentally determined limiting current density. The experiment is carried out at 20 °C.
5.6 Results and discussion
5.6.1 Physico-chemical characteristics of ion-exchange membranes
The thickness of the MK-40/PANI membrane does not change in the process of its modification, i.e. the thickness of the MK-40/PANI membrane is the same as that of the MK-40 membrane (Table 5-3). The conductivity of MK-40/PANI is about 16% lower than that of the MK-40 membrane. The contact angle of MK-40 and MK-40/PANI is nearly the same, close to 55 º. The PANI layer within the MK-40 membrane has anion-exchange properties. Thus, its formation leads to a bi-layer membrane, whose main layer is the non-modified part of the pristine cation-exchange membrane. The anion-exchange PANI layer presents a barrier for the transfer of multicharged cations. Besides, it can be expected that the bipolar boundary will stimulate the water splitting by the MK-40/PANI membrane.
Table 5-3. The main properties of the MK-40 and MK-40/PANI membranes.
Fig. 5-10 shows the I-V curves obtained for MK-40 and МK-40/PANI membranes in the model salt solution. In the case of the MK-40 membrane the shape of the curve is the typical one. In the case of the МK-40/PANI membrane the I-V curve has two inflection points. The first inflection point is in the region of small displacements of the potential from equilibrium (about 300 mV); it is related to the depletion of the bipolar boundary within the membrane. Under the action of the external electric field, the cations migrate from the bipolar junction into the bulk of the cation-exchange layer; the anions migrate from this junction into the bulk of the anion-exchange layer. Diffusion of anions through the thin PANI layer allows a reduction of the deficit of ions at the bipolar boundary. However, this reduction is only partial, and at a certain current density the ion concentration at this boundary becomes sufficiently low to cause an essential increase in the membrane resistance. The latter is seen by the appearance of a plateau on the I-V curve in a range of current densities close to 4 mA cm -2. This critical current density, which is caused by the depletion of the bipolar boundary, may be called the pseudo-limiting current density, i_lim,pseudo^exp.
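For readers who wish to reproduce this kind of graphic processing of an I-V curve, a minimal numerical sketch is given below. It assumes the curve is available as (potential drop, current density) samples and estimates a limiting current from the intersection of straight lines fitted to the quasi-ohmic region and to the plateau (a common graphic procedure); the window boundaries are arbitrary illustrative inputs, not the ones used in this work.

import numpy as np

def limiting_current_by_tangents(dphi, i, ohmic_window, plateau_window):
    """Estimate a limiting current density from an I-V curve.

    dphi, i        : arrays of potential drop (V) and current density (mA cm-2)
    ohmic_window   : (dphi_min, dphi_max) of the initial quasi-ohmic region
    plateau_window : (dphi_min, dphi_max) of the plateau region
    The limiting current is taken as the current at the intersection of the
    two straight lines fitted over these windows.
    """
    dphi, i = np.asarray(dphi, float), np.asarray(i, float)

    def fit(window):
        mask = (dphi >= window[0]) & (dphi <= window[1])
        return np.polyfit(dphi[mask], i[mask], 1)   # slope, intercept

    a1, b1 = fit(ohmic_window)
    a2, b2 = fit(plateau_window)
    dphi_cross = (b2 - b1) / (a1 - a2)              # intersection of the two tangents
    return a1 * dphi_cross + b1                     # current at the intersection

For the МK-40/PANI membrane the same routine can be applied twice, once around the first plateau (giving an estimate of i_lim,pseudo^exp) and once around the second one (giving i_lim^exp).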
Fig. 5-10. I-V curves for МK-40 and МK-40/PANI membranes in the model solution and corresponding dependence of the pH of diluate solution.
Table 5-4. The values of limiting current densities for MK-40 and MK-40/PANI membranes in the mixed model solution at 20 0 С.
Fig. 5-12. Chronopotentiograms of МK-40 and МK-40/PANI membranes in the model solution and corresponding time dependence of the pH of diluate solution.
Fig. 5-13. The precipitate in the form of a gel on the MK-40/PANI membrane surface at i = 10.5 mA cm -2.
Fig. 5-14. SEM images of (a) Mg(OH) 2 and (b) CaCO 3 and Ca(OH) 2 crystals on the MK-40 membrane surface facing the diluate after 5 h of electrodialysis of the model solution.
… (Fig. 5-15). Since the surface layer formed by PANI contains positively charged fixed groups, the colloidal particles are attracted to the surface. That gives stability to the gel-type layer formed by colloidal particles. This layer contains mobile ions and conducts electric current, while the cake-type layer of crystals is non-conductive. The conductivity of the gel-type layer explains the fact that there is no PD growth with time across the MK-40/PANI membrane at i > i_lim^exp.
Fig. 5-15. The micelle structure.
Scale formation on the surface of two ion-exchange membranes, a heterogeneous MK-40 and its modification MK-40/PANI, during ED of a solution containing the scale-forming cations Ca 2+ and Mg 2+, is investigated. The membrane modification is carried out by synthesizing PANI within the MK-40 membrane near-surface layer in an external electric field. In the case of the МK-40/PANI membrane, in comparison with the MK-40 membrane, the I-V curve has two inflection points. The first, the pseudo-limiting current density i_lim,pseudo^exp, is in the region of small displacements of the potential from equilibrium; it is related to the depletion of the bipolar boundary within the membrane. As the current continues to increase, the ion concentration at the external membrane surface decreases to a low value that causes a new increase in system resistance, and the "classical" limiting current density, i_lim^exp, is observed. The precipitate is detected in the range i ≥ 0.8 i_lim^exp for MK-40/PANI and 1.0 i_lim^exp ≤ i ≤ 1.5 i_lim^exp for the MK-40 membrane. In the case of the MK-40 membrane at current densities 1.0 i_lim^exp ≤ i ≤ 1.5 i_lim^exp the scale forms as crystals of CaCO 3, Mg(OH) 2 and Ca(OH) 2. In this case, a part of the surface is screened by these crystals, which form a type of cake film. As a consequence, the PD across the membrane does not reach a steady-state value and grows with time. The precipitation on the MK-40/PANI membrane is in crystal form at i_lim,pseudo^exp < i < i_lim^exp and in the form of gel at i > i_lim^exp.
We have demonstrated that the relatively high hydrophobicity of the MK-40/Nafion membrane surface and the specific structure of the near-surface layer (a homogeneous layer on the heterogeneous substrate, which provides a better distribution of current lines near the surface) create the conditions for enhancing electroconvection. The intensity of electroconvection in the case of MK-40/Nafion is significantly higher, and in the case of MK-40/PANI it is lower, compared with that of the pristine membrane. The intensity of electroconvection depends essentially on the degree of hydration of the counter-ions. Highly hydrated calcium and magnesium ions involve in their motion a larger volume of water compared with sodium ions. When a constant overlimiting current is applied to the membrane, electroconvective vortices in 0.02 M CaCl 2 and MgCl 2 solutions are generated within 3-8 seconds, which is less than the transition time estimated at 7-16 seconds. More intense electroconvection in the case of MK-40/Nafion enhances mass transfer and leads to a reduction in the potential drop and in the water splitting rate. These factors, together with the smoother surface of MK-40/Nafion, lead to a significant reduction in the amount and rate of precipitate formation on the membrane surface, or even to its total prevention in the ranges where the current reaches its limiting value. The PANI layer of the MK-40/PANI membrane constitutes a barrier for the transport of bivalent cations, such as Ca 2+ and Mg 2+. However, the bipolar junction between this layer and the bulk material of the MK-40/PANI causes an increase in the water splitting rate and a reduction in electroconvection. As a consequence, the precipitation rate of compounds containing Ca 2+ and Mg 2+ is higher than in the case of the MK-40 and MK-40/Nafion membranes.
Precipitate formation occurs at overlimiting currents. However, at currents below the limiting current, no precipitation is observed. Thus, it is possible to recommend the MK-40/PANI composite membrane for monovalent-ion-selective electrodialysis at underlimiting current densities. Two factors affect the formation of precipitates on the membrane surfaces: electroconvection and the local pH value. The precipitate is detected in the range 1.0 i_lim^exp ≤ i ≤ 2.0 i_lim^exp for MK-40, at i ≥ 0.8 i_lim^exp for the MK-40/PANI membrane and in the range 1.0 i_lim^exp ≤ i ≤ 1.5 i_lim^exp for MK-40/Nafion. Outside these ranges (i < 1.0 i_lim^exp or i > 2.0 i_lim^exp for MK-40, i < 0.8 i_lim^exp for MK-40/PANI, and i < 1.0 i_lim^exp or i > 1.5 i_lim^exp for MK-40/Nafion), no precipitation is observed. In the case of the MK-40 and MK-40/Nafion membranes, where a sufficiently high current density is applied, the water splitting rate at the MA-41 membrane is higher than that at the cation-exchange membrane under study. This causes an acidification of the solution throughout the desalination compartment and, consequently, the reduction or elimination of precipitate formation. Elemental analysis of the precipitate on the membrane surface, carried out by X-ray analysis, showed the formation of Mg(OH) 2, Ca(OH) 2 and CaCO 3 crystals. Compared with the rate of crystal precipitation on the surface of MK-40, it is experimentally established that the rate on the surface of MK-40/Nafion is smaller, while that on the surface of MK-40/PANI is larger. Another important factor allowing the mitigation of precipitate formation is the use of the pulsed electric field mode. During the pause, a relaxation of the concentration profile occurs in the solution at the membrane surface. The species concentrations return partially or completely to their initial values in solution, and the product of the ion concentrations becomes lower than the solubility product, which causes partial or total dissolution of the precipitate. Such a mode makes it possible, on the one hand, to reduce the potential drop by more than a factor of two and, on the other hand, to reach a quasi-steady state in which periodic dissolution of the formed precipitates occurs. It is shown that the precipitation rate is significantly lower in the pulsed electric field mode. The general trend is that the precipitation rate decreases with decreasing T_on, while T_off remains equal to or slightly lower than T_on.
Perspectives
This thesis work has allowed us to demonstrate the feasibility of using the pulsed electric field mode to reduce the rate of precipitate formation on the surface of ion-exchange membranes. However, the short duration (about 3 h) of the experimental runs in which these pulsed fields were applied in this study does not allow us to say with certainty whether crystal formation would remain stable and negligible during a long electrodialysis operation or whether, on the contrary, it would increase with time. The question that remains open is how the system will behave over durations comparable to the service life of the membrane at the industrial scale.
Thus, it would be of interest to carry out an experimental study of electrodialysis in pulsed electric field modes of long duration (several tens of hours) using a cell similar to those used in industrial units. Further research should also be undertaken to study the effect of short-period (high-frequency) pulsed electric fields, in galvanostatic and potentiostatic regimes, on the mitigation of precipitate formation. As shown by Mishchuk et al., the pulsed electric field regime should lead to a more significant effect in reducing concentration polarization, intensifying electrodialysis and, possibly, mitigating the formation of precipitates. In addition, other pulsed electric field waveforms could be studied to improve the mixing of the solution by electroconvective vortices. Indeed, it is known that in the electroplating of metals, alloys, composites and semiconductors, this approach is widely used to improve the nature and quality of the deposit. The waveforms can be quite complicated, involving for example a cathodic pulse followed by a period without current (or an anodic pulse), a direct current with superimposed modulations, duplex pulses, pulse-on-pulse, pulse reverse current, etc. The PANI-modified membranes are selective towards monovalent ions. Further research should be undertaken to verify the interest of using these membranes in processes where a barrier to the transport of multivalent ions is required, such as electrodialysis metathesis and reverse electrodialysis.
Three cation-exchange membranes were used in this study. The commercial heterogeneous MK-40 membrane and two of its lab-made modifications: MK-40/Nafion and MK-40/PANI. The MK-40/Nafion membrane is obtained by mechanical coating the MK-40 membrane surface with a homogeneous film of Nafion ® perfluorinated resin. The Nafion ® and the MK-40 materials have the same negative-charged sulfonate groups. The preparation of the MK-40/PANI is carried out by synthesizing PANI within the MK-40 membrane near-surface layer in external electric field. The PANI polymer chains form a layer within the MK-40 membrane near its surface; this layer contains fixed H N groups whose charge sign is opposite to that of the MK-40 membrane. A bipolar junction between the PANI layer and the bulk material is formed within the MK-40/PANI membrane. The thickness of Nafion layer on the MK-40 membrane in the dry state is 12-15 μm. The thickness of MK-40/PANI membrane does not change in comparison with the pristine membrane. The conductivity of MK-40 and MK-40/Nafion membranes is identical, while that of MK-40/PANI is about 16 % lower. The contact angle of MK-40 and MK-40/PANI is nearly the same (close to 55 º), while that of MK-40/Nafion membrane is higher (63 º). The surface of both modified membranes is smoother than that of the pristine one. The electrochemical behavior of the membrane samples is studied by means of chronopotentiometry and voltammetry in the chloride solutions containing Na + , Ca 2+ and Mg 2+ ions. We have shown that the relatively high hydrophobicity of the MK-40/Nafion membrane surface, the specific structure of near-surface layer (a homogeneous layer on the heterogeneous substrate, which provides a "better" distribution of current lines near the surface) create conditions for enhancing electroconvection. The electroconvection intensity in the case of MK-40/Nafion is significantly higher, and in the case of MK-40/PANI is lower in comparison with that of the pristine membrane. The electroconvection intensity substantially depends on the degree of counter-ion hydration. Highly hydrated calcium and magnesium ions involve in motion a larger volume of water as compared with sodium ions. When a constant overlimiting current is applied to the membrane, electroconvective vortices in 0.02 M CaCl 2 and MgCl 2 solutions are generated already within 3-8 s, which is lower than the transition time varying in the range of 7-16 s. More intensive electroconvection in the case of MK-40/Nafion enhances the mass transfer, causes a reduction in the potential drop and in the water splitting rate. These factors together with the smoother surface of MK-40/Nafion lead to a significant reduction in the rate of scale formation, or to its total prevention in the overlimiting current ranges. The PANI layer in the MK-40/PANI membrane forms a barrier for the transport of bivalent cations, such as Ca 2+ and Mg 2+ . However, the bipolar junction between this layer and the bulk material of the MK-40/PANI causes an increase in water splitting rate and a reduction in electroconvection. As a result, the precipitation rate of Ca 2+ and Mg 2+ containing compounds is higher than that in the case of the MK-40 and MK-40/Nafion membranes. The scale formation occurs at overlimiting currents. However, at currents less than the limiting current no scaling is observed. Thus, it is possible to recommend the MK-40/PANI composite membrane for monovalent-ion-selective electrodialysis at underlimiting current densities. 
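The transition times quoted here (on the order of 7-16 s) can be put in perspective with the classical Sand estimate for a membrane chronopotentiogram, τ = πD(C0 z1 F)² / [4 (i (T1 − t1))²]. The sketch below evaluates this expression for one illustrative case; the numerical inputs are assumed values (diffusion coefficient and transport number of the order given in Table 2-1, current about 1.5 times the theoretical limiting current of Table 2-2) and are not results of this work.

import math

F = 96485.0          # C/mol, Faraday constant

def sand_transition_time(C0, z1, D, T1, t1, i):
    """Sand-type transition time (s) for a cation-exchange membrane.
    C0 : bulk counter-ion concentration, mol/m^3
    D  : electrolyte diffusion coefficient, m^2/s
    T1 : effective counter-ion transport number in the membrane (taken ~1)
    t1 : counter-ion transport number in solution
    i  : current density, A/m^2
    """
    return math.pi * D * (C0 * z1 * F) ** 2 / (4.0 * (i * (T1 - t1)) ** 2)

# Illustrative case: 0.02 M CaCl2 (Ca2+ = 20 mol/m^3), i about 1.5 * 3.5 mA cm-2 = 52.5 A/m^2
tau = sand_transition_time(C0=20.0, z1=2, D=1.18e-9, T1=1.0, t1=0.436, i=52.5)
print(f"estimated transition time: {tau:.0f} s")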
Two factors affect the scaling: electroconvection and the local pH value. The scale is detected in the range 1.0 i_lim^exp ≤ i ≤ 2.0 i_lim^exp for MK-40, at i ≥ 0.8 i_lim^exp for MK-40/PANI and in the range 1.0 i_lim^exp ≤ i ≤ 1.5 i_lim^exp for MK-40/Nafion membranes. Outside these ranges (i < 1.0 i_lim^exp or i > 2.0 i_lim^exp for MK-40, i < 0.8 i_lim^exp for MK-40/PANI, and i < 1.0 i_lim^exp or i > 1.5 i_lim^exp for MK-40/Nafion), no scale is observed. In the case of MK-40 and MK-40/Nafion membranes, where a sufficiently high current density is applied, the water splitting rate at the MA-41 membrane is higher than that at the cation-exchange membrane under study. It provides an acidification of the solution throughout the desalination chamber and the reduction or the inhibition of the scale formation. The elemental analysis of the scale on the membrane surface made by X-ray analysis has shown the formation of Mg(OH) 2, Ca(OH) 2 and CaCO 3 scales. It is experimentally established that the scaling rate on the surface of MK-40/Nafion is smaller, and on the surface of MK-40/PANI is larger, in comparison with the MK-40 membrane. Another important factor allowing mitigation of scaling is the use of the pulsed electric field mode. During the pause lapse, a relaxation of the concentration profile occurs in the solution at the membrane surface. The species concentrations return partially or completely to their values in the bulk solution, and the product of ion concentrations becomes lower than the solubility product, which causes partial or complete dissolving of the scale. Such a mode allows the reduction of the potential drop by more than a factor of two and the achievement of a quasi steady-state in which periodic dissolution of the formed scale occurs.
Fig. 2-4. Chronopotentiograms of the MK-40 MOD membrane, as measured for different electrolytes at i/i_lim^theor = 2.2. The data were obtained at current densities of i = 4.5 mA cm -2.
Table 1-1. The main characteristics of ion-exchange membranes.
Table 2-1. Some characteristics of ions and electrolyte solutions containing the ions (at 20°C).
Table 2-2. Parameters of current-voltage curves for the test membrane systems.
Table 3-1. Theoretical value of limiting current density of CEM, i_lim,th^CEM, and AEM, i_lim,th^AEM, in …
Table 4-1. Some parameters of I-V curves of MK-40, MK-40 MOD and MA-41 membranes in the mixed model solution at 20 0 С.
Table 5-1. Conditions for the preparation of MF-4SK/PANI composites.
Table 5-2. Conditions for the preparation of MF-4SK/PANI composites.
Table 5-3. The main properties of the MK-40 and MK-40/PANI membranes.
Table 5-4. The values of limiting current densities for MK-40 and MK-40/PANI membranes in the mixed model solution at 20 0 С.
Table 5-5. The possible foulants and fouling modes.
Table 1-1. The main characteristics of ion-exchange membranes.
When the plates are horizontal and the density gradient is vertical, two cases are possible. If the lighter solution sublayer (depleted boundary layer) is on the top of the solution layer (hence, just under the top plate), no convection arises. If the lighter sublayer is at the bottom, there is a threshold in the development of the gravitational convection determined by the Rayleigh number, Ra.
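As an illustration of this threshold, the snippet below evaluates a solutal Rayleigh number using the usual definition Ra = g·Δρ·δ³/(ρ·ν·D) and compares it with the classical critical value of about 1708 for rigid horizontal boundaries. All inputs are assumed order-of-magnitude values, not data from this work.

g = 9.81            # m/s^2
rho = 1.0e3         # kg/m^3, solution density
delta_rho = 0.8     # kg/m^3, assumed density deficit of the depleted sublayer
delta = 250e-6      # m, assumed diffusion (sub)layer thickness
nu = 1.0e-6         # m^2/s, kinematic viscosity of water
D = 1.4e-9          # m^2/s, electrolyte diffusion coefficient

Ra = g * delta_rho * delta ** 3 / (rho * nu * D)   # solutal Rayleigh number
print(f"Ra = {Ra:.0f}  (gravitational convection expected only if Ra exceeds ~1708)")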
… (Fig. 1-16) that is located outside the EDL.
[Diagram: classification of electroconvection — bulk electroconvection; electroconvection induced by electro-osmotic slip (electro-osmosis of the 1st kind and of the 2nd kind); (quasi-)equilibrium and non-equilibrium electroconvection.]
Fig. 1-16. Profiles of ion concentration: the distribution of electrolyte concentration near the membrane surface at lower (curves 1, 1') and higher (curves 2, 2', 3) voltage; δ is the overall diffusion layer thickness; δ1, δ2 and δ3 are the thicknesses of the electrically neutral region, extended SCR and quasi-equilibrium SCR, respectively; C0 is the bulk electrolyte concentration. The difference between counter-ions (curve 2) and co-ions (curve 3) …
… (Table 2-1) show that the ability of the cations to structure water increases in the following order: Na + < Ca 2+ < Mg 2+.
Table 2-1. Some characteristics of ions and electrolyte solutions containing the ions (at 20°C).
Ion | D_i, 10 -5 cm 2 s -1 | r_cr, Å | r_St, Å | Hydration number | Electrolyte | D, 10 -5 cm 2 s -1 | t_i of cation
Na + | 1.18 | 0.97 | 2.1 | 5 | NaCl | 1.43 | 0.395
Ca 2+ | 0.70 | 0.99 | 3.5 | 8 | CaCl 2 | 1.18 | 0.436
Mg 2+ | 0.62 | 0.65 | 4.0 | 10 | MgCl 2 | 1.10 | 0.406
Cl - | 1.80 | 1.81 | 1.4 | 2 | | |
… z1 is the counter-ion charge; and F is Faraday's number. According to Eq. (2.2), C_1^s decreases with time, resulting in an increased potential drop (PD) across the membrane. When C_1^s becomes small relative to C_1, the potential drop dramatically increases (Fig. 2-2). The smallness of C_1^s and a large value of the potential drop cause the appearance of additional transport mechanisms other than the electrically driven diffusion occurring at low values of the potential drop. Such additional mechanisms can be the dissociation of water (which generates additional charge carriers, the H + and OH - ions), as well as conjugated convection.
Table 2-2 presents values for the limiting current density calculated by Eq. (2.5), i_lim^theor, and the data obtained by the graphic processing of CVCs. The value of i_lim^exp was determined by …
Table 2-2. Parameters of current-voltage curves for the test membrane systems. (Columns 2-3 are theoretical values; columns 4-7 are experimental values.)
Counter-ion | δ, μm | i_lim^theor, mA cm -2 | i_lim^exp, mA cm -2 | i_lim^exp / i_lim^theor | Δφ_p', V | CVC plateau slope, mS cm -2
Na + | 248 | 2.0 | 2.4 | 1.2 | 1.8 | 1.2
Ca 2+ | 233 | 3.5 | 4.2 | 1.2 | 1.6 | 2.1
Mg 2+ | 228 | 3.1 | 4.1 | 1.4 | 0.5 | 3.7
In this paper, two cation-exchange membranes, i.e. MK-40 and MK-40 MOD, were studied by chronopotentiometry and voltammetry, MK-40 MOD being obtained by covering the heterogeneous surface of a commercial MK-40 membrane with a homogeneous 20 μm thick Nafion ® film. The electrodialysis process was realized in an electrodialysis flow-through laboratory cell, in which the cation-exchange membrane under study formed a desalination chamber with an auxiliary anion-exchange (Neosepta AMX-SB) membrane. 0.02 M and 0.04 M solutions of CaCl 2, MgCl 2 and NaCl were used. The current densities were changed in the range from 0.25 i_lim^th to 2.5 i_lim^th, where the theoretical limiting current density, i_lim^th, was calculated using the Leveque equation. The potential drop over the modified MK-40 MOD membrane and the water splitting at this membrane turned out to be lower in all studied cases. Formation of scaling was observed only in the case of the MK-40/0.04 M MgCl 2 system at current densities in the range from 1.1 i_lim^th to 1.4 i_lim^th. For these current densities, the (quasi) steady state value of potential drop slowly increased with time and crystals of Mg(OH) 2 were found on the ion-exchange particles embedded onto the MK-40 membrane surface.
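A minimal sketch of such a calculation is given below. It uses the Lévêque-type convective-diffusion expression commonly applied to ED channels, i_lim^th = (z1 F D C0)/(h (T1 − t1)) × 1.47 (h² V / (L D))^(1/3). The channel geometry and flow velocity in the example are assumed placeholder values, not the parameters of the cell used in this work; the diffusion coefficient and transport number are of the order given in Table 2-1.

F = 96485.0   # C/mol, Faraday constant

def leveque_limiting_current(z1, C0, D, T1, t1, h, L, V):
    """Theoretical limiting current density (A/m^2) from the Leveque equation.
    C0 : inlet electrolyte concentration, mol/m^3
    D  : electrolyte diffusion coefficient, m^2/s
    T1, t1 : counter-ion transport numbers in the membrane and in solution
    h  : intermembrane distance, m;  L : desalination path length, m
    V  : average linear flow velocity, m/s
    """
    return (z1 * F * C0 * D) / (h * (T1 - t1)) * 1.47 * (h * h * V / (L * D)) ** (1.0 / 3.0)

# Placeholder geometry and flow (assumed): h = 6.4 mm, L = 2 cm, V = 0.4 cm/s, 0.02 M NaCl.
i_lim = leveque_limiting_current(z1=1, C0=20.0, D=1.43e-9, T1=1.0, t1=0.395,
                                 h=6.4e-3, L=2e-2, V=4e-3)
print(f"i_lim^th ~ {i_lim / 10:.1f} mA cm^-2")   # 1 mA cm^-2 = 10 A m^-2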
Table 3-1. Theoretical value of limiting current density of CEM, i_lim,th^CEM, and AEM, i_lim,th^AEM, in …
… i_lim,th^AEM is applied. It is noteworthy that i_lim,th^AEM ≈ 1.3 i_lim,th^CEM in CaCl 2 and i_lim,th^AEM ≈ 1.5 i_lim,th^CEM in MgCl 2 solution (Table 3-1). When the current density i = 1.4 i_lim,th^CEM is applied, the overlimiting current occurs at the AEM in the case of CaCl 2 solution, but it is lower than i_lim,th^AEM in the case of MgCl 2 solution. For this reason, at i = 1.4 i_lim,th^CEM, the solution in the desalination chamber is acidified when the CaCl 2 solution is used, and its pH value remains nearly constant in the case of MgCl 2 (Fig. 3-5b). Stronger acidification of the MgCl 2 solution in comparison with CaCl 2 at i = 2.5 i_lim,th^CEM is explained by lower WS at the CEM in the case of MgCl 2. Apparently, this is associated with more intensive EC in the MgCl 2 solution.
… the MK-40 MOD membrane. This effect is explained by a stronger electroconvection in the depleted solution at this membrane. The reason is that the surface of the modified membrane is relatively more hydrophobic and less electrically heterogeneous, thus providing a better distribution of current lines enhancing the ion transport. For both membranes, electroconvection is higher in the MgCl 2 solutions than in the CaCl 2 solutions. That is associated with higher hydration of Mg 2+ cations in comparison with Ca 2+ analogues.
Formation of scaling was observed only in one case, the MK-40/0.04 M MgCl 2 system, at current densities in the range from 1.1 i_lim^th to 1.4 i_lim^th. In this current range, the (quasi) steady state value of PD slowly increased with time, and crystals of Mg(OH) 2 were found on the ion-exchange particles embedded onto the MK-40 membrane surface. At i < 1.1 i_lim^th and i > 1.4 i_lim^th, the steady state PD did not vary with time, and no salt deposit was observed on the MK-40 surface. No growth of steady state PD with time and no salt deposit were detected at any currents in the case of CaCl 2 solutions, evidently because of an essentially higher value of the solubility product for Ca(OH) 2 in comparison with Mg(OH) 2. No scaling was either detected in the case of the MK-40 MOD membrane. In the case of MK-40 MOD, EC is more intensive than in the case of the MK-40 membrane. This more intensive EC results in a higher near-surface salt counter-ion concentration, due to better mixing of solution. Moreover, reduced CP leads to a lower WS rate at the CEM, hence to a lower concentration of OH - ions at the depleted membrane interface. WS becomes dominating at the AEM, thus providing a low pH in the overall desalination chamber, hence, the conditions where the hydroxide deposition does not occur.
Two factors affect the scaling: EC and local pH value. EC mixes the depleted solution and adds to maintain the ion concentrations higher than the solubility. Increasing pH reduces the solubility. In the case of MK-40/0.04 M MgCl 2, at i < 1.4 i_lim^th, EC is stable and relatively low, and the WS rate is nearly the same at CEM and AEM (ΔpH ≈ 0). At i > 1.4 i_lim^th, EC is stronger and unstable, and the rate of WS is higher at the AEM, resulting in a reduced pH near the CEM. EC should sweep down the deposit of crystals on the CEM surface, if only they could appear in the conditions of lower pH.
Table 4-1 presents some characteristics of experimental I-V curves of MK-40, MK-40 MOD and MA-41 membranes: the experimental limiting current density, i_lim^exp, the plateau length and the plateau slope determined as shown in Fig. 4-3.
Table 4-1. Some parameters of I-V curves of MK-40, MK-40 MOD and MA-41 membranes in the mixed model solution at 20 0 С.
Membrane | i_lim^exp, mA cm -2 | Plateau length Δφ_p, V | Plateau slope 1/R, mA cm -2 V -1
MK-40 | 7.9±0.1 | 2.8±0.1 | 2.0±0.1
MK-40 MOD | 8.8±0.1 | 1.1±0.1 | 3.1±0.1
MA-41 | 9.3±0.1 | 1.7±0.1 | —
In the literature review, electrodialysis metathesis was considered as an effective approach to fight scaling. The design of electrodialysis stack is organized in a way that there are two concentrate chambers: in one of them, singly charged cations (such as Na + ) and multicharged
anions (SO 4 2-
This investigation was realized in the frame of a joint French-Russian PHC Kolmogorov 2017
project of the French-Russian International Associated Laboratory "Ion-exchange membranes
and related processes" with the financial support of Minobrnauki
(Ref. N° RFMEFI58617X0053), Russia, using scientific equipment of the Core Facility
"Environmental Analytical Center" of the Kuban State University (unique identifier
RFMEFI59317Х0008) and CNRS, France (project N° 38200SF). M. Andreeva thanks the
French Embassy in Moscow, the Paris-Est University and the ICMPE CNRS for giving the
possibility to carry out this work.
Chapter 5
Effect of surface modification of cation-exchange
membrane with polyaniline on polarization
characteristics and scale formation
Table 5-2. Conditions for the preparation of MF-4SK/PANI composites.
Sample | …, Ω -1 m -2 (nonmodified side directed toward the counter-ion flow) | …, Ω -1 m -2 (modified side directed toward the counter-ion flow) | Asymmetry coefficient
MF-4SK/PANI-100-1 | 1207 | 48 | 25
MF-4SK/PANI-200-1 | 475 | 15 | 32
MF-4SK/PANI-300-1 | 475 | 38 | 12
MF-4SK/PANI-100-2 | 3490 | 2050 | 1.7
MF-4SK/PANI-200-2 | 690 | 42 | 16
MF-4SK/PANI-300-2 | 205 | 30 | 7
Table 5-4.
Fouling at the surface and in the bulk of an ion-exchange membrane is a major obstacle to its use in electrodialysis. By blocking the ion-conducting pathways through the membrane, the deposit reduces the active surface area of the membrane and leads to an additional resistance to mass transfer. Three cation-exchange membranes were used in this work: a commercial heterogeneous MK-40 membrane and two of its modifications (an MK-40/Nafion membrane obtained by coating the MK-40 surface with a homogeneous ion-conducting Nafion ® film, and an MK-40/PANI membrane obtained by synthesizing polyaniline on the MK-40 surface). The solutions used are CaCl 2 and MgCl 2 solutions at concentrations of 0.02 and 0.04 mol/L, as well as a solution modelling the mineral composition of milk concentrated three times. Visualization of the membrane surface is carried out by optical microscopy and scanning electron microscopy. Elemental analysis of the deposits on the membrane surface is performed by X-ray analysis. The hydrophobic-hydrophilic character of the membrane surface is estimated by contact angle measurement. Chronopotentiometry and voltammetry were used to characterize the rate of cation transport through the membranes and of water dissociation at the surface; the pH of the desalted solution was measured in parallel. It is shown that the relatively high hydrophobicity of the membrane surface and its electrical and geometrical heterogeneity create favourable conditions for the development of electroconvection. The intensity of electroconvection, compared with the non-modified membrane, is significantly higher in the case of MK-40/Nafion but lower in the case of MK-40/PANI. Electroconvection causes mixing of the solution at the membrane surface in a layer about 10 μm thick. This effect significantly increases mass transfer in intensive current modes, prevents or reduces fouling, and also reduces the rate of water dissociation at the membrane surface. The intensity of electroconvection depends essentially on the degree of hydration of the counter-ion; it increases with its Stokes radius. The growth rate of the mineral deposits Mg(OH) 2, Ca(OH) 2 and CaCO 3 on the ion-exchange membrane surface is determined from the slope of the chronopotentiogram. It is experimentally established that, compared with the fouling rate on the non-modified MK-40 membrane, the rate on the surface of MK-40/Nafion becomes smaller but that on the surface of MK-40/PANI becomes larger. The fouling rate is considerably reduced when a pulsed electric current mode is applied. Such a mode makes it possible to halve the potential difference and to reach a quasi-steady state owing to the fact that the precipitate becomes unstable.
Keywords: ion-exchange membrane; fouling; surface properties; electrodialysis; electroconvection.
Acknowledgements
So, the three-year scientific journey has passed. This work was realized in the frame of the French-Russian International Associated Laboratory "Ion-exchange membranes and related processes". I would like to thank the French Government and the Embassy of France in Russia for my French Postgraduate Award scholarship ("Metchnikov" program) in France, as well as the Russian Government for my Research Scholarship in Russia.
Abstract
The transport of sodium, calcium, and magnesium ions through the heterogeneous cation-exchange membrane MK-40, surface modified with a thin (about 15 μm) homogeneous film of MF-4SK, has been studied. By using chronopotentiometry and voltammetry techniques, it has been shown that the combination of the relatively high hydrophobicity of the film surface with its electrical and geometrical (surface waviness) heterogeneity creates conditions for the development of electroconvection, which considerably enhances mass transfer in overlimiting current regimes. The electroconvection intensity substantially depends on the degree of counter-ion hydration. Highly hydrated calcium and magnesium ions involve in motion a much larger volume of water as compared with sodium ions. When a constant overlimiting direct current is applied to the membrane, electroconvective vortices in 0.02 M CaCl 2 and MgCl 2 solutions are generated already within 5-8 s, a duration that is shorter than the transition time characterizing the change of the transfer mechanism in chronopotentiometry.
The generation of vortices is manifested by potential oscillations in the initial portion of chronopotentiograms; no oscillation has been observed in the case of 0.02 M NaCl solution.
More intense electroconvection in the case of doubly charged counter-ions also causes a reduction in the potential drop (Δφ) at both short times corresponding to the initial portion of chronopotentiograms and long times when the quasi-steady state is achieved. At a fixed ratio of current to its limiting value, Δφ decreases in the order Na + > Ca 2+ > Mg 2+.
… compounds. The scale amount on the MK-40 MOD was essentially lower and it contained only CaCO 3. A small amount of CaCO 3 was detected on the auxiliary MA-41 membrane, when it was used together with the MK-40 membrane. Negligible precipitation was found on the MA-41 membrane paired with the MK-40 MOD membrane. At currents 1.5 to 2 times higher than the limiting current, higher electroconvection and a higher contribution of water splitting at the MA-41 membrane, which adjusted the pH of the depleted solution in a slightly acid range, act together to prevent scaling. Lower scaling of the MK-40 MOD membrane is explained by its more appropriate surface properties: the thin Nafion ® layer forming the surface is smooth, homogeneous and relatively hydrophobic. These three properties together with a heterogeneous substrate aid an earlier onset of electroconvective instability; enhanced electroconvection partially suppresses water splitting at the cation-exchange membrane, which allows slight acidification of the diluate solution due to a more pronounced water splitting at the anion-exchange membrane. The use of pulsed electric fields with sufficiently high relaxation time allows an essential reduction of membrane scaling.
… diffusion layer decreases; however, in a certain pH range the rate of water splitting at the CEM may be sufficient to obtain a C_Mg·(C_OH)² product value higher than the Mg(OH) 2 solubility product.
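The precipitation criterion invoked here can be illustrated with a short calculation: Mg(OH)2 is expected to nucleate once C_Mg·(C_OH)² exceeds its solubility product. The Ksp value and the surface concentrations used below are assumed, illustrative numbers (Ksp(Mg(OH)2) is of the order of 10^-11 mol³ L⁻³ at room temperature), not measurements from this study.

def mg_hydroxide_supersaturated(c_mg, pH, Ksp=5.6e-12):
    """True if the ion product C_Mg*(C_OH)^2 exceeds the assumed Mg(OH)2 solubility product.
    c_mg : Mg2+ concentration near the membrane surface, mol/L
    pH   : local pH at the depleted membrane surface
    Ksp  : assumed solubility product of Mg(OH)2, mol^3/L^3
    """
    c_oh = 10.0 ** (pH - 14.0)        # mol/L, from the water ionic product
    return c_mg * c_oh ** 2 > Ksp

# Illustrative scan: 0.01 M Mg2+ at the surface and increasingly alkaline local pH
for pH in (8.0, 9.0, 10.0):
    print(pH, mg_hydroxide_supersaturated(c_mg=0.01, pH=pH))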
Effect of homogenization and hydrophobization
Scale formation in PEF modes
ChPs measured in the model salt solution at i = 10.5 mA cm -2 in the constant current and PEF modes are presented in Figs. 4-10a-d. In PEF modes, during the pulse lapse the current density is also equal to 10.5 mA cm -2 . When counting the duration of an experimental run (75 min or 105 min), the 'off' time (when i=0) is not taken into account (Figs. 4-11b-d).
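To make this bookkeeping explicit, the short sketch below computes the wall-clock duration and the charge passed for a pulsed run in which only the 'on' time is counted towards the stated run length. The pulse/pause values are placeholders, not the exact T_on/T_off used in the experiments.

def pef_run(on_time_total_min, T_on_s, T_off_s, i_mA_cm2, area_cm2=4.0):
    """Wall-clock duration (min) and charge passed (C) for a pulsed-current run
    where the quoted run length counts only the 'on' periods."""
    n_cycles = on_time_total_min * 60.0 / T_on_s
    wall_clock_min = n_cycles * (T_on_s + T_off_s) / 60.0
    charge_C = i_mA_cm2 * 1e-3 * area_cm2 * on_time_total_min * 60.0
    return wall_clock_min, charge_C

# Example: 75 min of cumulated 'on' time, T_on = T_off = 10 s (assumed), i = 10.5 mA cm-2
print(pef_run(75, 10, 10, 10.5))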
Figs. 4-11a-d also show the value of pH of the diluate solution measured in the flow cell shown in Fig. 4-2 as a function of time. As Fig. 4-11a shows, the increase of the potential drop during one experimental run in the constant current mode is about 0.6 V for the MK-40 membrane, while it is only about 0.1 V in the case of the MK-40 MOD membrane. Another effect is the growth of pH of the diluate solution, which circulates across the desalination compartment (compartment 2 in Fig. 4-2) and an intermediate tank (tank 7 in Fig. 4-2) in batch mode. As discussed above, this growth is explained by the higher rate of water splitting at the CEM in the specified current range (i_lim^CEM < i < k i_lim^AEM, where in the constant current mode k = 1.4 for MK-40 MOD and 1.7 for MK-40).
Abstract
The possibility of obtaining composite membranes based on MF-4SK and polyaniline with an anisotropic structure and asymmetric current-voltage curves in an external electric field was considered. The polarization behavior of the obtained composite membranes was studied. As the current density increased during the synthesis of polyaniline, the conductivity of the membranes decreased; the hysteresis on the cyclic current-voltage curve, as well as the asymmetry of the parameters of the current-voltage curves, increased; and pseudo-limiting current appeared due to the formation of a bipolar internal boundary.
preparation of the material and reduce the concentrations of the polymerizing solutions, as was the case with the preparation of bulk-modified composites in the presence of FeCl 3 [12].
The aim of this work was to examine the possibility of obtaining surface-modified composites based on the MF-4SK membrane and polyaniline under the conditions of an external electric field and to study their electrochemical behavior. … for 60 min (Table 5-1). The phenylammonium cations were transferred, in accordance with the direction of electric current, across the MF-4SK cation-exchange membrane into the concentration chamber (Fig. 5-1). The dichromate or persulfate anions, in turn, migrated to the membrane surface, where the oxidative polymerization of aniline occurred. Thus, a polyaniline layer formed on the membrane surface on the cathode side. The difference in conditions causing formation of crystals or gel may be explained as follows.
Experimental
Sample
Generally, the presence of positively and negatively charged ions results in the formation of a precipitate if the product of their concentrations is higher than the solubility product. When the concentrations of positively and negatively charged ions are in stoichiometric proportion, electrically neutral crystals are formed. However, when a freshly prepared electrically neutral precipitate is treated with a solution in which one of the ions constituting the crystal is in excess, charged colloidal particles can be formed. This process is called peptization [11,12].
The cause is the selective adsorbtion of ions common to the colloidal particles. For example, if freshly prepared ferric hydroxide precipitate is treated with a small quantity of FeCl 3 |
01759358 | en | [
"shs.eco"
] | 2024/03/05 22:32:10 | 2018 | https://shs.hal.science/halshs-01759358/file/CivilServantsPrivateSector.pdf | Anne Boring
Claudine Desrieux
Romain Espinosa
Aspiring top civil servants' distrust in the private sector *
Keywords: Public service motivation, private sector motivation, career choices, civil servants. Méfiance envers le secteur privé des aspirants hauts fonctionnaires Service public, Secteur privé, choix de carrière, fonction publique
In this paper, we assess the beliefs of aspiring top civil servants towards the private sector. We use a survey conducted in a French university known for training most of the future highranking civil servants and politicians, as well as students who will work in the private sector. Our results show that students aspiring to work in the public sector are more likely to distrust the private sector, to believe that conducting business is easy, and are less likely to see the benefits of public-private partnerships. They are also more likely to believe that private sector workers are self-interested. These results have strong implications for the level of regulation in France, and the cooperation between the public and private sector.
Introduction
Government regulation of the economy is strongly and negatively correlated with trust [START_REF] Aghion | Regulation and distrust[END_REF]. More distrustful citizens tend to elect politicians who promote higher levels of regulation of the private sector. However, elected officials are not the only ones deciding on levels of regulation. Civil servants, especially high ranking officials, also design and enforce regulations that directly impact the private sector. 1 Civil servants are different from elected officials, because they tend to remain in office when there are political changes. In many countries, and in France in particular, civil servants tend to spend their entire professional careers working in the public sector. The beliefs that these non-elected government officials have regarding the private sector are therefore likely to influence the regulations that apply to the private sector. If high-ranking civil servants have more negative beliefs regarding the private sector, they might promote higher levels of regulation than what the population would vote for through a democratic process.
In this paper, we aim to document French civil servants' beliefs regarding the private sector. To do so, we analyze how students aspiring to high-ranking positions in the public sector differ in their trust in the private sector compared to other students. Analyzing the beliefs of these aspiring civil servants is important given their future influence on the regulation of the private sector. We more specifically study whether these potential top regulators show greater distrust towards the private sector.
Our analysis relies on an original dataset that includes information collected from a survey addressed to students enrolled at Sciences Po, one of the most prestigious universities in France. Sciences Po is known to be the best educational program leading to the highest positions in French higher administration. Since 2005, between 70% and 88% of the students admitted to the French National School of Administration (Ecole nationale d'administration or ENA) are former Sciences Po students. While a large share of high ranking civil servants graduated from this university, a majority of Sciences Po students choose careers in the private sector. We can therefore compare the beliefs of students who will be the future public sector leaders, with students who will work in the private sector. To measure students' beliefs, the survey includes questions that assess students' level of distrust in the private sector, how they perceive people who choose to work in the private sector, and their views regarding the private provision of public goods. We also collect information on their motivations to aim for a career in the public sector, and their trust in public institutions.
To conduct our analysis, we rely on standard statistical methods (group comparison tests), ordered response models (ordered probit and logit), and principal component analyses. Using ideal point estimation techniques, we also compare students' trust toward the private sector, and their views on how easy they think it is for entrepreneurs or firms to conduct business.
We find that students who aspire to work in the public sector: (i) tend to show more distrust in the private sector, (ii) believe that conducting business is relatively easy, (iii) are less likely to see benefits in public-private partnerships, and (iv) tend to trust public institutions more than the other students. Our results suggest that students who aspire to work in the public sector have a stronger taste for public regulation of economic activities.
These results provide some evidence of a selection bias in career choices: higher administration workers may have more negative beliefs regarding the private sector. This difference may worsen over time, given that civil servants, once in office, have limited experience of the private sector and share their offices with civil servants who hold similar beliefs. These beliefs could also complicate collaborations between the public and the private sector, for instance in the provision of public services.
The paper is structured as follows. Section 2 relates our work to the existing economics literature. Section 3 describes the survey and students' responses. Section 4 presents our data analyses. We conclude in section 5.
Literature
Our paper is related to three strands of the economics literature: on the interaction between trust and regulation, on civil servants' characteristics, and on intrinsic versus extrinsic motivation of workers. In this section, we explain how our analysis connects to each of these topics.
First, our paper is inspired by the literature on the relationship between trust and regulation, as developed by [START_REF] Aghion | Regulation and distrust[END_REF], [START_REF] Cahuc | Civic Virtue and Labor Market Institutions[END_REF] and [START_REF] Carlin | Public trust, the law, and financial investment[END_REF]. The seminal work by [START_REF] Aghion | Regulation and distrust[END_REF] documents how government regulation is negatively correlated with trust: distrust creates public demand for regulation, and regulation discourages the formation of trust because it leads to more government ineffectiveness and corruption. They therefore show the existence of multiple equilibria, as beliefs shape institutions and institutions shape beliefs. Most of this literature relies on general measures of trust 2 . We adopt a different approach. Indeed, we precisely measure the level of trust towards the private sector of people aspiring to work in the public sector. Our measure is therefore more accurate to address our goal, which is to determine whether civil servants have more or less trust in the private sector than people working in the private sector. As discussed in the introduction, a higher level of civil servants' distrust in the private sector could lead to a risk of over-regulation of the private sector.
Second, our paper is related to the literature on preferences of civil servants. This literature shows that public sector workers are often more pro-social than private sector workers. For instance, [START_REF] Gregg | How important is prosocial behaviour in the delivery of public services[END_REF] use data from the British Household Panel Survey to show that individuals in the non-profit sector are more likely to donate their labor (measured by unpaid overtime), compared to those in the for-profit sector. Using the American General Social Surveys, [START_REF] Houston | Walking the Walk" of Public Service Motivation: Public Employees and Charitable Gifts of Time, Blood, and Money[END_REF] finds that government employees are more likely to volunteer for charity work and to donate blood, than for-profit employees. However, he finds no difference among public service and private employees in terms of individual philanthropy. Analyzing data from the American National Election Study, [START_REF] Brewer | Building social capital: Civic attitudes and behavior of public servants[END_REF] shows that civil servants report higher participation in civic affairs. Using survey data from the German Socio-Economic Panel Study, [START_REF] Dur | Intrinsic Motivations of Public Sector Employees: Evidence for Germany[END_REF] also find that public sector employees are significantly more altruistic than observationally equivalent private sector employees, but that they are also lazier. Finally, using revealed preferences, [START_REF] Buurman | Public sector employees: Risk averse and altruistic?[END_REF] show that public sector employees have a stronger inclination to serve others, compared to employees from the private sector. 3 All these papers have post-employment choice settings. In our paper, we study beliefs regarding the private and public sectors when students are choosing their professional careers. Within the literature focusing on students, [START_REF] Carpenter | Public service motivation as a predictor of attraction to the public sector[END_REF] provide evidence showing that students with a strong public service orientation (evaluated by surveys addressed to American students) are more attracted to government jobs. Vandenabeele (2008) uses 2 In surveys, the measure of trust is most often measured with the "generalized trust" question. This question runs as follows: "Generally speaking, would you say that most people can be trusted, or that you can't be too careful when dealing with others?" Possible answers are either "Most people can be trusted" or "Need to be very careful." The same question is used in the European Social Survey, the General Social Survey, the World Values Survey, Latinobarómetro, and the Australian Community Survey. See [START_REF] Algan | Trust, Well-Being and Growth: New Evidence and Policy Implications[END_REF] 3 However, when tenure increases, this difference in pro-social inclinations disappears and even reverses later on. Their results also suggest that quite a few public sector employees do not contribute to charity because they feel that they have already been contributing enough to society through work for too small a paycheck.
data on students enrolled in Flemish Universities to show that students with high pro-social orientation have stronger preferences for prospective public employers. Through experiments conducted on students selected to work for the private and public sectors in Indonesia, [START_REF] Banuri | Pro-social motivation, effort and the call to public service[END_REF] show that prospective entrants into the Indonesian Ministry of Finance exhibit higher levels of pro-social motivation than other students. Our paper adds to this literature, by focusing on the beliefs students have regarding the private sector, instead of measuring pro-social behavior to explain student selection of careers in the public or private sector. Our different approach is important, because individuals interested in working in the public sector may put more value on public services or exhibit more pro-social values, while showing no particular distrust towards the private sector. Alternatively, they can simultaneously show more interest in the public sector and more distrust in the private sector. Our analysis enables us to distinguish between these two possibilities, using a unique French dataset. Very few papers have investigated the possibility of self-selection of civil servants as a consequence of negative beliefs towards the private sector. Papers by [START_REF] Saint-Paul | Le Rôle des Croyances et des Idéologies dans l'Économie Politique des Réformes[END_REF][START_REF] Saint-Paul | Endogenous Indoctrination: Occupational Choices, the Evolution of Beliefs and the Political Economy of Reforms[END_REF] are an exception. They a adopt a theoretical approach to explain why individuals who are negatively biased against market economies are more likely to work in the public sector. Our approach is complementary and provides empirical evidence supporting this claim.
Finally, our paper is related to the literature on intrinsic and extrinsic motivation, as defined by Bénabou andTirole (2003, 2006). Extrinsic motivation refers to contingent monetary rewards, while intrinsic motivation corresponds to an individual's desire to perform a task for its own sake or for the image the action conveys. Many papers have explored the consequences of intrinsic motivation in different contexts, such as on wages [START_REF] Leete | Wage equity and employee motivation in nonprofit and for-profit organizations[END_REF]), knowledge transfers [START_REF] Osterloh | Motivation, knowledge transfer, and organizational forms[END_REF]), cooperation [START_REF] Kakinaka | An interplay between intrinsic and extrinsic motivations on voluntary contributions to a public good in a large economy[END_REF], training (DeVaro et al. ( 2017)), and law enforcement [START_REF] Benabou | Laws and Norms[END_REF]). However, there lacks studies that analyze the extent to which values and beliefs regarding professional sectors matter when individuals choose their jobs. Our paper aims to fill this gap by exploring the different arguments that students use to explain their career choices, distinguishing between extrinsic and intrinsic motivations. In addition, we explore the beliefs of students aspiring to work in the public sector regarding the reasons why other students choose to work in the private sector.
Setting and survey
In this section, we describe the institutional context of our study (subsection 3.1). We also provide information about data collection and respondents (subsection 3.2).
Careers of graduates
Sciences Po is a prestigious French university that specializes in social sciences. In the cohort of students who graduated in 2013 and who entered the labor market in the year following their graduation, 69% worked in the private sector, 23.5% worked in the public sector, and 7.5% worked in an international organization or a European institution. The university is especially well-known for educating France's high-ranking civil servants: a large share of the top positions in the French administration is held by Sciences Po alumni. Sciences Po is the university students attend when they aspire to be admitted to the ENA or other schools leading to high-ranking civil servant positions, which are only accessible through competitive exams (concours). In the cohort of students who passed these competitive exams in 2016-17, 82% of students admitted to the ENA were Sciences Po graduates. Sciences Po graduates also represented 67% of those admitted to top administrative positions in the National Assembly, 32% of future hospital directors, and 57% of future assistant directors of the Banque de France. 4 A large majority of top diplomats are also Sciences Po graduates. Sciences Po graduates therefore have a large influence on policy-making and regulation in France.
Students who graduate from this university tend to hold high-ranking positions, whether in the public or private sector. Differences in wage ambitions may partly explain students' preferences for the private sector over the public sector. While wages tend to be high for most of the public sector positions held by Sciences Po graduates, young graduates in private sector areas such as law and banking tend to earn substantially higher wages right after graduation. On the other hand, public sector jobs provide more employment security. Beliefs regarding other characteristics of each sector are likely to have a large influence on students' choices for careers. The goal of our survey is therefore to get a better understanding of how these beliefs may guide students' choices for one sector over another.
The survey
In order to investigate differences in students' beliefs regarding the private sector, we designed an online survey that was only accessible to students. The survey included questions on students' beliefs regarding (i) the public sector and the private sector; (ii) their classmates' views of both sectors; (iii) social relations at work, more specifically on unions and labor laws; (iv) entrepreneurship and economic regulation; and (v) a case study on public-private partnerships. The survey also included a question on students' choices for future jobs and careers.
The questionnaire was sent by the administration in mid-September 2014, two weeks after the beginning of classes, to the undergraduate and graduate students from the main campus (in Paris) and one of the satellite campuses (in Le Havre), representing a cohort of approximately 10,000 students. A total of 1,430 students completed at least part of the survey (including seven students who were not directly targeted by the survey), with approximately half of these students answering all of the questions (see Table 1 in Appendix A for a description of the sample sizes by year of study). The survey took approximately 15 minutes to complete from start to finish. Answers were recorded as students made progress through the questions, such that we are able to analyze answers to the first parts of the survey for students who did not complete it.
Among the students who completed at least part of the survey, only a few (5%) are from the satellite undergraduate campus. Overall, 62% of respondents are Master's degree students, and 38% are undergraduates. Three PhD students and one student preparing administrative entrance exams also answered the survey. The share of female and male students who answered the survey is representative of the gender ratio in the overall student population (40% of respondents are male students, whereas the overall Sciences Po male student population was 41% in 2014). The share of respondents is similar across Master's degrees (Table 2). For instance, 19.5% of all Master's students were in public affairs in 2014, compared to 20.6% of respondents.
A large share (88%) of respondents are French, leading to an overrepresentation of French students. We therefore decided to drop the non-French students from our original sample. Several reasons motivated our approach. First, including non-French students in the analysis would increase the heterogeneity of respondents. Indeed, the students' answers to the questions are likely to be highly culture-dependent. The size of our dataset and the strong selection effects of foreign students prevent us from making any inference from this subsample. Second, our objective is to measure the level of distrust regarding the private sector in France for prospective French civil servants. In this regard, French citizens are more likely to work in France after their graduation. Focusing on these students increases the external validity of our study. Third, some questions deal specifically with French institutions, further justifying keeping only the answers submitted by French students. While foreign students may be familiar with some of these institutions (such as the Constitutional Court or the National Assembly), other institutions (such as the Conseil de Prud'hommes, i.e. French labor courts) are very unlikely to be known by 20-year old foreign students. Finally, students who went to high school in France and who were admitted as undergraduates (the "undergraduate national" admissions program in Table 2) are overrepresented among respondents. Indeed, they represent 41.7% of all students, but 62.3% of respondents. This overrepresentation mostly disappears when only French students are included. Indeed, 65.1% of all French students were admitted through the undergraduate national admissions procedure, whereas 69.9% of French respondents were admitted through this procedure.
The final dataset therefore includes the answers given by the 1,255 French students who completed at least part of the survey.5 A total of 740 students answered the question on their professional goals: 41% of these students were considering working in the public sector, and 59% in the private sector. The data suggest that a selection bias of respondents as a function of students' study or admissions program type is unlikely (comparing columns (4) and (5) of Table 2).6 The dataset does not provide any specific information on socioeconomic background. However, 9.8% of the French students who are included in the final dataset were admitted as undergraduates through a special admissions program designed for high school students from underprivileged education zones. These students represent 9.2% of the French students who responded to the survey, suggesting no significant selection bias of respondents according to this criterion.
Finally, we also checked whether our sample was representative of the overall Sciences Po student population by checking for differences in the two other observable characteristics of students, namely age and grades. Regarding age, French students who answered the survey were slightly younger than the overall French Sciences Po student population (20.5 vs. 21.1). The difference is statistically significant for both undergraduate and graduate students, but remains small in size. We checked whether this difference could be explained by the fact that the respondents were better students, and might therefore be younger. While the dataset does not provide any information on high school grades, we use students' first year undergraduate grades to compare French respondents with the overall French student population. Indeed, in the first year of undergraduate studies, the core curriculum is mandatory for all students and similar across campuses, enabling us to compare students. We know the grades of students who were Master's degree students in 2014-15, and who were first year undergraduates in 2011-12. The difference in average grades between the 227 French respondents (13.4) and the 774 French non-respondents (13.2) is weakly significant (t-test p-value=0.09).7 A two-sample Kolmogorov-Smirnov test comparing the distribution of grades of students who answered the survey with that of the other students yields an exact p-value of 0.063. Comparing the distribution of grades of students who completed the question on professional aspirations with that of the other French students yields an exact p-value=0.215. The difference in average first year grades between the 253 French Master's students who were first year students in 2010-11, and the other 769 French students, is not statistically significant (13.2 for respondents, compared to 13.1 for the other French students, with a p-value=0.40).8 The two-sample Kolmogorov-Smirnov tests are not statistically significant when comparing the grades of all French students with respondents (p-value=0.225) nor with the smaller sample of students who completed the question on aspirations (p-value=0.198), suggesting no significant difference in the distribution of grades between respondents and non-respondents. Although we do not have the grades of the other respondents, the statistical evidence presented suggests that the respondents are likely to be representative of the overall French Sciences Po student population.
Results
Beliefs regarding the private sector
Our survey enables us to investigate aspiring civil servants' perception of the private sector. The survey was addressed to all students, i.e. those aspiring to work in the public sector and those wishing to work in the private sector. This approach allows us to evaluate whether the answers of students aspiring to work in the public sector are different from the answers of the other students. We use three series of questions to evaluate students' beliefs regarding the private sector.
Reasons to work in the private sector
First, the survey asked students to report their perception of the factors that determine their classmates' motivations for careers in the private sector. The suggested motivations were the following: to work with more competent or more motivated teams (Competence and Motivation), to benefit from more work flexibility and a stronger sense of entrepreneurial spirit (Flexibility and Entrepreneurship), and to have the opportunity to earn higher wages (Wage). For each of these items, respondents could answer: Strongly Disagree, Disagree, Neither Agree nor Disagree, Agree, or Strongly Agree. We ordered these answers and assigned them numerical values from 1 (Strongly Disagree) to 5 (Strongly Agree).
Figure 1 shows the respondents' average perceptions for each of the factors driving other students to work in the private sector, according to the respondents' prospective careers. Table 3 displays the distribution of answers by students' career choices. Table 4 further shows the results of ordered probit estimations and p-values of two-group mean-comparison tests.
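As an illustration of this type of analysis (not the paper's actual estimation code), the sketch below runs an ordered probit of a single Likert item on a public-sector-exam dummy, together with a two-group mean-comparison test. The data and the variable names answer and public_exam are hypothetical placeholders, not the survey's real variables.

```python
import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.miscmodels.ordinal_model import OrderedModel

# Hypothetical stand-in for the survey data: one Likert item coded 1-5 and a
# dummy equal to 1 if the respondent plans to take a public sector exam.
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "answer": rng.integers(1, 6, size=739),
    "public_exam": rng.integers(0, 2, size=739),
})

# Ordered probit of the Likert answer on the public-exam dummy
model = OrderedModel(df["answer"], df[["public_exam"]], distr="probit")
res = model.fit(method="bfgs", disp=False)
print(res.params["public_exam"], res.pvalues["public_exam"])

# Two-group mean-comparison (t) test on the same item
yes = df.loc[df["public_exam"] == 1, "answer"]
no = df.loc[df["public_exam"] == 0, "answer"]
print(stats.ttest_ind(yes, no).pvalue)
```

With real data, the estimated effect, its t-statistic and both p-values would correspond to one row of Table 4.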
First, we find that students tend to have diverging beliefs regarding the reasons that drive people to work in the private sector (Table 3). Only 28% of the students who want to take a public sector exam (strongly or weakly) agree with the statement that students who plan to work in the private sector are attracted by more competent teams, compared to 42% of students who want to work in the private sector. We observe similar differences for more motivated teams (35% vs. 49%) and for entrepreneurial spirit (69% vs. 88%). However, both types of students seem to have similar beliefs regarding the attractiveness of the private sector in terms of the work flexibility (50% vs. 51%) and the higher wages (97% vs. 94%) it offers.
These results are confirmed by univariate analyses (two-group mean comparison tests and ordered probit estimates in Table 4).9 Students who aspire to become civil servants are indeed less likely to say that other students aspire to work in the private sector because of (i) greater entrepreneurial spirit (p-values<0.001), (ii) more competent teams (p-values<0.001), and (iii) more motivated teams (p-values<0.001). In other words, these students have relatively more negative beliefs regarding the private sector.

[Figure 1 about here. Note: bars correspond to the average scores of each of the two groups of students for each question; answers take values from 1 (Strongly Disagree) to 5 (Strongly Agree); the distribution of answers is presented in Table 3; full descriptions of the questions are given in the Online Appendix.]
To confirm the existence of latent negative beliefs regarding the reasons that drive people towards the private sector, we run a principal component analysis (PCA) on the above dimensions. The first axis of the associated PCA, which explains 38.6% of the variations, is mainly correlated with Competence, Motivation and Entrepreneurship 10 . Students who aspire to become civil servants show statistically higher scores on this first axis than those who want to work in the private sector. This result confirms that students interested in becoming civil servants have a worse underlying perception of the reasons that drive people to choose the private sector.
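For concreteness, a minimal sketch of this kind of PCA-based aggregation is shown below, using randomly generated placeholder data. The column names (competence, motivation, flexibility, entrepreneurship, wage, public_exam) are illustrative rather than the dataset's actual variable names, and the sign of the first component is arbitrary.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical stand-in for the five Likert items (coded 1-5) and the
# public-exam dummy; values are random placeholders.
rng = np.random.default_rng(0)
items = ["competence", "motivation", "flexibility", "entrepreneurship", "wage"]
df = pd.DataFrame(rng.integers(1, 6, size=(739, 5)), columns=items)
df["public_exam"] = rng.integers(0, 2, size=739)

# First principal component of the standardized items
scores = PCA(n_components=1).fit_transform(StandardScaler().fit_transform(df[items]))
df["pca_score"] = scores.ravel()

# Compare the average component score across the two groups of students
print(df.groupby("public_exam")["pca_score"].mean())
```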
We can interpret this result in two non-mutually exclusive ways. First, it might be that students who plan to become civil servants believe that reasons other than the ones listed in the survey motivate their classmates' choices. However, the list contains the main arguments usually mentioned to explain preferences for the private sector over the public sector. Second, it might be that students who aspire to work in the public sector are less likely to believe that the private sector allows for more entrepreneurship, more competent and/or more motivated teams.

10 More specifically, we have: ρ = 0.607 for Competence, ρ = 0.593 for Motivation, ρ = 0.236 for Flexibility, ρ = 0.449 for Entrepreneurship and ρ = 0.150 for Wage. The two dimensions that are the least correlated with the first axis (i.e., Wage and Flexibility) do not discriminate between prospective sectors of work.
Result 1 Students who plan to work in the public sector are less likely to see entrepreneurship, competence and motivation as factors that drive other students to choose to work in the private sector.
Preferences towards regulation of the private sector
We now investigate students' preferences towards regulation of the private sector. The survey included a series of questions about the challenges that private sector companies and employees face in France. We group these questions into two dimensions. The first set of questions measures the level of Distrust in companies. We designed the second set of questions to capture students' beliefs regarding how easy it is to conduct business in France today (Easy to do business). The questions associated with each set are presented below, together with a positive or negative sign to represent how answers correlate with the associated dimension.
Distrust in companies is associated with students' beliefs on whether:
• union representatives should benefit from extra protection against being fired (+);
• employees should have a stronger role in the company's decision-making process (+);
• controls of labor law enforcement are currently sufficient in France (-);
• thresholds above which union representation becomes mandatory in the company are too high (+);
• layoffs should be banned when companies make profits (+);
• the government should legislate to limit employers' excessive remunerations (+).
Easy to do business is associated with students' beliefs on whether:
• procedures to fire an employee should be made easier for the employer (-);
• procedures to create a new business should be made easier (-);
• procedures to hire an employee should be simplified (-);
• labor costs are contributing to high unemployment in France (-);
• it is currently easy to create a company in France (+);
• it is currently easy to find funds to open a business in France (+);
• it is currently easy for a young entrepreneur in France to obtain legal advice and support to start a business (+).
Descriptive Statistics. Tables 5 and 6 in the appendix show the distribution of answers to the questions associated with the two dimensions (Distrust in Companies and Easy to Conduct Business, respectively). First, we observe that students who plan to work in the public sector have a higher tendency to distrust the private sector (Table 5). For instance, 31% of these students think that the government should legislate to limit employers' excessive remunerations, against 24% of their classmates who want to work in the private sector. Similarly, students who aspire to become civil servants are more likely to believe that controls on the enforcement of labor regulations are currently insufficient in France (52% vs. 45%). They are also more likely to strongly support higher levels of protection for union representatives in firms (18% vs. 12%).
Second, students who plan to work in the public sector are more likely to believe that conducting business in France is easy (table 6). For instance, 47% of these students weakly or strongly oppose reforms that would facilitate laying off employees, against 35% for their classmates who aspire to work in the private sector. Similarly, they are: (i) less likely to disagree with the statement that creating a business in France is easy (55% vs. 60%), (ii) more likely to believe that procedures to hire new employees should not be facilitated (10% vs. 5%), and (iii) more likely to believe that finding funds to open a business is easy (29% vs. 24%).
Ideal points. To further investigate differences in beliefs between the two groups of students, we propose to locate students on the two dimensions (Distrust in companies and Easy to conduct business), using an augmented version of the graded response model often used in ideal point estimations. Our method departs from PCA in two ways. First, we use the above definition of the dimensions to constrain the sign of the correlation between the questions and their associated dimension. For instance, we assume that a stronger support for the protection of union representatives against being laid off cannot be negatively correlated with the level of distrust in companies on the entire sample. The correlation can be either positive or null. Second, we do not consider the answers as continuous but as ordered variables. The estimation of ideal points takes this information into account to generate the two dimensions.
More specifically, we estimate the following logistic model:
$y_{ij} = \alpha_j \theta_i + u_{ij}$   (1)
where $\alpha_j$ is a discrimination parameter associated with question $j$, $\theta_i$ is individual $i$'s score on the estimated dimension, and $u_{ij}$ is an idiosyncratic logistic random term. The parameter $\alpha_j$ represents the correlation between the question at stake and the dimension we aim to capture. The signs of the $\alpha_j$ are constrained by the above definition of the axes. The parameters $\theta_i$ represent students' opinions on the associated dimension. Higher scores for the first dimension are associated with stronger distrust in the private sector. Individuals who display higher $\theta$s are more likely to believe that conducting business is easy.
For robustness purposes, we also run PCA for each of the two dimensions, and we obtain identical results. The correlation coefficient between the first axis of the PCA and our first dimension is greater than 0.99.11 It is equal to 0.975 for the second dimension.12 Figure 2 represents the average individual scores on the two dimensions (i.e., $\theta_i$) according to the students' willingness to work in the public sector. Table 7 in Appendix A shows the results of two-group mean-comparison tests. We find that students who plan to work in the public sector display a stronger distrust in the private sector (p-value=0.088), and are more likely to think that conducting business in France is currently relatively easy (p-value=0.017). Considering that our questions deal with regulation issues related to the private sector, this result implies that students who aspire to work in the public sector have a stronger taste for public regulation of economic activities.
Result 2 Students who plan to work in the public sector have a higher level of distrust in the private sector, and are more likely to believe that doing business is easy. Overall, they have a stronger taste for public regulation of economic activities.

[Figure 2 about here. Note: bars correspond to the average scores of each of the two groups of students for the two dimensions; the mean of each group is presented in Table 7; full descriptions of the questions are given in the Online Appendix.]
Perception of public-private partnerships
The survey included a case study about public-private partnerships. The questions relate to students' beliefs regarding the benefits of the private provision of public goods. The questions reflect the perception of the relative advantages of the private and public sectors. The first question asked students whether they perceived delegated management of public goods as a good tool per se (Delegated Management). 13 The three following questions asked students whether delegating management is a good tool to reduce management costs, to foster innovation, and to improve the quality of the services (Cost Reduction, Innovation, Quality Improvement, respectively). The following question described a conflict between the contracting public authority and its private partner, and investigated whether students perceived the public authorities' decision to expropriate the private firm as legitimate (Legitimate Expropriation). Students were then asked about the extent to which the State should compensate the firm for the expropriation (Damages).
The final set of questions analyzed the answers to a case about arbitration aimed at solving the conflict (instead of litigation by national courts). Students answered questions about the extent to which the arbitration decision should take into account the following arguments: the state must stick to its contractual commitments towards the firm (Commitments), the state must be allowed to nationalize sectors it considers as essential for economic growth (Nationalization), the fact that water is a vital good justifies that the state can override the contractual agreements (Necessary Good ), and devaluation is a legitimate motive for the firm to increase prices (Devaluation). Finally, we run a PCA, and explain the scores on the first dimension, which represents an overall positive perception of private provision of public goods14 .
Figure 3 shows the average scores for the two groups of students. Table 8 shows the distribution of answers for the questions about the relative advantages of the private provision of public goods. Table 9 in Appendix A presents the associated regression estimates and the p-values of two-group mean-comparison tests for all items.
First, we observe a general trend: students who plan to work in the public sector are less enthusiastic about the use of public-private partnerships than their classmates who plan to work in the private sector. For instance, only 21% of them strongly agree that public-private partnerships can foster innovation, against 30% of their classmates. Similarly, only 16% strongly believe that public-private partnerships can improve the quality of the provision (vs. 25% for students who want to work in the private sector). They are also slightly less likely to strongly believe that delegated management is a good thing (12% vs. 15%) and that it helps reduce costs (30% vs. 33%).
The statistical analysis confirms these findings. Students who want to become civil servants are statistically less likely to see delegated management as improving the quality of services (p-value=0.012) or as fostering innovation (p-value=0.010). Moreover, they are more likely to consider expropriation as legitimate (p-value=0.020). In addition, students who plan to work in the public sector are more likely to consider that the state must be allowed to nationalize key sectors (p-value=0.002).
Finally, we run a PCA on all items associated to the public-private partnerships. Results show that students who aspire to become civil servants have more negative beliefs regarding the overall benefits of the private provision of public goods. The first axis of the PCA, which can be viewed as a pro-business preference for the provision of public goods15 , and which explains 30% of the variations, is indeed significantly higher for students who plan to work in the private sector.
Result 3 Students who aspire to become civil servants are less likely to see benefits in the private provision of public goods, and are thus more likely to support the government in case of public-private partnerships.

[Figure 3 about here. Note: bars correspond to the average scores of each of the two groups of students for each question; answers take values from 1 to 4 for all items; the distribution of answers is presented in Table 8; full descriptions of the questions are given in the Online Appendix.]
Beliefs regarding the public sector
The above results suggest that students aspiring to work in the public sector tend to distrust the private sector to a greater extent. This greater distrust can either result from a general distrust in society or can be specifically targeted against the private sector. If future civil servants are more distrustful in general, they may not necessarily increase the level of regulation by the state, because distrust in both sectors would offset each other. However, if they trust the public sector more than the other students, we would expect higher levels of government regulation of the private sector.
Our survey therefore included questions designed to evaluate students' beliefs regarding the public sector. These questions enable us to investigate whether the relative distrust of students who aspire to become civil servants is generalized or targeted against the private sector only. We therefore investigate how students perceive the public sector, including by asking them which factors explain their choice to work as civil servants.
Trust in institutions
Students were asked to report their level of trust on an 11-point scale (from 0-no trust to 10-total trust) for a list of seven public institutions: the Upper Chamber (Senate), the Lower Chamber (National Assembly), the police (Police), the legal system in general (Legal System), judges in general (Judges), the French Constitutional Court (Constit. Council), and the French Administrative Supreme Court (Conseil d'Etat). Figure 4 graphs the average level of trust for both types of students (i.e. those aspiring to work in the public sector, and those preferring the private sector). The two columns on the right-hand side of the graph show the average scores for the first dimension of a PCA, which represents the generalized level of trust in public institutions. 16 Table 10 in Appendix A displays the summary statistics associated with these questions together with the p-value associated with the two-group mean comparison test for each variable. First, we find that judicial institutions benefit from the highest levels of trust (Legal System, Judges, Constitutional Council and Conseil d'Etat). Political institutions and the police benefit from significantly lower levels of trust. Although this dichotomy holds for both kinds of students under scrutiny, we observe systematically higher scores for students who plan to become civil servants than for those who aspire to careers in the private sector. Students who want to work as civil servants have higher levels of trust in the Senate (5.87 vs. 5.49), the National Assembly (6.25 vs. 5.77), the Police (6.08 vs. 5.88), the Legal System in general (6.93 vs. 6.73), Judges (7.25 vs. 6.84), the Constitutional Council (7.36 vs. 7.08), and the Administrative Supreme Court (7.39 vs. 6.83). The differences are statistically significant for the Lower Chamber (p-value=0.002), the Upper Chamber (p-value=0.022), Judges (p-value=0.005), the Constitutional Council (p-value=0.078), and the Administrative Supreme Court (p-value<0.001). The first dimension of the PCA, which represents the generalized level of trust in institutions, is also significantly higher for prospective civil servants (p-value=0.001). Although beliefs regarding the legal system in general and the police are not statistically different across students, the students aspiring to become civil servants still display a higher average level of trust. These results show that students who plan to work in the public sector do not have a generalized distrust towards society, but show distrust targeted against the private sector.
Result 4 Students who plan to become civil servants display a higher level of trust in public institutions.
Reasons to become a civil servant
To complete our analysis, we asked students about their beliefs regarding the factors that explain why individuals aspire to work in the public sector. More precisely, students were asked to report their beliefs regarding the factors that determine their classmates' choices to become civil servants.
We included a list of potential benefits of being a civil servant, related to both extrinsic and intrinsic motivation. Among the extrinsic motivation factors, we suggested a lower workload (Lower Workload ), a more convenient family life (Easy Family), and greater job security (Greater Security). For intrinsic motivation, we suggested the following factors: a source of social gratification (Social Gratification), more opportunities to change society (Change Society), and personal satisfaction of being involved in public affairs (Satisfaction).
Figure 5 shows the average scores for each group of students, ranking the answers from 1 (Strongly Disagree) to 5 (Strongly Agree). Table 11 shows the distribution of answers according to career aspirations. Table 12 presents the results of ordered probit estimations and the p-values of two-group mean-comparison tests.
Students who aspire to become civil servants are more likely to believe that pro-social reasons are driving their classmates' choices for the public sector. The estimations indicate that students who aspire to work in the public sector are more likely to believe that their classmates choose to become civil servants (i) for the satisfaction of being involved in public affairs (84% mildly or strongly agree vs. 73% for students who want to work in the private sector, p-values < 0.001 ), and (ii) for the opportunities they have to change society (74% vs. 63%, p-values < 0.001 ). On the contrary, students who do not plan to become civil servants are more likely to believe that their classmates are interested in working in the public sector for self-concerned reasons, i.e. the lower workload (19% vs. 13%, p-values < 0.001 ), and the convenience to organize family life (only 33% disagree vs. 43% of future civil servants disagree, p-value=0.105 for the ordered probit estimation, and p-value=0.085 for the two-group mean-comparison test).
Finally, we run a PCA on these six dimensions. The first axis, which explains 35% of the variations, is positively correlated with the pro-social motivations to choose the public sector (i.e. Satisfaction (ρ = 0.392), Social Gratification (ρ = 0.36), Change Society (ρ = 0.386)), depicted in blue in Figure 5, and negatively with the self-concerned motivations (i.e. Greater Job Security (ρ = -0.247), Lower Workload (ρ = -0.516), More Convenient Family Life (ρ = -0.49)), depicted in red in Figure 5. The comparison of the PCA scores of the two types of students shows that, on average, students aspiring to a career in the public sector are more likely than their classmates to think that students aspire to become civil servants for pro-social reasons (p-value<0.001 ).
Result 5 Both types of students recognize that people aspiring to work in the public sector generally do so for pro-social reasons (i.e. the satisfaction of being involved in public affairs, and the opportunity to change society). However, this result is stronger for students who want to become public servants. Students who plan to work in the private sector are more likely to believe that their classmates aspire to careers in the public sector for self-concerned reasons (i.e. lower workload, more convenient family life).
Robustness Checks
The survey contained questions about the perception of the public and the private sectors, and students' aspiration to work in the public sector after graduation. The fact that both the dependent and the independent variables were obtained from the same survey might generate some methodological concerns, usually referred to as the Common Source Bias (CSB). In our case, we are not able to rule out the possibility that participants sought to reduce cognitive dissonance or to improve / protect self-image by aligning their answers. Nevertheless, the impact of the CSB is limited by the fact that questions were asked on successive screens, and that half of the dimensions discussed above were explored before participants were asked about their personal professional aspirations. Moreover, the aspiration to work in the public sector was obtained by asking whether participants intended to take exams to enter the public sector. Given the long preparation that these exams require, it seems unlikely that previous declarations about the attractiveness of each sector affected participants' declaration about their intention to take these exams.
In order to test the robustness of our results to the CSB, we use respondents' identifier to retrieve the Master's program of graduate students. We then associate with each graduate student the average proportion of students registered in his/her graduate school who ended up working in the private sector (based on the post-graduation employment survey of students who graduated in 2015). This measure reflects the average ex-post propensity to really work in the private sector, and is not subject to the CSB. We observe that this variable is highly correlated with the individual declarations in the survey (ρ = 0.468, p < 0.001). We run all the previous estimations replacing the potentially biased self-declaration with this exogenous measure. We cluster observations at the graduate school level given the level of aggregation of the information. The new results, displayed in Table 13, are less statistically significant, mostly because of the reduction in the variance of the explanatory variable and in the degrees of freedom, but they confirm the above results. Indeed, individuals with higher chances of working in the public sector trust public institutions (National Assembly, Senate, Judges, Administrative Supreme Court) significantly more, are more likely to believe that public servants work in the public sector for noble reasons (Gratification, Change), and are less likely to believe they do so for the potentially lower workload (Workload). They are also less likely to believe that students who want to work in the private sector plan to do so because of greater entrepreneurial spirit. They are also more likely to believe that unionists should be more protected against employers. Moreover, in the case of public-private partnerships, they are more likely to find that government intervention is legitimate, and they are more likely to accept nationalization.
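A stylized sketch of this robustness exercise is given below: each belief measure is regressed on the graduate-school-level share of graduates working in the private sector, with standard errors clustered at the graduate-school level. The number of schools, the values, and the variable names (grad_school, private_share, belief_item) are illustrative assumptions, not the actual dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Placeholder data: each student belongs to one of 13 hypothetical graduate
# schools; private_share is the school-level share of 2015 graduates who
# ended up in the private sector.
rng = np.random.default_rng(2)
n = 450
df = pd.DataFrame({
    "grad_school": rng.integers(0, 13, size=n),
    "belief_item": rng.normal(size=n),
})
school_share = pd.Series(rng.uniform(0.2, 0.9, size=13), name="private_share")
df = df.join(school_share, on="grad_school")

# OLS of the belief measure on the exogenous share, clustered by graduate school
res = smf.ols("belief_item ~ private_share", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["grad_school"]}
)
print(res.summary().tables[1])
```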
Attrition Bias
The survey included five successive sections. A non-negligible proportion of respondents answered only part of it. Students who did not complete the survey may represent a specific subset of the population, such that students who answered the last set of questions may not be representative of the set of students who started the survey but quit before the end. To investigate whether attrition in the survey changed the composition of respondents over the different sections, we regress each of our dependent variables on the number of sections the respondents completed. Should an attrition bias on the dependent variable emerge, we would observe a significant coefficient associated with the number of screens. For instance, if the least trustful students stopped answering first, we would observe that the level of trust in public institutions (first screen of the survey) significantly increases with the number of screens completed.
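To make the procedure concrete, the sketch below runs this kind of attrition check on placeholder data: each outcome is regressed on the number of survey sections the respondent completed, and the p-value of the slope is reported. The outcome names and values are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Placeholder outcomes and number of completed survey sections (1 to 5).
rng = np.random.default_rng(3)
df = pd.DataFrame(rng.normal(size=(1255, 3)),
                  columns=["trust_pca", "distrust", "easy_business"])
df["n_sections"] = rng.integers(1, 6, size=len(df))

# Regress each outcome on the number of completed sections; a significant slope
# would indicate that attrition changed the composition of respondents.
for outcome in ["trust_pca", "distrust", "easy_business"]:
    res = smf.ols(f"{outcome} ~ n_sections", data=df).fit()
    print(outcome, round(res.pvalues["n_sections"], 3))
```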
Implementing this strategy over 25 dependent variables, we obtain only three significant relationships (two at 10% and one at 1%). Assuming that there is no attrition effect (i.e., that the dependent variables are not correlated with the number of completed screens), the probability of having at least 3 out of 25 regressions in which the coefficient is significant at 10% equals 20.5%. 17 Therefore, we are not able to reject the hypothesis of no attrition effect regarding the dependent variables.
17 The probability of no significant relationship is $(\frac{9}{10})^{25}$. The probability of one significant relationship is $\frac{25!}{(25-1)!}\,\frac{1}{10}\,(\frac{9}{10})^{24}$. The probability of two significant relationships is $\frac{25!}{(25-2)!}\,(\frac{1}{10})^{2}(\frac{9}{10})^{23}$. The probability of at least three significant relationships at 10% equals 1 minus the sum of these three probabilities, which is $p = 20.5\%$.
Conclusion
Our results suggest that future civil servants distrust the private sector to a greater extent than the other students. They also believe that conducting business is relatively easy, and are less likely to see benefits in public-private partnerships. These results provide some evidence of a selection effect in career choices: individuals working in the public sector hold more negative beliefs regarding the private sector. Our evidence also suggests that this distrust is specifically targeted towards the private sector and is not generalized: future civil servants show a high level of trust in public institutions.
Civil servants' distrust towards the private sector has strong implications in terms of political economy. First, civil servants are in charge of the design and implementation of regulation. Their distrust of the private sector may lead to its over-regulation, therefore generating difficulties in conducting business. The 2018 Doing Business report published by the World Bank provides evidence of these difficulties. This report provides objective measures of business regulations and their enforcement across 190 economies. It captures several important dimensions of the regulatory environment as it applies to local firms. 18 The global indicator that accounts for this regulatory environment is called "Ease of Doing Business". Each country is evaluated through its distance to frontier (DTF), which measures the distance of each economy to the "frontier" representing the best performance observed on each of the indicators across all economies in the Doing Business sample since 2005. An economy's DTF is represented on a scale from 0 to 100, where 0 represents the lowest performance and 100 represents the frontier. The ease of doing business ranking ranges from 1 to 190. France ranks 31st, and performs below the average score of OECD countries. 19 Our results suggest that a possible explanation for the extensive regulation of private business may be the relatively greater distrust of public sector workers towards the private sector.
This distrust may also have a negative impact on the judiciary. Most judges in French courts are civil servants. [START_REF] Cahuc | Les juges et l'économie: une défiance française[END_REF] show that judges distrust business and the free market economy more than the rest of the population. Our results confirm this distrust of top civil servants towards private business. They can also shed new light on a recurring debate concerning the identity of judges. In some courts (such as labor or commercial courts in France), judges are lay judges (i.e. they are not civil servants but representatives of employees and/or employers nominated by their peers). 20 In many other countries, labor or commercial courts are composed of both lay and professional judges, or even only professional judges. An argument supporting lay judges could be to avoid the distrust of professional judges (who are civil servants) towards the private sector.
Third, the distrust of public sector workers towards the private sector could have a negative impact on cooperative projects such as public-private partnerships. These contracts aim at organizing cooperation between public and private sector actors to build infrastructures and provide public services. Public-private partnerships combine the skills and resources of both the public and private sectors by sharing risks and responsibilities. Yet, the success of such partnerships depends on the ability of the two sectors to cooperate. For this reason, the World Bank has identified cooperation and good governance between public and private actors as a key to successful public-private partnerships.21 As a consequence, distrust towards public-private partnerships could hurt such projects.
Finally, the results presented in this paper can also be read in an optimistic manner: students who plan to work in the public sector display the highest levels of trust in the public sector. Despite the lower wages offered by the public sector, prospective civil servants devote their careers to public affairs because they believe that they can be useful to society in doing so. Overall, the strong motivation of prospective civil servants and their belief in their mission are two factors that might contribute to the well-functioning of the State.
A Tables
B Ideal Points Estimates
The Bayesian estimation of ideal points is usually referred to as one-dimensional item response theory. Such models were originally aimed at measuring students' performance on a test, and at locating them on a unique dimension. The original objective was to estimate three sets of parameters: (i) an ability parameter for each student, (ii) a difficulty parameter for each question of the test, and (iii) a discrimination parameter for each question. Bayesian methods were developed to discriminate students according to their ability, by taking into account questions' difficulty level, and by estimating their "relevance" for correctly discriminating students.22 These models have since been used in the political science literature, especially in the case of Supreme Court voting [START_REF] Bafumi | Practical issues in implementing and understanding bayesian ideal point estimation[END_REF], [START_REF] Martin | Dynamic ideal point estimation via markov chain monte carlo for the u.s. supreme court, 1953-1999[END_REF], [START_REF] Martin | The median justice on the united states supreme court[END_REF], where researchers located Justices on a liberal-conservative dimension.
Our goal is to estimate students' preferences on two dimensions (Distrust in Companies and Easy to do Business). To do so, we use the students' answers described in subsection 4.1.2. The possible answers to these questions had the following ordering: strongly disagree, slightly disagree, indifferent, slightly agree, strongly agree.
The model is defined by a logistic utility model, where the latent utility depends on both the questions' and students' parameters:
$y^{*}_{ij} = \alpha_j \theta_i + u_{ij}$
where $\alpha_j$ is the discrimination parameter of question $j$, $\theta_i$ is the score of individual $i$ on the estimated dimension, and $u_{ij}$ is a random component.
Given that we have five possible ordered answers, the associated observed choices are given by:
$y_{ij} = 1$ if $y^{*}_{ij} \le \phi_{1j}$
$y_{ij} = 2$ if $y^{*}_{ij} > \phi_{1j}$ and $y^{*}_{ij} \le \phi_{2j}$
...
$y_{ij} = 5$ if $y^{*}_{ij} > \phi_{4j}$
where $\phi_j$ is the vector of thresholds for the ordinal choice model.
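To make the mapping from latent utilities to observed answers concrete, the short sketch below computes the category probabilities implied by this ordered logistic specification for a single student-question pair; the parameter values used in the example are arbitrary illustrative numbers.

```python
import numpy as np

def ordered_logistic_probs(alpha_j, theta_i, phi_j):
    """Category probabilities P(y_ij = k), k = 1..5, under the model above.

    alpha_j : discrimination parameter of question j
    theta_i : score of student i on the latent dimension
    phi_j   : increasing array of the 4 thresholds of question j
    """
    eta = alpha_j * theta_i                       # systematic part of y*_ij
    cdf = 1.0 / (1.0 + np.exp(-(phi_j - eta)))    # P(y*_ij <= phi_kj) with logistic u_ij
    cdf = np.concatenate(([0.0], cdf, [1.0]))     # boundaries for the lowest/highest answers
    return np.diff(cdf)                           # P(y_ij = k), k = 1..5

# Example: an arbitrary discriminating question and a student with theta_i = 0.7
probs = ordered_logistic_probs(1.8, 0.7, np.array([-2.0, -0.8, 0.3, 1.5]))
print(probs, probs.sum())  # five probabilities that sum to 1
```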
The hyperpriors are set as follows:
$\alpha_j \sim N(\mu_\alpha, \sigma^2_\alpha)$, $\phi_j \sim N(\mu_\phi, \sigma^2_\phi)$, $\theta_i \sim N(0, 1)$
$\mu_\alpha \sim N(0, 1)$ and $\sigma_\alpha \sim \mathrm{Exp}(0.1)$; $\mu_\phi \sim N(0, 1)$ and $\sigma_\phi \sim \mathrm{Exp}(0.1)$
Given that we know a priori the correlation of the answers with the desired axes, we are able to reverse the order of the answers for the questions that are negatively correlated (see section 4.1.2). We use this information and overidentify the model by setting $\ln(\alpha_j) \sim N(\mu_\alpha, \sigma^2_\alpha)$.

Online Appendix: Survey Questionnaire

1. How much do you trust each of the following institutions? [0: no trust; 10: total trust]
• the National Assembly;
• the Senate;
• the legal system;
• the police;
• judges;
• the Constitutional Council;
• the Conseil d'État.

• Greater job security;
• Greater personal satisfaction from being involved in public affairs;
• Greater social gratification from being involved in public affairs;
• The possibility of changing society;
• A lower workload;
• An easier organization of family life.

• A higher wage;
• Greater work flexibility;
• A stronger entrepreneurial spirit;
• More competent teams;
• Teams that are more motivated in their work.
4. Among the other students enrolled in your Master's program who intend to join the civil service, which factors do you think determine this career choice? (Please indicate the three most important factors, in order of importance.)
• Wage;
• Job security;
• Freedom of enterprise;
• Workload;
• Being useful to society;
• Social recognition;
• Political ambition;
• Family-life balance.

3. From a personal point of view, which factors influence your career choices? (First factor, second factor, third factor)
• Wage;
• Job security;
• Freedom of enterprise;
• Workload;
• Being useful to society;
• Social recognition;
• Political ambition;
• Family-life balance.

• Reducing management costs;
• Fostering innovation;
• Improving the quality of services.
Instructions. The questions below ask you to analyze a case involving a contract between a public party and a private party. First, the case is described briefly, giving you the elements needed to understand the dispute between the parties. You will then be asked to answer several questions related to the case. Most of the questions are open to interpretation, so there is no "right" or "wrong" answer: do not hesitate to give your opinion.
Case description. Consider a government authority that signs a concession contract with a foreign private company in order to ensure the distribution of water to its population. The company borrows in dollars to make the investments required by the concession contract. A few years later, the country's currency is devalued by government decision, which causes a serious profitability problem for the private company: it collects its revenues in local currency while it has dollar-denominated expenses linked to its loan. The company then asks the government authority for permission to increase the price of water by 10% to make up part of its missing revenues. The government refuses the price increase. The company can no longer pursue its investments, and some households cannot get connected to the water distribution network. Exasperated by these problems in distributing water to the whole population, the government authority unilaterally decides to terminate the contract (before its term) by expropriating the company of its investments.

• No compensation;
• Part of the investment;
• The entire investment;
• The entire investment and part of the expected profits;
• The entire investment and all of the expected profits;
• The entire investment, all of the expected profits, and punitive damages.
Arbitration. In order to challenge the State's decisions, the investing company brings the case before a specialized international arbitration court, as the law of the State concerned had provided for when the contract was signed.
5. In your opinion, how much weight should be given to each of the following arguments in order to resolve the dispute? [0: no importance; 10: essential to the resolution of the dispute]
• The State must honor its commitments towards the investing company and guarantee the durability of its assets.
• The State must be allowed to nationalize the sectors it considers essential to its country's economic development.
Figure 1: Average perception of factors driving students to choose the private sector according to the prospective sector of work.
Figure 2: Average attitudes towards the private sector according to the prospective sector of work.
Figure 3: Average perception of the private provision of public goods according to the prospective sector of work.
Figure 4: Average level of trust in institutions according to the prospective sector of work (0 to 10 scale).
Figure 5: Average perception of factors driving students to choose the public sector according to the prospective sector of work.
2. Among the following arguments, which ones do you think motivate individuals to pursue a career in the public sector? [1: strongly disagree; 2: somewhat disagree; 3: indifferent; 4: somewhat agree; 5: strongly agree]
3. Among the following arguments, which ones do you think motivate individuals to work in the private sector? [1: strongly disagree; 2: somewhat disagree; 3: indifferent; 4: somewhat agree; 5: strongly agree]
Part 2: Social relations at work
1. From a personal point of view, does it seem justified to you that a union representative benefits from stronger protection than other employees, in particular with respect to dismissal? [1: strongly disagree; 2: somewhat disagree; 3: indifferent; 4: somewhat agree; 5: strongly agree]
2. From a personal point of view, does it seem necessary to you today to strengthen employees' power in corporate decision-making (for instance, by strengthening their participation on boards of directors)? [1: strongly disagree; 2: somewhat disagree; 3: indifferent; 4: somewhat agree; 5: strongly agree]
3. Labor court justice is currently organized around unions: labor court judges (conseillers prud'homaux) are individuals elected by employees and by employers to represent them and to settle disputes arising from individual conflicts at work. Does this system seem fair to you? [1: strongly disagree; 2: somewhat disagree; 3: indifferent; 4: somewhat agree; 5: strongly agree]
4. Do you personally think that there are currently enough controls on the enforcement of labor law in France (e.g., the labor inspectorate)? [1: strongly disagree; 2: somewhat disagree; 3: indifferent; 4: somewhat agree; 5: strongly agree]
5. Do you personally think that the thresholds above which union representation in a company is mandatory should be lowered? [1: strongly disagree; 2: somewhat disagree; 3: indifferent; 4: somewhat agree; 5: strongly agree]
6. Do you personally think that dismissal procedures should be made lighter? [1: strongly disagree; 2: somewhat disagree; 3: indifferent; 4: somewhat agree; 5: strongly agree]
4. Are you a member of a political party? [0: No; 1: Yes]
5. Are you a political activist at Sciences Po? [0: No; 1: Yes]
6. Are you a member of a union at Sciences Po? [0: No; 1: Yes]
7. Are you a member of a Sciences Po student association (other than a party or a union)? [0: No; 1: Yes]

Part 5: Case study on public-private partnerships
A local public service (water distribution, waste collection, supply of school canteens, etc.) is said to be under direct management when the local authority concerned operates and manages the service itself: a public structure (a régie publique) provides the service. It is said to be under delegated management when the local authority entrusts this service to a company, generally a private one, which operates under its control. The private company is most often selected through a call for tenders, which implies putting the candidates for the management of the service in competition with one another.
1. From a personal point of view, would you say that delegated management is a good thing? [1: No, practically never; 2: Yes, but only in a few cases; 3: Yes, in most cases; 4: Yes, in the large majority of cases]
2. Does resorting to a contract with a private company to manage a local service seem to you to be a way to... [1: No, practically never; 2: Yes, but only in a few cases; 3: Yes, in most cases; 4: Yes, in the large majority of cases]
3. From a purely personal point of view, do you think that the State's expropriation decision was justified? [1: strongly disagree; 2: somewhat disagree; 3: indifferent; 4: somewhat agree; 5: strongly agree]
4. If you had the possibility of compensating the investing company, you would propose compensation amounting to (only one answer possible):
Table 1: Sample size of respondents vs. overall student population.
Note: The table only includes students from the Le Havre and Paris campuses, and only College or Master's students. Third year students participate in a mandatory study abroad program, which explains why they did not participate in the survey. "Other students" includes Master's students who had a special status, and who were not registered in a specific year.
Columns: (1) all students, overall; (2) all students, respondents; (3) French students only, overall; (4) French students only, respondents; (5) French students only, respondents with career specified.
1st year students 933 287 794 261 155
2nd year students 2,587 243 831 228 131
3rd year students 1,046 2 940 0 0
4th year students 2,711 438 1,726 372 203
5th year students 2,755 453 2,027 392 249
Other students 10 0 5 0 0
Observations 10,042 1,423 6,323 1,253 738
Table 2: Descriptive statistics of respondents vs. overall student population, by Master's degree and admissions program.
All students Only French students
Overall Respondents Overall Respondents
All Career specified
(1) (2) (3) (4) (5)
Panel A: Field of Master's degree (graduate students only)
Business 12.64% 13.13% 15.49% 13.87% 13.27%
Economics & finance 15.39% 15.26% 16.03% 15.05% 17.26%
Environment 1.77% 2.47% 1.70% 2.49% 2.21%
European affairs 4.58% 3.37% 3.84% 2.62% 3.10%
History 0.81% 1.80% 1.14% 1.96% 1.55%
International Affairs 20.27% 18.52% 11.82% 14.53% 11.73%
Journalism 1.66% 1.80% 2.07% 1.83% 0.66%
Law 8.65% 9.88% 10.28% 9.82% 10.18%
Other 6.08% 0.00% 0.47% 0.00% 0.00%
Political science 2.18% 3.03% 2.37% 3.40% 3.1%
Public Affairs 19.47% 20.65% 26.18% 23.82% 26.11%
Sociology 0.61% 1.35% 0.90% 1.44% 1.99%
Urban 5.89% 8.75% 7.71% 9.16% 8.85%
Total 100.00% 100.00% 100.00% 100.00% 100.00%
Observations 4,696 891 2,997 764 452
Panel B: Admissions program (all students)
Undergraduate national 41.73% 62.33% 65.10% 69.91% 71.54%
Undergraduate international 9.28% 9.49% 5.80% 5.59% 4.47%
Undergraduate priority 6.45% 8.29% 9.77% 9.18% 8.94%
International exchange 18.65% 0.07% 0.60% 0.00% 0.00%
Master's national 10.50% 12.02% 15.42% 13.01% 12.60%
Master's international 10.58% 6.89% 2.56% 2.08% 2.17%
Other 2.81% 0.91% 0.74% 0.24% 0.27%
Total 100.00% 100.00% 100.00% 100.00% 100.00%
Observations 10,042 1,423 6,323 1,253 738
Table 3: Descriptive statistics of reasons driving students to choose the private sector.
Want to take public exams? Reason Strongly Disagree Disagree Indifferent Agree Strongly Agree
Competence 10% 33% 29% 23% 5%
Motivation 10% 25% 30% 28% 7%
Yes Flexibility 3% 26% 20% 42% 8%
Entrepreneurship 1% 10% 20% 49% 20%
Wage 0% 1% 2% 40% 57%
Competence 3% 28% 27% 29% 13%
Motivation 3% 20% 28% 35% 14%
No Flexibility 3% 31% 15% 38% 13%
Entrepreneurship 1% 6% 15% 45% 33%
Wage 1% 3% 3% 39% 55%
Table 4: Perception of the factors driving students to choose the private sector according to the prospective sector of work.
Reason Ordered Probit Estimated Effect t-stat p-value Mean comparison p-value
Wage .111 1.247 .212 .107
Flexibility -.029 -.361 .718 .931
Entrepreneurship -.322 -3.981 <0.001 <0.001
Competence -.416 -5.268 <0.001 <0.001
Motivation -.39 -4.951 <0.001 <0.001
Table 5: Descriptive statistics of questions associated with Distrust in Companies.
Want to take public exams? Reason Strongly Disagree Disagree Indifferent Agree Strongly Agree
Union representatives should benefit from extra protection against being fired. 14% 24% 8% 35% 18%
Employees should have a stronger role in the company's decision-making process. 2% 9% 11% 44% 35%
Controls of labor law enforcement are currently sufficient in France. 13% 39% 21% 22% 5%
Yes Thresholds above which union representation becomes mandatory in the company are too high. 13% 25% 30% 25% 8%
Layoffs should be banned when companies make profits. 16% 30% 12% 31% 11%
The government should legislate to limit employers' excessive remunerations. 7% 20% 10% 32% 31%
Union representatives should benefit from extra protection against being fired. 14% 27% 12% 35% 12%
Employees should have a stronger role in the company's decision-making process. 2% 8% 11% 45% 33%
Controls of labor law enforcement are currently sufficient in France. 9% 36% 28% 23% 4%
No Thresholds above which union representation becomes mandatory in the company are too high. 10% 25% 33% 24% 8%
Layoffs should be banned when companies make profits. 20% 30% 13% 28% 8%
The government should legislate to limit employers' excessive remunerations. 12% 16% 15% 33% 24%
Table 6: Descriptive statistics of questions associated with Easy to Conduct Business.
Want to take public exams? Reason Strongly Disagree Disagree Indifferent Agree Strongly Agree
Procedures to fire an employee should be made easier for the employer. 16% 31% 14% 31% 9%
It is currently easy to create a company in France. 12% 43% 16% 25% 4%
Procedures to create a new business should be made easier. 1% 6% 17% 43% 33%
Yes Procedures to hire an employee should be simplified. 1% 9% 11% 46% 33%
Labor costs are contributing to high unemployment in France. 8% 27% 9% 38% 19%
It is currently easy to find funds to open a business in France. 9% 42% 20% 25% 4%
It is currently easy for a young entrepreneur in France to obtain legal advice and support to start a business. 4% 40% 21% 32% 2%
Procedures to fire an employee should be made easier for the employer. 9% 26% 20% 32% 12%
It is currently easy to create a company in France. 16% 44% 13% 23% 5%
Procedures to create a new business should be made easier. 1% 3% 15% 43% 38%
No Procedures to hire an employee should be simplified. 0% 5% 15% 46% 33%
Labor is too costly, which currently contributes to high unemployment in France. 7% 22% 10% 40% 21%
It is currently easy to find funds to open a business in France. 9% 48% 19% 21% 3%
It is currently easy for a young entrepreneur in France to obtain legal advice and support to start a business. 6% 39% 26% 26% 3%
Table 7: Attitudes towards the private sector according to the prospective sector of work. Means in plain text, standard errors in parentheses. The p-value corresponds to a two-group mean comparison test. Note: The sample of students consists of 304 respondents who declared their intention to become civil servants, and 435 who said the opposite.
Dimension Private Sector Public Sector p-value
Distrust in companies -.014 .097 .088
(.041) (.05)
Easy to conduct business -.067 .084 .017
(.039) (.051)
Table 8: Descriptive statistics of questions associated with Provision of public goods.
Want to take public exams? Reason Never Sometimes Most often Always
Delegated Management 9% 31% 48% 12 %
Yes Cost Reduction 9% 24% 37% 30%
Innovation 14% 31% 34% 21%
Quality Improvement 18% 38% 29% 16%
Delegated Management 10% 29% 46% 15%
No Cost Reduction 12% 22% 34% 33%
Innovation 10% 29% 30% 30%
Quality Improvement 16% 30% 29% 25%
Note: The sample of students consists of 277 respondents who declared their intention to become civil servants,
and 385 who said the opposite.
Table 9: Perception of private provision of public goods according to the prospective sector of work.
Arguments for/against PPP Ordered Probit Estimated Effect t-stat p-value Mean comparison p-value
Delegated Management -.032 -.381 .703 .745
Reduce Cost -.01 -.121 .903 .956
Foster Innovation -.224 -2.651 .008 .010
Service Quality -.209 -2.479 .013 .012
Legitimate Expropriation .212 2.544 .011 .020
Expropriation Compensation -.109 -1.313 .189 .239
Engagements -.204 -1.067 .286 .286
Nationalize .682 3.163 .002 .002
Vital Good .301 1.446 .148 .149
Devaluation -.312 -1.455 .146 .146
PCA -.43 -3.193 .001 .001
Table 10: Descriptive statistics of levels of trust in institutions according to the prospective sector of work.
Intend to take exams for civil servants?
Institution No Yes p-value
National Assembly 5.771 (.099) 6.25 (.115) .002
Senate 5.493 (.104) 5.868 (.126) .022
Legal System 6.734 (.094) 6.928 (.107) .179
Police 5.878 (.103) 6.079 (.119) .207
Judges 6.835 (.097) 7.247 (.104) .005
Constitutional Council 7.078 (.105) 7.359 (.117) .078
Administrative Supreme Court 6.826 (.103) 7.385 (.104) <0.001
PCA 6.423 (.076) 6.783 (.079) .001
Means in plain text, standard errors in parentheses.
Note: The sample of students consists of 304 respondents who declared their in-
tention to become civil servants, and 436 who said the opposite.
Table 11: Descriptive statistics of reasons driving students to choose the public sector. Note: The sample of students consists of 304 respondents who declared their intention to become civil servants, and 436 who said the opposite.
Want to take public exams? Reason Strongly Disagree Disagree Indifferent Agree Strongly Agree
Lower Workload 29% 45% 14% 11% 2%
Easy Family 13% 30% 22% 29% 6%
Yes Greater Security 4% 13% 15% 46% 22%
Social Gratification 2% 14% 20% 43% 21%
Change Society 2% 12% 13% 48% 26%
Satisfaction 0% 4% 12% 53% 31%
Lower Workload 18% 41% 22% 16% 3%
Easy Family 10% 23% 32% 29% 6%
No Greater Security 3% 11% 13% 49% 24%
Social Gratification 2% 13% 23% 45% 17%
Change Society 4% 17% 17% 48% 15%
Satisfaction 1% 7% 18% 51% 22%
Table 12: Perception of the factors driving students to choose the public sector according to the prospective sector of work.
Reason Ordered Probit Estimated Effect t-stat p-value Mean comparison p-value
Lower Workload -.329 -4.098 <0.001 <0.001
Easy Family -.127 -1.619 .105 .085
Greater Security -.103 -1.284 .199 .178
Social Gratification .063 .795 .427 .519
Change Society .334 4.142 <0.001 <0.001
Public Affairs Satisfaction .291 3.54 <0.001 <0.001
Table 13: Robustness check for the Common Source Bias: Results of regressions of the dependent variable (first column) on the proportion of students in the Master's program who end up working in the public sector.
Dependent Variable Coefficient T-stat P-value
Competent Teams -.0042 -.985 0.324
Motivation -.0022 -.479 0.632
Flexibility .0038 1.009 0.313
Entrepreneurship -.0048 -1.668 0.095
Higher Wage .0051 1.595 0.111
Protection of unionists .0082 2.521 0.040
Participation of employees .0045 .674 0.522
Controls of law enforcement .0019 .631 0.548
Representation threshold -.0023 -.378 0.717
Limited firing if profits .0007 .077 0.941
Limitation of remuneration .0079 .844 0.426
Facilitation of firing -.0092 -1.509 0.175
Easy to create new firm .0027 1.318 0.229
Facilitation of new firms .0013 .42 0.687
Facilitation of hiring .0026 1.693 0.134
Labor is too costly -.0044 -.767 0.468
Easy to get funds .0044 .969 0.365
Easy to get counsel .0024 .901 0.398
Delegated Management .0019 .36 0.719
Service Quality -.002 -.298 0.765
Foster Innovation -.0017 -.332 0.740
Reduction of Costs .0007 .143 0.886
Legitimate Expropriation .0117 2.124 0.034
Commitments -.0078 -.683 0.516
Nationalize .026 2.505 0.041
Vital Good .0186 1.667 0.140
Devaluation .01 .695 0.509
National Assembly .022 3.408 0.011
Senate .0251 4.158 0.004
Legal System .0173 1.781 0.118
Police .0121 1.331 0.225
Judges .0202 2.444 0.045
Constitutional Council .0199 1.622 0.149
Administrative Supreme Court .0368 2.61 0.035
Greater Security .0012 .457 0.648
Public Affairs Satisfaction .0132 5.601 0.648
Social Gratification .0084 3.229 0.001
Change Society .0108 5.89 0.000
Lower Workload -.0105 -1.867 0.062
Easy Family .0015 .337 0.736
The coefficients correspond to the estimated difference between students who plan to work in the public sector relatively to those who want to work in the private sector. The econometric specification is either a linear model or an ordered choice model, depending on the nature of the dependent variable. Note: Standard errors are clustered at the graduate school level.
Aspiring top civil servants' distrust in the private sector. A. Boring, C. Desrieux and R. Espinosa.
Online Appendix Questionnaire
Part 1: Public service and the private sector
1. On a scale from zero to ten, what is your level of trust in the following institutions? [0: no trust at all, 10: complete trust]
For instance, in France, civil servants have a large influence on the law-making process when they write the content of new laws that are debated in Parliament (Chevallier, 2011). They also have power over how new laws are to be interpreted and implemented.
http://www.sciencespo.fr/public/fr/actualites/ena-82-des-nouveaux-admis-viennent-de-sciences-po
We included only students who were on the Paris or Le Havre campuses, and who were not in an executive education program. The 1,255 observations include the responses by two students who were post-graduate students.
In a robustness check, we further explore the issue of selection through an attrition bias. We find no effect.
The analysis includes all French respondents.
Some Master's students completed their undergraduate studies at another university before being admitted to Sciences Po.
We are interested in the overall difference in the beliefs of the two types of students. We therefore care about the differences between the two groups, regardless of the differences in the composition of the groups. The two groups of students might differ on several dimensions (such as social background, wealth, grades) that might explain their different beliefs regarding the private and public sectors. However, these differences of composition across students will result in differences of composition between public and private workers, which is the object of interest. We therefore estimate by univariate analyses the overall difference between prospective civil servants and private sector workers, regardless of the differences of composition.
The first axis of the PCA explains 38.7% of the total variations. It is positively correlated with all dimensions, except for the controls on labor law enforcement, as our ideal point estimation assumes.
The first axis of the PCA associated with the second set of variables explains 31% of the variations. The sign of the correlations between the first axis and the associated variables corresponds to the signs assumed in the ideal point estimation.
In our context, delegated management refers to the decision of a public authority to contract out the management of a public service to a private company for a given period a time.
The first dimension is positively correlated with all variables except Legitimate Expropriation, Nationalization, and Necessary Good.
The first axis of the PCA is indeed positively correlated with all dimensions except Legitimate Expropriation, Nationalization and Necessary Good.
The PCA's first dimension is positively correlated with all answers (National Assembly: ρ = 0.3558; Senate: ρ = 0.3663; Legal System: ρ = 0.4101; Police: ρ = 0.3103; Judges: ρ = 0.3811; Constitutional Council: ρ = 0.3981; Supreme Administrative Court: ρ = 0.4136. The first dimension explains 54.23% of the total variations.
More precisely, it provides quantitative indicators on regulation for starting a business, dealing with construction permits, getting electricity, registering property, obtaining loans, protecting minority investors, paying taxes, trading across borders, enforcing contracts, and resolving insolvency. Doing Business also measures features of labor market regulations.
For comparison, Germany ranks ...th and the UK is 7th. Source: http://www.doingbusiness.org/data/exploreeconomies/france
For debates about judges' preferences in French labor courts, see [START_REF] Espinosa | Constitutional Judicial Behavior: Exploring the Determinants of the Decisions of the French Constitutional Council[END_REF]; Desrieux and Espinosa (2017, 2018).
http://blogs.worldbank.org/ppps/good-decisions-successful-ppps
Researchers anticipated the possibility that some questions could be correctly answered by low-skilled students and wrongly answered by high-skilled students. |
01697097 | en | [
"info.info-im"
] | 2024/03/05 22:32:10 | 2013 | https://hal.univ-reims.fr/hal-01697097/file/healthcom.pdf | Sylvia Piotin
Aassif Benassarou
Frédéric Blanchard
Olivier Nocent
Eric Bertin
email: ebertin@chu-reims.fr
Éric Bertin
Abdominal Morphometric Data Acquisition Using Depth Sensors
I. INTRODUCTION
Nutrition and eating disorders have become a national public health priority. In 2001, France launched the National Health and Nutrition Programme (PNNS 1 ) which aims to improve the health status of the population by acting on one of its major determinants: nutrition. At the regional level, Champagne-Ardenne particularly suffers from obesity. According to the 2012 study ObEpi [START_REF]ObÉpi -enquête épidémiologique nationale sur le surpoids et l'obésité[END_REF], the Champagne Ardenne region has experienced the highest increase in prevalence of obesity between 1997 and 2012 (+145,9%) and became the second region most affected behind the Nord Pas-de-Calais region, with an obesity rate of 20,9% (the national average is 15%). Within this context, the study of eating behaviors is an important issue for the understanding and prevention of disorders and diseases related to food. Building typologies of patients with eating disorders would help to better understand these diseases, thus allowing their prevention.
We plan to develop a new acquisition pipeline in order to identify new objective variables based on morphological parameters like abdominal diameter, body surface area, etc. While a recent report by the French Academy of Medicine on unnecessarily prescribed tests highlights the generalized use of heavy and expensive imaging, the novelty of our approach lies in the use of a consumer electronics device (the Microsoft Kinect sensor was initially dedicated to the Microsoft Xbox 360 game console). This device has the advantage of being inexpensive, lightweight and less intrusive than conventional medical equipment. Even if these devices have been used in eHealth projects, their use has often been limited to the adaptation of successful video games to the medical environment: physical training programs to act against obesity [START_REF] Nitz | Is the Wii Fit TM a newgeneration tool for improving balance, health and well-being? A pilot study[END_REF], rehabilitation programs [START_REF] Huang | Kinerehab: a kinect-based system for physical rehabilitation: a pilot study for young adults with motor disabilities[END_REF]. Within our project, the device is used as a measurement tool to collect morphological information in order to enrich or to confront the information extracted from surveys filled in by patients. Beyond its cost, this device can also be easily deployed in patients' homes or in medical practitioners' offices, allowing monitoring on a regular basis (Figure 1).
Figure 1. Presentation of our mobile system
This paper presents the acquisition and analysis methodology that we have implemented. The results presented were obtained on a sample of seventeen healthy patients without any medical context. The main objective was to establish a proof of concept.
After an overview of the uses of Kinect TM like sensors in a medical context, we present our abdominal morphology acquisition system. In the next section, the computation of quantitative indicators from raw data is exposed. Finally, we propose a first statistical analysis before concluding and presenting our future works.
II. RELATED WORK
The Microsoft R Kinect TM peripheral, as well as its recent competitor ASUS R Xtion TM , are made of an RGB camera, an IR emitter and an IR sensor. The latter is able to evaluate a depth map aligned with the video frame. Based on these two sources of data, it is possible to reconstruct people facing the device in 3D [START_REF] Shotton | Real-time human pose recognition in parts from single depth images[END_REF], [START_REF] Weiss | Home 3D body scans from noisy image and range data[END_REF].
The affordable cost of these new peripherals is probably the reason why they are inspiring so many projects. For instance, this technology is increasingly involved in healthcare: apart from their use in fall risk assessment in the elderly population [START_REF] Stone | Evaluation of an inexpensive depth camera for passive in-home fall risk assessment[END_REF], these cameras are also employed in motor rehabilitation programs [START_REF] Chang | A Kinect-based system for physical rehabilitation: A pilot study for young adults with motor disabilities[END_REF], [START_REF] Da Gama | Poster: Improving motor rehabilitation process through a natural interaction based system using Kinect sensor[END_REF], or for the improvement of workplace ergonomics [START_REF] Dutta | Evaluation of the Kinect TM sensor for 3-D kinematic measurement in the workplace[END_REF].
Not necessarily calling upon motion capture techniques, others use these cameras to quickly collect anthropometric data. Thanks to usual statistical tools such as principal component analysis, it is possible to extrapolate different morphological dimensions from a few measurements [START_REF] Samejima | A body dimensions estimation method of subject from a few measurement items using KINECT[END_REF]. One can also deduce precisely the position of the center of mass of an individual if one combines a Kinect and a Wii Balance Board, popularized by the video game Wii Fit [START_REF] Gonzalez | Estimation of the center of mass with Kinect and Wii balance board[END_REF]. Finally, Velardo and Dugelay have developed an automated system capable of providing nutritional advice depending on the body mass index and basal metabolism rate calculated from parameters either measured or statistically induced from measurements [START_REF] Velardo | What can computer vision tell you about your weight?[END_REF].
III. ACQUISITION ACCURACY
Since the popularization of RGB-D devices (RGB + depth), many metrological studies have dealt with the quality of acquisition [START_REF] Gonzalez-Jorge | Metrological evaluation of Microsoft Kinect and Asus Xtion sensors[END_REF]: accuracy (difference between the measured and the real value) ranges from 5 mm to 15 mm at a distance of 1 m, from 5 mm to 25 mm at a distance of 2 m. Even if these lightweight tracking systems do not really compete with more cumbersome ones (such as Vicon's3 ), they can be a low cost alternative solution in many applications where high accuracy (less than 1 mm) is not a matter of concern.
We also performed our own experiments. We scanned a wood panel with a 700 mm long diagonal at three different distances (800 mm, 1600 mm and 2400 mm). The values shown in tables I and II represent the mean value of ten consecutive measures. The improved accuracy observed, in comparison with the aforementioned results, can be explained by the use of the mean value of several measurements. According to a framerate of 30 frames per second, a measure can be performed in a third of a second following our method. This is not really an issue since we are measuring static poses of patients. In our study, we use the 3D reconstruction and pose detection capabilities of RGB-D devices to measure in real time several morphological characteristics of a patient (size, bust waist and hip measurements, shape of the abdomen, . . . ). These data produced by the acquisition system can be compared to the patient's responses provided during the Stunkard's test: this test is to ask the patient about his perception (often subjective) of his morphology and ask him to lie within a range of silhouettes [START_REF] Stunkard | Use of the Danish Adoption Register for the study of obesity and thinness[END_REF].
With the OpenNI library [START_REF] Openni | OpenNI framework[END_REF] and the NiTE middleware [START_REF] Nite | NiTE middleware[END_REF], we are able to identify the pixels corresponding to each user located in the sensor field. With this information, we reconstruct the visible body surface in 3D space. Since NiTE also allows us to track the skeleton of each individual, we can position planes on the abdomen: the sagittal plane and the transverse plane. By calculating the intersection between the surface and each plane we get a sagittal profile and a transverse profile (Figure 2). We can record these profiles in the patient's medical file to establish a follow-up. These profiles are also stored in an anonymized database to conduct a statistical analysis.
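As an illustration of this step, the cut between the reconstructed body surface and one of the two planes can be obtained by clipping every triangle of the mesh against the plane. The sketch below assumes the visible surface is available as NumPy vertex and triangle arrays (hypothetical names; the actual pipeline relies on OpenNI/NiTE and is not reproduced here):

```python
import numpy as np

def plane_profile(vertices, triangles, plane_point, plane_normal):
    """Intersect a triangle mesh with a cutting plane and return the cut segments.

    vertices: (n, 3) array of mesh vertices, triangles: (m, 3) integer array.
    plane_point / plane_normal define the sagittal or transverse plane.
    Degenerate cases (vertices lying exactly on the plane) are ignored here.
    """
    n = plane_normal / np.linalg.norm(plane_normal)
    d = (vertices - plane_point) @ n          # signed distance of each vertex
    segments = []
    for tri in triangles:
        crossings = []
        for a, b in ((tri[0], tri[1]), (tri[1], tri[2]), (tri[2], tri[0])):
            da, db = d[a], d[b]
            if da * db < 0.0:                 # this edge crosses the plane
                t = da / (da - db)
                crossings.append(vertices[a] + t * (vertices[b] - vertices[a]))
        if len(crossings) == 2:
            segments.append((crossings[0], crossings[1]))
    return segments
```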
V. PARAMETER EXTRACTION
The previous step of acquisition provides two profiles from the intersection between the body surface and the sagittal and transverse planes. These profiles, composed of segments joining points from the reconstruction, are relatively noisy. A first step of smoothing using spline interpolation can adjust these geometric data and make them more "understandable" and exploitable (Figure 3).
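The exact spline model is not specified here; as a minimal sketch, a smoothing parametric spline from SciPy can be fitted to an ordered profile polyline (the smoothing factor and the number of samples below are arbitrary illustrative values):

```python
import numpy as np
from scipy.interpolate import splprep, splev

def smooth_profile(points, smoothing=5.0, samples=200):
    """Fit a smoothing parametric spline to an ordered 2D profile polyline.

    points: (n, 2) array of noisy profile points expressed in the cutting plane.
    smoothing: trade-off between closeness to the data and smoothness.
    Returns a (samples, 2) array of points sampled on the fitted spline.
    """
    tck, _ = splprep([points[:, 0], points[:, 1]], s=smoothing)
    u = np.linspace(0.0, 1.0, samples)
    xs, ys = splev(u, tck)
    return np.column_stack([xs, ys])
```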
After the first treatment we already have, for each individual, a first visual "signature" of the abdominal morphology (Figure 4). The interest of these visual signatures is to provide a simplified graphical representation to enable a fast and synthetic visualization. On the other hand, these visual descriptions facilitate description and interpretation of clusters when creating typologies by clustering. Finally, they have a major interest in monitoring the evolution of the patient by the doctor.
The profiles obtained during the acquisition are also used to extract features and more "conventional" measurements. It is possible, for example, from the smoothed cross-section, to calculate various lengths that are good estimations of the abdominal dimensions of the subject. In this study, we have limited ourselves, for example, to calculate the diameter (d H and d V ), the height (h H and h V ) and the length (l H and l V ) of each profile (transverse and sagittal) (Figure 5). Other calculations of lengths or surfaces are possible [START_REF] Lu | Automated anthropometric data collection using 3d whole body scanners[END_REF], [START_REF] Yu | The 3d scanner for measuring body surface area: a simplified calculation in the chinese adult[END_REF], [START_REF] Lin | Comparison of three-dimensional anthropometric body surface scanning to waist-hip ratio and body mass index in correlation with metabolic risk factors[END_REF]. The choice of these numerical descriptors of the abdominal morphology defines a parameter space in which subjects are represented. In this study, individuals who have lent to the experience are represented in a space of dimension 6 defined by the variables
d H , d V , h H , h V , l H and l V (Table III).
VI. TOWARDS A TYPOLOGY OF ABDOMINAL MORPHOLOGIES
After describing individuals in this parameter space, our goal is to automatically extract groups of abdominal morphologies by clustering.
We first proceed to a principal component analysis to reduce the dimensionality of the problem and to project people in a subspace whose components are uncorrelated. The representation of subjects in the factorial subspace (related to the first two principal components) allows us to visualize the similarities between individuals (Figure 6).
First, clustering is achieved, using the k-medoids algorithm. The algorithm is used in the space of the first 3 factors (consideration of the eigenvalues shows that the first 3 principal components explain 94% of the inertia) and it is set to search for 3 clusters (after reading the dendrogram obtained by hierarchical clustering).
The statistical description of obtained clusters is presented in the table IV. The medoids produced by the clustering algorithm provide representative individuals of each cluster (Table V). For a more detailed extraction of representative individuals (or exemplars) we can use the method described in [START_REF] Blanchard | Data representativeness based on fuzzy set theory[END_REF].
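As a hedged illustration of this analysis chain, the projection onto the first principal components and a naive PAM-style k-medoids can be written in a few lines of NumPy. This is not the authors' implementation (a dedicated statistical package would normally be used); variable names and the usage lines are assumptions.

```python
import numpy as np

def pca_scores(X, n_components=3):
    """Project centered observations onto their first principal components (via SVD)."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

def k_medoids(X, k=3, iters=100, seed=0):
    """Naive alternating k-medoids (PAM-like) based on Euclidean distances."""
    rng = np.random.default_rng(seed)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)   # pairwise distances
    medoids = rng.choice(len(X), size=k, replace=False)
    for _ in range(iters):
        labels = np.argmin(D[:, medoids], axis=1)
        new_medoids = medoids.copy()
        for j in range(k):
            members = np.where(labels == j)[0]
            if len(members):
                # keep the member minimising the total distance to its cluster
                new_medoids[j] = members[np.argmin(D[np.ix_(members, members)].sum(axis=1))]
        if np.array_equal(new_medoids, medoids):
            break
        medoids = new_medoids
    return medoids, labels

# hypothetical usage on the 6 morphometric descriptors (one row per subject):
# scores = pca_scores(descriptors, n_components=3)
# medoid_ids, labels = k_medoids(scores, k=3)
```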
VII. CONCLUSION
In this paper, we presented a software prototype able to acquire abdominal morphological parameters using a consumer electronics depth sensor. This prototype is lightweight and inexpensive, and the acquisition is quite insensitive to the capture conditions (this type of sensor is designed to operate in most environments, in private homes).
We also proposed an algorithmic solution to analyze collected data. The results presented in this paper allowed us to validate the principle and the calculation methods of our tool. The aim was not to draw medical conclusions but to establish a proof of concept. Our prototype will now be used in a clinical context dealing with obesity and eating behaviors. The perspectives of this work are multiple. In order to provide medical practitioners the ability to make a diagnosis, we started to combine this tool with the web framework that we developed [START_REF] Nocent | Toward an immersion platform for the World Wide Web using 3D displays and tracking devices[END_REF]. It already supports RGB-D cameras and can transmit the skeletons of users to a distant browser. Provided we also stream the acquired profiles and the descriptors extracted from the analysis, the practitioner may remotely have all the information he needs. So far, our system performs various measurements from a single capture. Using 3D scanning techniques [START_REF] Curless | A volumetric method for building complex models from range images[END_REF], we could capture the entire body surface of a subject, either by moving the camera around, like KinectFusion [START_REF] Izadi | KinectFusion: real-time 3D reconstruction and interaction using a moving depth camera[END_REF], or by combining several sources simultaneously, as is the case with OmniKinect [START_REF] Kainz | Om-niKinect: real-time dense volumetric data acquisition and applications[END_REF]. Finally, the data analysis should be extended to enable the construction of typologies adapted to the studied diseases, and a much larger amount of data.
Figure 2. Acquisition of the sagittal and transverse profiles. The individual is highlighted and the two profiles are overprinted on his abdomen.
Figure 3.
Figure 4.
Figure 5. Examples of morphometric indicators calculated from the sagittal and transverse profiles.
Figure 6. Representation, in the factorial design, of individuals and clusters obtained by the k-medoids algorithm.
Table IV: Statistical description of clusters.
param. cluster mean sd median min max
d H 1 324.76 16.72 323.99 304.31 348.24
d H 2 275.24 23.53 272.49 253.21 300.02
d H 3 280.81 25.16 289.61 229.57 308.40
h H 1 84.39 7.55 82.27 75.24 95.75
h H 2 56.43 7.32 54.46 50.29 64.53
h H 3 68.30 14.86 66.71 53.23 100.68
l H 1 382.20 19.19 383.02 362.60 412.56
l H 2 301.91 23.29 308.42 276.06 321.26
l H 3 323.83 18.97 331.72 283.72 344.00
d V 1 259.31 64.15 268.89 156.98 330.10
d V 2 222.77 13.79 224.42 208.23 235.66
d V 3 320.27 37.07 320.30 274.25 384.82
h V 1 20.91 15.58 15.62 9.29 46.73
h V 2 7.75 4.89 9.56 2.21 11.49
h V 3 26.36 18.62 16.55 11.87 64.84
l V 1 266.88 65.20 273.02 159.15 333.59
l V 2 226.12 10.57 227.48 214.94 235.95
l V 3 329.85 38.60 340.40 278.51 386.62
Table V: Representative individuals of obtained clusters (individuals whose identifiers are 14, 8 and 16 are respectively the medoids of clusters 1, 2 and 3).
Cluster id d H h H l H d V h V l V
1 14 332.15 82.27 383.46 288.23 23.58 296.57
2 8 272.49 64.53 308.42 208.23 11.49 214.94
3 16 302.74 62.13 331.72 278.60 30.04 289.58
http://www.mangerbouger.fr/pnns/
http://www.academie-medecine.fr
http://www.vicon.com
ACKNOWLEDGEMENTS
The authors would like to warmly thank the volunteers who helped them test and improve their prototype. |
01759560 | en | [
"info.info-gr",
"info.info-hc"
] | 2024/03/05 22:32:10 | 2012 | https://hal.univ-reims.fr/hal-01759560/file/paper.pdf | Olivier Nocent
email: olivier.nocent@univ-reims.fr
Crestic Sic
Sylvia Piotin
Jaisson Maxime
Laurent Grespi
Crestic Lucas
Sic
Toward an immersion platform for the World Wide Web using autostereoscopic displays and tracking devices
Keywords: I.3.1 [Computer Graphics]: Hardware Architecture-Three-dimensional displays, I.3.2 [Graphics Systems]: Distributed/network graphics-, I.3.3 [Computer Graphics]: Picture/Image Generation-Display algorithms, I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism-Virtual reality, J.3 [Life and Medical Sciences]: Medical information systems-, Web graphics, 3D perception, autostereoscopic displays, natural interaction
Figure 1: Autostereoscopic technology allows 3D images popping out of the screen while natural interaction provides a seamless manipulation of 3D contents.
Introduction
A recurrent and key issue for 3D technologies resides in immersion. 3D web technologies try to reach the same goal in order to enhance the user experience. Interaction and depth perception are two factors that significantly improve the feeling of immersion. But these factors rely on dedicated hardware that cannot be addressed through JavaScript for security reasons. In this paper, we present an original way to interact with hardware via a web browser, using web protocols, by providing an easy-to-use immersion platform for the World Wide Web. This plugin-free solution leveraging the brand new features of HTML5 (WebGL, WebSockets) makes it possible to handle autostereoscopic displays for immersion and different types of tracking devices for natural interaction (Figure 2). Because WebGL is a low-level API, we decided to develop the WebGLUT (WebGL Utility Toolkit) API on top of WebGL. WebGLUT enhances WebGL by providing extra features like linear algebra data structures (vectors, matrices, quaternions), triangular meshes, materials, multiview cameras to handle autostereoscopic displays, controllers to address different tracking devices, etc. WebGLUT was created at the same time (and with the same philosophy) as WebGLU [START_REF] Delillo | WebGLU development library for WebGL[END_REF] and SpiderGL [Di Benedetto et al. 2010].
Our contribution is organized as follows: Section 2 presents work related to 3D content on the web and to 3D displays. Section 3 describes the autostereoscopic display technology by providing equations and algorithms to generate multiple views. Section 4 is dedicated to our network-based tracking system. Finally, Section 5 briefly presents a case study related to medical imaging using our immersion platform.
Related work
Web browsers have acquired over the past years the ability to efficiently incorporate and deliver different kinds of media. 3D content can be considered as the next evolution to these additions although requirements of 3D graphics in terms of computational power and unifying standard are more restrictive than still images or videos.
Several technologies have been developed to achieve this integration. The Virtual Reality Markup Language (VRML) [START_REF] Raggett | Extending WWW to support platform independent virtual reality[END_REF]] replaced afterward by X3D [START_REF] Brutzman | X3D: Extensible 3D Graphics for Web Authors[END_REF] was proposed as a text-based format for specifying 3D scenes in terms of geometry and material properties and for the definition of basic user interaction. Even if the format itself is a standard, the rendering within the web browser usually relies on proprietary plugins. But the promising X3DOM [START_REF] Behr | X3DOM: a DOM-based HTML5/X3D integration model[END_REF]] initiative aims to include X3D elements as part of the HTML5 DOM tree. More recently, the WebGL [Khronos Group 2009] API was introduced to provide imperative programming mechanisms to display 3D contents in a more flexible way. As its name suggests, WebGL is the JavaScript analogous to the OpenGL ES 2.0 API for C/C++. It provides capabilities for displaying 3D content within a web browser which was previously the exclusive domain of desktop environments. Leung and Salga [START_REF] Leung | Enabling WebGL[END_REF] emphasize the fact that WebGL gives the chance to not just replicate desktop 3D contents and applications, but rather to exploit other web features to develop richer content and applications. In this context, web browsers could become the default visualization interface [START_REF] Mouton | Collaborative visualization: current systems and future trends[END_REF]].
Even if real-time 3D rendering has become a common feature in many applications, the resulting images are still bidimensional. Nowadays, this limitation can be partly overcome by the use of 3D displays that significantly improve depth perception and the ability to estimate distances between objects. Therefore, the content creation process needs to be reconsidered. For computer generated imagery, the rendering system can seamlessly render one or more related views depending on the application [START_REF] Abildgaard | An autostereoscopic 3D display can improve visualization of 3D models from intracranial MR angiography[END_REF], [START_REF] Benassarou | Autostereoscopic visualization of 3D time-varying complex objects in volumetric image sequences[END_REF]. 3D contents are obviously clearer and more usable than 2D images because they do not involve any inference step. Finally, 3D contents can also address new emerging devices like 3D smartphones [START_REF] Harrold | Autostereoscopic display technology for mobile 3DTV applications[END_REF] and mobile 3DTV, offering new viable platforms for developing 3D web applications.
Autostereoscopic technology
The term stereoscopy denotes techniques where a separate view is presented to the right and left eye, these separate views inducing a better depth perception. Different solutions exist for the production of these images as well as for their restitution. For image restitution with non time-based techniques, one can use anaglyph and colored filters, polarizing sheets with polarized glasses or autostereoscopic displays [START_REF] Halle | Autostereoscopic displays and computer graphics[END_REF]]. The technology of autostereoscopic displays presents the great advantage to allow multiscopic rendering without the use of glasses. Therefore, the spectator can benefit from a stereoscopic rendering more naturally, and this is especially true for 3D applications in multimedia.
Multiple view computation
The geometry of a single camera is usually defined by its position, orientation and viewing frustum as shown in Figure 3. But in stereo rendering environments, we need two virtual cameras, one for each left/right eye. And for multiview autostereoscopic displays [Prévoteau et al. 2010], we need up to N virtual cameras.
Each virtual camera has a given offset position and its own off-axis asymmetric sheared viewing frustum, the view direction remaining unchanged. The near/far planes are preserved, and a focus plane has to be manually defined where the viewing zones converge. The choice of the focus distance will determine if the objects appear either behind or in front of the screen, providing or not a pop-out effect. Given a perspective projection matrix P, the following calculations allow to identify six parameters l, r, b, t, n, f as defined in the OpenGL glFrustum command: l and r are the left and right coordinates of the vertical clipping planes, b and t are the bottom and top coordinates of the horizontal clipping planes, n and f are the distances to the near and far depth clipping planes. First, the distances n and f (refer to Figure 3 for the geometric signification of these terms) are given by Equation 1.
$$n = \frac{1-k}{2k}\,P_{34}, \qquad f = n\,k, \qquad \text{where } k = \frac{P_{33}-1}{P_{33}+1} \qquad (1)$$
In order to compute l, r, b and t, we need to define the half-width wF (respectively wn) of the image at the focus distance F (respectively at the near distance n) from the horizontal field of view α:
$$w_F = \tan(\alpha/2)\,F, \qquad w_n = \tan(\alpha/2)\,n \qquad (2)$$
where $\tan(\alpha/2) = P_{11}^{-1}$ according to the definition of the projection matrix P.
The viewing frustum shift in the near plane for camera j, with j ∈ {1, . . . , N}, denoted as $s_n^j$, is given by Equation (3), where d is the interocular distance and $w_s$ the physical screen width.
$$s_n^j = \frac{d\,w_n}{w_s}\left((j-1) - \frac{N-1}{2}\right) \qquad (3)$$
Finally,
$$l = -w_n + s_n^j, \qquad r = w_n + s_n^j, \qquad t = \rho\,w_n, \qquad b = -t \qquad (4)$$
where ρ is the viewport aspect ratio. The camera position offset along the horizontal axis is given by s j F , the viewing frustum shift in the focus plane.
$$s_F^j = \frac{d\,w_F}{w_s}\left((j-1) - \frac{N-1}{2}\right) \qquad (5)$$
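A direct transcription of Equations (1) to (5) into code, assuming the projection matrix is given in row-major mathematical convention (P[0, 0] = P11) and using hypothetical parameter names, could look like this:

```python
import numpy as np

def view_frustum(P, j, N, d, w_s, F, rho):
    """Frustum parameters (l, r, b, t, n, f) and camera x-offset for view j.

    P: 4x4 perspective projection matrix, j in {1, ..., N}: view index,
    d: interocular distance, w_s: physical screen width,
    F: focus distance, rho: viewport aspect ratio.
    """
    k = (P[2, 2] - 1.0) / (P[2, 2] + 1.0)       # Eq. (1)
    n = (1.0 - k) / (2.0 * k) * P[2, 3]
    f = n * k
    tan_half = 1.0 / P[0, 0]                     # tan(alpha / 2) = 1 / P11
    w_F, w_n = tan_half * F, tan_half * n        # Eq. (2)
    shift = (j - 1) - (N - 1) / 2.0
    s_n = d * w_n / w_s * shift                  # Eq. (3)
    s_F = d * w_F / w_s * shift                  # Eq. (5)
    l, r = -w_n + s_n, w_n + s_n                 # Eq. (4)
    t, b = rho * w_n, -rho * w_n
    return (l, r, b, t, n, f), s_F
```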
Autostereoscopic image rendering
Thanks to the computations presented in the previous section we are able to produce N separate perspective views from N different virtual cameras. The use of these images depends on the chosen stereoscopic technology. One of the simplest cases relies on quadbuffering, where the images are rendered into left and right buffers independently, the stereo images being then swapped in sync with shutter glasses. Other techniques, which are not time-based, need the different images to be combined in one single image.
Let $I^j$ for $j \in \{1, \dots, N\}$ be the image generated by the virtual camera $j$; the color components $I_c^{final}(x, y)$ for $c \in \{R, G, B\}$ of the pixel $(x, y)$ in the final image are given by:
$$I_c^{final}(x, y) = I_c^{M(x,y,c)}(x, y), \qquad c \in \{R, G, B\} \qquad (6)$$
where M is a mask function and R, G, B stand for red, green and blue channels. As Lenticular sheet displays consist of long cylindrical lenses that focus on the underlying image plane so that each vertical pixel stripe corresponds to a given viewing zone, the function M does not depend on the color component c and is simply given by Equation 7.
$$I_c^{final}(x, y) = I_c^{\,x \bmod N}(x, y) \qquad (7)$$
It is worth noticing that Equation 7 clearly shows that the horizontal resolution of the restituted image is reduced by a factor 1/N compared to the native resolution m of the display. This resolution loss is one of the main drawbacks of autostereoscopic displays, even if it is only limited to 3m/N when using wavelength-selective filters, because each pixel's RGB components correspond to three different view zones [Hübner et al. 2006]. Technically speaking, the WebGL implementation of the autostereoscopic image rendering is a two-step rendering process using deferred shading techniques.
Pass #1 (multiple images rendering) renders N low-resolution images by shifting the viewport along the y-axis. The N images are vertically stored in a single texture via a FrameBuffer Object (FBO).
Pass #2 (image post-processing) renders a window-aligned quad. Within the fragment shader, each color component of the output fragment is computed according to Equation 7where the N images are read from an input 2D texture.
These two rendering passes are encapsulated in the method shoot() of the MultiViewCamera object.
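For reference, a CPU-side equivalent of the interleaving performed by the second pass (Equation 7) can be sketched as follows; in the actual system this computation is done per fragment in a shader, so the code below is only illustrative:

```python
import numpy as np

def interleave_views(views):
    """Combine N rendered views into a single image following Equation (7).

    views: array of shape (N, height, width, 3). Every vertical pixel stripe x
    takes its colour from view (x mod N), whatever the colour channel.
    """
    N, height, width, _ = views.shape
    out = np.empty((height, width, 3), dtype=views.dtype)
    for x in range(width):
        out[:, x, :] = views[x % N, :, x, :]
    return out
```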
Tracking system
Another characteristic of our immersion platform for the World Wide Web resides in the use of tracking devices in order to interact with a 3D scene in a straightforward way. Our aim is to address a large range of tracking devices like mouses, 3D mouses, flysticks or even more recent devices like IR depth sensors (Microsoft R Kinect, ASUS R Xtion Pro). In the same fashion we handle autostereoscopic displays, we propose a plugin-free solution to interact with ad-hoc tracking devices within a web browser by using HTML5 WebSockets [W3C R 2012]. The proprietary ART R DTrack tracking system is an optical tracking system which delivers visual information to a PC in order to be processed. The resulting information, 3D position and orientation of the flystick is then broadcast on every chosen IP address, using the UDP protocol. A PHP server-side script, which can be seen as a WebSocket Server, is running on the web server. The WebSocket server is waiting for UDP datagrams, containing locations and orientations, from the DTrack system. At receipt, the data is parsed and sent via WebSocket to the client using the JSON format. Thanks to this networked architecture, we are able to stream JSON encoded data coming from the tracking system to the web browser. The location and the orientation of the flystick are then used to control the WebGLUT virtual camera.
Using the same approach, we have also imagined a more affordable solution to interact with a 3D scene in an even more straightforward way. Indeed, we use recent IR depth sensors like Microsoft R Kinect and ASUS R Xtion Pro to perform Natural Interaction. Just like the ART R DTrack system, our C++ program acts as a UDP server and, at the same time, collects information about the location and the pose of a user facing an IR depth sensor. Thanks to the OpenNI framework [OpenNI TM 2010], we are able to track the user's body and detect characteristic gestures. This information, streamed over the network using our hardware/software architecture can be used to interact with a 3D scene: move the virtual camera, trigger events, etc. These aspects are exposed within the WebGLUT API through the concept of Controller. A Controller can be attached to any type of Camera. This controller is responsible for updating the properties of the camera (position, orientation, field of view, etc.) depending on its state change. At the time of writing, we manage three types of controllers: MouseController, DTrackController and KinectController.
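The relay idea (UDP datagrams from the tracker parsed and re-broadcast as JSON over WebSockets) can be sketched as follows. This is not the PHP script described above but a hypothetical stand-alone equivalent relying on Python's asyncio and the third-party websockets package (assuming a recent version with single-argument connection handlers); the datagram layout and port numbers are assumptions.

```python
import asyncio
import json
import websockets  # third-party package, assumed available

CLIENTS = set()

class TrackerProtocol(asyncio.DatagramProtocol):
    def datagram_received(self, data, addr):
        # hypothetical datagram layout: "id x y z qx qy qz qw"
        fields = data.decode().split()
        message = json.dumps({
            "id": fields[0],
            "position": [float(v) for v in fields[1:4]],
            "orientation": [float(v) for v in fields[4:8]],
        })
        for ws in list(CLIENTS):
            asyncio.create_task(ws.send(message))

async def handle_client(websocket):
    CLIENTS.add(websocket)
    try:
        await websocket.wait_closed()
    finally:
        CLIENTS.discard(websocket)

async def main():
    loop = asyncio.get_running_loop()
    # listen for the tracker's UDP datagrams (port number is an assumption)
    await loop.create_datagram_endpoint(TrackerProtocol, local_addr=("0.0.0.0", 5000))
    async with websockets.serve(handle_client, "0.0.0.0", 8080):
        await asyncio.Future()  # run forever

if __name__ == "__main__":
    asyncio.run(main())
```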
5 Case study: ModJaw R
The ModJaw R (Modeling the Human Jaw) project has been developed by our research team and Maxime Jaisson who is preparing a PhD thesis in odontology related to the impact of Information Technology on dentistry practice and teaching. ModJaw R [START_REF] Jaisson | ModJaw R : a kind of magic[END_REF] aims to provide 3D interactive educational materials for dentistry tutors for teaching mandibular kinematics. There exist several similarities with a former project called MAJA (Modeling and Animation of JAw movements) [START_REF] Reiberg | MAJA: Modeling and Animating the Human Jaw[END_REF]]. One main difference between ModJaw R and MAJA consists in the data nature. Indeed, we use real-world data obtained by motion capture and CT scanners. In this way, students are able to study a wide variety of mandible motions according to specific diseases or malformations. Among the future works, Jörg Reiber wanted to add an internet connection to MAJA. In this way, ModJaw R can be seen as an enhanced up-to-date version of MAJA relying on real-world data sets and cutting edge web technologies exposed by the brand new features of HTML5 (WebGL, WebSockets, etc.). The choice of web technologies to develop this project was mainly dictated by the following constraints:
Easy-to-use: users just have to open a web browser to access to the software and its user-friendly graphical interface. As it is fully web-based, ModJaw R also incorporates online documentation related to anatomy, mandibular kinematics, etc.
Easy-to-deploy: the framework does not require any install, it can be used from all the computers within the faculty or even from your home computer if you use a HTML5 compatible web browser. Since the software is hosted on a single web server, it is really easy to upgrade it.
Conclusion
In this paper, we have presented an original solution for providing immersion and natural interaction within a web browser. The main benefits of our contribution rely on the seamless interaction between a web browser and autostereoscopic displays and tracking devices through new HTML5 features like WebGL and WebSockets. This plugin-free framework allows to enhance the user experience by leveraging dedicated hardware via JavaScript. Thanks to its network-based approach, this framework can easily be extended to handle other devices in the same fashion. For instance, we began to explore the possibility to manage haptic devices. As our tracking system is completely plugin-free and fully network-based, it could be seamlessly integrated in web-based collaborative environments allowing users to remotely interact with shared 3D contents displayed in a web browser.
Figure 2: Global structure of our immersion platform for the World Wide Web.
Figure 3: Geometry of a single camera (left) and multiple axis-aligned cameras (right).
Acknowledgements
The authors would like to thank Romain Guillemot, research engineer at CReSTIC SIC, for his expertise in autostereoscopic displays and his precious help for porting the source code of multiview cameras from OpenGL to WebGL. |
01759578 | en | [
"info.info-gr",
"info.info-mo"
] | 2024/03/05 22:32:10 | 2008 | https://hal.univ-reims.fr/hal-01759578/file/4200715.pdf | Antoine Jonquet
email: jonquet@leri.univ-reims.fr
Olivier Nocent
Yannick Remion
The art to keep in touch The "good use" of Lagrange multipliers
Keywords: Physically-based animation, constraints, contact simulation
Physically-based modeling for computer animation allows to produce more realistic motions in less time without requiring the expertise of skilled animators. But, a computer animation is not only a numerical simulation based on classical mechanics since it follows a precise story-line. One common way to define aims in an animation is to add geometric constraints. There are several methods to manage these constraints within a physically-based framework. In this paper, we present an algorithm for constraints handling based on Lagrange multipliers. After few remarks on the equations of motion that we use, we present a first algorithm proposed by Platt. We show with a simple example that this method is not reliable. Our contribution consists in improving this algorithm to provide an efficient and robust method to handle simultaneous active constraints.
Introduction
For about two decades, the computer graphics community has investigated the field of physics in order to produce more and more realistic computer animations. In fact, physically-based modeling in animation allows to generate stunning visual effects that would be extremely complex to reproduce manually. On one hand, the addition of physical properties to 3D objects automates the generation of motion just by specifying initial external forces. On the other hand, physicallybased animations are even more realistic than traditional key-framed animations that require the expertise of many skilled animators. As a consequence, the introduction of physically-based methods in modeling and animation significantly reduced the cost and production time of computer generated movies. But, one main drawback of this kind of framework is that it relies on heavy mathematics usually hard to tackle for a computer scientist. A second main disadvantage concerns the input of a physically-based animation: in fact, forces and torques are not really user-friendly since it is really difficult to anticipate a complex motion just by specifying an initial set of external forces.
A computer animation is definitely not a numerical simulation because it follows a story-line. According to Demetri Terzopoulos [TPB + 89], an animation is simulation plus control. One way to ensure that the objects fulfill the goals defined by the animator is to use geometric constraints. A constraint is an equality or an inequality that gathers different parameters of the animation like the total time elapsed, the positions or the orientations of the moving objects. In a less general way, mechanical simulations also benefit from the use of constraints in order to prevent interpenetration between physical objects for example.
There are several methods to handle constraints, summarized in a survey paper by Baraff [Bar93]. But, since our research work is mostly devoted to mechanical simulation, we decided to focus on the use of Lagrange multipliers to manage geometric constraints. In fact, numerical simulations require robust and reliable techniques to ensure that the constraints are never violated. Moreover, with this method we are also able to measure the amount of strain that is necessary to fulfill a given constraint. In this paper, we present a novel algorithm to manage efficiently several simultaneous active geometric constraints. We begin by detailing the physical equations that we use before presenting Platt's algorithm [START_REF] Platt | A Generalization of Dynamic Constraints[END_REF] that is the only algorithm of this type based on Lagrange multipliers. With a simple example, we demonstrate that this algorithm is not suitable for handling simultaneous active constraints. We then introduce our own contribution in order to show how to improve Platt's algorithm to make it reliable, robust and efficient.
Lagrange equations of motion
Lagrangian dynamics is an extension of Newtonian dynamics that allows a wide range of animations to be generated in a more efficient way. In fact, Lagrange equations of motion rely on a set of unknowns, denoted as a state vector x of generalized coordinates, that identifies the real degrees of freedom (DOF) of the mechanical systems involved. Within this formalism, the DOF are not restricted to rotations or translations only. For example, a parameter u ∈ [0, 1] which gives the relative position of a point along a 3D parametric curve can be considered as a single generalized coordinate.
Unconstrained motion
The evolution of a free mechanical system only subject to a set of external forces is ruled by the Lagrange equations of motion (1).
M ẍ = f    (1)
M is the mass matrix. ẍ is the second time derivative of the state vector. Finally, the vector f corresponds to the sum of external forces. For more details concerning this formalism, we suggest to read [START_REF] Goldstein | Classical Mechanics[END_REF] and [START_REF] Arnold | Mathematical Methods of Classical Mechanics[END_REF].
Constrained motion
By convention, an equality constraint will always be defined as in equation ( 2) where E is the set of indices of all the equality constraints.
g k (x) = 0 ∀k ∈ E (2)
Constraints restrict the set of reachable configurations to a subspace of R^n, where n is the total number of degrees of freedom. As mentioned before, there exist three main methods to integrate constraints into equation (1):
The projection method consists in modifying the state vector x and its first time derivative ẋ in order to fulfill the constraint. This modification can be performed with an iterative method like the Newton-Raphson method [START_REF] Vetterling | The Art to keep in touch: The "Good Use of Lagrange Multipliers[END_REF]. Even if this method is very simple and seems to ensure an instantaneous constraint fulfillment, it is not robust enough: indeed it can not guarantee that the process converges in the case of simultaneous active constraints.
The penalty method adds new external forces, acting like virtual springs, in order to minimize the square of the constraint equation, considered as a positive energy function. The main advantage of this method is its compatibility with any dynamic engine since it only relies on forces. But this method leads to inexact constraint fulfillment, allowing interpenetration between the physical objects. In order to diminish this interpenetration, the stiffness of the virtual springs must be significantly increased, making the numerical system unstable.
The Lagrange method consists in calculating the exact amount of strain, denoted as the Lagrange multiplier, needed to fulfill the constraint. This method guarantees that constraints are always exactly fulfilled. Since the use of Lagrange multipliers introduces a set of new unknowns, equation (1) must be completed by a set of new equations, increasing the size of the initial linear system to solve. But we consider that this method is most suitable for efficiently managing geometric constraints.
For all the reasons mentioned above, we chose the Lagrange method to manage our geometric constraints. According to the principle of virtual work, each constraint g k adds a new force perpendicular to the tangent space of the surface g k (x) = 0. The Lagrange multiplier λ k corresponds to the intensity of the force related to the constraint g k . With these new forces, equation ( 1) is modified as follows:
$$M\ddot{x} = f + \sum_{k \in E} \lambda_k \frac{\partial g_k}{\partial x} \qquad (3)$$
We add new equations to our system by calculating the second time derivative of equation ( 2), leading to equation (4).
$$\sum_{i=1}^{n} \frac{\partial g_k}{\partial x_i}\,\ddot{x}_i = -\sum_{i,j=1}^{n} \frac{\partial^2 g_k}{\partial x_i \partial x_j}\,\dot{x}_i\,\dot{x}_j \qquad \forall k \in E \qquad (4)$$
In order to correct the numerical deviation due to round-off errors, Baumgarte proposed in [START_REF] Baumgarte | Stabilization of constraints and integrals of motion in dynamical systems[END_REF] a constraint stabilization scheme illustrated by equation ( 5). The parameter τ -1 can be seen as the speed of constraint fulfillment.
$$\sum_{i=1}^{n} \frac{\partial g_k}{\partial x_i}\,\ddot{x}_i = -\sum_{i,j=1}^{n} \frac{\partial^2 g_k}{\partial x_i \partial x_j}\,\dot{x}_i\,\dot{x}_j - \frac{2}{\tau}\sum_{i=1}^{n} \frac{\partial g_k}{\partial x_i}\,\dot{x}_i - \frac{1}{\tau^2}\,g_k \qquad (5)$$
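To make Equation (5) concrete, here is a minimal sketch of a linear constraint g(x) = a·x + c, for which the second-derivative term vanishes; the class interface (g, jacobian_row, rhs) and the default value of τ are purely illustrative:

```python
import numpy as np

class LinearConstraint:
    """Linear constraint g(x) = a . x + c (its Hessian vanishes)."""

    def __init__(self, a, c=0.0, tau=0.1):
        self.a, self.c, self.tau = np.asarray(a, dtype=float), float(c), tau

    def g(self, x):
        return self.a @ x + self.c

    def jacobian_row(self, x):
        return self.a                      # dg/dx is constant for a linear g

    def rhs(self, x, v):
        # right-hand side of Eq. (5): the second-derivative term is zero here
        return -2.0 / self.tau * (self.a @ v) - self.g(x) / self.tau ** 2
```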
When we mix equations (1) and (5), we obtain a linear system where the second time derivative of the state vector x and the vector of Lagrange multipliers Λ are the unknowns.
$$\begin{bmatrix} M & -J^{T} \\ -J & 0 \end{bmatrix} \begin{bmatrix} \ddot{x} \\ \Lambda \end{bmatrix} = \begin{bmatrix} f \\ -d \end{bmatrix} \qquad (6)$$
J is the jacobian matrix of all the geometric constraints and d corresponds to the right term of equation (5).
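Assembling and solving this augmented system is straightforward; a minimal NumPy sketch (dense matrices, without exploiting the symmetric indefinite structure of the system) could be:

```python
import numpy as np

def constrained_accelerations(M, f, J, d):
    """Solve Eq. (6) for the accelerations and the Lagrange multipliers.

    M: (n, n) mass matrix, f: (n,) external forces,
    J: (m, n) constraint Jacobian, d: (m,) right-hand side of Eq. (5).
    """
    n, m = M.shape[0], J.shape[0]
    A = np.zeros((n + m, n + m))
    A[:n, :n] = M
    A[:n, n:] = -J.T
    A[n:, :n] = -J
    b = np.concatenate([f, -d])
    sol = np.linalg.solve(A, b)
    return sol[:n], sol[n:]        # x_ddot, lambdas
```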
Inequality constraints management
By convention, an inequality constraint will always be defined as in equation ( 7) where F is the set of indices of all the inequality constraints.
g k (x) ≥ 0 ∀k ∈ F (7)
For a given state vector x, we recall the following definitions:
• the constraint is said to be violated by x when g k (x) < 0. This means that the state vector x corresponds to a non allowed configuration.
• the constraint is said to be satisfied by x when g k (x) ≥ 0.
• the constraint is said to be active when g k (x) = 0. In this case, the state vector x belongs to the boundary of the subspace defined by the inequality constraint g k .
The management of inequality constraints is more difficult than the management of equality constraints. An inequality constraint must be handled only if it is violated or active. In fact, the algorithm is a little more complicated as we explain in the next sections.
That is why we define two subsets within F: F + is the set of indices of all handled inequality constraints and F - is the set of indices of ignored inequality constraints. Finally, we have F = F - ∪ F + . The Jacobian matrix of constraints J of equation (6) is built from all the constraints g k where k ∈ E ∪ F + .
Previous work
Within the computer graphics community, the main published method devoted to inequality constraints management using Lagrange multipliers, known as "Generalized Dynamic Constraints", was proposed by Platt in [START_REF] Platt | A Generalization of Dynamic Constraints[END_REF]. In his paper, he describes how to use Lagrange multipliers to assemble and simulate collisions between numerical models. This method is an extension of the work of Barzel and Barr [START_REF] Barzel | A modeling system based on dynamic constraints[END_REF] that specifies how constraints must be satisfied. Moreover, Platt proposes a method to update F + (the set of handled inequality constraints) during the animation. This algorithm can be compared to classical active set methods [START_REF] Björck | Numerical Methods for Least Squares Problems[END_REF][START_REF] Nocedal | Numerical Optimization[END_REF].
We do not focus on collision detection, which is a problem by itself. We are aware that this difficult problem can be solved in many ways; we encourage the reader to refer to the survey paper by Teschner et al. [TKZ + 05]. During the collision detection stage, we assume that the dynamic engine may rewind time until the first constraint activation is detected. This assumption can produce an important computational overhead that can restrict our method to off-line animation production, depending on the complexity of the simulated scene. In any case, this stage ensures that constraints are never violated. But it is possible that several constraints are activated simultaneously. The main topic of this paper is to provide a reliable algorithm to handle these multiple active constraints in an efficient way.
At the beginning of the animation, Platt populates the set F + with all the active constraints.
Algorithm 1 - Platt's algorithm
1  Solve equation (6) to get ẍ and Λ
2  Update x and ẋ (numerical integration)
3  for each k ∈ F do
4      if k ∈ F + then
5          if λ k ≤ 0 then
6              k is moved to F -
7      else
8          if g k (x) < 0 then
9              g k is moved to F +
For each time step, according to algorithm 1, we solve equation (6) and update the state vector in order to retrieve new positions and velocities at the end of the current time step. We then check the status of each inequality constraint. If a constraint g k is currently handled (k ∈ F + ), it remains handled until its Lagrange multiplier becomes negative or null, that is to say, as long as this multiplier corresponds to a force that actually opposes a violation of the constraint. According to the new values of the state vector x, if a previously ignored constraint g k is now violated (g k (x) < 0), the constraint must be added to F + in order to prevent the system from entering such a configuration.
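Rendered as code, this update of the handled set could look like the following schematic sketch (our own illustration of Algorithm 1; g maps a constraint index to its function and lam maps a handled index to the multiplier returned by the solver):

def platt_update(F_plus, F_minus, g, x, lam):
    """One pass of Platt's active-set update (illustrative sketch of Algorithm 1)."""
    for k in list(F_plus):
        if lam[k] <= 0.0:           # the multiplier no longer opposes a violation
            F_plus.remove(k)
            F_minus.add(k)
    for k in list(F_minus):
        if g[k](x) < 0.0:           # the new state violates an ignored constraint
            F_minus.remove(k)
            F_plus.add(k)
    return F_plus, F_minus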
Figure 1: A simple example with two simultaneous active constraints
Even if this algorithm seems to give a reliable solution for inequality constraints handling, some problems remain. We set up a simple scene, as in figure 1, to illustrate the insufficiencies of Platt's method. A particle of mass m is constrained to slide on a 2D plane. It starts from an acute-angle corner modeled by two linear inequality constraints g 1 (x) ≥ 0 and g 2 (x) ≥ 0 where g 1 (x) = x -y and g 2 (x) = y. Finally, this particle is subject to a single external force f = (2, -1). In this particular case, the state vector x is composed of the 2D coordinates (x, y) of the particle. According to equation (1), the generalized mass matrix for this system is defined by:
$M = \begin{pmatrix} m & 0 \\ 0 & m \end{pmatrix}$    (8)
As the geometric constraints g 1 (x) and g 2 (x) are linear, their first and second time derivatives do not produce any deviation term as defined in equation (5):
$d = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$    (9)
According to the initial value of the state vector x = (0, 0), the two constraints g 1 (x) and g 2 (x) are active, so their indices are inserted in F + and J, the Jacobian matrix of constraints, is defined as follows:
$J = \begin{pmatrix} 1 & -1 \\ 0 & 1 \end{pmatrix}$    (10)
From equations (6) (8) (9) and (10), we obtain a linear system whose unknowns are the second time derivative of the state vector x and the two Lagrange multipliers λ 1 and λ 2 associated with g 1 and g 2 :
$\begin{pmatrix} m & 0 & -1 & 0 \\ 0 & m & 1 & -1 \\ -1 & 1 & 0 & 0 \\ 0 & -1 & 0 & 0 \end{pmatrix} \begin{pmatrix} \ddot{x} \\ \ddot{y} \\ \lambda_1 \\ \lambda_2 \end{pmatrix} = \begin{pmatrix} 2 \\ -1 \\ 0 \\ 0 \end{pmatrix}$    (11)
The solutions are ẍ = (0, 0) and Λ = (-2, -1). The particle does not move during this time step because ẍ and ÿ are null. But, since λ 1 and λ 2 are both negative, their corresponding constraints are moved to F -. This means that, for the next time step, the system will be free of any constraints. As the force remains constant, the next value of ẍ will be equal to (2m^{-1}, -m^{-1}). These values will lead to an illegal position of the particle, under the line y = 0. These computations are illustrated by figure 2.
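As a quick numerical check (our own illustration, not part of the original paper), the snippet below solves system (11) for m = 2 and then evaluates the unconstrained accelerations obtained once both constraints have been dropped, reproducing the values discussed above:

import numpy as np

m = 2.0
A = np.array([[  m, 0.0, -1.0,  0.0],
              [0.0,   m,  1.0, -1.0],
              [-1.0, 1.0,  0.0,  0.0],
              [0.0, -1.0,  0.0,  0.0]])
b = np.array([2.0, -1.0, 0.0, 0.0])
print(np.linalg.solve(A, b))       # [ 0.  0. -2. -1.]  ->  xdd = ydd = 0, lambda = (-2, -1)

# Both multipliers are negative, so Platt's rule drops both constraints;
# the next, unconstrained step then violates g2:
print(2.0 / m, -1.0 / m)           # 1.0 -0.5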
The amount of violation of the constraint g 2 (x) = y mainly depends on the ratio between the mass m of the particle and the intensity of the external force f. Section 5 of this paper presents the different results and comparisons.
Our contribution
A first approach
The problem with Platt's method comes from the fact that it keeps in F + some inequality constraints that should be ignored. Indeed, the condition g k (x) < 0 used to populate F + with inequality constraints is not well suited, and an alternative approach is proposed: replace the condition g k (x) < 0 by a violation tendency condition expressed as J k ẍ < d k . An active constraint that does not fulfill the violation tendency condition will be satisfied but inactive during the next time step, and therefore does not have to be handled.
At the beginning of the animation, we solve equation (1) to get ẍ and we then populate the set F + with the active constraints that fulfill the violation tendency condition. It is clear that we handle fewer constraints than Platt because our criterion is more restrictive.
We briefly verify that this algorithm gives a correct solution to our example illustrated in figure 1. According to equation (1), ẍ = (2m^{-1}, -m^{-1}). The two constraints g 1 and g 2 are active because x = (0, 0), but only g 2 fulfills the violation tendency condition, as mentioned in equation (12).
$J_1\ddot{x} = 3m^{-1} \Rightarrow 1 \in F^- \qquad\qquad J_2\ddot{x} = -m^{-1} \Rightarrow 2 \in F^+$    (12)
In this special case, equation (6) becomes:
$\begin{pmatrix} m & 0 & 0 \\ 0 & m & -1 \\ 0 & -1 & 0 \end{pmatrix} \begin{pmatrix} \ddot{x} \\ \ddot{y} \\ \lambda_2 \end{pmatrix} = \begin{pmatrix} 2 \\ -1 \\ 0 \end{pmatrix}$    (13)
Algorithm 2 - Platt's improved algorithm
1  Solve equation (6) to get ẍ and Λ
2  Update x and ẋ (numerical integration)
3  for each k ∈ F do
4      if k ∈ F + then
5          if λ k ≤ 0 then
6              k is moved to F -
7      else
8          if J k ẍ < d k then
9              k is moved to F +
The solutions of the linear system (13) are ẍ = (2m^{-1}, 0) and λ 2 = 1. Finally, the particle will slide along the x-axis without crossing the line y = 0, because the constraint g 1 that was not handled did not introduce a false response.
This new algorithm seems to manage multiple inequality constraints in a good way, but we could highlight a problem with this method by using the same example illustrated in figure 1 with a new external force f = (-1, -2).
At the beginning, since x = (0, 0), the constraints g 1 and g 2 are active. From equation (1), we obtain that ẍ = (-m^{-1}, -2m^{-1}), and from equation (14) that only the constraint g 2 is handled. According to equation (14), we build the linear system (15).
$J_1\ddot{x} = m^{-1} \Rightarrow 1 \in F^- \qquad\qquad J_2\ddot{x} = -2m^{-1} \Rightarrow 2 \in F^+$    (14)
$\begin{pmatrix} m & 0 & 0 \\ 0 & m & -1 \\ 0 & -1 & 0 \end{pmatrix} \begin{pmatrix} \ddot{x} \\ \ddot{y} \\ \lambda_2 \end{pmatrix} = \begin{pmatrix} -1 \\ -2 \\ 0 \end{pmatrix}$    (15)
The solutions are ẍ = (-m^{-1}, 0) and λ 2 = 2. After the update of x and ẋ, the particle slides through the plane defined by the constraint g 1 and reaches an illegal state. This is due to the fact that the Lagrange multiplier λ 2 pushes the system into an illegal state with respect to the constraint g 1 , which was not previously inserted in equation (6) as it did not satisfy the violation tendency criterion. These computations are again illustrated by figure 3.
The "right" algorithm
The use of the violation tendency condition J k ẍ < d k improves simultaneous active constraints management, since only the appropriate inequality constraints are handled by equation (6). But we have seen, from the second example, that it is not sufficient to produce a consistent configuration. In fact, the constraints from F + that fulfill the violation tendency condition will produce a vector Λ of Lagrange multipliers that prevents the system from being in an illegal configuration according to these handled constraints. In the meantime, the constrained accelerations ẍ of the system could lead to an illegal configuration according to some constraints in F -. The only way to deal with this problem is to use the newly computed constrained accelerations to test whether the active inequality constraints g k (where k ∈ F -) fulfill the violation tendency condition and have to be handled. We then need to introduce an iterative process that computes the accelerations and checks whether a previously ignored constraint must be handled or not, according to the violation tendency condition evaluated with the newly computed constrained accelerations. This process is repeated until the system reaches the appropriate state.
We propose a simple and efficient solution to the inequality constraints handling problem. At the beginning of each time step, all active inequality constraints g k are detected, and F + is emptied. We then begin an iterative process that runs until there is no new insertion in F + . The constrained accelerations ẍ are computed from equation (6) and the violation tendency condition J k ẍ < d k is tested on each active inequality constraint. For any inequality constraint g k that fulfills the condition, we insert its index k in F + and start another iterative step. In a recent communication [START_REF] Raghupathi | QP-Collide: A New Approach to Collision Treatment[END_REF], Raghupathi presented a method also based on Lagrange multipliers. For real-time considerations, they do not allow the dynamic engine to rewind time to get back to the first constraint activation. They have to manage constraints at the end of the time step, trying to find the right accelerations to ensure constraint fulfillment. They also concede that this process is not guaranteed to converge for a given situation.
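This iterative process can be sketched as follows (again our own illustration, not the authors' code; active is the set of indices of the currently active inequality constraints, J_rows and d_terms give the Jacobian row and the stabilization term of each constraint):

import numpy as np

def constrained_step(M, f, active, J_rows, d_terms):
    """Grow F+ until no remaining active constraint tends to be violated (sketch)."""
    n = M.shape[0]
    F_plus = set()
    while True:
        keys = sorted(F_plus)
        m = len(keys)
        A = np.zeros((n + m, n + m))
        A[:n, :n] = M
        b = np.concatenate([f, np.zeros(m)])
        for i, k in enumerate(keys):
            A[:n, n + i] = -J_rows[k]       # -J^T block
            A[n + i, :n] = -J_rows[k]       # -J block
            b[n + i] = -d_terms[k]
        sol = np.linalg.solve(A, b)
        xdd, lam = sol[:n], sol[n:]
        # Violation tendency condition J_k xdd < d_k on the still-ignored constraints.
        new = {k for k in active - F_plus if J_rows[k] @ xdd < d_terms[k]}
        if not new:
            return xdd, dict(zip(keys, lam)), F_plus
        F_plus |= new

On the example of figure 1 with f = (-1, -2), the first iteration handles g 2 only, the second iteration detects that g 1 now tends to be violated and handles it as well, and the process stops with the particle held in the corner.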
Results and Comparisons
We will now compare the results obtained with Platt's algorithm and our method, using the example illustrated in figure 1. Figures 4 and 5 illustrate a comparison of the positions and accelerations along the y-axis of a particle of mass m = 2 and m = 3. We recall that the inequality constraint g 2 forbids negative values for y and that the constant force f applied to the particle is equal to (2, -1).
As shown in figure 4, Platt's algorithm holds the particle in the corner at the first time step, and releases it at the next time step. As a consequence, the particle evolves in an illegal state during the following steps. With a mass m = 2, the error related to the position is less than 10^{-5} with an oscillating acceleration (right column). But if we set the mass m to 3, as shown in figure 5, the errors are much larger, and the particle crosses the line y = 0 modeled by the constraint g 2 .
As illustrated, our algorithm keeps the particle along the x-axis within a controlled numerical error value, which is less than 10^{-8} in these examples. To illustrate multiple contact constraints, we have set up a billiard scene composed of 10 fixed balls placed in a corner and a moving ball that slides towards them. For each ball, we define two inequality constraints according to the corner and one inequality constraint for each pair of balls based on their in-between distance. This example is finally composed of 11 balls and 77 inequality constraints (figure 6).
It is rather difficult to compare the computation times of Platt's algorithm and ours, since simulations involving simultaneous active constraints are not well handled by Platt's algorithm and produce corrupted numerical values that can lead to infinite loops. But it is quite clear that, in the worst case, our method may solve n linear systems of increasing size, where n is the total number of inequality constraints. The complexity of our solution is then higher than that of Platt's algorithm. But we recall that our main contribution is not to speed up an existing method but to propose a reliable algorithm mainly dedicated to off-line simulations.
Conclusion
In this paper, we presented a novel algorithm to manage simultaneous active inequality constraints. Among all the existing methods to handle constraints within a physically-based animation, we focused on the Lagrange method, which provides a reliable way to ensure that constraints are always exactly fulfilled. But, in the special case of several active inequality constraints, care must be taken in how these simultaneous constraints are handled. Platt proposed an algorithm based on Lagrange multipliers, but we showed that this method is unable to solve even simple examples. We then explained how to improve this algorithm in order to propose a new, reliable and efficient method for inequality constraints handling. Beyond the example illustrated in figure 1, we produced a short movie simulating a billiard game. Some snapshots are gathered in figure 6.
Figure 2: (a) Since the two constraints are active, they are handled by Platt's algorithm (b) The related Lagrange multipliers are negative, the constraints are then ignored (c) The new unconstrained acceleration leads to an illegal position
Algorithm 3 -
Figure 3: (a) The two constraints are handled since they are active (b) According to the violation tendency condition, only the constraint g 2 is still handled (c) The newly computed constrained acceleration leads to an illegal position
Figure 4: Comparison of Platt's algorithm and our method using the example illustrated in figure 1 with a mass m = 2. The numerical values correspond respectively to position and acceleration along the y-axis |
01759644 | en | [
"math.math-ag"
] | 2024/03/05 22:32:10 | 2018 | https://hal.science/hal-01759644/file/stcic.pdf | Michel Granger
email: granger@univ-angers.fr
Mathias Schulze
email: mschulze@mathematik.uni-kl.de
DEFORMING MONOMIAL SPACE CURVES INTO SET-THEORETIC COMPLETE INTERSECTION SINGULARITIES
Keywords: 2010 Mathematics Subject Classification. Primary 32S30; Secondary 14H50, 20M25 Set-theoretic complete intersection, space curve, singularity, deformation, lattice ideal, determinantal variety
We deform monomial space curves in order to construct examples of set-theoretical complete intersection space curve singularities. As a by-product we describe an inverse to Herzog's construction of minimal generators of non-complete intersection numerical semigroups with three generators.
Introduction
It is a classical problem in algebraic geometry to determine the minimal number of equations that define a variety. A lower bound for this number is the codimension and it is reached in case of set-theoretic complete intersections. Let I be an ideal in a polynomial ring or a regular analytic algebra over a field K. Then I is called a set-theoretic complete intersection if √ I = √ I for some ideal I admitting height of I many generators. The subscheme or analytic subgerm X defined by I is also called a set-theoretic complete intersection in this case. It is hard to determine whether a given X is a set-theoretic complete intersection. We address this problem in the case I ∈ Spec K{x, y, z} of irreducible analytic space curve singularities X over an algebraically closed (complete non-discretely valued) field K.
Cowsik and Nori (see [START_REF] Cowsik | Affine curves in characteristic p are set theoretic complete intersections[END_REF]) showed that over a perfect field K of positive characteristic any algebroid curve and, if K is infinite, any affine curve is a set-theoretic complete intersection. To our knowledge there is no example of an algebroid curve that is not a set-theoretic complete intersection. Over an algebraically closed field K of characteristic zero, Moh (see [START_REF] Moh | A result on the set-theoretic complete intersection problem[END_REF]) showed that an irreducible algebroid curve
K[[ξ, η, ζ]] ⊂ K[[t]] is a set-theoretic complete intersection if the valuations ℓ, m, n = υ(ξ), υ(η), υ(ζ) satisfy (0.1): gcd(ℓ, m) = 1, ℓ < m, (ℓ - 2)m < n.
We deform monomial space curves in order to find new examples of set-theoretic complete intersection space curve singularities. Our main result in Proposition 3.2 gives sufficient numerical conditions for the deformation to preserve both the value semigroup and the set-theoretic complete intersection property. As a consequence we obtain Corollary 0.1. Let C be the irreducible curve germ defined by O C = K⟨t^ℓ, t^m + t^p, t^n + t^q⟩ ⊂ K{t} where gcd(ℓ, m) = 1, p > m, q > n and there are a, b ≥ 2 such that ℓ = b + 2, m = 2a + 1, n = ab + b + 1.
Let γ be the conductor of the semigroup Γ = , m, n and set d 1 = (a + 1)(b + 2), δ = min {p -m, q -n}. In the setup of Corollary 0.1 Moh's third condition in (0.1) becomes ab < 1 and is trivially false. Corollary 0.1 thus yields an infinite list of new examples of non-monomial set-theoretic complete intersection curve germs.
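For readers who want to experiment with this family, the following small Python script (our own illustration, not part of the paper) computes the conductor of Γ = ⟨ℓ, m, n⟩ and checks the inequality d 1 + δ ≥ γ + ℓ of Corollary 0.1(b) for sample values, here (a, b) = (3, 3) with a deformation using p = 11 and q = 16:

from math import gcd
from functools import reduce

def conductor(gens):
    """Conductor of the numerical semigroup generated by gens (gcd(gens) = 1)."""
    assert reduce(gcd, gens) == 1
    bound = (min(gens) - 1) * (max(gens) - 1)    # generous bound on the largest gap
    semigroup = {0}
    for v in range(1, bound + 1):
        if any(v - g in semigroup for g in gens):
            semigroup.add(v)
    gaps = [v for v in range(bound + 1) if v not in semigroup]
    return max(gaps) + 1 if gaps else 0

a, b, p, q = 3, 3, 11, 16
l, m, n = b + 2, 2 * a + 1, a * b + b + 1        # (5, 7, 13)
gamma = conductor([l, m, n])
d1, delta = (a + 1) * (b + 2), min(p - m, q - n)
print(gamma)                                     # 17
print(d1 + delta >= gamma + l)                   # True: condition (b) of Corollary 0.1 holds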
Let us explain our approach and its context in more detail. Let Γ be a numerical semigroup. Delorme (see [START_REF] Delorme | Sous-monoïdes d'intersection complète de N[END_REF]) characterized the complete intersection property of Γ by a recursive condition. The complete intersection property holds equivalently for Γ and its associated monomial curve Spec(K[Γ]) (see [START_REF] Herzog | Generators and relations of abelian semigroups and semigroup rings[END_REF]Cor. 1.13]) and is preserved under flat deformations. For this reason we deform only non-complete intersection Γ. A curve singularity inherits the complete intersection property from its value semigroup since it is a flat deformation of the corresponding monomial curve (see Proposition 2.3). The converse fails as shown by a counter-example of Herzog and Kunz (see [START_REF] Herzog | Die Wertehalbgruppe eines lokalen Rings der Dimension 1[END_REF]).
In case Γ = , m, n , Herzog (see [START_REF] Herzog | Generators and relations of abelian semigroups and semigroup rings[END_REF]) described minimal relations of the generators , m, n. There are two cases (H1) and (H2) (see §1) with 3 and 2 minimal relations respectively. In the non-complete intersection case (H2) we describe an inverse to Herzog's construction (see Proposition 1.4). Bresinsky (see [Bre79b]) showed (for arbitrary K) by an explicit calculation based on Herzog's case (H2) that any monomial space curve is a complete intersection. Our results are obtained by lifting his equations to a (flat) deformation with constant value semigroup. In section §2 we construct such deformations (see Proposition 2.3) following an approach using Rees algebras described by Teissier (see [Zar06, Appendix, Ch. I, §1]). In §3 we prove Proposition 3.2 by lifting Bresinsky's equations under the given numerical conditions. In §4 we derive Corollary 0.1 and give some explicit examples (see Example 4.2).
It is worth mentioning that Bresinsky (see [Bre79b]) showed (for arbitrary K) that all monomial Gorenstein curves in 4-space are set-theoretic complete intersections.
Ideals of monomial space curves
Let , m, n ∈ N generate a semigroup Γ = , m, n ⊂ N.
d = gcd( , m).
We assume that Γ is numerical, that is, gcd( , m, n) = 1.
Let K be a field and consider the map
ϕ : K[x, y, z] → K[t], (x, y, z) → (t , t m , t n ) whose image K[Γ] = K[t , t m , t n ] is the semigroup ring of Γ. Pick a, b, c ∈ N minimal such that a = b 1 m + c 2 n, bm = a 2 + c 1 n, cn = a 1 + b 2 m for some a 1 , a 2 , b 2 , b 2 , c 1 , c 2 ∈ N.
/ ∈ {a 1 , a 2 , b 1 , b 2 , c 1 , c 2 }. Then (1.1) a = a 1 + a 2 , b = b 1 + b 2 , c = c 1 + c 2
and the unique minimal relations of , m, n read
a -b 1 m -c 2 n = 0, (1.2) -a 2 + bm -c 1 n = 0, (1.3) -a 1 -b 2 m + cn = 0. (1.4)
Their coefficients form the rows of the matrix
(1.5) a -b 1 -c 2 -a 2 b -c 1 -a 1 -b 2 c . Accordingly the ideal I = f 1 , f 2 , f 3 of maximal minors (1.6) f 1 = x a -y b 1 z c 2 , f 2 = y b -x a 2 z c 1 , f 3 = x a 1 y b 2 -z c of the matrix (1.7) M 0 = z c 1 x a 1 y b 1 y b 2 z c 2 x a 2 .
equals ker ϕ, and the rows of this matrix generate the module of relations between f 1 , f 2 , f 3 . Here K[Γ] is not a complete intersection.
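As a computational illustration (ours, not the authors'), one can take the classical example Γ = ⟨3, 4, 5⟩, which falls in case (H1) with (a 1 , a 2 , b 1 , b 2 , c 1 , c 2 ) = (2, 1, 1, 1, 1, 1), and check with SymPy that the maximal minors of M 0 vanish identically on the monomial curve:

import sympy as sp

t, x, y, z = sp.symbols('t x y z')

# M0 for the semigroup <3, 4, 5>: here a = 3, b = 2, c = 2.
M0 = sp.Matrix([[z, x**2, y],
                [y, z,    x]])

# The three maximal (2x2) minors of M0, i.e. f1, f2, f3 up to sign.
minors = [M0.extract([0, 1], [i, j]).det() for (i, j) in [(1, 2), (0, 2), (0, 1)]]

# They all vanish on the monomial curve (x, y, z) = (t^3, t^4, t^5).
on_curve = {x: t**3, y: t**4, z: t**5}
print([sp.expand(f.subs(on_curve)) for f in minors])     # [0, 0, 0]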
(H2) 0 ∈ {a 1 , a 2 , b 1 , b 2 , c 1 , c 2 }. One of the relations (a, -b, 0), (a, 0, -c), or (0, b, -c) is minimal relation of , m, n and, up to a permutation of the variables, the minimal relations are a = bm, (1.8)
a 1 + b 2 m = cn.
(1.9) Their coefficients form the rows of the matrix
(1.10) a -b 0 -a 1 -b 2 c
.
It is unique up to adding multiples of the first row to the second. Overall there are 3 cases and an overlap case described equivalently by 3 matrices
(1.11) a -b 0 a 0 c , a -b 0 0 -b c , a 0 -c 0 b -c .
Here K[Γ] is a complete intersection. In the following we describe the image of Herzog's construction and give a left inverse: Proof. The first statement holds due to minimality. By Buchberger's criterion the generators 1.6 form a Gröbner basis with respect to the reverse lexicographical ordering on x, y, z. Let g denote a normal form of g = x ˜ -z ñ with respect to 1.6. Then g ∈ I if and only if g = 0. By (1.1) reductions by f 2 can be avoided in the calculation of g. If r 2 and r 1 many reductions by f 1 and f 3 respectively are applied then Proof.
(H1') Given a 1 , a 2 , b 1 , b 2 , c 1 , c 2 ∈ N \ {0}, define a, b, c by (1.1) and set = b 1 c 1 + b 1 c 2 + b 2 c 2 = b 1 c + b 2 c 2 = b 1 c 1 + bc 2 , (1.12) m = a 1 c 1 + a 2 c 1 + a 2 c 2 = ac 1 + a 2 c 2 = a 1 c 1 + a 2 c, (1.13) n = a 1 b 1 + a 1 b 2 + a 2 b 2 = a 1 b + a 2 b 2 = a 1 b 1 + ab 2 , (1.
g = x ñ-a 1 r 1 -ar 2 y b 1 r 2 -r 1 b 2 z r 1 c+r 2 c 2 -z ˜ . and g = 0 is equivalent to ˜ = r 1 c + r 2 c 2 , b 1 r 2 = r 1 b 2 , ñ = a 1 r 1 + ar 2 . Then r i = b i gcd(b 1 ,b 2 ) for i = 1,
(a) Consider ñ, ˜ ∈ N as in Lemma 1.2. Then x ñ -z ˜ ∈ I = ker ϕ means that (t ) ñ = (t n ) ˜ and hence ñ = ˜ n. So the pair ( , n) is pro- portional to ( ˜ , ñ) which in turn is propotional to ( , n ) by Lemma 1.2.
Then the two triples ( , m, n) and ( , m , n ) are proportional by symmetry. Since gcd( , m, n) = 1 by hypothesis ( , m , n ) = q •( , m, n) for some q ∈ N. By Lemma 1.2 q divides gcd(b 1 , b 2 ) and by symmetry also gcd(a 1 , a 2 ) and gcd(c 1 , c 2 ). By minimality of the relations
(1.2)-(1.4) gcd(a 1 , a 2 , b 1 , b 2 , c 1 , c 2 ) =
( , m , n ) is in the corresponding subcase of (H2), gcd(a, b) = 1, (1.18) ∀q ∈ ∩[-b 2 /b, a 1 /a] ∩ N : gcd(-a 1 + qa, -b 2 -qb, c) = 1. (1.19) In this case, ( , m, n) = ( , m , n ).
Proof.
(a) By Lemma 1.3.(a) e = 1 is a necessary condition. Conversely let e = 1. By definition (1.5) is a matrix of relations of ( , m , n ). Assume that ( , m , n ) is in case (H2). By symmetry we may assume that ( , m , n ) admits a matrix of minimal relations
In particular c ≥ d . Then b 1 ≥ b contradicts (1.12) since = b 1 c + b 2 c 2 ≥ b c + b 2 c 2 > b c ≥ b d = .
We may thus assume that b 1 < b . The difference of first rows of (1.20) and (1.5) is then a relation
a -a b 1 -b c 2 of ( , m , n ) with a -a < 0, b 1 -b < 0 and c 2 > 0. Then c 2 ≥ c ≥ d by choice of c . This contradicts (1.12) since = b 1 c 1 + bc 2 ≥ b 1 c 1 + b d > b d = .
We may thus assume that ( , m , n ) is in case (H1) with a matrix of unique minimal relations
(1.21) a -b 1 -c 2 -a 2 b -c 1 -a 1 -b 2 c of type (1.5) where a = a 1 + a 2 , b = b 1 + b 2 , c = c 1 + c 2 .
as in (1.1). Then (a, b, c) ≥ (a , b , c ) by choice of the latter and
= b 1 c + b 2 c 2 = b 1 c 1 + b c 2 by Lemma 1.3.(a). If (a i , b i , c i ) ≥ (a i , b i , c i ) for i = 1, 2, then = b 1 c + b 2 c 2 ≥ b 1 c + b 2 c 2 = implies c = c and hence (a, b, c) = (a , b , c ) by symmetry. By unique- ness of (1.21) then (a 1 , a 2 , b 1 , b 2 , c 1 , c 2 ) = (a 1 , a 2 , b 1 , b 2 , c 1 , c 2 )
and hence the claim. By symmetry it remains to exclude the case c 2 > c 2 . The difference of first rows of (1.21) and (1.5) is then a relation
a -a b 1 -b 1 c 2 -c 2 of ( , m , n ) with a -a ≤ 0, c 2 -c 2 < 0 and hence b 1 -b 1 ≥ b by choice of the latter. This leads to the contradiction = b 2 c 2 + b 1 c > b 1 c ≥ b c + b 1 c > b 2 c 2 + b 1 c = .
= d = b , a = m d = a .
Writing the second row of (1.10) as a linear combination of (1.20) yields
-a 1 + qa -b 2 -qb c = p -a 1 -b 2 c .
with p ∈ N and q ∩ [-b 2 /b, a 1 /a] ∩ N and hence p = 1 by (1.19). The claim follows.
The following examples show some issues that prevent us from formulating stronger statement in Proposition 1.4.(b).
Example 1.5.
(a) Take (a, -b,
0) = (3, -2, 0) and (-a 1 , -b 2 , c) = (-1, -4, 4). Then ( , m , n ) = (4, 6, 7) which is in case (H2). The second minimal rela- tion is (-2, -1, 2) = 1 2 ((-a 1 , -b 2 , c) -(a, -b, 0)).
The same ( , m , n ) is obtained from (a, 0, -c) = (7, 0, -4) and (-a 2 , b, -c 1 ) = (-1, 3, -2). This latter satisfies (1.18) and (1.19) but (a, 0, -c) is not minimal.
(b) Take (a, -b, 0) = (4, -3, 0) and (-a 1 , -b 2 , c) = (-2, -1, 2). Then ( , m , n ) = (3, 4, 5) but (a, -b, 0) is not a minimal relation. In fact the corresponding complete intersection K[Γ] defined by the ideal x 3 -y 4 , z 2 -x 2 y is the union of two branches x = t 3 , y = t 4 , z = ±t 5 .
Deformation with constant semigroup
Let O = (O, m) be a local K-algebra with O/m ∼ = K. Let F • = {F i | i ∈ Z}
A = i∈Z F i s -i ⊂ O[s ±1 ].
It is a finite type graded O[s]-algebra and flat (torsion free) K[s]-algebra with retraction
A A/A ∩ m[s ±1 ] ∼ = K[s].
For u ∈ O * there are isomorphisms
(2.1) A/(s -u)A ∼ = O, A/sA ∼ = gr F O.
Geometrically A defines a flat morphism with section
Spec(A) π / / A 1 K ι i i
with fibers over K-valued points
π -1 (x) ∼ = Spec(O), ι(x) = m, 0 = x ∈ A 1 K , π -1 (0) ∼ = Spec(gr F O), ι(0) = gr F m.
F • = m • W O W , F • = F •,w = m • = υ -1 [•, ∞] O.
Setting t = t /s and identifying K ∼ = O W /m W this yields a finite extension of finite type graded O W -and flat (torsion free) K[s]-algebras
(2.2) A = i∈Z (F i ∩ O W )s -i ⊂ i∈Z F i s -i = O W [s, t] = B ⊂ O W [s ±1 ]
with retraction defined by K[s] ∼ = B/(B <0 + Bm W ). The stalk at w is
A = A w = i∈Z (F i ∩ O)s -i ⊂ i∈Z F i s -i = O[s, t] = B ⊂ O[s ±1 ].
At w = w ∈ W the filtration F w is trivial and the stalk becomes
A w = O W,w [s ± 1].
The graded sheaves gr F O W ⊂ gr F O W are thus supported at w and the isomorphism gr
F (O W ) w = gr F O ∼ = K[t ] ∼ = K[N] identifies (2.3) (gr F O W ) w = gr F O ∼ = K[Γ ], Γ = υ(O \ {0}) with the semigroup ring K[Γ ] of O,
The analytic spectrum Spec an W (-) → W applied to finite type O Walgebras represents the functor T → Hom O T (-T , O T ) from K-analytic spaces over W to sets (see [START_REF] Henri | Familles d'espaces complexes et fondements de la géométrie analytique[END_REF]Exp. 19]). Note that
Spec an W (K[s]) = Spec an {w} (K[s]) = L is the K-analytic line. The normalization of W is ν : W = Spec an W (O W ) → W and B = ν * B where B = O W [s, t]. Applying Spec an W to (2.2) yields a diagram of K-analytic spaces (see [Zar06, Appendix]) (2.4) X = Spec an W (A) π & & Spec an W (B) = Y ρ o o L ι 8 8
where π is flat with π • ρ • ι = id and
π -1 (x) ∼ = Spec an W (O W ) = W, ι(x) = w, 0 = x ∈ L, π -1 (0) ∼ = Spec an W (gr F O W ), ι(0) ↔ gr F m W .
Remark 2.1. Teissier defines X as the analytic spectrum of A over W × L (see [Zar06, Appendix, Ch. I, §1]). This requires to interpret the O W -algebra A as an O W ×L -algebra.
Remark 2.2. In order to describe (2.4) in explicit terms, embed
L ⊃ W ν / / W ⊂ L n
with coordinates t and x = x 1 , . . . , x n and
X = {(x, s) | (s 1 x 1 , . . . , s n x n ) ∈ W, s = 0} ⊂ L n × L, Y = (t, s) t = st ∈ W ∪ L × {0} ⊂ L × L.
This yields the maps X → W ← Y . The map ρ in (2.4) becomes ρ(t, s) = (x 1 (t )/s 1 , . . . , x n (t )/s n ) for s = 0 and the fiber π -1 (0) is the image of the map
ρ(t, 0) = ((ξ 1 (t), . . . , ξ n (t)), 0), ξ k (t) = lim s→0 x k (st)/s k = σ(x k )(t).
Taking germs in (2.4) this yields the following.
Proposition 2.3. There is a flat morphism with section
S = (X, ι(0)) π / / (L, 0) ι k k with fibers π -1 (x) ∼ = (W, w) = C, ι(x) = w, 0 = x ∈ L, π -1 (0) ∼ = Spec an (K[Γ ]) = C 0 , ι(0) ↔ K[Γ + ].
The structure morphism factorizes through a flat morphism
X = Spec an W (A) f 3 3 f / / (|W |, A) / / W and f # ι(0) : A → O X,ι(0) induces an isomorphism of completions (see [Car62, Exp. 19, §2, Prop. 4]) A ι(0) ∼ = O X,ι(0) .
This yields the finite extension of K-analytic domains
O S = O X,ι(0) ⊂ O Y,ι(0) .
We aim to describe O Y,ι(0) and K-analytic algebra generators of O S . In explicit terms O S is obtained from a presentation
I → O[x] → A → 0 mapping x = x 1 , . . . , x n to ι(0) = A ∩ m[s ±1 ] + As as (2.5) O S = O{x}/O{x}I = O{x} ⊗ O[x] A, O{x} = O ⊗K{x}.
The graded K-algebra A/sA is thus generated by ξ. Extend F • to the graded filtration F
• [s ±1 ] on O[s ±1 ]. For i ≥ j, (A/As) i = gr F i A i •s i-j ∼ = / / gr F i A j .
Thus finitely many monomials in ξ, s generate any A j /F i A j ∼ = F j /F i over K. With γ the conductor of Γ and i = γ + j, F γ ⊂ m ∩ O = m and hence F i = F γ F j ⊂ mF j . Therefore these monomials generate A j as O-module by Nakayama's lemma. It follows
A = O[ξ, s] as graded K-algebra. Using O = K ξ and ξ = ξs then O S = K ξ , ξ, s = K ξ, s (see (2.5)).
We now reverse the above construction to deform generators of a semigroup ring. Let Γ be a numerical semigroup with conductor γ generated by = 1 , . . . , n . Pick corresponding indeterminates x = x 1 , . . . , x n . The weighted degree deg(-) defined by deg(x) = makes K[x] a graded K-algebra and induces on K{x} a weighted order ord(-) and initial part inp(-) . The assignment x i → i defines a presentation of the semigroup ring of Γ (see (2.3))
K[x]/I ∼ = K[Γ] ⊂ K[t ] ⊂ K{t } = O.
The defining ideal I is generated by homogeneous binomials f = f 1 , . . . , f m of weighted degrees deg(f ) = d. Consider elements ξ = ξ 1 , . . . , ξ n defined by (2.7)
ξ j = t j + i≥ j +∆ j ξ j,i t i s i-j ∈ K[t, s] ⊂ O[t, s] = B with ∆ i ∈ N \ {0} ∪ {∞} minimal. Set δ = min {∆ }, ∆ = ∆ 1 , . . . , ∆ n .
With deg(t) = 1 = -deg(s) ξ defines a map of graded K-algebras K[x, s] → K[t, s] and a map of analytically graded K-analytic domains K{x, s} → K{t, s} (see [SW73] for analytic gradings).
Remark 2.6. Converse to (2.6), any homogeneous ξ ∈ K{t, s} of weighted degree can be written as ξ = ξ /s for some ξ ∈ K{t }. It follows that ξ(t, 1) = ξ (t) ∈ K{t}.
Consider the curve germ C with K-analytic ring
(2.8) O = O C = K ξ , ξ = ξ(t,
O S = K ξ, s = K{x, s}/ F , F = f -f s.
Proof. First let Γ = Γ. Then Lemma 2.5 yields the first equality in (2.10). By flatness of π in Proposition 2.3, the relations f of ξ(t, 0) = t lift to relations F ∈ K{x, s} m of ξ. That is, F (x, 0) = f and F (ξ, s) = 0. Since f and ξ have homogeneous components of weighted degrees d and , F can be written as F = f -f s where f ∈ K{x, s} m has homogeneous components of weighted degrees d + 1. This proves in particular the last claim. Since f i (t ) = 0, any term in f i (ξ, s)s = f i (ξ) involves a term of the tail of ξ j for some j. Such a term is divisible by t d i +∆ j which yields the bound for ord(f i (x, 1)).
Conversely let f with homogeneous components satisfy (2.9). Suppose that there is a k ∈ Γ \ Γ. Take h ∈ K{x} of maximal weighted order k such that υ(h(ξ )) = k . In particular, k < k and inp h(t
) = 0. Then inp h ∈ I = f and inp h = m i=1 q i f i for some q ∈ K[x] m . Set h = h - m i=1 q i F i (x, 1) = h -inp h + m i=1 q i f i (x, 1).
Then h (ξ ) = h(ξ ) by (2.9) and hence υ(h (ξ )) = k . With (2.9) and homogeneity of f it follows that ord(h ) > k contradicting the maximality of k.
Remark 2.8. The proof of Proposition 2.7 shows in fact that the condition Γ = Γ is equivalent to the flatness of a homogeneous deformation of the parametrization as in (2.7). These Γ-constant deformations are a particular case of δ-constant deformations of germs of complex analytic curves (see [Tei77, §3, Cor. 1]).
The following numerical condition yields the hypothesis of Proposition 2.7. Lemma 2.9.
If min {d} + δ ≥ γ then Γ = Γ. Proof. Any k ∈ Γ is of the form k = υ(p(ξ )) for some p ∈ K{x} with p 0 = inp(p) ∈ K[x]. If p 0 (t ) = 0, then k ∈ Γ. Otherwise, p 0 ∈ f and hence k ≥ min {d} + min { }.
The second claim follows.
Set-theoretic complete intersections
We return to the special case Γ = , m, n of §1. Recall Bresinsky's method to show that Spec(K[Γ]) is a set-theoretic complete intersection (see [Bre79a]). Starting from the defining equations (1.6) in case (H1) he computes
f c 1 = (x a -y b 1 z c 2 ) c = x a g 1 ± y b 1 c z c 2 c = x a g 1 ± y b 1 c z (c 2 -1)c (x a 1 y b 2 -f 3 ) = x a 1 g 2 ∓ y b 1 c z (c 2 -1)c f 3 ≡ x a 1 g 2 mod f 3
where g 1 ∈ x, z and
g 2 = x a-a 1 g 1 ± y b 1 c+b 2 z (c 2 -1)c .
He shows that, if c 2 ≥ 2, then further reducing g 2 by f 3 yields
g 2 = x a-a 1 g 1 ± y b 1 c+b 2 z (c 2 -2)c (x a 1 y b 2 -f 3 ) ≡ x a-a 1 g 1 ± x a 1 y b 1 c+2b 2 z (c 2 -2)c mod f 3 ≡ x a 1 g1 + y b 1 c+2b 2 z (c 2 -2)c mod f 3 ≡ x a 1 g 3 mod f 3 for some g1 ∈ K[x, y, z]. Iterating c 2 many times yields a relation (3.1) f c 1 = qf 3 + x k g, k = a 1 c 2
, where g ≡ y mod x, z with from (1.12). One computes that
x a 1 f 2 = y b 1 f 3 -z c 1 f 1 , z c 2 f 2 = x a 2 f 3 -y b 2 f 1 . Bresinsky concludes that (3.2) Z(x, z) ⊂ Z(g, f 3 ) ⊂ Z(f 1 , f 3 ) = Z(f 1 , f 2 , f 3 ) ∪ Z(x, z) making Spec(K[Γ]) = Z(g, f 3 ) a set-theoretic complete intersection.
As a particular case of (2.7) consider three elements
ξ = t + i≥ +∆ ξ i s i-t i , (3.3) η = t m + i≥m+∆m η i s i-m t i , ζ = t n + i≥n+∆n ζ i s i-n t i ∈ K[t, s].
(3.5) F c 1 = qF 3 + x k G, G(x, y, z, 0) = g, then C = S ∩ Z(s -1) = Z(G, F 3 , s -1)
is a set-theoretic complete intersection.
Proof. Consider a matrix of indeterminates
M = Z 1 X 1 Y 1 Y 2 Z 2 X 2
and the system of equations defined by its maximal minors
F 1 = X 1 X 2 -Y 1 Z 2 , F 2 = Y 1 Y 2 -X 2 Z 1 , F 3 = X 1 Y 2 -Z 1 Z 2 .
By Schaps' theorem (see [START_REF] Schaps | Deformations of Cohen-Macaulay schemes of codimension 2 and non-singular deformations of space curves[END_REF]) there is a solution with coefficients in K{x, y, z}[[s]] that satisfies M (x, y, z, 0) = M 0 . By Grauert's approximation theorem (see [Gra72]), the coefficients can be taken in K{x, y, z, s}.
Using the fact that M is a matrix of relations, we imitate in Bresinsky's argument in (3.2),
Z(G, F 3 ) ⊂ Z(F 1 , F 3 ) = Z(F 1 , F 2 , F 3 ) ∪ Z(X 1 , Z 2 ).
The K-analytic germs Z(G, F 3 ) and Z(G, X 1 , Z 2 ) are deformations of the complete intersections Z(g, f 3 ) and Z(g, x a 1 , z c 2 ), and are thus of pure dimensions 2 and 1 respectively. It follows that Z(G, F 3 ) does not contain any component of Z(X 1 , Z 2 ) and must hence equal Z(F 1 , F 2 , F 3 ) = S. The claim follows.
Proposition 3.2. Set δ = min(∆ , ∆m, ∆n) and k = a 1 c 2 . Then the curve germ C defined by (3.3) is a set-theoretic complete intersection if
min(d 1 , d 2 , d 3 ) + δ ≥ γ, min(d 1 , d 3 ) + δ ≥ γ + k ,
or, equivalently,
min(d 1 , d 2 + k , d 3 ) + δ ≥ γ + k .
Proof. By Lemma 2.9 the first inequality yields the assumption Γ = Γ on (3.3). The conductor of ξ k O equals γ + k and contains (F if i )(ξ , η , ζ ), i = 1, 3, by the second inequality. This makes F i -f i , i = 1, 3, divisible by x k . Substituting into (3.1) yields (3.5) and by Lemma 3.1 the claim.
Remark 3.3. We can permute the roles of the f i in Bresinsky's method.
If the role of (f 1 , f 3 ) is played by (f 1 , f 2 ), we obtain a formula similar to (3.1), f b 1 = qf 2 + x k g with k = a 2 b 1 . Instead of x k , there is a power of y if we use instead (f 2 , f 1 ) or (f 2 , f 3 ) and a power of z if we use (f 3 , f 1 ) or (f 3 , f 1 ). The calculations are the same. In the examples we favor powers of x in order to minimize the conductor γ + k . We assume that a, b ≥ 2 and b + 2 < 2a + 1 so that < m < n. The maximal minors (1.6) of M 0 are then
Series of examples
f 1 = x a+1 -yz, f 2 = y b+1 -x a z, f 3 = z 2 -xy b
with respective weighted degrees
d 1 = (a + 1)(b + 2), d 2 = (2a + 1)(b + 1), d 3 = 2ab + 2b + 2
where d 1 < d 3 < d 2 . In Bresinsky's method (3.1) with k = 1 reads f 2 1 -y 2 f 3 = xg, g = x 2a+1 -2x a yz + y b+2 . We reduce the inequality in Proposition 3.2 to a condition on d 1 .
Lemma 4.1. The conductor of ξO is bounded by
γ + ≤ d 2 - m < d 3 .
In particular, d 2 ≥ γ + 2 and d 3 > γ + .
Proof. The subsemigroup Γ 1 = , m ⊂ Γ has conductor
γ 1 = ( -1)(m -1) = 2a(b + 1) = n + (a -1) + 1 ≥ γ.
To obtain a sharper upper bound for γ we think of Γ as obtained from Γ 1 by filling gaps of Γ
1 . Since 2n ≥ γ 1 , Γ \ Γ 1 = (n + Γ 1 ) \ Γ 1 .
The smallest elements of Γ 1 are i where i = 0, . . . , m . By symmetry of Γ 1 (see [Kun70]) the largest elements of N \ Γ 1 are
γ 1 -1 -i = n + (a -1 -i) , i = 0, . . . , m ,
and contained in n + Γ 1 since the minimal coefficient a -1 -i is nonnegative by
a -1 - m ≥ a -1 - m = (a -1)b -3 b + 2 > -1.
They are thus the largest elements of Γ \ Γ 1 . Their minimum attained at i = m then bounds
γ ≤ γ 1 -1 - m .
Substituting γ 1 + -1 = d 2 yields the first particular inequality. The second one follows from
d 2 -d 3 = 2a -b -1 = m -< m .
Proof of Corollary 0.1.
(a) This follows from Lemma 2.9. (b) By Lemma 4.1, the inequality in Proposition 3.2 simplifies to d 1 + δ ≥ γ + . The claim follows.
(c) Suppose that d 1 + q -n ≥ γ + for some q > n and a, b ≥ 3. Set p = γ -1 -. Then n > m + and Γ ∩ (m + , m + 2 ) can include at most n and some multiple of . Since ≥ 4 it follows that (m + , m + 2 ) contains a gap of Γ and hence γ -1 > + m and p > m. Moreover (a -1)b ≥ 4 is equivalent to Example 4.2. We discuss a list of special cases of Corollary 0.1.
d 1 + p -m ≥ γ + .
(a) a = b = 2. The monomial curve C 0 defined by (x, y, z) = (t 4 , t 5 , t 7 ) has conductor γ = 7. Its only admissible deformation is (x, y, z) = (t 4 , t 5 + st 6 , t 7 ). However this deformation is trivial and our method does not yield a new example. To see this, we adapt a method of Zariski (see [Zar06, Ch. III, (2.5), (2.6)]). Consider the change of coordinates x = x + 4s 5 y = t 4 + 4s 5 t 5 + 4s 2 5 t 6
and the change of parameters of the form τ = t+O(t 2 ) such that x = τ 4 . Then τ = t + s 5 t 2 + O(t 3 ) and hence y = τ 5 + O(t 7 ) and z = τ 7 + O(t 8 ). Since O(t 7 ) lies in the conductor, it follows that C ∼ = C 0 . In all other cases, Corollary 0.1 yields an infinite list of new examples. (b) a = 3, b = 2. Consider the monomial curve C 0 defined by (x, y, z) = (t 4 , t 7 , t 9 ). By Zariski's method from (a) we reduce to considering the deformation (x, y, z) = (t 4 , t 7 , t 9 + st 10 ). (c) a = b = 3. The monomial curve C 0 defined by (x, y, z) = (t 5 , t 7 , t 13 ) has conductor γ = 17. We want to satisfy p ≥ γ+ -d 1 +m = 9. The most general deformation of y thus reads y = t 7 + s 1 t 9 + s 2 t 11 + s 3 t 16 .
The parameter s 1 can be again eliminated by Zariski's method as in (a). This leaves us with the deformation (x, y, z) = (t 5 , t 7 + s 2 t 11 + s 3 t 16 , t 13 + s 4 t 16 , t 13 ) which is non-trivial due to part (c) of Corollary 0.1 with p = 11.
(d) a = 8, b = 3. The monomial curve C 0 defined by (x, y, z) = (t 5 , t 17 , t 28 ) has conductor γ = 47. The condition in part (b) of Corollary 0.1 requires p ≥ γ -d 1 + m = 19. In fact, the deformation (x, y, z) = (t 5 , t 17 + st 18 , t 28 ) is not flat since C has value semigroup Γ = Γ ∪ {46}. However C is isomorphic to the general fiber of the flat deformation in 4-space (x, y, z, w) = (t 5 , t 17 + st 18 , t 28 , t 46 ).
(a) If d 1 + δ ≥ γ, then Γ is the value semigroup of C. (b) If d 1 + δ ≥ γ + , then C is a set-theoretic complete intersection. (c) If a, b ≥ 3 and d 1 + q -n ≥ γ + , then C defined by p := γ -1 -> mis a non-monomial set-theoretic complete intersection.
14) and e = gcd( , m , n ). Note that , m , n are the submaximal minors of the matrix in (1.5). (H2') Given a, b, c ∈ N \ {0} and a 1 , b 2 ∈ N, define , m , n , d by = bd , (1.15) m = ad , (1.16) n d = a 1 b + ab 2 c , gcd(n , d ) = 1. (1.17) Remark 1.1. In the overlap case (1.11) the formulas (1.15)-(1.16) yield ( , m , n ) = (bc, ac, ab). Lemma 1.2. In case (H1), let ñ ∈ N be minimal with x ñ -z ˜ ∈ I for some ˜ ∈ N. Then gcd( ˜ , ñ) = 1 and (ñ, ˜ ) • gcd(b 1 , b 2 ) = (n , ).
2 and the claim follows. Lemma 1.3. (a) In case (H1), equations (1.12)-(1.14) recover , m, n. (b) In case (H2), equations (1.15)-(1.17) recover , m, n, d.
1 and hence q = 1. The claim follows. (b) By the minimal relation (1.8) gcd(a, b) = 1 and hence ( , m) = d • (b, a). Substitution into equation (1.9) and comparison with (1.17) gives n d = a 1 b+ab 2 c = n d with gcd(n, d) = gcd( , m, n) = 1 by hypothesis. We deduce that (n, d) = (n , d ) and then ( , m) = ( , m ). Proposition 1.4. (a) In case (H1'), a 1 , a 2 , b 1 , b 2 , c 1 , c 2 arise through (H1) from some numerical semigroup Γ = , m, n if and only if e = 1. In this case, ( , m, n) = ( , m , n ). (b) In case (H2'), a, b, c, a 1 , b 2 arise through (H2) from some from some numerical semigroup Γ = , m, n if and only if
-b 2 c of type (1.10). By choice of a , b , c it follows that a > a , b > b , c ≥ c . By Lemma 1.3.(b) d is the denominator of a 1 b +a b 2 c and = b d .
(b) By Lemma 1.3.(b) the conditions are necessary. Conversely assume that the conditions hold true. By definition (1.10) is a matrix of relations of ( , m , n ). By hypothesis (1.20) is a matrix of minimal relations of ( , m , n ). By (1.18) gcd( , m ) = d and hence by Lemma 1.3.(b) b
be a decreasing filtration by ideals such that F i = O for all i ≤ 0 and F 1 ⊂ m. Consider the Rees ring
Let
K be an algebraically closed complete non-discretely valued field. Let C be an irreducible K-analytic curve germ. Its ring O = O C is a one-dimensional K-analytic domain. Denote by Γ its value semigroup. Pick a representative W such that C = (W, w). We allow to shrink W suitably without explicit mention. Let O W be the normalization of O W . Then O W,w = (O, m) ∼ = (K{t }, t ) υ / / N ∪ {∞} is a discrete valuation ring. Denote by m W and m W the ideal sheaves corresponding to m and m. There are decreasing filtrations by ideal (sheaves)
Consider the curve germ C in (2.8) with K-analytic ring(3.4) O = O C = K{ξ , η , ζ }, (ξ , η , ζ ) = (ξ, η, ζ)(t,1), and value semigroup Γ ⊃ Γ. We aim to describe situations where C is a set-theoretic complete intersection under the hypothesis that Γ = Γ. By Proposition 2.7, (ξ, η, ζ) then generate the flat deformation of C 0 = Spec an (K[Γ]) in Proposition 2.3. Let F 1 , F 2 , F 3 be the defining equations from Proposition 2.7. Lemma 3.1. If g in (3.1) deforms to G ∈ K{x, y, z, s} such that
Redefining a, b suitably, we specialize to the case where the matrix in (1.7) is of the formM 0 = z xy y b z x a . By Proposition 1.4.(a) these define Spec(K[ , m, n ]) if and only if = b+2, m = 2a+1, n = ab+b+1(= (a+1) -m), gcd( , m) = 1.
By (b), C is a set-theoretic complete intersection.It remains to show that C ∼ = C 0 . This follows from the fact thatΩ 1 C 0 → K{t}dt has valuations Γ \ {0} whereas the 1-form ω = mydx -xdy = (m -p)t p+ -1 dt ∈ Ω 1 C → K{t}dt has valuation p + = γ -1 ∈ Γ.
While part (c) of Corollary 0.1 does not apply, C ∼ = C 0 remains valid. To see assume that C 0 ∼ = C induced by an automorphism ϕ of C{t}. Then ϕ(x) ∈ O C shows that ϕ has no quadratic term. This however contradicts ϕ(z) ∈ O C .
The deformation (2.7) satisfies Γ = Γ if and only if there is a f ∈ K{x, s} m with homogeneous components such that
and ord(f i (x, 1)) ≥ d i + min {∆ }. The flat deformation in Proposi-
tion 2.3 is then defined by
(2.10)
1),
and value semigroup Γ ⊃ Γ.
We now describe when (2.7) generate the flat deformation in Propo-
sition 2.3.
Proposition 2.7. (2.9) f (ξ) = f (ξ, s)s
Any O W -module M gives rise to an O X -module
With M = M w , its stalk at ι(0) becomes
Lemma 2.4. Spec an W (B) = Spec an W (B) and hence O Y,ι(0) = K{s, t}. Proof. By finiteness of ν (see [START_REF] Henri | Familles d'espaces complexes et fondements de la géométrie analytique[END_REF]Exp. 19, §3, Prop. 9]),
By the universal property of Spec an it follows that (see [Con06, Thm. 2.2.5.(2)]) Proof. By choice of F • there is a cartesian square
By hypothesis and (2.3) the symbols σ(ξ ) generate the graded Kalgebra gr F O. Then σ(ξ ) = σ(ξ ) generate gr F m/ gr F m 2 = gr F (m/m 2 ) and hence ξ generate m/m 2 over K. Then m = ξ O by Nakayama's lemma and hence O = K ξ by the analytic inverse function theorem.
Under the graded isomorphism (2.1) with ξ as in (2.6) (A/As) |
01759690 | en | [
"info.info-hc",
"info.info-ir"
] | 2024/03/05 22:32:10 | 2017 | https://hal.science/hal-01759690/file/KES2017.pdf | Jean-Baptiste Louvet
email: jeanbaptiste.louvet@insa-rouen.fr
Guillaume Dubuisson Duplessis
Nathalie Chaignaud
Laurent Vercouter
Jean-Philippe Kotowicz
Modeling a collaborative task with social commitments
Keywords: human-machine interaction, collaborative document retrieval, social commitments
Our goal is to design software agents able to collaborate with a user on a document retrieval task. To this end, we studied a corpus of human-human collaborative document retrieval task involving a user and an expert. Starting with a scenario built from the analysis of this corpus, we adapt it for a human-machine collaborative task. We propose a model based on social commitments to link the task itself (collaborative document retrieval) and the interaction with the user that our assistant agent has to manage. Then, we specify some steps of the scenario with our model. The notion of triggers in our model implements the deliberative process of the assistant agent. .
Introduction
Document retrieval (DR), which takes place in a closed database indexing pre-selected documents from reliable information resources, is a complex task for non-expert users. To find relevant documents, interfaces that allow formulating more specific queries are hardly used, because expertise in the domain terminology is needed. Users may therefore require external assistance to carry out this task according to their information need. Thus, we propose to design software agents able to collaborate with a user on a document retrieval task.
To this end, we adopt a cognitive approach by studying a corpus of human-human (h-h) collaborative document retrieval task in the quality-controlled health portal CISMeF (www.cismef.org) [START_REF] Darmoni | CISMeF : a structured health resource guide[END_REF] , which involves a user and an expert. In previous work [START_REF] Dubuisson Duplessis | Empirical Specification of Dialogue Games for an Interactive Agent[END_REF][START_REF] Dubuisson Duplessis | A Conventional Dialogue Model Based on Dialogue Patterns[END_REF] , extraction of dialogue patterns from the corpus has been done with their formalization into dialogue games [START_REF] Maudet | Modéliser l'aspect conventionnel des interactions langagières: la contribution des jeux de dialogue[END_REF] , which can be fruitfully exploited during the dialogue management process [START_REF] Dubuisson Duplessis | A Conventional Dialogue Model Based on Dialogue Patterns[END_REF] . This formalization uses the notion of social commitments introduced by Singh [START_REF] Singh | Social and psychological commitments in multiagent systems[END_REF] .
In this article, we are interested in linking the task itself (collaborative DR) and the interaction with the user that our assistant agent has to manage. We show that the formalism 3 used to model the dialogue games through social commitments can be enhanced to describe the task. Our model makes the link between a high level structure (the task) and low level interaction (dialogue games). Starting with a scenario built from the analysis of the corpus of h-h interaction, we adapt it for a human-machine (h-m) interaction. Then, we specify each step of this scenario in terms of social commitments.
This article consists of 5 parts: Section 2 gives a short state of the art on dialogue models. Section 3 describes the model we used to specify a collaborative task. Section 4 presents the scenario modeling the h-h collaborative document retrieval process and a discussion on its transposition in a h-m context. In Section 5, some steps of this scenario are detailed in terms of commitments. Finally, Section 6 gives some conclusions and future work.
Related work on reactive/deliberative dialogue model
To model dialogue, plan-based approaches and conventional approaches are often viewed as opposites, although some researchers argue that they are complementary [START_REF] Hulstijn | Dialogue games are recipes for joint action[END_REF][START_REF] Yuan | Informal logic dialogue games in human-computer dialogue[END_REF][START_REF] Dubuisson Duplessis | A Conventional Dialogue Model Based on Dialogue Patterns[END_REF]: communication processes are joint actions between participants that require coordination. Nevertheless, coordination must rely on conventions reflected by interaction patterns. Thus, dialogue can be considered as a shared and dynamic activity that requires both high-level deliberative reasoning processes and low-level reactive responses.
Dubuisson Duplessis 3 proposes to use a hybrid reactive/deliberative architecture where a theory of joint actions can be a "semantics" to the interaction patterns described as dialogue games. These dialogue games are modeled through the notions of social commitment and commitment store described below.
Social Commitments
Social commitments are commitments that bind a speaker to a community [START_REF] Singh | Social and psychological commitments in multiagent systems[END_REF] . They are public (unlike mental states such as belief, desire, intention), and are stored in a commitment store. Our formalization classically distinguishes a propositional commitment from an action commitment.
Propositional commitment. A propositional commitment means that an emitter (x) commits itself, at the present time, on a proposition towards a receiver (y). Such a commitment is written C(x, y, p, s), meaning that "x is committed towards y on the proposition p" is in state s. We only consider propositions describing the present, which leads us to consider only two states for a propositional commitment: a propositional commitment is initially inactive (Ina). After its creation, it enters the state created (Crt). A created commitment can be canceled by its emitter; in this case it goes back to an inactive state.
Action commitment. An action commitment means that an emitter (x) commits itself, at the present time, towards a receiver (y) on the happening of an action in the future. Such a commitment is written C(x, y, α, s), meaning that "x is committed towards y on the happening of the action α" is in state s. An action commitment is initially inactive (Ina). In this state, it can be created. The creation attempt can fail (Fal) or succeed (Crt). An action commitment in the Crt state is active. An active commitment can be violated, leading it to the Vio state; this corresponds to a situation in which the satisfaction conditions of the content of the commitment cannot be fulfilled anymore. An active commitment can be fulfilled, leading it to the Ful state. An action commitment is satisfied if its content has been completed.
In order to simplify the writing of the commitments, as in our case the interaction is between two interlocutors, we omit the receiver of the commitments. Consequently, a propositional commitment will be written C(x, p, s) and an action commitment will be written C(x, α, s).
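To make the formalism concrete, a minimal Python sketch of a commitment with the states listed above could look as follows (our own illustration, not code from the project):

from dataclasses import dataclass

@dataclass
class Commitment:
    """Commitment C(x, content, state); the receiver is omitted as explained above."""
    emitter: str
    content: str               # a proposition p or an action alpha
    is_action: bool = False
    state: str = "Ina"         # Ina, Crt, Fal, Vio or Ful

    def create(self):
        self.state = "Crt"

    def cancel(self):
        # Only a created propositional commitment goes back to inactive.
        if not self.is_action and self.state == "Crt":
            self.state = "Ina"

    def fulfill(self):
        if self.is_action and self.state == "Crt":
            self.state = "Ful"

    def violate(self):
        if self.is_action and self.state == "Crt":
            self.state = "Vio"

c = Commitment("x", "p")           # C(x, p, Ina)
c.create()
print(c.state)                     # Crt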
Conversational gameboard
The conversational gameboard describes the state of the dialogue between the interlocutors at a given time. The conversational gameboard describes the public part of the dialogic context supposed strictly shared. T i stands for the conversational gameboard at a time i (the current time). In the framework of this article, we use a simple theory of instants where "<" is the relationship of precedence. The occurrence of an external event increments the time and makes the table evolve. An external event can be dialogic (e.g. an event of enunciation of a dialog act) or extra-dialogic (e.g. an event like light_on showing the occurrence of the action of turning the light on).
The conversational gameboard includes a commitment store, which is a partially ordered set of commitments. It is possible to query the gameboard on the belonging (or non-belonging) of a commitment. This is formalized in equation 1a for belonging and 1b for non-belonging (c being a commitment).
T i c, true if c ∈ T i , false otherwise (1a) T i c, equivalent to ¬(T i c) (1b)
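A conversational gameboard can then be sketched as a time-indexed commitment store supporting the membership queries (1a) and (1b); the class below is our own simplified illustration:

class Gameboard:
    """Public dialogic context shared by the two interlocutors (illustrative sketch)."""

    def __init__(self):
        self.time = 0
        self.store = []            # the commitment store, kept here as a flat list

    def on_event(self, new_commitments):
        """A dialogic or extra-dialogic event increments the time and updates the store."""
        self.time += 1
        self.store.extend(new_commitments)

    def models(self, commitment):
        """Query (1a): does the gameboard contain the commitment?"""
        return commitment in self.store

    def not_models(self, commitment):
        """Query (1b): negation of (1a)."""
        return not self.models(commitment)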
Dialogue games
A dialogue game is a conventional bounded joint activity between an initiator and a partner. Rules of the dialogue game specify the expected moves for each participant, which are supposed to play their roles by making moves according to the current stage of the game. This activity is temporarily activated during the dialogue for a specific goal.
A dialogue game is a couple (type, subject), where type belongs to the set of existing dialogue games and subject is the goal of the game, expressed in the language of the subject of the game. We usually write a game under the form type(subject). A game is defined with social commitments. It is a quintuplet characterized, for the initiator and the partner, by [START_REF] Dubuisson Duplessis | Empirical Specification of Dialogue Games for an Interactive Agent[END_REF][START_REF] Dubuisson Duplessis | A Conventional Dialogue Model Based on Dialogue Patterns[END_REF]: entry conditions, describing the conditions the conversational gameboard must fulfill to enter the game; termination conditions, separated into two categories, success conditions and failure conditions; rules, expressed in terms of dialogic commitments, specifying the sequencing of expected or forbidden acts; and effects, specifying the contextualized effects of dialogical actions in terms of generation of extra-dialogic commitments (i.e. related to the task).
A sub-dialogue game is a child dialogue game played in an opened parent game. The emitter of the sub-game can be different from the one of the parent game. Conditions for playing a sub-dialogue game can be hard to specify [START_REF] Maudet | Modéliser l'aspect conventionnel des interactions langagières: la contribution des jeux de dialogue[END_REF] .
Dialogical action commitment.
A dialogical action commitment is an action commitment contextualized in a dialogue game. It means that in a dialogue game, a participant is committed to produce dialogical actions conventionally expected relatively to an opened dialogue game. For example, in the context of the offer dialogue game, if x plays offer(x, α), the dialogical action commitment C(y, acceptOffer(y, α)|declineOffer(y, α), Crt) will be created, showing that the receiver of the dialogical action can accept or decline the offer.
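The offer example can be rendered schematically as follows (our own illustration; the commitment store is reduced to a plain list and 'repeat_query' is a made-up action label):

def play_offer(store, x, y, alpha):
    """Playing offer(x, alpha) creates the dialogical action commitment
    C(y, acceptOffer(y, alpha) | declineOffer(y, alpha), Crt)."""
    commitment = (y, f"acceptOffer({y}, {alpha}) | declineOffer({y}, {alpha})", "Crt")
    store.append(commitment)
    return commitment

store = []
print(play_offer(store, "x", "y", "repeat_query"))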
Model to specify a collaborative task
This section describes the model we use to specify the task using commitments, conversational gameboard and dialogue games. First of all, we consider that a task can be split into subtasks that we call steps. Each step of the task is described by a table (named step table) divided in three parts: The name of the step, the access conditions to this step and a list of expected behaviors of each participant to the dialogue. Expected behaviors are alternatives and can be played in any order. An expected behavior is a description of:
• A conventionally expected dialogue game, with its emitter and content (action or proposition);
• The possible outputs of this dialogue game;
• A trigger (optional), that is, conditions that must be fulfilled to play the expected game.
To define the trigger ϕ that emits the predicate E, we use the notation 2a, ϕ being a formula. To express that the conversational gameboard T i fulfills the conditions for a trigger to emit E, we use the notation 2b.
E : T i → ϕ (2a) T i E (2b)
Prior to playing an expected game, the emitter must respect the entry conditions of this game. To shorten the writing of a step table, we do not repeat these conditions. Instances of this model for our specific task can be found in Tables 3 and 4. Table 1 describes the access conditions and the expected dialogue games of the step and the modifications they bring to the conversational gameboard T i . In our example, the access conditions are that T i contains C(z, {α, p}, s) and triggers the predicate E 0 . We give an example of expected behavior with DG(z, {β 1 , p 1 }) as dialogue game and T i+1 C(z, {β 1 , p 1 }, s) as a modification to the conversational gameboard brought by DG. For games played by a software agent, the emission of a predicate (e.g. T i E 1 for the first dialogue game of our example) has to be triggered in order to play the expected dialogue game. For games played by a human, there is no trigger row in the table, as the decision to play a dialogue game only depends on his own cognitive process. Some expected dialogue games can be played several times, noted with * . This symbol is propagated in the output to show the commitments that can be generated several times. This is shown with the first dialogue game of our example. Sub-dialogue games (like SDG(z', {γ, p 2 }) in Table 1) that can be played are indicated under their parent game (played by any of the participants), headed by a " ".
The step table is a complete description of what can conventionally happen. A generic example of step table is given in Table
This model gives a clear definition of what is conventionally expected from each participant in one step of the task. It is also possible to see the triggers as defining the intentions of the agent. As a matter of fact, most of the agent's decisions are made through the triggers, and the agent's behavior can be adapted by modifying these triggers.
Name of the step
Access  ∧  T i C(z, {α, p}, s)   T i E 0
Expected behaviors
Expected game  DG(z, {β 1 , p 1 }) *
Trigger  T i E 1 (only if z is a software agent)
Output  T i+1 C(z, {β 1 , p 1 }, s) *
Expected game  DG(-, {β 2 , p 2 })
  Sub-game  SDG(z, {γ, p 2 })
Trigger  T i E 2 (only if z is a software agent)
Output  T i+1 C(-, {β 2 , p 2 }, Crt)   T i+1 C(z, {γ, p 2 }, s)
Expected game  DG(z, δ)
Trigger  T i E 3 (only if z is a software agent)
Output  ∨  DA1(z', δ) ⇒ T j C(z', δ, s 1 )   DA2(z', δ) ⇒ T j C(z', δ, s 2 )
We use a specific syntax for dialogue games involving dialogical action commitments. To reduce the size of the tables, we directly map the expected dialogical action commitments to the expected dialogical actions of the dialogue game. For example, the third part of Table 1 shows the writing for the dialogue game DG(z, δ): Playing the dialogue game DG(z, δ) creates the dialogical action commitment C(z', DA1(z', δ)|DA2(z', δ), Crt) (DA1 and DA2 being dialogical actions), playing the dialogical action DA1(z', δ) creates the commitment C(z', δ, s 1 ) and playing the dialogical action DA2(z', δ) creates the commitment C(z', δ, s 2 ).
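As a purely illustrative encoding (ours, not part of the paper), the last expected behavior of Table 1 can be written as a small structure mapping each alternative dialogical action to the commitment it creates:

# Hypothetical encoding of the expected behavior built around DG(z, delta) in Table 1.
expected_behavior = {
    "game": "DG(z, delta)",
    "trigger": "E3",  # evaluated only if z is a software agent
    "outputs": {      # alternative dialogical actions and the commitments they create
        "DA1(z', delta)": "C(z', delta, s1)",
        "DA2(z', delta)": "C(z', delta, s2)",
    },
}

def apply_dialogical_action(board: set, action: str, behavior=expected_behavior) -> set:
    # Playing one of the expected dialogical actions adds the corresponding commitment.
    return board | {behavior["outputs"][action]}

print(apply_dialogical_action(set(), "DA1(z', delta)"))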
Analysis of the h-h collaborative document retrieval process
Information Retrieval
This section introduces some models of information retrieval (IR), carried out either by an isolated person or in a collaborative framework. These models can be applied to DR.
IR is generally considered a problem-solving process [8] involving a searcher with an identified information need. The problem is then to fulfill this lack of information. Once the information need is specified, the searcher chooses a plan that he will execute during the search itself. He evaluates the results found and possibly repeats the whole process. IR is considered an iterative process that can be split into a series of steps [START_REF] Broder | A taxonomy of Web search[END_REF][START_REF] Marchionini | Find What You Need, Understand What You Find[END_REF][START_REF] Sutcliffe | Towards a cognitive theory of information retrieval[END_REF] : (i) information need identification, (ii) query specification (information need formulation and expression in the search engine, etc.), (iii) query launch, (iv) results evaluation, (v) if needed, query reformulation and repetition of the cycle until satisfying results are obtained or the search is abandoned.
The standard model is limited in two respects. On the one hand, the information need in this process is seen as static. On the other hand, the searcher repeatedly refines his query until he finds a set of documents fitting his initial information need. Some studies showed that, on the contrary, the information need is not static and that the goal is not to determine a unique query returning a set of documents matching the information need [START_REF] Bates | The Design of Browsing and Berrypicking Techniques for the Online Search Interface[END_REF][START_REF] O'day | Orienteering in an information landscape: how information seekers get from here to there[END_REF] . Bates proposes the model of "berrypicking" [START_REF] Bates | The Design of Browsing and Berrypicking Techniques for the Online Search Interface[END_REF] which lays the emphasis on two points. The first one is that the information need of the searcher evolves thanks to the resources found during the search: encountered information can lead the search in a new and unforeseen direction. The second one is that the information need is not satisfied by a unique set of documents obtained at the end of the search, but by a selection of resources collected all along the process. To sum up, the IR process is opportunistic and its progression influences the final result.
Study of the h-h collaborative document retrieval process
To understand the collaborative aspect of the DR process of a user assisted by a human expert, we carried out a study on a h-h interaction. This study is based on the analysis of the corpus collected during the Cogni-CISMeF project [START_REF] Loisel | A conversational agent for information retrieval based on a study of human dialogues[END_REF][START_REF] Loisel | An Issue-Based Approach to Information Search Modelling: Analysis of a Human Dialog Corpus[END_REF] .
The Cogni-CISMeF corpus
The corpus consists of assistance dialogues about the DR task between an expert and a user in a co-presence situation. The user expresses his information need. The expert has access to the CISMeF portal and has to lead the search in cooperation with the user. The CISMeF portal has a graphical user interface and a query language that makes it possible to decompose a query into MeSH ("Medical Subject Headings") lexicon elements. The CISMeF terminology contains keywords, qualifiers (symptoms, treatments. . . ), meta-terms (medical specialties) and resource types (databases, periodicals, images. . . ). The system also allows for extended queries, although many users are not comfortable with them.
The experiment was carried out with 21 participants (e.g., researchers, students, secretaries of the laboratory) submitting a query to one of the two CISMeF experts (researchers of the project who learned to use the CISMeF terminology). The corpus includes the transcript of the 21 dialogues (12 for the first expert and 9 for the second) and contains around 37 000 words.
Figure 1. Scenario presenting the phases (squares) and the steps (ellipses) in a collaborative document retrieval task.
The collaborative document retrieval scenario
The analysis of the corpus enabled us to identify and characterize the different phases of the Cogni-CISMeF dialogues that play a role in the task progress. Five phases were distinguished:
• Verbalization: It is the establishment of the search subject between both participants. It always starts with a request formulation from the user and can be followed by spontaneous precisions. The expert can then start the query construction if he considers that the verbalization contains enough keywords, ask for precisions if not, or try to reformulate the user's verbalization;
• Query construction: It is the alignment of the terms of the user's verbalization with the CISMeF terminology in order to fill in the query form;
• Query launch: It is the execution of the current query by the expert. This phase is often implicit;
• Results evaluation: The expert evaluates the results of the query. If they are not satisfying, he decides to directly repair the query. Otherwise he presents them to the user. If the latter finds them satisfying, the goal is reached and the search is over; if he finds them partially satisfying (not adapted to his profile, or not totally related to the information need) or not satisfying, the query must be repaired. If the results are rejected by the user, it is also possible to abandon the search;
• Query repair: The expert and the user try to use tactics to modify the query while respecting the information need. Three tactics were observed: Precision (to refine the query), reformulation (using synonyms for example) and generalization (to simplify the query). However, these tactics are not mutually exclusive: It is possible to combine precision or generalization with reformulation.
In addition to these phases, an opening and a closing phase were observed. The opening phase is optional and consists simply of greetings (information demand about the user's name, age. . . ). Finally, the closing phase may give ideas for a new search. The analysis of this corpus showed that the DR task fulfilled by the participants is iterative, opportunistic, strategic and interactive [START_REF] Bates | Information search tactics[END_REF][START_REF] Bates | Where should the person stop and the information search interface start?[END_REF] . The iterative aspect of this process is illustrated by the systematic repetition of the pattern launch/evaluation/repair. On top of that, we remarked that it is clearly led by the expert.
The dialogue in Table 2 is an example of a query repair showing the iterative, opportunistic, strategic and interactive aspects. The expert suggests widening the query (generalization, utterance A1). The partners jointly elaborate a plan to modify the query. In this case, it is mainly the user who suggests the moves to carry out (utterances B4 and B6) and the expert agrees (utterances A5 and A7). Then, the expert suggests adding the qualifier "diagnostic" (precision, utterance A8). The user accepts and suggests the plan execution (utterance B9). The plan execution is accepted and done by the expert (utterance A10), who eventually launches the query.
The scenario presented in Figure 1 synthesizes the phases (squares) split into several steps (ellipses) and the possible runs. The dashed ellipses correspond to actions that can be carried out implicitly by the participants of the interaction.
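Purely as an illustration (our own encoding; the phase names come from the list above and the transitions are a simplification of Figure 1), the scenario can be represented as a small graph of phases whose possible runs include the launch/evaluation/repair loop:

# Simplified, hypothetical encoding of the phases of Figure 1 and their possible successors.
scenario = {
    "opening":            ["verbalization"],                 # optional greetings
    "verbalization":      ["query construction"],
    "query construction": ["query launch"],
    "query launch":       ["results evaluation"],
    "results evaluation": ["query repair", "closing"],       # repair, or stop (satisfied/abandoned)
    "query repair":       ["query launch"],                  # precision / reformulation / generalization
    "closing":            [],
}

def possible_next_phases(phase: str) -> list:
    return scenario[phase]

print(possible_next_phases("results evaluation"))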
Discussion
It is possible to draw inspiration from the h-h interaction for the h-m interaction. Thus, the scenario presented in the context of the h-h interaction (Figure 1) can be transposed to a h-m context. However, it has to be adapted in order to fit the constraints of the h-m interaction.
The h-m interaction framework changes the collaboration situation insofar as it gives the user the ability to lead the search and to modify the query without requiring the system's agreement. It is an important change which gives the user new privileges and more control over the interaction. It implies a restriction of the software assistant's permissions compared to those of the human expert. As a matter of fact, the system can take initiatives to modify the query by suggesting modifications that will be applied only if they are accepted by the user. However, this inversion of the query modification rights allows each participant to take part in the interaction: as far as the opportunistic aspect is concerned, the user and the system can participate in the interaction at any moment.
Although it lacks the cognitive abilities of a human expert, the software agent has some assets that can be beneficial for a collaborative DR task. The system can access online dictionaries of synonyms, hyponyms, hypernyms and "see also" links, allowing it to find terms related to the user's verbalization. For the reformulation, the system can make use of the lexical resources on the query terms [START_REF] Audeh | Semantic query expansion for fuzzy proximity information retrieval model[END_REF] . In the context of CISMeF, Soualmia [START_REF] Soualmia | Strategies for Health Information Retrieval[END_REF] offers tools to correct, refine and enrich queries. On top of that, the assistant agent can store the previous DR sessions and take advantage of them (by linking information needs, queries and documents) to find terms related to the current search [START_REF] Guedria | Customized Document Research by a Stigmergic Approach Using Agents and Artifacts[END_REF] . It also has the ability to launch queries "in the background" (i.e. without notifying the user) before suggesting any query modification or launch. This makes it possible, before suggesting a modification to the user, to check whether it brings interesting results.
Application to the Cogni-CISMeF project
We have described a h-h collaborative DR process from which we want to draw inspiration in a h-m framework. The model described in Section 3 can be used for a h-m collaboration. As a matter of fact, this model makes it possible to express the different characteristics of a h-m collaborative DR:
• Iterative: It is possible to give a circular sequencing to step tables using their access conditions (to represent the launch/evaluation/repair loop, for example);
• Opportunistic: Depending on the current state of the conversational gameboard, the most relevant dialogue games can be played;
• Interactive: Each participant can take part in the interaction at any moment;
• Strategic: It is possible to describe dialogue game combinations in order to reach given states of the conversational gameboard.
An interesting aspect of our model is that the assistant agent can behave according to different levels of cognition. This can be done thanks to the triggers that capture the deliberative process of the agent. The agent's reasoning can be very reactive, with simple rules, or, on the contrary, it can entail a higher-level decision process.
In this section, we describe the steps of the DR scenario (see Section 4) in terms of our model (see Section 3). Our goal is to show the expressiveness of our model applied to a h-m collaborative DR task drawn from an h-h one. Only some relevant step tables are presented to illustrate the interest of the model.
Verbalization precision step
This step takes place in the verbalization phase. The user has made a first verbalization of his information need but he has to give more details about it, so he is committed to performing the action of verbalization precision (T i C(x, preciseVerbalization, Crt) present in the "Access" part).
In this situation, it is conventionally expected from the user to add verbalization expressions e to his verbalization with an "inform" dialogue game (inform(x, verbalizationExpression(e))). This action can be repeated as often as the user needs (expressed by the * following the dialogue game). The consequences of this dialogue game on the conversational gameboard are: (i) Every time the dialogue game is played, a new commitment on a verbalization expression is created (Crt) and (ii) The commitment on the action of verbalization precision is fulfilled (Ful). The * in the output is propagated for the creation of the commitment on a verbalization expression but not on the action fulfillment commitment (it is performed only the first time the dialogue game is played).
It is also expected that the user tells the agent when his precision is finished. This can be performed by playing the dialogue game inform(x, verbalizationComplete) that creates the commitment C(x, verbalizationComplete, Crt) in the conversational gameboard. The model of this step is shown in Table 3.
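For illustration only (this encoding is ours, not part of the formal model), the verbalization precision step of Table 3 could be stored as a small step-table structure:

# Hypothetical encoding of the "Verbalization precision" step table (Table 3).
verbalization_precision = {
    "name": "Verbalization precision",
    "access": ["C(x, preciseVerbalization, Crt)"],
    "expected": [
        {   # repeatable game, marked * in the table
            "game": "inform(x, verbalizationExpression(e))",
            "repeatable": True,
            "output": ["C(x, verbalizationExpression(e), Crt)",   # created each time
                       "C(x, preciseVerbalization, Ful)"],        # fulfilled the first time only
        },
        {
            "game": "inform(x, verbalizationComplete)",
            "repeatable": False,
            "output": ["C(x, verbalizationComplete, Crt)"],
        },
    ],
}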
Query precision step
This step takes place in the query repair phase. Query q was launched (T i lastQueryLaunched(q)) and its results were evaluated by the user, who turned them down (T i C(x, ¬queryResultsSatisfying(q), Crt)). We place ourselves in the context where the query is too general (T i queryTooGeneral(q)) and must be made more precise. This implies some conventionally expected behaviors from each participant: The user is expected to add keywords to the query or to request a query launch. The agent is expected to offer the user to add a keyword or to specify one, the goal being to reduce the number of results. It can inform the user that the keyword it is offering for the query specification is actually a specification of a keyword of the query. It can also offer the user to launch the query. All these expected behaviors are explained in the remainder of this section.
The addition of keywords to the query by the user corresponds to the inform(x, queryKeyWord(kw)) * dialogue game. The user requests the agent to launch the query with a request(x, launchQuery(q)) dialogue game. In this case, our collaborative agent always accepts the requests formulated by the user (which is why we only put the acceptRequest(y, launchQuery(q)) dialogical action commitment in the output row).
The agent offers to add keywords to the query with an offer(y, addKeyWord(kw)) * and to specify the query with an offer(y, specifyKeyWord(kw, skw)) * . In each case, the user can accept or decline the offer. If he accepts the offer, the conversational gameboard is updated with the commitments generated by execution of the action (keyword addition that generates C(y, queryKeyWord(kw), Crt) * or query specification that generates C(y, queryKeyWord(skw), Crt) * and C(y, queryKeyWord(kw), Ina) * , as skw replaces kw). If the user declines the offer, the action fails (the commitment on the action becomes Fal: C(y, addKeyWord(kw), Fal) * or C(y, specifyKeyWord(kw, skw), Fal) * ).
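To make this bookkeeping concrete, here is a rough sketch (ours; commitments are again reduced to plain strings) of what accepting or declining the keyword specification offer does to the gameboard:

def accept_specify(board: set, kw: str, skw: str) -> set:
    # skw replaces kw: a commitment on the new keyword is created (Crt),
    # and the one on the old keyword becomes inactive (Ina).
    return board | {f"C(y, queryKeyWord({skw}), Crt)", f"C(y, queryKeyWord({kw}), Ina)"}

def decline_specify(board: set, kw: str, skw: str) -> set:
    # The offered action fails (Fal).
    return board | {f"C(y, specifyKeyWord({kw}, {skw}), Fal)"}

print(accept_specify(set(), "colon", "colonic neoplasm"))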
In the case where the agent proposes to specify a keyword, it can play a sub-dialogue game (inform(y, isSpecification(kw, skw))) to inform the user that the new keyword is a specification of an existing one. The trigger of this sub-dialogue game is empty, because all the conditions needed to play it (i.e. checking that skw is actually a specification of kw) are already reached in the parent dialogue game.
When the query has been specified enough, the agent can offer to launch it, if the user did not already decline the offer to launch it.
For illustrative purposes, we defined arbitrarily in Equations 3 the trigger predicates used in Table 4. These definitions depend on the expected behavior we want our agent to adopt. Trigger 3a makes sure that the query q is the last one launched. Trigger 3b states that the query q is too general (i.e. its results are too numerous); the predicate queryResultsNb(q) gives the number of results returned by the search engine when launching query q. Trigger 3c means that a keyword kw, related to the verbalization of the user (relatedToVerbalization(kw)), brings precision to the current query (currentQuery(q)). Trigger 3d gives a keyword skw more specific than a current one kw; the predicate isMeSHHyponym(kw, skw) states that skw is a hyponym (a more specific word) of kw according to the MeSH lexicon. Trigger 3e expresses that query q is precise enough to be proposed to the user for launch.

lastQueryLaunched(q) : T i → ∃n ≤ i, ∧ T n C(y, queryLaunched(q), Crt) T n-1 C(y, queryLaunched(q), Crt), ∧ m > n, ∀q' ≠ q, ∧ T m C(y, queryLaunched(q'), Crt) T m-1 C(y, queryLaunched(q'), Crt)   (3a)

queryTooGeneral(q) : T i → queryResultsNb(q) ≥ 75   (3b)

relevantForPrecision(kw) : T i → ∃q, currentQuery(q), queryResultsNb(q) > queryResultsNb(q + kw), relatedToVerbalization(kw)   (3c)

specification(kw, skw) : T i → isMeSHHyponym(kw, skw)   (3d)

queryPreciseEnough(q) : T i → ∃q, currentQuery(q), queryResultsNb(q) < 75   (3e)
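A rough executable reading of triggers 3b, 3c and 3e follows (our sketch; the 75-result threshold comes from the equations, while the search-engine interface and the list representation of queries are assumptions):

# Hypothetical stand-in for the CISMeF search engine; queries are lists of keywords.
class StubEngine:
    def search(self, query):
        return ["doc"] * max(0, 200 - 50 * len(query))

def query_results_nb(engine, query) -> int:
    return len(engine.search(query))

def query_too_general(engine, query) -> bool:      # trigger (3b)
    return query_results_nb(engine, query) >= 75

def query_precise_enough(engine, query) -> bool:   # trigger (3e)
    return query_results_nb(engine, query) < 75

def relevant_for_precision(engine, query, kw, related_to_verbalization) -> bool:  # trigger (3c)
    # Adding kw must reduce the result count, and kw must relate to the user's verbalization.
    return (query_results_nb(engine, query + [kw]) < query_results_nb(engine, query)
            and related_to_verbalization(kw))

engine = StubEngine()
print(query_too_general(engine, ["colon"]))                              # True (150 results)
print(query_precise_enough(engine, ["colon", "cancer", "diagnostic"]))   # True (50 results)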
Conclusion and future work
Our work is based on a cognitive study of a corpus of h-h collaborative DR tasks for the quality-controlled health portal CISMeF. Starting from a scenario of a collaborative DR task, built from the analysis of this corpus, we adapted it to a h-m context, where an assistant (software) agent helps a user in his task.

In this article, we described a model to specify a collaborative task in terms of social commitments. We showed how these social commitments link the task itself to the interaction with the user. This model has been applied to the CISMeF portal to specify each step of the scenario in terms of social commitments. The notion of trigger in our model implements the deliberative process of the assistant agent. The agent's reasoning can be very reactive, with simple rules, or, on the contrary, it can entail a higher-level decision process.

We are currently working on these triggers to express the decision process of our assistant agent. As a matter of fact, they concern the agent's reasons both for entering a step of the scenario and for choosing the dialogue game to play.

The validation of our system consists in evaluating the added value brought to CISMeF. The idea is to compare the queries made by the user with and without our assistant agent. This comparison would be made by computing query precision and recall.
Finally, to prove the genericity of our approach (the scenario, the model, the dialogue games, ...), we have started to investigate collaborative document retrieval on a transport law database (http://www.idit.asso.fr).
Table 1. Generic step table. DG is a dialogue game and SDG is a sub-dialogue game. α, β k , γ are actions; p, p k and p k ' are propositions; DA1 and DA2 are dialogical actions; E k are predicates. z and z' stand for the participants in the interaction; s, s 1 and s 2 are states.
Table 2 presents a translated extract of a dialogue (explained in Section 4.2.2).

Table 2. Translated extract of a dialogue from the corpus (VD06). A is the expert and B the searcher.
A1  [. . . ] Perhaps we can try to widen the search in our case if we consider the words we used
B2  We didn't use that much already
A3  Ummm, forget it then
B4  Why remove / we can remove "analysis"
A5  So let's remove "analysis"
B6  And "diagnostic"
A7  Yes
[. . . ]
A8  [. . . ] I am almost tempted to put diagnostic anyway because / because we will see what it yields
B9  Yes normally it's a diagnostic / ok / let's try like this
A10 We will try like this otherwise we will remove extra things to have some / so I launch the search again with the "cancerology" thematic access, the CISMeF keyword "colon" and the qualifier "diagnostic" without specifying the type of searched resources
Table 3. Verbalization precision step, i < j.
Verbalization precision
Access T i C(x, preciseVerbalization, Crt)
Expected game inform(x, verbalizationExpression(e)) *
Output ∧ T j C(x, verbalizationExpression(e), Crt) * T j C(x, preciseVerbalization, Ful)
Expected game inform(x, verbalizationComplete)
Output T j C(x, verbalizationComplete, Crt)
Table 4. Query precision step, i < j. x stands for the user, y for the expert agent and z for either the user or the expert agent.

Query precision
Access  ∧  T i C(x, ¬queryResultsSatisfying(q), Crt)   T i lastQueryLaunched(q)   T i queryTooGeneral(q)
Expected game  inform(x, queryKeyWord(kw)) *
Output  T j C(x, queryKeyWord(kw), Crt) *
Expected game  request(x, launchQuery(q))
Output  acceptRequest(y, launchQuery(q)) ⇒ T j C(y, launchQuery(q), Crt)
Expected game  offer(y, addKeyWord(kw)) *
Trigger  T i relevantForPrecision(kw)
Output  ∨  acceptOffer(x, addKeyWord(kw)) ⇒ T j C(y, queryKeyWord(kw), Crt) *   declineOffer(x, addKeyWord(kw)) ⇒ T j C(y, addKeyWord(kw), Fal) *
Expected game  offer(y, specifyKeyWord(kw, skw)) *
Trigger  T i C(z, queryKeyWord(kw), Crt) ∧ T i specification(kw, skw)
Output  ∨  acceptOffer(x, specifyKeyWord(kw, skw)) ⇒ ∧ [ T j C(y, queryKeyWord(skw), Crt) * ; T j C(y, queryKeyWord(kw), Ina) * ]   declineOffer(x, specifyKeyWord(kw, skw)) ⇒ T j C(y, specifyKeyWord(kw, skw), Fal) *
  Sub-game  inform(y, isSpecification(kw, skw))
  Trigger  ∅
  Output  T j C(y, isSpecification(kw, skw), Crt)
Expected game  offer(y, launchQuery(q)) *
Trigger  T j C(y, launchQuery(q), Fal) ∧ T i queryPreciseEnough(q)
Output  ∨  acceptOffer(x, launchQuery(q)) ⇒ T j C(y, launchQuery(q), Crt)   declineOffer(x, launchQuery(q)) ⇒ T j C(y, launchQuery(q), Fal) *
01759804 | en | [
"shs",
"sdv",
"sdv.mhep"
] | 2024/03/05 22:32:10 | 2016 | https://hal.science/hal-01759804/file/NonAdherenceControl_Preprint.pdf | Caroline Huyard
email: caroline.huyard@univ-lille2.fr
Luc Derijks
Louis Lieverse
Intentional Nonadherence as a Means to Exert Control
Keywords: adherence, compliance, nonadherence, noncompliance, chronic, illness and disease, experiences, motivation, power, empowerment, self-care, self, Western Europe, Europe, Europeans, interviews, research strategies, coding, qualitative, failure modes and effects analysis (FMEA)
Medication adherence is a major issue for patients with a chronic illness, who sometimes rationally choose temporary nonadherence. This study aims at better understanding intentional nonadherence and especially why it seems to fluctuate over time. It is based on 48 semi-structured interviews conducted in a hospital in the Netherlands with patients who had been prescribed a medication for a chronic disease for at least 1 year, and who had either type 2 diabetes, hypertension, Parkinson's disease, inflammatory bowel disease, or chronic myeloid leukemia. The analysis uses a simplified version of the failure modes and effects analysis (FMEA) method. Intentional nonadherence appeared to be the result of the respondents' desire (a) to exert control over the treatment and its effects on their body, and (b) to control the hold of the treatment on their daily life. This result provides a rationale for the fluctuation of intentional nonadherence behavior.
Introduction
Nonadherence, or poor adherence, is one of the major contemporary challenges in the field of health. Adherence can be defined as "the extent to which a person's behavior-taking medication, following a diet, and/or executing lifestyle changes, corresponds with agreed recommendations from a healthcare provider" (World Health Organization, 2003, p.4). According to the World Health Organization (2003), nonadherence affects 50% of chronic patients in developed countries and leads to poor health outcomes and increased health costs, so that increasing the effectiveness of adherence interventions may have a greater impact on the health of the population than any improvement in specific medical treatments. Understanding the reasons for nonadherence is therefore essential.
As chronic diseases were expanding in the early 1980s, some authors suggested that it could be useful to distinguish between intentional and unintentional adherence [START_REF] Cooper | Intentional prescription nonadherence (noncompliance) by the elderly[END_REF], arguing that if nonadherence was intentional, efforts aimed at reducing forgetting to take the medications would have little chance of helping the patients. In line with this insight, many studies have tried to evidence what factors could lead patients to intentional nonadherence and what factors could lead them to unintentional nonadherence. Various definitions of intentional nonadherence were crafted for this purpose, ranging from the rather neutral views that intentional nonadherence involves deliberate decisions to adjust medication use [START_REF] Laba | Understanding rational non-adherence to medications. A discrete choice experiment in a community sample in Australia[END_REF] and occurs when patients actively choose not to follow treatment recommendations [START_REF] Daleboudt | Intentional and unintentional treatment nonadherence in patients with systemic lupus erythematosus. Arthritis Care & Research[END_REF] or discontinue, skip or alter the dose of medication they had been prescribed [START_REF] Iihara | Comparing patient dissatisfaction and rational judgment in intentional medication non-adherence versus unintentional nonadherence[END_REF] to more hypothesis-laden views, such as the ones that nonadherence occurs when patients miss or alter doses to suit their needs [START_REF] Wroe | Intentional and unintentional nonadherence: A study of decision making[END_REF] and that it is associated with motivation and patients' beliefs about taking medications [START_REF] Clifford | Understanding different beliefs held by adherers, unintentional nonadherers, and intentional nonadherers: Application of the Necessity-Concerns Framework[END_REF].
There is now a large body of studies that investigated why patients would rationally choose nonadherence. Some major factors have been identified. Most studies agree that medications' side effects strongly contribute to patients' intentional nonadherence [START_REF] Khan | Barriers to and determinants of medication adherence among hypertensive patients attended National Health Service Hospital, Sunderland[END_REF][START_REF] Laba | Patient preferences for adherence to treatment for osteoarthritis: The Medication Decisions in Osteoarthritis Study (MEDOS)[END_REF][START_REF] Laba | Understanding if, how and why non-adherent decisions are made in an Australian community sample: A key to sustaining medication adherence in chronic disease[END_REF][START_REF] Noens | Measurement of adherence to BCR-ABL inhibitor therapy in chronic myeloid leukemia: Current situation and future c h a l l e n g e s . H a e m a t o l o g i c a[END_REF][START_REF] Thunander Sundbom | Women and men report different behaviours in, and reasons for medication non-adherence: A nationwide Swedish survey[END_REF]. A large number of studies have also highlighted the role of patients' beliefs about the benefits and the necessity of their treatment in their decision not to take it as prescribed [START_REF] Chambers | Adherence to medication in stroke survivors: A qualitative comparison of low and high adherers[END_REF][START_REF] Iihara | Beliefs of chronically ill Japanese patients that lead to intentional non-adherence to m e di c a t i on[END_REF][START_REF] Laba | Understanding rational non-adherence to medications. A discrete choice experiment in a community sample in Australia[END_REF][START_REF] Wileman | Choosing not to take phosphate binders: The role of dialysis patients' medication beliefs[END_REF], and in line with this, studies showed that patients' judgments on the absence of symptoms [START_REF] Ulrik | The patient's perspective: Adherence or non-adherence to asthma controller therapy? The Journal of Asthma[END_REF], their concern beliefs [START_REF] De Vries | Medication beliefs, treatment complexity, and nonadherence to different drug classes in patients with type 2 diabetes[END_REF], perceived risks [START_REF] Laba | Understanding rational non-adherence to medications. A discrete choice experiment in a community sample in Australia[END_REF][START_REF] Ulrik | The patient's perspective: Adherence or non-adherence to asthma controller therapy? The Journal of Asthma[END_REF][START_REF] Wileman | Choosing not to take phosphate binders: The role of dialysis patients' medication beliefs[END_REF], or their perception that the treatment would be ineffective [START_REF] Laba | Understanding if, how and why non-adherent decisions are made in an Australian community sample: A key to sustaining medication adherence in chronic disease[END_REF] foster nonadherence behaviors. This is in line with many qualitative studies pointing to patients' broader concerns about medications as potentially harmful products and their preference to take as little as possible [START_REF] Pound | Resisting medicines: A synthesis of qualitative studies of medicine t a k i n g[END_REF]. 
Last, many studies agree on the role of health care payment systems: When patients must contribute significantly to the cost of their treatment, they tend to be less adherent [START_REF] Laba | Patient preferences for adherence to treatment for osteoarthritis: The Medication Decisions in Osteoarthritis Study (MEDOS)[END_REF][START_REF] Laba | Understanding if, how and why non-adherent decisions are made in an Australian community sample: A key to sustaining medication adherence in chronic disease[END_REF][START_REF] Wileman | Choosing not to take phosphate binders: The role of dialysis patients' medication beliefs[END_REF]. Other factors have also been identified, such as treatment schedule [START_REF] Laba | Understanding rational non-adherence to medications. A discrete choice experiment in a community sample in Australia[END_REF][START_REF] Laba | Patient preferences for adherence to treatment for osteoarthritis: The Medication Decisions in Osteoarthritis Study (MEDOS)[END_REF][START_REF] Wileman | Choosing not to take phosphate binders: The role of dialysis patients' medication beliefs[END_REF], patient-practitioner relationships [START_REF] Laba | Understanding rational non-adherence to medications. A discrete choice experiment in a community sample in Australia[END_REF], patients' behavioral skills regarding illness and treatment [START_REF] Norton | Informationmotivation-behavioral skills barriers associated with intentional versus unintentional ARV non-adherence behavior among HIV+ patients in clinical care[END_REF], age, and selfefficacy [START_REF] Ostini | Investigating the association between health literacy and non-adherence[END_REF]. Some reviews ruled out two potential explanatory factors, namely, a linear relationship between intentional nonadherence and health literacy [START_REF] Ostini | Investigating the association between health literacy and non-adherence[END_REF] and psychosocial predictors such as coping styles, social influences and social support, personality traits, and psychosocial wellbeing [START_REF] Zwikker | Psychosocial predictors of non-adherence to chronic medication: Systematic review of longitudinal studies[END_REF].
Recently, some studies have focused on the temporal dimension of adherence and intentional nonadherence behavior. Several studies point out that intentional nonadherence is usually a temporary and reversible phenomenon and not a complete discontinuation [START_REF] Murdoch | Challenging social cognition models of adherence: Cycles of discourse, historical bodies, and interactional order[END_REF][START_REF] Laba | Understanding if, how and why non-adherent decisions are made in an Australian community sample: A key to sustaining medication adherence in chronic disease[END_REF][START_REF] Wroe | Intentional and unintentional nonadherence in patients prescribed HAART treatment regimens[END_REF]. Patients report that they sometimes "take a break" [START_REF] Norton | Information-motivation-behavioral skills barriers associated with intentional versus unintentional ARV non-adherence behavior among HIV+ patients in clinical care[END_REF]. This is important, because the factors and reasons for intentional nonadherence listed above do not adequately account for these fluctuations. Existing studies indeed investigated intentional nonadherence as the rational decision to take or not to take the treatment [START_REF] Herrera | Understanding non-adherence from the inside: Hypertensive patients motivations for adhering and not adhering[END_REF][START_REF] Wroe | Intentional and unintentional nonadherence: A study of decision making[END_REF] and considered some parameters that could be involved in weighing the decision, such as information about the disease, about the treatment, and about the benefits and the expected risks; beliefs based on personal experience and personal values; and some constraints, especially economic and practical ones. But for nonadherence to be temporary, this would imply that the information, beliefs, and constraints fluctuate over time and result in similar fluctuations in treatment adherence behavior. This may happen over a time span of several years, but does not seem plausible over the much shorter time span of a few weeks or days evidenced in the studies. So why does intentional nonadherence fluctuate over time?
In this study, we aimed to better understand the reasons for fluctuations in intentional nonadherence behaviors. For this purpose, we draw on a classical idea in the sociology of health, namely, that chronic patients have to perform a certain amount of work [START_REF] Corbin | Unending work and care: Managing chronic illness at home[END_REF][START_REF] Senteio | Trying to make things right: Adherence work in high-poverty, African American neighborhoods[END_REF]. A key dimension of treatment behavior in chronic diseases lays in its repetitive and active nature over long periods of time. Indeed, adherence in chronic diseases means that the patients succeed in performing medication taking, disease monitoring, or carrying on an appropriate lifestyle every single day for decades. In particular, medication adherence behavior is the concrete task of taking the right amount of the right medication at the right time and in the right circumstances. In this perspective, adherence is similar to a prescribed work for which the worker has to use particular tools, at particular stages, to perform a task in a particular way to reach a particular goal. The reasons why workers do not perform their tasks as prescribed can refer to a variety of reasons that are hypothesized not to be only a matter of information, knowledge, or belief. Thus, what must be understood is why the patients sometimes perform treatment adherence as prescribed and sometimes decide to perform this task in their own way.
Method
This study is based on 48 semi-structured interviews conducted in a public hospital in the Netherlands in 2014 with patients suffering from type 2 diabetes , hypertension, Parkinson's disease, inflammatory bowel disease, or chronic myeloid leukemia who had been prescribed a medication for a chronic disease for at least 1 year. The conditions were selected to have a sample of contrasting situations with respect to treatment and medication practice and to factors that influence them. Some often involve doctor-prescribed lifestyle changes in addition to medication taking (diabetes and hypertension), others often involve patient-chosen lifestyle changes (inflammatory bowel disease), others involve none (Parkinson's disease and chronic myeloid leukemia), and it is known that whether a treatment regimen is limited or pervasive is important [START_REF] Dimatteo | Variations in patients' adherence to medical recommendations: A quantitative review of 50 years of research[END_REF]. Some conditions react to treatment in a way that cannot be monitored by the patients themselves (chronic myeloid leukemia), others can be monitored by the patients themselves on the basis of their own bodily perceptions (Parkinson's disease, inflammatory bowel disease), and others can be monitored by the patients themselves on the basis of their own bodily perceptions or of specific devices (diabetes and hypertension). Some conditions have acute phases (inflammatory bowel disease) or an acute onset (chronic myeloid leukemia, sometimes hypertension), and it is known that both the modes of perception of the illness and its critical phases play a role in medication practices [START_REF] Conrad | The meaning of medications: Another look at compliance[END_REF]. The Institutional Review Board/Independent Ethics Committee of the Máxima Medical Center declared that this study did not have to be reviewed by a medical ethics board according to the Dutch Medical Research Involving Human Subjects Act (WMO). Each interviewee's informed written consent was recorded. Outpatients were contacted by a research nurse or a physician. They were selected according to a simple systematic procedure: The patients were asked some time prior to a scheduled appointment if they would accept an interview in addition to their medical appointment. The first patients who did accept to participate were included. No patient declined to take part. The interview guide addressed the patient's medical history (asking, for instance, "Could you describe how your illness began?") and medical treatment (e.g., "What do you personally expect from this treatment?"), how the medical treatment was integrated into daily life (e.g., "Have you found out particular tricks to help you deal with your illness? What tricks? Did you mention them to your doctor?"), patient's personal experience of the illness (e.g., "Do you sometimes feel weary/discouraged/angry about your treatment? What are the consequences of these feelings?"), patient's knowledge and needs about the disease (e.g., "Where do you find the information you need about your illness and your treatment?"), and illness and treatment disclosure (e.g., "Do your relatives and your friends know about your illness? And about your treatment? How have they been informed about it? How did they react?"). The interviews were recorded and fully transcribed.
The interviewees have a range of backgrounds and family situations that make the interviews diverse enough, while these sociodemographic characteristics are not different from those of the patients' population. Their sociodemographic characteristics are given in Table 1.
The research was informed by the method of failure modes and effects analysis (FMEA; [START_REF] Franklin | Failure mode and effects analysis: Too little for too much?[END_REF]. This method is used in engineering and more specifically quality engineering, including quality engineering with respect to safety hazards [START_REF] Lux | FMEA and consideration of real work situations for safer design of production systems[END_REF] or with respect to human mistakes [START_REF] Hertig | Development and assessment of a medication safety measurement program in a long-term care pharmacy[END_REF]. It is used for identifying all possible failures in a design, a manufacturing or assembly process, or a product or service. It can be used for different purposes, including analyzing failures of an existing process. It addresses a particular function in a process, a product, or a service and aims to identify potential failure modes, potential effects of the failure, potential causes of failure and their occurrence rate, the current process controls and their detection rate, recommended actions to lower occurrence or severity of failures, and the results of these actions. For instance, FMEA can be used to improve safety in a radiation oncology unit [START_REF] Ford | Evaluation of safety in a radiation oncology setting using failure mode and effects analysis[END_REF], where a safe and effective radiotherapy treatment has to treat the correct tissue in the correct patient with the correct dose. Used in a prospective way, FMEA starts with describing the work flow in the unit from the patient's point of view and recording each event that happens to that patient or that patient's medical record. This first part of the analysis helps better visualize what should happen in the unit. It allows for description of subprocesses and identification of possible failures in these processes. On this basis, it is possible to score the severity, frequency, or detectability of failures, and to identify improvements that are feasible and effective. This second part of the analysis highlights what could happen and should be prevented.
A key characteristic of this method is thus its focus on the discrepancy between what is prescribed and how and why things could be different from the prescription and go awry. It is very well suited to the analysis of work flows, and this is precisely how we chose to understand and analyze nonadherence. What has been prescribed is for the patient to take the right amount of the right medication prescribed by the medical professional at the right time. What can happen and have negative consequences is any departure from this pattern of tasks. We focused on intentional departures and intentional nonadherence, that is, when patients actively choose not to follow treatment recommendations [START_REF] Daleboudt | Intentional and unintentional treatment nonadherence in patients with systemic lupus erythematosus. A r t h r i t i s C a r e & R e s e a r c h[END_REF]. FMEA then allows a rigorous and complete understanding of the tasks patients have to perform when taking their medication and of why they sometimes rationally choose to act differently. We would suggest that this makes FMEA patient-centered.
We adapted and used FMEA in a very simplified and retrospective way, as follows: First, all the instances of intentional nonadherence reported in the interviews were identified (i.e., each instance of an actual or potential intentional departure from the prescription, either regarding the medication itself, the amount of the medication, the time when it was taken, or the way to take it); second, these instances were coded and sorted out according to the reason for nonadherence; and third, the labels were blind cross-checked by a second researcher, and differences were discussed until agreement was reached on the final coding. Table 2 illustrates the result of this procedure for two instances.
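Purely as an illustration of this simplified, retrospective procedure (the structure and labels below are invented for the sketch; the example instances paraphrase interviews quoted in the Results section):

# Step 1: list instances of intentional departure from the prescription;
# Step 2: each coder labels the reason; Step 3: cross-check and discuss disagreements.
instances = [
    {"interview": "hypertension, 63F",
     "departure": "stopped one medication for a week",
     "reason": "check whether throat complaints are a side effect"},
    {"interview": "chronic myeloid leukemia, 47F",
     "departure": "skipped one evening dose",
     "reason": "control when side effects occur"},
]

def code(instances, label_fn):
    return [dict(inst, label=label_fn(inst["reason"])) for inst in instances]

def disagreements(coded_a, coded_b):
    return [(a, b) for a, b in zip(coded_a, coded_b) if a["label"] != b["label"]]

coder_1 = code(instances, lambda reason: "control over treatment effects")
coder_2 = code(instances, lambda reason: "control over treatment effects")
print(disagreements(coder_1, coder_2))   # [] -> final coding agreed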
Results
Among the 48 interviews, eight contained no instance of nonadherent behavior at all, 21 interviews contained at least one instance of intentional nonadherent behavior, and 28 interviews contained at least one instance of unintentional nonadherent behavior (10 interviews contained both instances of intentional and unintentional nonadherent behavior, and are therefore counted in both groups). In the 21 interviews containing instances of intentional nonadherence, there was a total of 30 such instances (this includes instances of a strong temptation of nonadherence, even if there was no actual nonadherence behavior, because the explanations given by the respondents were interesting).
The analysis of the interviews showed that intentional nonadherence behavior had two main causes: (a) the desire of the respondents to exert personal control over the treatment and its effects on their body, especially to avoid side effects, but also to assess the treatment's effectiveness or to adapt the doses or the schedule to their case (19 instances in 12 interviews); and (b) the desire of the respondents to control the hold of the treatment on their life, their time, and their activities (11 instances in 10 interviews, with only one respondent appearing in both groups).
Patients' Desire to Exert Personal Control Over the Treatment and its Effects on Their Body
In accordance with the results of previous studies, side effects were often reported by the respondents as the trigger for temporarily modifying or interrupting their treatment. However, it was never merely to remove an unpleasant effect of the treatment. The respondents' concerns were wider and deeper, relating to the need to exert personal control over the treatment and its effects on their body. Faced with possible side effects, the respondents explained that they first needed to assess whether their complaints were actually related to their treatment. For this purpose, they performed an experiment. They interrupted the treatment to see whether this resulted in a removal of the complaints, as reported by a 63-year-old woman who has had hypertension for more than 10 years: Eh . . . For a long time, it was fine with those medications, but now, I happen to have some problems with micardis [one of the prescribed medications], I think. Because, on the leaflet, it says that throat complaints can appear. And according to the doctor, it is very infrequent. But, he said, it might be possible. And actually I tried it this week. I have stopped the medication for one week . . . Yes, and I should not do that, because then "psssst" the [tension] values go up. But I'm in a choir, and I can in fact hardly sing, and I find it a real pity. And there's something with temperature; when I change rooms, it completely blocks my throat. And I never had that before. So, I guess that it has to do with the medication. Yes.
Similarly, the respondents reported that they modify their treatment to be able to control when side effects would occur. This describes what this 47-year-old woman who has had chronic myeloid leukemia for 7 years does:
Yes, I have sometimes spent one day without taking it. . . . When we go out for dinner and I have eaten so much, I know that I won't have a good night's rest. I've experienced it a couple of times, and then I thought that it was because of the medication, so I thought, I'm not taking it today. But just once, then (laughs).
This line of reasoning helps in understanding why intentional nonadherent behaviors may be temporary and fluctuating. When personal circumstances make the burden of side effects heavier, patients may try to contain them and thus be deliberately nonadherent. On the contrary, when they can withstand the side effects, they may be more adherent.
In the absence of side effects, some respondents explained that they remained cautious because they consider medications as potentially harmful. They try to keep the dose they take as low as possible, as reported by this 68-year-old man who has had Crohn's disease for 34 years:
Yes, it was about my question: what does the medication do to my body after nearly 30 years of use? And so he (the doctor) said it himself. He had the idea: now we may perhaps reduce it a bit. See what happens. And I must say, a couple of years ago, I did that too; I did that on my own initiative. At least, I had told him: now, I want to reduce it by half. At that time, I had four pills to take, and I reduced from four to two. And after half a year, it became more difficult to empty my bowel. . . . And then, I simply decided to take four again on my own initiative. So . . . And of course I told him that when we met on the next appointment. That's how I had done that. So.
Treatment control
The desire to assess the effects of the treatment is not limited to identification or to the prevention of side effects. This concern is only the most visible part of the assessment and adjustment practices of the respondents. They also want to assess their treatment's effectiveness and sometimes do so by assessing the results of their medical examinations against their past adherence behavior. This is the case with a 52-year-old man who has had type 2 diabetes for 22 years and who reported that he had unintentionally interrupted his treatment for a few days:
And in fact, it (the blood glucose level) was not sky-high at all. And so I thought something like: hey, when I don't take the medication, the values are not very different from what they are when I take it. And then, I acted in a very rigorous manner, eh . . . differently, sort of: I didn't take the medication, but I also almost hardly ate. In the long run, I knew that this was not good. And then, my values were even better than they had been with the medication. And from that moment, yes, I lost confidence in the treatment.
These assessments and the conclusions that the patients draw from them thus offer a better understanding of the nature of temporary intentional nonadherence. Indeed, if the results of medical examinations are satisfactory, the patients are encouraged to carry on their treatment behavior. Conversely, if the medical examinations deliver disappointing results, the patients will be more prone to modify their behavior. This is what happened with this 67-year-old man who has had type 2 diabetes for 50 years:
When you start changing this and that, it's fine, but then, you notice, oh God, it's not so good. And then at some point, you notice that you have more headaches, or that you don't see so well, and then you check your blood sugar, and you see that it is too high again. Or you experiment a bit with the insulin. No, I don't do that anymore. Some intentionally nonadherent treatment behaviors thus result from the desire to exert a form of personal control over the treatment and its positive as well as negative, and long-term as well as short-term, effects on the disease and the body. Among the 21 respondents who reported instances of intentional nonadherence, 12 reported that they had deliberately modified their treatment to check for side effects, prevent potentially harmful long-term effects, or improve its effectiveness, by assessing the effects on their body, complaints, and medical data of these behavior changes. These respondents were eight men and four women: four with hypertension, four with type 2 diabetes, two with chronic myeloid leukemia, one with inflammatory bowel syndrome, and one with Parkinson's disease. Among these 12 respondents, five were 66 years old and over, five were between 50 and 65 years old, and two were between 30 and 49 years old; nine were married, one was cohabiting, and two were divorced; and 11 had completed secondary education, and two had completed tertiary education. Compared with all 48 respondents, men are overrepresented in this group but it is not possible to draw conclusions from this observation.
Patients' Desire to Control the Hold of the Treatment on Their Life, Their Time, and Their Activities
To our knowledge, the second cause of nonadherence reported by the respondents had not been identified until now. Some respondents stated that their intentional nonadherence resulted from a desire to control and contain the hold of the treatment on their life, their time, and their activities. They felt that this hold was sometimes too strong for them, on a practical and on a motivational plane. On a practical plane, the respondents tried to limit or reduce the time they have to dedicate daily to health care, by removing parts of the related tasks or by adjusting them. On a motivational plane, they sometimes felt that the efforts did not match the results and searched for treatment modifications that would enable them to overcome demotivation.
Some respondents expressed that their treatment is a burden and that they sometimes feel that this burden is too heavy for them. Then, they have to remove the heaviest parts of this burden. These parts were often related to diet, physical activity, or measuring biological values themselves, and to a lesser extent, the medications. In particular, the most time-consuming treatment-related tasks may appear particularly demanding, to the extent that some respondents refused to perform these tasks. This was the case for this 69-year-old man, who has had Parkinson's disease for 12 years. He explained that he rigorously complied with the medical recommendations in general, but he sometimes felt that this went beyond his capacity: At some point, I'd had it with the exercises that I had to do at home. Because I had to exercise every day between 15 minutes to half an hour in the morning. And at some point I said: I can no longer do that. That's just too much for me. Yes, I was busy with it for the whole day. There was nothing else anymore.
A similar feeling of having to perform an impossible task was described by this 76-year-old woman, who has had type 2 diabetes for 15 years:
You know what I find annoying? The food lists. For three days, you must write down precisely what you eat. For each lick of butter, and this is really impossible for me, I have to look at the packet of butter and write down the carbohydrates and count them, for one day. Well, this list, I tend sometimes to, hmm . . . throw it away. These nonadherent behaviors are not limited to physical activities, diet, or measuring. Medication taking can also be experienced as a burdensome task. This was what this 29-year-old man who has had inflammatory bowel disease for 4 years said, And during the summer, I went to Vietnam with my brother, and I had to take one and a half pills per day, one pill in the morning and half a pill in the evening, and yes, in the evening, most of the time, we had had dinner, and we were sitting with a beer and then I thought: oh yes, the half pill? It did not make sense. So, for a while, I took just one pill. . . . It's because this half pill is not practical. So he (the doctor) said, let's simply take one pill every day, and twice in the week, two pills. So.
The amount of work the patients have to perform because of a chronic illness can thus lead them to temporarily remove or modify some tasks and result in intentional nonadherence.
But the nature of this work, and not only its quantity, also elicited such behaviors. Indeed, the respondents expressed they were sometimes downhearted about their treatment, saying, for instance, "I'd had enough" and "it does not make sense." The lack of positive perspective seems to lead patients to question being fully adherent to their treatment and sometimes to consider interrupting it.
This is what one respondent described about measuring. This 68-year-old woman who has had type 2 diabetes for 20 years knows what the consequences of nonadherence would be, but she explained that this does not counterweigh her feeling of uselessness regarding some treatment-related tasks:
A couple of years ago, I said something like: I'm not writing my measurements any longer and . . . it does not make sense. Then the doctor got angry and said: yes, you can stop that, but then you should just let us know. That was it (laughs). He did not understand. No. That this sometimes takes you by the throat, all the things you have to do, all these controls. Yes . . . I know well what the consequences would be of course, hey? Yes I mean . . . But sometimes, you don't mind, hey? There are such moments, but, well.
As previously, medication taking was not reported to be the first task affected by intentional nonadherence. However, some respondents explained that they are sometimes disheartened about it, as did this 60-year-old man who has had type 2 diabetes for 4 years: Sometimes, I'm fed up with it. So, yes, every day, eight pills. The blood pressure and all. At some point, I'm really fed up with it. But, but . . . I simply take them, though, so hm . . . [laughs]. I know . . . I'm not angry, but again and again you have to do these things. Sometimes, yes, I am fed up with it. But . . . I have to keep going.
The temptation of nonadherence seems related to the fact that a daily task does not produce satisfactory or tangible results and therefore becomes disheartening. Respondents said they knew that the consequences of nonadherence would be negative and they had no choice in this respect. But these arguments are not always enough to keep them motivated. Thus, they sometimes try to cope by temporarily interrupting their treatment. This describes the situation with this 38-year-old woman who has had hypertension for 5 years and was prescribed a diet:
Of course [laughs]. There are many things that I am not allowed to eat (laughs). Yes, yes and then . . . That's something I was fed up with, for instance, during last summer holidays. That's something I've stopped. At some point, I'd had enough with it. I thought it simply did not make sense to me . . . to drop this and drop that . . . Of course, this has consequences. This has consequences on my blood sugar values, on the cholesterol . . . Of course, this has an effect.
A similar behavior was reported by this 67-year-old man, who has had type 2 diabetes for 20 years, and who insisted that he needed to be temporarily in control over his life, in spite of the negative consequences he would experience afterwards:
At some point, you rebel; you think, now, it's enough. Now, I do for once something that I have decided myself. But after some days, there is the weighing scale and you are really punished.
This complex feeling of uselessness, weariness, and need for control over one's life thus leads to temporary nonadherence behaviors that aim to put a temporary end to an absurd-seeming task, to store up energy during this treatment break, to do enjoyable things, and to be in control again.
This type of nonadherence behavior was described by 10 respondents. Among them were six men and four women; six had type 2 diabetes, two inflammatory bowel disease, one had hypertension, and one had Parkinson's disease. Among them, five were 66 years old and above, three were between 50 and 65 years old, one was between 30 and 49, and one was below 30 years old; seven were married and three were single; and two had completed primary education only, six had completed secondary education only, and two had completed tertiary education. Contrary to what one might have expected, type 2 diabetes and hypertension are not overrepresented, and young patients are also present in this group. However, it is not possible to draw a conclusion from these observations.
Discussion and Conclusion
This study evidenced two causes of intentional nonadherence that had not been, to our knowledge, identified previously: A desire to control the effects of the treatment, and thus to adjust the treatment through self-designed micro experiments, and a desire to control and contain the hold of the treatment on personal life, which can cause the patients to reject or modify certain treatment-related tasks or to take breaks when they are disheartened. Identifying these two types of control allows a better understanding of the temporary and fluctuating dimension of intentional nonadherence.
The FMEA method was useful to connect both objective and subjective dimensions of intentional nonadherence, that is, the concrete behavior of the interviewees and their interpretation thereof. Indeed, focusing on the instances of intentional discrepancies between prescribed and actual behavior, and not only on patients' subjective experience in general, allowed us to engage in a deeper understanding of the fluctuations of adherence behavior and the reasons for them. The results of our study are thus in line with studies that pointed to the burden of managing a chronic illness, to its different dimensions such as financial burden, time and travel burden, medication burden, health care access burden [START_REF] Sav | Treatment burden among people with chronic illness and their carers in Australia[END_REF] or side effects, cost, time, impact on family and lifestyle [START_REF] Sav | Burden of treatment for chronic illness: A concept analysis and review of the literature[END_REF], and to its dynamic nature [START_REF] Sav | Burden of treatment for chronic illness: A concept analysis and review of the literature[END_REF]. However, the FMEA method made it possible to understand a key underlying dimension of this burden, namely, patients' feeling that they are losing control over their own life. The results of our study are also useful to better understand patients' strategies for adaptation and coping. These strategies aim not only at reducing the illness burden but also at holding on to control, even if the burden could appear relatively small to external observers.
The FMEA method would be of interest for medical teams wishing to help patients improve their adherence behavior. In line with the FMEA approach, adherence can indeed be viewed as a continual quality control cycle of assessing, adapting, and maintaining behaviors. Medical teams could, for instance, precisely describe the medication adherence process and the corresponding work flow, identify all the causes for possible divergences, and then think of how health practitioners could help patients both maintaining the work flow and exercising control.
These results emerged from an analysis of adherence as an active process, a set of tasks performed by patients with a chronic disease. This perspective is rooted in now classical sociological studies on the work of the patients and their relatives, and more particularly on health care tasks performed at home for chronic illness [START_REF] Corbin | Unending work and care: Managing chronic illness at home[END_REF][START_REF] Senteio | Trying to make things right: Adherence work in high-poverty, African American neighborhoods[END_REF]. Such a perspective can be fruitfully deepened and enriched by more recent psychological research on motivation and commitment in the workplace. Indeed, some results presented here are in line with this literature. The issue of routine jobs and the fact that unenriched routine work can lead to depression has long been known in the sociology and psychology of work [START_REF] Parker | Longitudinal effects of lean production on employee outcomes and the mediating role of work characteristics[END_REF]. The time that patients spend on their daily care is of course not equivalent to a full day of work, but the problem of routine clearly arises, as suggested by the disheartened perspective some respondents expressed about their treatment.
Another major issue in the sociology and psychology of work was also apparent in the interviews: autonomy. Job autonomy is very important for employees, and it has been evidenced that it could improve work performance [START_REF] Cordery | Reinventing work design theory and practice[END_REF][START_REF] Howard | The changing nature of work[END_REF]. Autonomy operates along three dimensions: method, timing, and goals [START_REF] Spector | Perceived control by employees: A meta-analysis of studies concerning autonomy and participation at work. Human Relations[END_REF]. These dimensions are also present in the experiments reported by respondents: They chose a method for assessing the effects of their treatment on their body and biological parameters, they adjusted their time schedule, and they paid attention to the fit between their life goals and their activities, including health care activities. It is interesting to note that employees' autonomy is affected by the attitude of their superiors, and by how the supervisor (a) provides clear, attainable goals, (b) exerts control over work activities, (c) ensures that the requisite resources are available, and (d) gives timely, accurate feedback on progress toward goal attainment [START_REF] Cordery | Reinventing work design theory and practice[END_REF]. Studying how these results could be transposed to the therapeutic relationship, and how they affect treatment adherence, has the potential to improve our understanding of adherence.
Motivation in the workplace is related to autonomy but also to goal setting [START_REF] Locke | Building a practically useful theory of goal setting and task motivation: A 35year odyssey[END_REF]. This is probably one of the major challenges for the patients. All respondents expressed that they pursued negative goals, to the extent that all they could expect was to avoid getting worse and, at best, remain in their current state. This has a negative impact on their adherence behavior because such goals are not suited to maintain motivation in the long run. They produce no tangible results and may elicit the feeling that the goals are not attainable, precisely because they are not tangible. How these types of negative goals may influence patients' motivation at work would need to be examined closely, as well as how other, more tangible, goals could be devised and pursued.
Our study was limited to a particular cultural and institutional setting, in the Netherlands. Further studies in non-European countries and targeting more specifically other types of patients and/or other types of health services would be needed to fully confirm our results.
Patients need to act in an intentionally nonadherent way to exert control over the treatment and its effects on their body and to control the hold of the treatment on their life, their time, and their activities, and they subsequently modify their adherence behavior in accordance with the impact of these actions either on their condition or their personal life. This result provides a rationale for fluctuating nonadherence behavior and opens new research and intervention perspectives on this issue.
Table 1. Sociodemographic Characteristics of the Interviewees.

Parameter        Value                              Number of Interviewees
Sex              Men                                26
                 Women                              22
Disease          Type 2 diabetes                    16
                 Type 2 diabetes + hypertension      6
                 Hypertension                        8
                 Inflammatory bowel disease         11
                 Chronic myeloid leukemia            5
                 Parkinson's disease                 2
Age, years       Above 66                           21
                 50-65                              17
                 30-49                               7
                 Less than 30                        3
Personal life    Married                            33
                 Cohabiting                          5
                 Single                              4
                 Divorced                            3
                 Widowed                             3
Education        Primary education                   9
                 Secondary education                22
                 Higher education                   17
Table 2. Outcome of the FMEA Procedure for Two Instances of Nonadherence.

Prescribed behavior: To inject insulin following a particular schedule.
Actual behavior: "The only problem is the [insulin] injections. That's really hard for me. Yes, phhhh . . . And at one point I had enough to inject 4 times a day. But yes, there is no alternative. Q: And what do you do? Well nothing really, I simply do it and I complain a bit and I talk a bit with the diabetes nurse. And then they'll come back with a different solution. A pump or something. Or I search one more time on the Internet. And then we gently start doing the injections again. And we try to keep going."
Final coding: Burden control.

Prescribed behavior: To inject insulin following a particular schedule.
Actual behavior: "My blood sugar was always low, not too low, but it was always low. So last week, I thought that I would reduce them myself. The injections. Because I had seen [the doctor] the week before and he had said, well it's perfect. I said yes, but I think it is not enough, the blood sugar is too low. So I've decided to cut down myself, and he was very satisfied. Q: But you had decided it on your own? I have done that on my own for one week. Adapting the quantity [of insulin]."
Funding
The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This research was supported by a Marie Curie grant of the European Commission (Marie Curie CIG 321855), by a grant from the Région Nord Pas-de-Calais and by a grant from Lille Métropole.
Declaration of Conflicting Interests
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article. |
00175984 | en | [
"spi.meca.biom",
"phys.meca.biom",
"sdv.neu",
"scco.neur"
] | 2024/03/05 22:32:13 | 2007 | https://hal.science/hal-00175984/file/EBR_Boisgontier_07.pdf | Nicolas Vuillerme
email: nicolas.vuillerme@imag.fr
Matthieu Boisgontier
Olivier Chenu
Jacques Demongeot
Yohan Payan
TONGUE-PLACED TACTILE BIOFEEDBACK SUPPRESSES THE DELETERIOUS EFFECTS OF MUSCLE FATIGUE ON JOINT POSITION SENSE AT THE ANKLE
Keywords: Sensory re-weighting, Biofeedback, Proprioception, Muscle fatigue, Tongue Display Unit, Ankle
Introduction
Proprioceptive afferents from the ankle joint are known to play a significant role in the control of human posture and gait, which greatly influence our ability to perform activities of daily living. Impaired ankle proprioception also may be a predisposing factor for chronic ankle instability, balance difficulties, reduced mobility functions, fall, injury and re-injury (e.g. Fu et al. 2005; Halasi et al. 2005; Madhavan and Shields 2005; Payne et al. 1997). Interestingly, recent studies reporting deleterious effects of localized fatigue of the ankle muscles on joint position sense [START_REF] Forestier | Alteration of the position sense at the ankle induced by muscular fatigue in humans[END_REF] and balance control (e.g. Ledin et al. 2004; Vuillerme et al. 2006a; Vuillerme and Demetz 2007; Vuillerme et al. 2002a,b, 2003) support this view. Accordingly, there is a need to develop techniques for enhancing proprioceptive acuity at the ankle. A promising intervention, using an artificial tongue-placed tactile biofeedback, has recently been shown to improve ankle joint position sense in young healthy adults (Vuillerme et al. 2006b,c). The underlying principle of this system consisted of supplying individuals with supplementary information about the position of their matching ankle relative to their reference ankle through a tongue-placed tactile output device generating electrotactile stimulation of the tongue (Tongue Display Unit, TDU) (Bach-y-Rita et al. 1998; Bach-y-Rita and Kercel 2003) (Figure 1). Note that the tongue was chosen as the site for electrotactile stimulation according to its neurophysiologic characteristics. Indeed, because of its dense mechanoreceptive innervation [START_REF] Trulsson | Low-threshold mechanoreceptive afferents in the human lingual nerve[END_REF] and large somatosensory cortical representation [START_REF] Picard | Sensory cortical tongue representation in man[END_REF], the tongue can convey higher-resolution information than the skin can [START_REF] Sampaio | Brain plasticity: 'visual' acuity of blind persons via the tongue[END_REF][START_REF] Van Boven | The limit of tactile spatial resolution in humans: grating orientation discrimination at the lips, tongue, and finger[END_REF]. In addition, due to the excellent conductivity offered by the saliva, electrotactile stimulation of the tongue can be applied with much lower voltage and current than is required for the skin (Bach-y-Rita et al. 1998). At this point however, although the above-mentioned studies evidenced that this artificial tongue-placed tactile biofeedback can be used to improve ankle joint proprioception under normal ankle neuromuscular state (i.e.
with redundant and reliable sensory information) (Vuillerme et al. 2006b,c), its effectiveness under altered ankle proprioceptive conditions, as it is the case following muscle fatigue [START_REF] Forestier | Alteration of the position sense at the ankle induced by muscular fatigue in humans[END_REF], has not been established yet. Within this context, the purpose of the present experiment was to investigate whether this biofeedback could mitigate the deleterious effect of muscle fatigue on joint position sense at the ankle. It was hypothesized that (1) ankle muscles fatigue would decrease ankle joint position sense without the provision of the biofeedback delivered through the TDU, (2) biofeedback would improve ankle joint position sense in the absence of ankle muscles fatigue, and, more originally (3) biofeedback would mitigate the detrimental effect of ankle muscles fatigue on ankle joint position sense.
Methods
Subjects
Sixteen male young healthy university students (age = 22.3 ± 2.4 years; body weight = 72.1 ± 9.5 kg; height = 180.8 ± 6.2 cm) voluntarily participated in the experiment. Criteria for selection and inclusion were: male; aged 20-30 years; negative medical history; normal ankle range of motion. Exclusion criteria were: history of sprain, fracture, injury, surgery of the lower extremities; restricted ankle range of motion of less than 20° of dorsiflexion or 30° of plantarflexion; history of central or peripheral nervous system dysfunctions. Subjects were familiarized with the experimental procedure and apparatus, and they gave their informed consent to the experimental procedure as required by the Helsinki declaration (1964) and the local Ethics Committee. None of the subjects presented any history of injury, surgery or pathology to either lower extremity that could affect their ability to perform the ankle joint position sense test.
Apparatus for measuring ankle joint position sense
Ankle joint position sense measurements were carried out as previously done by Vuillerme et al. (2006b,c). Subjects were seated barefoot in a chair with their right and left foot secured to two rotating footplates with Velcro straps. The knee joints were flexed at about 110°. Movement was restricted to the ankle in the sagittal plane, with no movement occurring at the hip or the knee. The axes of rotation of the footplates were aligned with the axes of rotation of the ankles. Precision linear potentiometers attached on both footplates provided analog voltage signals proportional to the ankles' angles. A press-button was held in the right hand and served to record the matching. Signals from the potentiometers and the press-button were sampled at 100 Hz (12 bit A/D conversion), then processed and stored within the Labview 5.1 data acquisition system. In addition, a panel was placed above the subject's legs to eliminate visual feedback about both ankles position.
Apparatus for providing tactile biofeedback
The underlying principle of the biofeedback device, similar to that recently used by Vuillerme et al. (2006b,c), consisted of supplying individuals with supplementary information about the position of the matching right ankle relative to the reference left ankle position through a tongue-placed tactile output device. Electrotactile stimuli were delivered to the front part of the tongue dorsum via flexible electrode arrays placed in the mouth, with connection to the stimulator apparatus via a flat cable passing out of the mouth. The system comprised a 2D electrode array (1.5 × 1.5 cm) consisting of 36 gold-plated contacts each with a 1.4 mm diameter, arranged in a 6 × 6 matrix (Vuillerme et al. 2006b,c, 2007a,b,c) (Figure 1).
---------------------------------- Insert Figure 1 about here ----------------------------------
The following coding scheme for the TDU was used (Vuillerme et al. 2006c): (1) no electrical stimulation when both ankles were in a similar angular position within a range of 0.5°; (2) stimulation of either the anterior or the posterior zone of the matrix (2 × 6 electrodes) (i.e. stimulation of the front or rear portion of the tongue) depending on whether the matching right ankle was in a too plantarflexed or too dorsiflexed position relative to the reference left ankle, respectively. The frequency of the stimulation was maintained constant at 50 Hz across subjects, ensuring the sensation of a continuous stimulation over the tongue surface.
Conversely, the intensity of the electrical stimulating current was adjusted for each subject, and for each of the front and rear portions of the tongue, given that the sensitivity to the electrotactile stimulation was reported to vary between individuals [START_REF] Essick | Lingual tactile acuity, taste perception, and the density and diameter of fungiform papillae in female subjects[END_REF]), but also as a function of location on the tongue in a preliminary experiment (Vuillerme et al. 2006b).
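As an illustration of this coding scheme, the sketch below maps the instantaneous difference between the matching and reference ankle angles onto the corresponding TDU command, using the 0.5° dead band described above. This is only a minimal sketch: the function name, the zone labels and the sign convention (positive = plantarflexion) are assumptions made for illustration, not the software actually driving the stimulator.

```python
def tdu_zone(matching_angle_deg, reference_angle_deg, dead_band_deg=0.5):
    """Return the TDU stimulation command for one angle sample.

    Angles are in degrees, with plantarflexion taken as positive (an
    assumed sign convention). Inside the dead band no electrode is driven;
    outside it, the anterior or posterior 2 x 6 block of the 6 x 6 matrix
    is activated depending on the sign of the matching error.
    """
    error = matching_angle_deg - reference_angle_deg
    if abs(error) <= dead_band_deg:
        return "off"              # ankles matched within 0.5 degrees
    if error > 0:
        return "anterior_zone"    # matching ankle too plantarflexed
    return "posterior_zone"       # matching ankle too dorsiflexed
```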
Task and procedure
The experimenter first placed the left reference ankle at a predetermined angle where the position of the foot was placed on a wooden support. By doing so, subjects did not exert any effort to maintain the position of the left reference ankle, preventing the contribution of effort cues coming from the reference ankle to the sense of position during the test (e.g., Vuillerme et al. 2006b,c;[START_REF] Walsh | Human forearm position sense after fatigue elbow flexor muscle[END_REF][START_REF] Winter | Muscle spindle signals combine with the sense of effort to indicate limb position[END_REF]. A matching angular target position of 10° of plantarflexion was selected to avoid the extremes of the ankle range of motion to minimize additional sensory input from joint and cutaneous receptors (e.g., Burke et al. 1988). Once the left foot had been positioned at this test angle, subjects' task was to match its position by voluntary placement of their right ankle. When they felt that they had reached the target angular position (i.e., when the right foot was presumably aligned with the left foot), they were asked to press the button held in their right hand, thereby registering the matched position. Subjects were not given feedback about their performance and were not given any speed constraints other than 10-s delay to perform one trial. Results of a preliminary study using 10 subjects established the "excellent" test-retest reliability of the active ankle-matching task (intra-class correlation coefficient (ICC) > 0.75).
Two sensory conditions were presented: (1) the No-biofeedback condition, serving as a control, and (2) the Biofeedback condition, in which the subjects executed the active matching task using the TDU-biofeedback system described above. These two sensory conditions also were executed under two states of ankle muscle fatigue: (1) the No-fatigue condition served as a control condition, whereas (2) in the Fatigue condition, the measurements were performed immediately after a fatiguing procedure. Its aim was to induce muscular fatigue in the ankle plantar-flexors of the right leg until maximal exhaustion.
Subjects were asked to perform toe-lifts with their right leg as many times as possible following the beat of a metronome (40 beats/min). Verbal encouragement was given to ensure that subjects worked maximally. The fatigue level was reached when subjects were no longer able to complete the exercise. Immediately on the cessation of exercise, the subjective exertion level was assessed through the Borg CR-10 scale [START_REF] Borg | Psychological scaling with applications in physical work and the perception of exertion[END_REF]. Subjects rated their perceived fatigue in the ankle muscles as almost "extremely strong" (mean Borg ratings of 9.1 ± 0.6). To ensure that ankle joint proprioception measurements in the Fatigue condition were obtained in a real fatigued state, various rules were respected, as described in previous studies investigating the effect of ankle muscle fatigue on postural control (Ledin et al. 2004; Vuillerme et al. 2006a; Vuillerme and Demetz 2007; Vuillerme et al. 2003): (1) the fatiguing exercise took place beside the experimental set-up to minimise the time between the exercise-induced fatiguing activity and the proprioceptive measurements, (2) the duration of the post-fatigue conditions was short (1 minute) and (3) the fatiguing exercise was repeated prior to each No-biofeedback and Biofeedback condition.
For each condition of No-biofeedback and Biofeedback and each condition of No-fatigue and Fatigue, subjects performed five trials, for a total of 20 trials. The order of presentation of the two No-biofeedback and Biofeedback conditions was randomised over subjects.
Data analysis
Two dependent variables were used to assess matching performances [START_REF] Schmidt | Motor Control and Learning, 2nd Edition[END_REF]).
(1) The absolute error (AE), the absolute value of the difference between the position of the right matching ankle and the position of the left reference ankle, is a measure of the overall accuracy of positioning.
(2) The variable error (VE), the variance around the mean constant error score, is a measure of the variability of the positioning. Decreased AE and VE scores indicate increased accuracy and consistency of the positioning, respectively [START_REF] Schmidt | Motor Control and Learning, 2nd Edition[END_REF].
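For concreteness, these two scores can be computed from the matched angles of one experimental condition as in the sketch below. The averaging of the absolute error over the five trials and the array layout are assumptions made for illustration, not the original processing pipeline; VE is taken here as the standard deviation of the signed errors around their mean, whose square is the variance mentioned in the text.

```python
import numpy as np

def matching_errors(matched_deg, reference_deg=10.0):
    """Absolute error (AE) and variable error (VE) for one block of trials.

    `matched_deg` holds the matched ankle angles (five trials per condition
    in the experiment) and `reference_deg` is the target position (10 deg
    of plantarflexion here).
    """
    signed = np.asarray(matched_deg, dtype=float) - reference_deg
    ae = np.mean(np.abs(signed))                # overall accuracy of positioning
    ce = np.mean(signed)                        # constant error (signed bias)
    ve = np.sqrt(np.mean((signed - ce) ** 2))   # variability of positioning
    return ae, ve

# Example with made-up angles (degrees of plantarflexion):
ae, ve = matching_errors([9.2, 10.8, 9.5, 11.1, 10.4])
```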
Statistical analysis
Data obtained for AE and VE were submitted to separate 2 Biofeedback (No-biofeedback vs. Biofeedback) × 2 Fatigue (No-fatigue vs. Fatigue) analyses of variance (ANOVAs) with repeated measures on both factors. Post hoc analyses (Newman-Keuls) were performed whenever necessary. The level of significance was set at 0.05.
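One possible way to run such a 2 × 2 repeated-measures analysis in software is sketched below with statsmodels; the long-format table layout, the column names and the toy scores are illustrative assumptions and do not reproduce the original analysis.

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# One row per subject x Biofeedback x Fatigue cell, with the AE (or VE)
# score of that cell (toy values; the actual study had 16 subjects).
scores = pd.DataFrame({
    "subject":     [1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3],
    "biofeedback": ["no", "no", "yes", "yes"] * 3,
    "fatigue":     ["no", "yes", "no", "yes"] * 3,
    "AE":          [1.9, 2.8, 1.2, 1.3, 2.1, 3.0, 1.1, 1.4, 1.8, 2.6, 1.3, 1.2],
})

# Two within-subject factors, as in the paper's ANOVA design.
result = AnovaRM(scores, depvar="AE", subject="subject",
                 within=["biofeedback", "fatigue"]).fit()
print(result)
```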
Results
Positioning accuracy
Analysis of the AE showed a significant interaction of Biofeedback × Fatigue (F(1,15) = 4.72, P < 0.05, Fig. 2A). The decomposition of the interaction into its simple main effects showed that the availability of the biofeedback suppressed the effect of Fatigue (Fig. 2A). The ANOVA also showed main effects of Biofeedback (F(1,15) = 32.65, P < 0.001) and Fatigue (F(1,15) = 23.89, P < 0.001), yielding smaller AE in the Biofeedback than No-biofeedback condition and larger AE in the Fatigue than No-fatigue condition, respectively.
Positioning variability
Analysis of the VE showed a significant interaction of Biofeedback × Fatigue (F(1,15) = 6.41, P < 0.05, Fig. 2B). The decomposition of the interaction into its simple main effects showed that the availability of the biofeedback suppressed the effect of Fatigue (Fig. 2B). The ANOVA also showed main effects of Biofeedback (F(1,15) = 12.70, P < 0.01) and Fatigue (F(1,15) = 12.60, P < 0.01), yielding smaller VE in the Biofeedback than No-biofeedback condition and larger VE in the Fatigue than No-fatigue condition, respectively.
---------------------------------- Insert Figure 2 about here ----------------------------------
Discussion
Whereas the acuity of the position sense at the ankle can be disturbed by muscle fatigue [START_REF] Forestier | Alteration of the position sense at the ankle induced by muscular fatigue in humans[END_REF], it recently also has been shown to be improved, under normal ankle neuromuscular state, through the use of an artificial tongue-placed tactile biofeedback (Vuillerme et al. 2006b,c). The underlying principle of this biofeedback consisted of supplying individuals with supplementary information about the position of their matching ankle position relative to their reference ankle position through electrotactile stimulation of the tongue. Within this context, the purpose of the present experiment was to investigate whether this biofeedback could mitigate the deleterious effect of muscle fatigue on joint position sense at the ankle.
To address this objective, sixteen young healthy university students were asked to perform an active ankle-matching task in two conditions of No-fatigue and Fatigue of the ankle muscles and two conditions of No-biofeedback and Biofeedback. Measures of the overall accuracy and the variability of the positioning were determined using the absolute error and the variable error, respectively [START_REF] Schmidt | Motor Control and Learning, 2nd Edition[END_REF]).
On the one hand, without the provision of biofeedback (No-biofeedback condition), fatigue yielded less accurate and less consistent matching performances, as indicated by increased absolute (Fig. 2A) and variable errors (Fig. 2B) observed in the Fatigue relative to the No-fatigue condition, respectively. This result, in line with that of [START_REF] Forestier | Alteration of the position sense at the ankle induced by muscular fatigue in humans[END_REF], was expected (hypothesis 1) and also is consistent with the existing literature reporting impaired proprioceptive acuity following muscle fatigue induced at the knee (Lattanzio et al. 1997; Skinner et al. 1986), hip (Taimela et al. 1999), neck (Vuillerme et al. 2005b), shoulder (Björklund et al. 2000; Lee et al. 2003; Voight et al. 1996), or elbow (Allen and Proske 2005; Brockett et al. 1997; Walsh et al. 2004). At this point, although the exact mechanism inducing proprioceptive impairment subsequent to the fatiguing exercise is rather difficult to identify and was not within the scope of our study, it is likely that the above-mentioned adverse effects of muscle fatigue on joint position sense stem from alterations of signals from muscle spindles (Björklund et al. 2000; Brockett et al. 1997; Forestier et al. 2002; Vuillerme et al. 2005b) and/or of the sense of effort (Allen and Proske 2006; Walsh et al. 2004) associated with muscle fatigue.
On the other hand, in the absence of fatigue at the ankle joint (No-fatigue condition), more accurate and more consistent matching performances were observed with than without biofeedback, as indicated by decreased absolute (Fig. 2A) and variable errors (Fig. 2B), respectively. This result also was expected (hypothesis 2), corroborating those of Vuillerme et al. (2006b,c). It confirms that, under "normal" proprioceptive conditions, the central nervous system is able to integrate an artificial biofeedback signal delivered through electrotactile stimulation of the tongue to improve joint position sense at the ankle.
Finally, and more originally, the availability of the biofeedback allowed the subjects to suppress the deleterious effects of muscle fatigue on ankle positioning accuracy and variability, as indicated by the significant interactions Biofeedback × Fatigue observed for absolute (Fig. 2A) and variable errors (Fig. 2B), respectively. Confirming our hypothesis 3, these results suggest an increased reliance on the biofeedback delivered through the TDU for the active ankle-matching task under muscle fatigue at the ankle. We could interpret these findings relative to the concept of "sensory re-weighting" (e.g. Horak and Macpherson; Oie et al. 2002; Vuillerme et al. 2002a, 2005a). Indeed, it has been proposed that, as a sensory input becomes lost or disrupted (as was the case for the proprioceptive signals from the ankle in the Fatigue condition), the relative contributions of alternative available sensory inputs are adaptively re-weighted by increasing reliance on sensory modalities providing accurate and reliable information (as was the case when the tactile biofeedback delivered through the TDU was made available in the Biofeedback condition). Interestingly, although the experimental task was different, this interpretation is reminiscent of previous results evidencing the adaptive capabilities of the central nervous system to cope with ankle muscle fatigue for controlling posture during bipedal quiet standing. It was observed that, to some extent, vision (Ledin et al. 2004; Vuillerme et al. 2006a), ankle foot orthoses (Vuillerme and Demetz 2007) and a light finger touch (Vuillerme and Nougier 2003) could compensate for the postural perturbation induced by muscular fatigue applied to the ankle, hence reflecting a re-weighting of sensory cues in balance control following ankle muscle fatigue by increasing the reliance on visual information, cutaneous inputs from the foot and shank and haptic cues from the finger, respectively. From a fundamental perspective in neuroscience, the above-mentioned results and ours collectively argue in favour of a re-weighting of available, accurate and reliable sensory cues as a function of the neuromuscular constraints acting on the individual. At this point, however, while the present experiment was specifically designed to assess the combined effects of muscle fatigue and tongue-placed tactile biofeedback on joint position sense at the ankle, it could also be interesting to investigate these combined effects on the control of ankle movements in more functional tasks such as postural control during quiet standing or gait initiation. Finally, the present findings could also have implications for clinical conditions and rehabilitation practice. Biofeedback delivered through the TDU may indeed enable individuals to overcome ankle proprioceptive impairment caused by trauma (Bressel et al. 2004; Halasi et al. 2005; Payne et al. 1997), normal aging (Madhavan and Shields 2005; Verschueren et al. 2002), or disease (Simoneau et al. 1996; van den Bosch et al. 1995; van Deursen et al. 1998). An investigation involving older healthy adults is being conducted to strengthen the potential clinical value of the approach reported in the present experiment.
Figure captions
Figure 1.
Figure 2. Mean and standard deviation for the absolute error (A) and the variable error (B).
Acknowledgements
The authors are indebted to Professor Paul Bach-y-Rita for introducing us to the Tongue Display Unit and for discussions about sensory substitution. Paul has been for us more than a partner or a supervisor: he was a master inspiring numerous new fields of research in many domains of neuroscience, biomedical engineering and physical rehabilitation. The authors would like to thank subject volunteers. Special thanks also are extended to Damien Flammarion and Sylvain Maubleu for technical assistance, Benjamin Bouvier for his help in data collection, Dr. Vince and Zora B. for various contributions. This research was supported by the Fondation Garches and the company IDS. |
01744850 | en | [
"phys.cond.cm-scm",
"phys.phys.phys-bio-ph"
] | 2024/03/05 22:32:13 | 2018 | https://hal.science/hal-01744850/file/Azar2018.pdf | Elise Azar
Doru Constantin
Dror E Warschawski
The effect of gramicidin inclusions on the local order of membrane components
Keywords: gramicidin, membranes, NMR, WAXS, order parameter
Introduction
The effect on the cell membrane of inclusions (membrane proteins, antimicrobial peptides etc.) is a highly active field of study in biophysics [1]. A very powerful principle employed in describing the interaction between proteins and membranes is that of hydrophobic matching [2,3]. It states that proteins with a given hydrophobic length insert preferentially into membranes with a similar hydrophobic thickness [4].
Many studies of the interaction used as inclusion the antimicrobial peptide (AMP) Gramicidin A (GramA), which is known [5,6] to deform (stretch or compress) host membranes to bring them closer to its own hydrophobic length, so the hydrophobic matching mechanism is likely relevant. This perturbation of the membrane profile induces a repulsive interaction between the GramA pores in bilayers with various compositions [7], which can be explained by a complete elastic model [8].
This large-scale description raises, however, fundamental questions about the microscopic effect of the inclusion, at the scale of the lipid or surfactant molecules composing the membrane. To what extent is their local arrangement perturbed by the inclusion? Is the continuous elastic model employed for bare membranes still valid?
In this paper, our goal is to investigate the influence of GramA inclusions on the local order of the lipid or surfactant chains. We combine two complementary techniques: wide-angle X-ray scattering (WAXS) gives access to the positional order between neighboring chains, while nuclear magnetic resonance (NMR) is sensitive to the orientational order of chain segments, thus yielding a comprehensive picture of the state of the membrane as a function of the concentration of inclusions.
We study GramA inserted within bilayers composed of lipids with phosphocholine heads and saturated lipid chains: 1,2-dilauroyl-sn-glycero-3-phosphocholine (DLPC) and 1,2-dimyristoyl-sn-glycero-3-phosphocholine (DMPC), or of single-chain surfactants with zwitterionic or nonionic head groups: dodecyl dimethyl amine oxide (DDAO) and tetraethyleneglycol monododecyl ether (C 12 EO 4 ), respectively. The hydrophobic length of DLPC (20.8 Å) [5], DDAO (18.4 Å) [6] and C 12 EO 4 (18.8 Å) [7] is shorter than that of GramA (22 Å) [9], while that of DMPC (25.3 Å) [5] is longer. Since all these molecules form bilayers, and their hydrophobic length is close to that of GramA, the latter is expected to adopt the native helical dimer configuration described by Ketchem et al. [10],
and not the intertwined double helices observed in methanol [11] or in SDS micelles [12].
As for many molecules containing hydrocarbon chains, the WAXS signal of lipid bilayers exhibits a distinctive peak at a position q 0 ∼ 14 nm -1 , indicative of the packing of these chains in the core of the membrane. Although a full description of the scattered intensity would require an involved model based on liquid state theory [13], the width of the peak provides a quantitative measure of the positional order of the lipid chains: the longer the range of order, the narrower the peak.
The effect of peptide inclusions on the chain peak has been studied for decades [14]. Systematic investigations have shown that some AMPs (e.g. magainin) have a very strong disrupting effect on the local order of the chains: the chain signal disappears almost completely for a modest concentration of inclusions [15,16,17]. With other peptides, the changes in peak position and width are more subtle [18] and can even lead to a sharper chain peak (as for the SARS coronavirus E protein [19]).
To our knowledge, however, no WAXS studies of the effect of GramA on the chain signal have been published. NMR can probe global and local order parameters in various lipid phases and along the lipid chain. Deuterium ( 2 H) NMR has been the method of choice since the 1970s and has proven very successful until today [20,21,22,23]. The effect of GramA on the order parameter of the lipid (or surfactant) chains has already been studied by deuterium ( 2 H) NMR in membranes composed of DMPC [21,24,25,26], DLPC [26] and DDAO [6], but not necessarily at the same temperature, concentration or lipid position as studied here.
Here, we use a novel application of solid-state NMR under magic-angle spinning (MAS) and dipolar recoupling, called the Dipolar Recoupling On-Axis with Scaling and Shape Preservation (DROSS) [27]. It provides similar information to 2 H NMR, by recording simultaneously the isotropic 13 C chemical shifts (at natural abundance) and the 13 C- 1 H dipolar couplings at each carbon position along the lipid or surfactant chain and head group regions. The absolute value of the 13 C- 1 H orientational order parameter S CH = ⟨3 cos² θ − 1⟩/2, with θ the angle between the internuclear vector and the motional axis, is extracted from those dipolar couplings, and the variation of the order profiles with temperature or cholesterol content has already been probed, with lipids that were difficult to deuterate [28,29]. Using the same approach, we monitor the lipid or surfactant order profile when membranes are doped with different concentrations of gramicidin.
The main advantages of 13 C over 2 H are: the possibility to study natural lipids, with no isotopic labeling, and the high spectral resolution provided by 13 C-NMR, allowing the observation of all carbons along the lipid in a single 2D experiment. Segmental order parameters are deduced, via a simple equation, from the doublet splittings in the second dimension of the 2D spectra. The data treatment is simple for nonspecialists and the sample preparation is very easy since there is no need for isotopic enrichment. All these facts make this technique ideal to probe and study new molecules and to be able to compare the results with the ones obtained with other similar particles.
The downsides are the reduced precision of the measurement and the impossibility of extracting data from lipids in the gel phase. In particular, carbons at the interfacial region of the lipids (at the glycerol backbone and at the top of the acyl chains) are less sensitive to changes in membrane rigidity, and while subtle changes can be detected with 2 H-NMR, they are difficult to interpret with 13 C-NMR at these positions. Furthermore, the inefficiency of the DROSS method in the gel phase would theoretically allow measuring the lipid order in fluid phases coexisting with gel phases and quantifying the amount of lipids in each phase. In our measurements, lipids in the gel phase were not abundant enough to be detected.
2 Materials and methods
Sample preparation
The samples were prepared from stock solutions of lipid or surfactant and of GramA in isopropanol. We mixed the two solutions at the desired concentration and briefly stirred the vials using a tabletop vortexer. The resulting solutions were then left to dry under vacuum at room temperature until all the solvent evaporated, as verified by repeated weighing. The absence of residual isopropanol was checked by 1 H NMR.
We then added the desired amount of water and mixed the sample thoroughly using the vortexer and by centrifuging the vials back and forth. For WAXS, we used a microspatula to deposit small amounts of sample in the opening of a glass X-ray capillary (WJM-Glas Müller GmbH, Berlin), 1.5 or 2 mm in diameter, and we centrifuged the capillary until the sample moved to the bottom. We repeated the process until reaching a sample height of about 1.5 cm. The capillary was then either flame-sealed or closed using a glue gun. For NMR, approximately 100 mg of GramA/lipid or GramA/surfactant dispersion in deuterated water were introduced into a 4 mm-diameter rotor for solid-state NMR.
NMR
NMR experiments with DMPC, DLPC and C 12 EO 4 were performed with a Bruker AVANCE 400-WB NMR spectrometer (
1 H resonance at 400 MHz, 13 C resonance at 100 MHz) using a Bruker 4-mm MAS probe. NMR experiments with DDAO were performed with a Bruker AVANCE 300-WB NMR spectrometer ( 1 H resonance at 300 MHz, 13 C resonance at 75 MHz) using a Bruker 4-mm MAS probe. All experiments were performed at 30 • C.
The DROSS pulse sequence [27] with a scaling factor χ = 0.393 was used, with carefully set pulse lengths and refocused insensitive nuclei enhanced by polarization transfer (RINEPT) with delays set to 1/8 J and 1/4 J and a J value of 125 Hz. The spinning rate was set at 5 kHz, typical pulse lengths were 13 C (90°) = 3 µs and 1 H (90°) = 2.5 µs, and 1 H two-pulse phase-modulation (TPPM) decoupling was performed at 50 kHz with a phase modulation angle of 15°.
1D spectra were acquired using the simple 13 C-RINEPT sequence with the same parameters. For the 2D spectra, 64 free induction decays were acquired, with 64 to 512 scans summed, a recycle delay of 3 s, a spectral width of 32 kHz
and 8000 complex points. The total acquisition time was between 2 and 14 h.
The data were treated using the Bruker TopSpin 3.2 software.
Resonance assignments followed those of previously published data [24,27,22,30,31], using the C ω-n convention, where n is the total number of segments, decreasing from the terminal methyl segment, C ω , to the upper carbonyl segment, C 1 . This representation permits a segment-by-segment comparison of the chain regions. Backbone regions are assigned according to the stereospecific nomenclature (sn) convention for the glycerol moiety. Phosphocholine head group carbons are given Greek (α, β, γ) letter designations. The internal reference was chosen to be the acyl chain terminal 13 CH 3 resonance, assigned to 14 ppm for all lipids and surfactants studied here.
Order parameters were extracted from the 2D DROSS spectra by measuring the dipolar splittings of the Pake doublet at each carbon site. This splitting was converted into a dipolar coupling by taking the scaling factor χ into account. The absolute value of the segmental order parameter is the additional scaling factor that turns the static dipolar coupling into the measured dipolar coupling. Since the static dipolar coupling, on the order of 20 kHz, is not known with high precision for each carbon, we adjusted it empirically in the case of DMPC, by comparing the resulting order parameters to previously determined values [27,22,30].
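The arithmetic described above can be written compactly as in the sketch below. This is only an illustration: the rigid-limit coupling of about 20 kHz is the order of magnitude quoted in the text and was in practice adjusted empirically against published DMPC order profiles, so the numerical values here are not the calibrated ones.

```python
def order_parameter(splitting_hz, chi=0.393, static_coupling_hz=20_000.0):
    """|S_CH| from the Pake-doublet splitting measured in a DROSS spectrum.

    The observed splitting is the rigid-limit 13C-1H dipolar coupling scaled
    by the DROSS factor chi and by |S_CH|, so the order parameter is
    recovered by dividing out both factors.
    """
    return splitting_hz / (chi * static_coupling_hz)

# e.g. a splitting of 1.6 kHz corresponds to |S_CH| of roughly 0.2
s_ch = order_parameter(1600.0)
```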
WAXS
We recorded the scattered intensity I as a function of the scattering vector q = (4π/λ) sin(θ), where λ is the X-ray wavelength and 2θ is the angle between the incident and the scattered beams.
Lipids. X-ray scattering measurements on the GramA/DLPC and GramA/DMPC systems were performed at the ID02 beamline (ESRF, Grenoble), in a SAXS+WAXS configuration, at an X-ray energy of 12.4 keV (λ = 1 Å). The WAXS range was from 5 to 53 nm -1 . We recorded the integrated intensity I(q) and subtracted the scattering signal of an empty capillary, as well as that of a water sample (weighted by the water volume fraction in the lipid samples). We used nine peptide-to-lipid molar ratios P/L ranging from 0 to 1/5 and three temperature points: 20, 30 and 40 °C.
The chain peak was fitted with a Lorentzian function:
I(q) = I 0 / [((q − q 0 )/γ)² + 1].
We are mainly interested in the parameter γ, the half-width at half maximum (HWHM) of the peak.
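A minimal fitting sketch is shown below, assuming the background-subtracted intensity has been loaded into arrays q (in nm⁻¹) and intensity; scipy is used here purely for illustration and is not necessarily the software employed for the published fits.

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(q, i0, q0, gamma):
    """Lorentzian chain peak; gamma is directly the HWHM."""
    return i0 / (((q - q0) / gamma) ** 2 + 1.0)

def fit_chain_peak(q, intensity):
    """Fit the WAXS chain peak and return (q0, HWHM) with 1-sigma errors."""
    p0 = [intensity.max(), 14.0, 2.0]   # start near the q0 ~ 14 nm^-1 peak
    popt, pcov = curve_fit(lorentzian, q, intensity, p0=p0)
    perr = np.sqrt(np.diag(pcov))
    return popt[1], popt[2], perr[1], perr[2]
```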
Surfactants. The GramA/DDAO and GramA/C 12 EO 4 systems were studied on an in-house setup with a molybdenum rotating anode as the source [32]. The X-ray energy is 17.4 keV (λ = 0.71 Å) and the sample-to-detector distance is 75 cm, yielding an accessible q-range of 0.3 to 30 nm -1 . We used five peptide-to-surfactant molar ratios (also denoted by P/L) ranging from 0 to 1/5.5 and eight temperature points, from 0 to 60 °C. The best fit for the peak was obtained using a Gaussian function:
I(q) = I 0 exp[−(q − q 0 )²/(2σ²)].
For coherence with the measurements on the lipid systems, we present the results in terms of the HWHM γ = √(2 ln 2) σ.
We emphasize that the difference in peak shape (Lorentzian vs. Gaussian) is intrinsic to the systems (double-chain lipids vs. single-chain surfactants) and not due to the resolution of the experimental setups, which is much better than the typical HWHM values measured.
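The same procedure applies to the surfactant data with a Gaussian model; the only additional step is converting the fitted σ into the HWHM, as in the sketch below (again an illustration rather than the original analysis code).

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(q, i0, q0, sigma):
    return i0 * np.exp(-(q - q0) ** 2 / (2.0 * sigma ** 2))

def fit_surfactant_peak(q, intensity):
    """Return (q0, HWHM) for a Gaussian chain peak."""
    popt, _ = curve_fit(gaussian, q, intensity,
                        p0=[intensity.max(), 14.0, 1.5])
    q0, sigma = popt[1], popt[2]
    hwhm = np.sqrt(2.0 * np.log(2.0)) * sigma   # gamma = sqrt(2 ln 2) * sigma
    return q0, hwhm
```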
3 Results and discussion
NMR
We acquired twelve 2D spectra for the various surfactants and GramA concentrations. Figure 1 shows the 2D DROSS NMR spectrum of C 12 EO 4 with a molar GramA concentration P/L = 0.118. For each 2D spectrum, slices were extracted at each carbon position and order parameters were deduced. Figure 2 shows a set of such representative slices (at the position C ω-2 ). As already explained, carbons at the glycerol backbone and at the first two positions along the acyl chains were discarded. Figure 3 shows the order profiles determined for each lipid and surfactant, with variable amounts of GramA.
As shown in Figure 3, there is hardly any change in the head group region (C α , C β and C γ ), which is expected considering the high mobility of this region, except in DDAO (CH 3 , C 2 and C 3 ). In the aliphatic region, in DMPC (Figure 3(a)), the order parameter increases for a ratio of P/L = 0.06 and then decreases for P/L = 0.115. In DLPC and C 12 EO 4 mixtures (Figure 3(b) and (c)), the order parameter increases upon GramA addition, with little dependence on the P/L ratio.
Overall, we conclude that the order profiles significantly increase along the acyl chains with the concentration of gramicidin, except in the case of DMPC, where the order profile globally increases with the addition of P/L = 0.05 of gramicidin and then decreases at P/L = 0.11. The same holds for the head groups of DDAO, where we show that gramicidin has the same effect as on the acyl chains.
Consequently, we show that gramicidin generally rigidifies the acyl chains of DLPC, C 12 EO 4 and DDAO, as well as the head group region of DDAO. In the case of DMPC, gramicidin first rigidifies the acyl chains, but more peptide tends to return the membrane to its original fluidity.
WAXS
The chain peak has long been used as a marker for the ordered or disordered state of the hydrocarbon chains within the bilayer [34]. For lipids, an important parameter is the main transition (or chain melting) temperature, at which the chains go from a gel to a liquid crystalline (in short, liquid) phase [35].
The main transition temperature of pure DLPC is at about -1 °C [36,37,[START_REF] Cevc | Phospholipid bilayers: physical principles and models[END_REF][START_REF] Kučerka | [END_REF]] and that of pure DMPC is between 23 °C and 24 °C [40,36,37,[START_REF] Cevc | Phospholipid bilayers: physical principles and models[END_REF][START_REF] Kučerka | [END_REF]].
For the lipids, in the liquid phase the peak width increases slightly with P/L at all temperatures (Figure 5). In the gel phase of DMPC at 20 °C (Figure 5 right and Figure 4) this disordering effect is very pronounced, in agreement with the results of several different techniques, reviewed in Ref. [41] (Section V-A). The linear increase in HWHM with P/L can be interpreted as a broadening (rather than a shift) of the transition. The liquid crystalline phase value of the HWHM is reached only at the highest investigated P/L, amounting to one GramA molecule per five or six lipids.
For surfactants, which we only studied in the liquid crystalline phase, changes to the chain peak are slight. In C 12 EO 4 membranes, the peak position q 0 decreases very slightly with temperature (Figure 7), while the peak width is almost unchanged by temperature or gramicidin content (Figure 9 right). As an example, we observe a small decrease of q 0 with temperature at P/L = 0.073 (Figure 7 left), as well as a very slight increase with P/L at 20 °C, as seen in Figure 7 right. If we consider the overall WAXS peak position shift as a function of temperature and for all inclusion concentrations (data not shown), we find a small temperature dependence for each P/L. Compared with the value in the absence of inclusions, the peak position slightly shifts after adding gramicidin at P/L = 0.015 but remains almost the same for the different gramicidin contents, showing no significant influence of the inclusions on the C 12 EO 4 membranes. This conclusion is confirmed by the very modest change in the HWHM values presented in Figure 9 on the right. At P/L = 0, the HWHM is very close to 2.6 nm -1 for all temperatures. As the gramicidin content increases, we observe a small gap between the different temperatures: the width stays constant or increases for the lower temperatures (up to about 40 °C) and decreases for the higher ones. This gap widens at high gramicidin content (P/L > 0.07).
Fig. 9 HWHM as a function of the concentration P/L, for all measured temperatures. DDAO bilayers (left) and C 12 EO 4 bilayers (right).
In the case of DDAO, the influence of gramicidin content is more notable than for C 12 EO 4 and the behavior is richer, especially in the presence of cholesterol.
Without cholesterol, the DDAO WAXS peaks coincide for the different temperatures at a given inclusion concentration (e.g. in Figure 6 left at P/L = 0.178), whereas the profiles differ according to the gramicidin concentration at a given temperature (see Figure 6 right).
These observations differ in the presence of cholesterol, where for one concentration of gramicidin inclusions (e.g. the case of P/L = 0.082 in Figure 8 left) at different temperatures, we observe two families in which the spectra are quasi-identical: one group at low temperatures (0-30 °C) and another distinct group at higher temperatures (40-60 °C). At 20 °C, the peaks for DDAO/cholesterol tend to superpose for P/L > 0.028 (data not shown), whereas at 50 °C (Figure 8 right) the peak profiles differ and vary with P/L.
For the DDAO system, the peak occurs at much lower q 0 with cholesterol than without: q 0 = 12.77 nm -1 at 20 °C, 12.62 nm -1 at 30 °C and 12.28 nm -1 at 50 °C. Thus, cholesterol expands DDAO bilayers, in contrast with the condensing effect observed in lipid membranes [42,43]. More detailed molecular-scale studies would be needed to understand this phenomenon.
Without cholesterol, the width of the main peak in DDAO membranes is little affected by a temperature change, at least between 0 and 60 °C. Without gramicidin, we observe two distinct HWHM values: ∼ 2.38 nm -1 at the lower temperatures (between 0 °C and room temperature) and ∼ 2.5 nm -1 at higher temperatures (between 30 and 60 °C), but this gap closes with the addition of gramicidin, and at high P/L only an insignificant difference of 0.05 nm -1 persists (Figure 9 left).
On the other hand, at a given temperature the HWHM does vary as a function of P/L. This change is sigmoidal, with an average HWHM of ∼ 2.4 nm -1 for P/L < 0.05 and ∼ 2.7 nm -1 for P/L > 0.11. Thus, above this concentration, gramicidin slightly decreases the positional order of the chains.
An opposite effect is observed in the presence of cholesterol (Figure 10), where at high temperature (40-60 °C) the HWHM drops with P/L: for instance, from 2.37 nm -1 to 2.08 nm -1 at 60 °C. At low temperature (0-30 °C) there is no systematic dependence on P/L.
Overall, we can conclude that gramicidin addition has an effect that differs according to the membrane composition. The temperature has a significant influence only in the presence of cholesterol. In all surfactant systems and over the temperature range from 0 to 60 °C, the peak is broad, indicating that the alkyl chains are in the liquid crystalline state. There are, however, subtle differences between the different compositions, as detailed below.
In C 12 EO 4 membranes, the peak position q 0 decreases very slightly with temperature, while the HWHM is almost unchanged by temperature or gramicidin content.
For DDAO (without cholesterol), q 0 also decreases with temperature at a given P/L, but increases with P/L at fixed temperature. On adding gramicidin, the HWHM increases slightly with a sigmoidal dependence on P/L. Thus, a high gramicidin concentration P/L ≥ 0.1 reduces the positional order of the chains in DDAO bilayers.
The opposite behavior is measured in DDAO membranes with cholesterol, where adding gramicidin inclusions leads to two distinct behaviors depending on the temperature. At low temperatures (between 0 °C and 30 °C) there is only a small dependence on peptide concentration and a clear temperature correlation, whereas at high temperatures (between 40 °C and 60 °C) there is a strong decrease in the HWHM in the presence of inclusions, which depends only on the P/L content, without any variation upon the temperature rise. Since at P/L = 0 the HWHM value is very close for the different temperatures, we can conclude that adding gramicidin to a membrane containing cholesterol helps rigidify it.
Comparing the NMR and WAXS results
Although the orientational and positional order parameters are distinct physical quantities, one would expect them to be correlated (e.g. straighter molecules can be more tightly packed, as in the gel phase with respect to the fluid phase). This tendency is indeed observed in our measurements, with the exception of DDAO.
We measured by NMR that the orientational order parameter for DMPC increases when adding P/L = 0.05 and slightly decreases at P/L = 0.1 (Figure 3(a)). This behavior was also measured by WAXS for the positional order parameter at both P/L values (Figure 5 right). Similarly, we measured for the DLPC acyl chains the same orientational and positional order profiles, where the order increases for P/L = 0.05 and remains the same when adding P/L = 0.1 gramicidin (Figures 3(b) and 5 left).
As for the C 12 EO 4 surfactant acyl chains, we found a modest rise in both the orientational and the positional order parameters when adding the gramicidin peptide, with no dependence on the P/L molar ratio (Figures 3(c) and 9 right).
In the case of DDAO we found that adding gramicidin significantly increases the orientational order (Figure 3(d)) and decreases the positional order (Figure 9 left). Solid-state NMR also shows an abrupt change in the headgroup region when little GramA is added, followed by a more gradual ordering of the acyl chain when more GramA is added. This may imply a particular geometrical reorganisation of DDAO around the GramA inclusion that could be tested with molecular models.
Conclusions
Using solid-state NMR and wide-angle X-ray scattering, we showed that inserting Gramicidin A in lipid and surfactant bilayers modifies the local order of the constituent acyl chains, depending on multiple factors. In particular, we studied the influence of membrane composition and temperature on the local order.
The behavior of this local order is quite rich, with significant differences between lipids, on the one hand, and single-tail surfactants, on the other, but also between DDAO and all the other systems.
We showed that adding gramicidin influences the orientational order of the acyl chains, and we find a similar behavior for the orientational order and the positional order, except in the particular case of DDAO.
In this system, the GramA content seems to notably influence the DDAO acyl chains by decreasing their positional order and increasing their orientational order. GramA also influences the orientational order of the head groups. Also in DDAO, we showed by WAXS that the temperature has a significant influence on the positional order only in the presence of cholesterol.
In the gel phase of DMPC, GramA addition leads to a linear decrease in positional order, saturating at the liquid phase value for a molar ratio P/L between 1/6 and 1/5. In the liquid phase, we measure relatively small modifications in the local order in terms of position and orientation when adding Gramicidin A, especially in the case of DMPC, DLPC and C 12 EO 4 . This is a very significant result, which allows further elaboration of elastic models in the presence of inclusions by using the same elastic constants obtained for bare membranes.
As seen above for DDAO, in some membranes the presence of inclusions influences the positional and orientational order of the acyl chains differently.
Consequently, combining both techniques (NMR and WAXS) on the same system is very useful for obtaining a full picture of the local order. A more detailed analysis could be performed by comparing our results with molecular dynamics simulations. The correlation between changes in the chain order and larger-scale parameters of the bilayer (e.g. the elastic properties) could be established by using dynamic techniques, such as neutron spin echo.
Fig. 1 Example of a 2D 1H-13C DROSS spectrum for GramA/C12EO4 with P/L = 0.118.
Fig. 2 Dipolar coupling slices of the C ω- at 30 °C.
Fig. 3 Orientational order parameter |S_CH| for DMPC (a), DLPC (b), C12EO4 (c) and DDAO (d) bilayers embedded with GramA pores for different P/L at 30 °C. Error bars are smaller than the symbol size.
Fig. 4 Chain peak for DMPC in bilayers doped with varying amounts of GramA at 20 °C.
Fig. 5 Width of the chain peak for DLPC (left) and DMPC (right) bilayers as a function of the GramA doping at three temperatures.
…(left), as well as a very slight increase with P/L at 20 °C, as seen in Figure 7 (right). If we take the overall WAXS peak position shift as a function of temperature and for all inclusion concentrations (data not shown), we find a small temperature dependence for each P/L. Compared with the value in the absence of inclusions, the peak position shifts slightly after adding gramicidin at P/L = 0.015 but remains almost the same for the different gramicidin contents, showing no significant influence of the inclusions on the C12EO4 membranes.
Fig. 6 Scattered signal I(q) for DDAO bilayers, as a function of temperature for the most concentrated sample, with P/L = 0.178 (left), and for all concentrations at T = 40 °C (right).
Fig. 7 Scattered signal I(q) for C12EO4 bilayers, as a function of temperature for a sample with P/L = 0.073 (left) and as a function of concentration at room temperature, T = 20 °C (right).
Fig. 8 Scattered signal I(q) for DDAO/Cholesterol bilayers, as a function of temperature for a sample with P/L = 0.082 (left) and as a function of concentration at T = 50 °C (right).
…(left) at different temperatures, we observe two families in which the spectra are quasi-identical: one group at low temperatures (0-30 °C) and another distinct group at higher temperatures (40-60 °C). At 20 °C, the peaks for DDAO/Cholesterol tend to superpose for P/L > 0.028 (data not shown), whereas at 50 °C (Figure 8 right) the peak profiles differ and vary with P/L.
Fig. 10 HWHM as a function of the concentration P/L, for all measured temperatures in the GramA/DDAO+Cholesterol/H2O system.
…°C [36,37,[START_REF] Cevc | Phospholipid bilayers: physical principles and models[END_REF][START_REF] Kučerka | [END_REF]], and that of pure DMPC is between 23 °C and 24 °C [40,36,37,[START_REF] Cevc | Phospholipid bilayers: physical principles and models[END_REF][START_REF] Kučerka | [END_REF]].
[Figure: scattered intensity I(q) versus q (5-35 nm⁻¹) for GramA/DMPC at 20 °C, for P/L ratios from 0 (pure DMPC) up to 1/5.21.]
Acknowledgements We thank the CMCP (UPMC, CNRS, Collège de France) for the use of their Bruker AVANCE 300-WB NMR spectrometer. The ESRF is acknowledged for the provision of beamtime (experiment SC-2876) and Jérémie Gummel for his support. This work was supported by the ANR under contract MEMINT (2012-BS04-0023). We also acknowledge B. Abécassis and O. Taché for their support with the WAXS experiment on the MOMAC setup at the LPS.
01759998 | en | ["info.info-dc", "info.info-ni"] | 2024/03/05 22:32:13 | 2018 | https://hal.science/hal-01759998/file/paper.pdf | Bastien Confais
Adrien Lebre
Benoît Parrein
Improving locality of an object store working in a Fog environment
Introduction
The Cloud Computing model, relying on a few datacenters located far from the users, cannot satisfy the new constraints of the Internet of Things (IoT), especially in terms of latency and reactivity. Fog and Edge computing infrastructures have been proposed as an alternative [START_REF] Bonomi | Fog Computing and Its Role in the Internet of Things[END_REF]. This new paradigm consists of deploying small datacenters close to the users located at the edge, in order to provide them with low-latency computing. Figure 1 shows that the Fog platform is composed of a significant number of sites that can be geographically spread over a large area. IoT devices and users are mobile and connect to the closest Fog Computing site. We consider the network latency between a client and its closest Fog site to be lower than the network latency between the Fog sites.
We work on a storage solution for this infrastructure. Our goal is to provide a seamless storage experience across the different sites, capable of working in a disconnected mode and of containing the network traffic to the sites soliciting the data. We previously proposed to use the InterPlanetary FileSystem (IPFS) object store [START_REF] Benet | IPFS -Content Addressed, Versioned, P2P File System[END_REF][START_REF] Confais | Performance Analysis of Object Store Systems in a Fog and Edge Computing Infrastructure[END_REF] as a Fog storage system because it enables users and IoT devices to write locally on their site and also automatically relocates the accessed objects to the site where they are requested. After improving the locality of IPFS by adding a Scale-Out NAS [START_REF] Confais | An object store service for a Fog/Edge Computing infrastructure based on ipfs and a scale-out NAS[END_REF] on each site and proposing to manage object locations in a tree, we evaluated our proposals on the Grid'5000 system by adding virtual network latencies between Fog nodes and clients. We now want to evaluate them in a more realistic environment.
We propose to evaluate the performance of the storage system using the Grid'5000 (G5K) and FIT platforms together. The G5K platform emulates the Fog layer whereas the Edge layer is emulated on the FIT one. Locations like Grenoble, Lyon or Lille are appropriate for our experiment because they host both a G5K and a FIT site, so that we can expect the network latency between IoT nodes and Fog nodes to be low. Figure 2 shows the general topology used for the experiment. We developed a RIOT application to enable IoT nodes to access an IPFS server. The scenario consists of reading a value from a sensor and storing it in an object on the IPFS server located in the Fog. The challenge for such a scenario is to connect the two platforms. Reaching the G5K platform from the IoT node is not easy, because G5K nodes and FIT nodes are connected to the Internet through IPv4 NAT and the G5K platform does not provide any IPv6 connectivity. Because of the lack of end-to-end IP connectivity between the two platforms, we encapsulate messages in SSH tunnels between an A8-node on the FIT platform and the IPFS node. Instead of accessing the IPFS node directly (plain arrow), the client accesses it through the introduced A8-node, which acts as a proxy (dashed arrow). This solution works but is not ideal, not only because the tunnel degrades the performance but also because the IP routing is not direct: the path between the two platforms in Grenoble goes through Sophia, increasing the network latency.
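As an illustration of the client side of this scenario, the sketch below stores a sensor reading as an IPFS object and reads it back through the local end of such an SSH tunnel. The endpoint, the port and the tunnel command are assumptions for illustration; the actual client is the RIOT application running on the IoT node, and the tunnel is established separately.

```python
# Illustrative sketch (not the actual RIOT client): push a sensor value to an
# IPFS node reached through the local end of an SSH tunnel acting as a proxy.
# Assumes a tunnel such as: ssh -L 5001:ipfs-node:5001 a8-node   (hypothetical names)
import io
import requests

IPFS_API = "http://127.0.0.1:5001/api/v0"  # local end of the SSH tunnel (assumption)

def store_reading(value: float) -> str:
    """Add a sensor reading as an IPFS object and return its content identifier."""
    payload = {"file": ("reading.txt", io.BytesIO(str(value).encode()))}
    r = requests.post(f"{IPFS_API}/add", files=payload, timeout=10)
    r.raise_for_status()
    return r.json()["Hash"]

def read_object(cid: str) -> bytes:
    """Retrieve an object by its identifier through the same proxy."""
    r = requests.post(f"{IPFS_API}/cat", params={"arg": cid}, timeout=10)
    r.raise_for_status()
    return r.content

if __name__ == "__main__":
    cid = store_reading(21.5)            # e.g. a temperature sample
    print(cid, read_object(cid))
```

Timing these two calls with and without the tunnel gives a simple way to quantify the latency penalty discussed above.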
Conclusion
This experiment was one of the first involving the two platforms simultaneously, and we pointed out the difficulties of interconnecting them. Once these problems are solved, we plan to perform a more advanced scenario involving node mobility or the possibility for a node to choose which Fog site to use.
Fig. 1: Overview of a Cloud, Fog and Edge infrastructure.
Fig. 2: General architecture of our experiment and interconnection between the Grid'5000 and the FIT platforms.
01480538 | en | ["phys.meca.mema"] | 2024/03/05 22:32:13 | 2016 | https://hal.science/hal-01480538/file/machado2016.pdf | Guilherme Machado
email: guilherme.machado@imag.fr
Arthur Stricher
email: arthur.stricher@insa-lyon.fr
Grégory Chagnon
email: gregory.chagnon@imag.fr
Denis Favier
email: denis.favier@imag.fr
Mechanical behavior of architectured photosensitive silicone membranes: experimental data and numerical analysis
Keywords: architectured membranes, photosensitive silicone, biocompatible silicone, bulge test, bi-material hyperelastic solid
Introduction
Facing increasing demands for multifunctional solutions, architectured materials take an increasingly important place in many applications in order to design specific mechanical properties for a given purpose. Very often the function is not provided by the local property only (grain size, precipitation, polymer chain design and interchain bonding, state of crystallization), but by the interplay between the shape, the properties, and possible association of materials [START_REF] Brechet | Embury Architectured materials: Expanding materials space[END_REF].
For instance, some polymers are architectured due to their chain design [START_REF] Zuo | Effects of block architecture on structure and mechanical properties of olefin block copolymers under uniaxial deformation[END_REF]. Usually, this strategy operates at scales between 1 nm and 10 µm. Architectured silicone materials can also be composed of an association of materials (also called hybrid materials), for example fiber-reinforced [START_REF] Bailly | In-plane mechanics of soft architectured fibre-reinforced silicone rubber membranes[END_REF] or NiTi-reinforced membranes [START_REF] Rey | An original architectured NiTi silicone rubber structure for biomedical applications[END_REF]. Even if such composites are very good candidates for biomimetic membranes, this solution involves the integration of different synthetic materials, often associated with local mismatches of mechanical properties and adhesion difficulties. Local mismatches may cause excessive stress concentrations within the structure and thus premature failure of the composite upon stretching. Thus, one of the challenges of architectured materials is to ensure efficient stress transfer and to avoid local failure between regions of different mechanical properties.
Other materials can be considered as architectured because of their geometry. [START_REF] Meunier | Anisotropic large deformation of geometrically architectured unfilled silicone membranes[END_REF] and [START_REF] Rebouah | Development and modeling of filled silicone architectured membranes[END_REF] developed crenelated membranes with an unfilled and a filled silicone rubber. The main advantage of these membranes is that they present an anisotropic behavior without any interface in the material. Indeed, the crenels and their orientations make it possible to induce and control the anisotropy, but this approach is limited by the out-of-plane geometry and by the process required to obtain the reinforced membrane.
This paper focuses on the mechanical behavior of architectured silicone membranes where the membrane architecture is controlled by the in-plane intrinsic properties but also by a desired topology at a scale between the microstructure and the application. The concept is to create a heterogeneous material with locally tuned mechanical properties by changing the local crosslink density. The approach can be exploited, for example, to create bioinspired membranes that mimic the anisotropic structural properties of soft tissues. In this context, Section 2 presents all the precautions concerning the experimental mechanical testing procedures and the strain field measurement techniques. Experimental data and analyses are presented in two parts: first, in Section 3, the three deformation modes (uniaxial, planar and equibiaxial tensile tests) are tested independently for each phase; second, in Section 4, the bulge test of a graded membrane containing two phases is presented. In Section 5, a finite element analysis (FEA) is carried out using a hyperelastic model fitted simultaneously on the three previous tensile tests. Then, the numerical model is used to try to predict the bimaterial response, and the results are discussed. Finally, Section 6 contains some concluding remarks and outlines some future perspectives.
Testing procedures background and strain field measurement techniques
A series of mechanical tests was carried out to characterize the mechanical behavior of the silicone in its soft and hard phases: first, for the three deformation modes (uniaxial, planar (pure shear) and equibiaxial tensile tests); second, for the bulge test of a graded membrane containing two phases.
Preparation of the silicone specimens
Samples were prepared in the IMP laboratory (Ingénierie des Matériaux Polymères, Villeurbanne, France), using a polydimethylsiloxane (PDMS) elastomer with the addition of a UV-sensitive photoinhibitor. The membrane was selectively exposed to UV radiation; the cross-linking of the UV-exposed elastomer is then inhibited, leading to a softer material than in the unexposed zone.
From this point forward, the soft phase denotes the UV exposed material and the hard phase the unexposed one.
In-plane tests
In-plane quasi-static experiments were conducted on a Gabo Explorer testing machine with ±25 N and ±500 N load cells for uniaxial tension and planar tension, respectively. A 2D digital image correlation (DIC) system was used during the tests to obtain 2D fields at the surface of the plane specimens. The commercial VIC-2D 2009 software package from Correlated Solutions was used to acquire images. The images were recorded at 1 Hz with a Pike F-421B/C CCD camera with a sensor resolution of C_r = 7.4 µm/pixel. The reason for this large sensor format is the goal to achieve high-resolution images with low noise. The 50 mm camera lens was set to f/22 using a 50 mm extension ring. Grayscale 8-bit images were captured using a full scan of 2048 pixels × 2048 pixels. A cross-correlation function was then used, and displacement vectors were calculated by correlating square facets (or subsets) of f_size = 21 pixels with a grid spacing of G_s = 10 pixels to carry out the correlation process between the undeformed and deformed images. To achieve sub-pixel accuracy, optimized 8-tap splines were used for the gray value interpolation. As the optimization criterion for the subset matching, a zero-normalized squared difference was adopted, which is insensitive to offset and scale in lighting. For the weighting function, the Gaussian distribution was selected, as it provides the best compromise between spatial and displacement resolution [START_REF] Sutton | Image correlation for shape, motion and deformation measurements[END_REF].
Out-plane tests
The bulge test was conducted in order to determinate an equibiaxial state of both phases and also tested the soft-hard bimaterial. A syringe driver was used and the internal pressure is measured by an AZ-8215 digital manometer. Seeing that the material is partially transparent, milk was used as hydrostatic fluid to increase the gray contrast for DIC and to avoid internal reflections. Inflation was sufficient slow to obtain a quasi-static load. Under the assumption of material isotropy over the circumferential direction, the principal directions of both stretch and stress tensors at each material particle are known ab initio to be the meridional and circumferential directions of the membrane surface. From this point forward, these directions will be denoted by the subscripts m and c respectively.
Assuming quasi-static motion, the equilibrium equations for a thin axisymmetric isotropic membrane, as adopted by Hill [START_REF] Hill | A theory of the plastic bulging of a metal diaphragm by lateral pressure[END_REF], can be expressed as
σ_m = p / (2 h κ_c)    (1)

σ_c = [p / (2 h κ_c)] (2 − κ_m / κ_c)    (2)
where (σ_m, σ_c) are the meridional and circumferential stresses and (κ_m, κ_c) are the meridional and circumferential curvatures; h is the current thickness and p is the time-dependent normal pressure acting uniformly (dp/dR = 0) over the radius R. As mentioned in [START_REF] Wineman | Large axisymmetric inflation of a nonlinear viscoelastic membrane by lateral pressure[END_REF] and [START_REF] Humphrey | Computer methods in membrane biomechanics[END_REF], a remarkable consequence of membrane theory is that it admits equilibrium solutions without explicitly requiring a constitutive equation, since the equilibrium equations are derived directly by balancing forces on a deformed element shape. As a consequence, they are valid for all classes of in-plane isotropic materials.
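To make the use of Eqs. (1)-(2) concrete, a small sketch evaluating the two membrane stresses from the measured pressure, current thickness and principal curvatures is given below; the variable names, units and sample values are illustrative assumptions.

```python
# Sketch: membrane stresses from Hill's equilibrium equations, Eqs. (1)-(2).
def membrane_stresses(p, h, kappa_m, kappa_c):
    """Return (sigma_m, sigma_c); p in MPa, h in mm, curvatures in 1/mm give MPa."""
    sigma_m = p / (2.0 * h * kappa_c)                 # Eq. (1)
    sigma_c = sigma_m * (2.0 - kappa_m / kappa_c)     # Eq. (2)
    return sigma_m, sigma_c

# Example (illustrative values): p = 25 kPa = 0.025 MPa, h = 0.3 mm,
# near-spherical cap at the pole where kappa_m ~ kappa_c.
print(membrane_stresses(0.025, 0.3, kappa_m=0.05, kappa_c=0.05))
```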
Recently, Machado et al. [START_REF] Machado | Membrane curvatures and stress-strain full fields of axisymmetric bulge tests from 3D-DIC measurements. Theory and validation on virtual and experimental results[END_REF] presented a methodology to compute the membrane curvatures of the bulge test from 3D-DIC measurements. A very convenient calculation scheme was proposed, based on the surface representation in curvilinear coordinates. From that scheme, the circumferential and meridional curvatures, and also the respective stresses, can be computed. In [START_REF] Machado | Membrane curvatures and stress-strain full fields of axisymmetric bulge tests from 3D-DIC measurements. Theory and validation on virtual and experimental results[END_REF], the authors presented an evaluation scheme for the bulge test based on the determination of the surface curvature tensor and the membrane stress tensor. With this method, the circumferential as well as the meridional stress can be determined at every stage and position of the specimen.
The commercial VIC-3D 7 software package from Correlated Solutions was used to acquire images using the two digital Pike cameras described in Section 2.3. Both cameras were set up at a distance D = 150 mm and a 35° angle to the specimen, using 28 mm focal length lenses opened at f/16. Prior to the test, a good calibration of the 3D-DIC system is required. The following correlation options were chosen: 8-tap splines for the gray value interpolation, the zero-normalized squared difference for the subset matching, and the Gaussian distribution for the weighting function. Square facets of f_size = 15 pixels and a grid spacing of G_s = 5 pixels were used to carry out the correlation process between the undeformed and deformed images. The obtained spatial resolution was S_r = 15 µm.
Note that the spatial resolution (S_r) of the discretized surface depends essentially on the camera sensor resolution (C_r) and on the choice of the grid spacing (G_s), which defines the distance between the data points on the object. The grid spacing is the distance between the grid points in pixels. Thus, the grid spacing limits the spatial resolution, as each grid point represents one single data point of the result. The facet size controls the area of the image that is used to track the displacement between images. The minimal facet size (f_size) is limited by the size and roughness of the stochastic pattern on the object surface. Each facet must be large enough to ensure that there is a sufficiently distinctive pattern, with good contrast features, contained in the area-of-interest used for correlation.
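As a reduced, axisymmetric illustration of the curvature computation described above, the meridional and circumferential curvatures of a deformed profile z(r) can be estimated from its first and second derivatives. The full method of [11] works on the general curvilinear surface representation, so this is only a simplified sketch; the finite-difference scheme and the spherical test profile are assumptions.

```python
# Reduced sketch: principal curvatures of an axisymmetric deformed profile z(r),
# estimated by finite differences (the published method uses the full
# curvilinear-coordinate surface representation).
import numpy as np

def axisymmetric_curvatures(r, z):
    """r, z: 1D arrays describing the deformed meridian (r > 0, same units)."""
    dz = np.gradient(z, r)          # z'(r)
    d2z = np.gradient(dz, r)        # z''(r)
    kappa_m = d2z / (1.0 + dz**2) ** 1.5            # meridional curvature
    kappa_c = dz / (r * np.sqrt(1.0 + dz**2))       # circumferential curvature
    return kappa_m, kappa_c

# Example: spherical cap of radius 20 mm, sampled away from the apex;
# with this sign convention both curvatures come out close to -1/20 mm^-1.
r = np.linspace(1.0, 15.0, 200)
z = np.sqrt(20.0**2 - r**2)
km, kc = axisymmetric_curvatures(r, z)
```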
Soft and Hard phases: experimental results and analysis
Uniaxial tension test
Uniaxial tensile tests were performed on small dog-bone-shaped specimens. The samples had an initial gauge length l_0 = 12 mm, width w_0 = 4 mm and thickness h_0 = 0.8 mm. During the test, performed at an elongation rate of 3.0 × 10⁻² s⁻¹, the nominal stress tensor P (first Piola-Kirchhoff stress tensor) is assumed to be homogeneous within the gauge region, as is the deformation gradient tensor F. Since the current thickness is not measured, the material is assumed to be incompressible, i.e., det(F) = 1. A cyclic loading-unloading test was realized for the soft and hard phases; the curves are presented in Fig. 1. In the same figure, the first loading curves of both phases are plotted. Different phenomena are highlighted: first, a large stress-softening appears when comparing the two first loadings at each strain level; a small hysteresis is observed after the first cycle; moreover, little residual elongation is observed for both phases.
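A small sketch of the data reduction implied here (nominal stress from force and initial cross-section, stretch from gauge elongation) is given below; the input arrays are placeholders, only the specimen dimensions come from the text.

```python
# Sketch: convert raw force/elongation of the dog-bone test into nominal
# stress P = F / (w0 * h0) and stretch lambda = 1 + dl / l0.
import numpy as np

L0, W0, H0 = 12.0, 4.0, 0.8   # initial gauge length, width, thickness (mm)

def nominal_curve(force_N, elongation_mm):
    stretch = 1.0 + np.asarray(elongation_mm) / L0
    stress = np.asarray(force_N) / (W0 * H0)   # N/mm^2 = MPa
    return stretch, stress

# Example with placeholder readings:
# nominal_curve([0.0, 2.3, 4.1], [0.0, 3.0, 6.0])
```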
Planar tension test
The pure shear strain state was approached by performing a planar tension test. The initial height l_0, the constant width w_0 and the thickness h_0 of the samples were 4.5 mm, 40 mm and 0.8 mm, respectively. The width of the specimen used for a planar tension test must be at least ten times greater than its length. These dimensions aim to create an experiment in which the specimen is constrained in the lateral direction such that all specimen thinning occurs in the thickness direction. A cyclic planar loading test was realized for both phases at an elongation rate of 1.0 × 10⁻² s⁻¹. The results are presented in Fig. 2. The planar tensile response exhibits the same phenomena as the uniaxial one. For the soft phase, the maximum principal stretch experienced by the planar specimens is smaller than that of the uniaxial tensile test specimens.
In general, this limitation lies in the fact that the planar tensile specimens must be constrained in the lateral direction without slipping. As a consequence, premature tearing at the grips is observed. This is the major difficulty in planar tensile tests of thin specimens.
Equibiaxial tension using the bulge test
The equibiaxial tension state is approached by the bulge test. Due to the axial symmetry of the experimental configuration, equibiaxiality of the stress and strain is obtained at the top of the inflated sample. The elongation rate was not controlled, but the pressure p was slowly increased. The stress-strain curves for the central area are presented in Fig. 3 for cyclic loading.
The response is qualitatively similar to uniaxial loading, with hysteresis and stress-softening.
Analysis
Table 1 presents a comparison of the soft and hard phases for the three different experimental load cases (uniaxial, plane shear and equibiaxial). It is easily determined from classical isotropic elasticity theory that, during the quasi-linear stages, the ratio of the stress-strain slopes between uniaxial tension (E) and equibiaxial tension (E_e) is E_e/E = 1/(1 − ν).
The ratio E_e/E was determined experimentally to be 2 for both phases. This is compatible with an incompressible material with Poisson's ratio ν = 0.5. The Young's modulus ratio between the two phases (E_H/S) was about 3.5 for the uniaxial and equibiaxial states and 2.5 for plane shear. At the beginning of loading, the stress levels of the two phases at λ = 1.1 of strain (P at 10%) are closer than the stress levels at λ = 2.0. The mean stress ratio, calculated over the whole load history, is kept practically constant for all loading cases.

Bulge test with soft-hard phase sample

The specimen disk effective dimensions are 18.5 mm of radius and 0.4 mm of thickness. The UV-irradiated zone is a concentric circle of 10 mm diameter. Cross-linking of the UV-exposed elastomer is inhibited, leading to a softer region than the surrounding unexposed part, as illustrated in Fig. 4.
The bulge test was chosen to test the bimaterial for two main reasons: stress concentrations can be easily assessed, since the soft-hard interface is far from the boundary conditions, and each inflation state involves a heterogeneous stress-strain state which can be determined analytically. Having said that, the bulge test offers valuable data for a modeling benchmark. Inflations were performed from 1 kPa to a maximum pressure of 25 kPa; for clarity, three levels were chosen to present the results: 6, 15 and 25 kPa. These inflations yielded principal stretches at the pole of about 1.16, 1.43 and 2.21, respectively. Fig. 6 presents the principal stretch values (λ_m, λ_c) obtained from the 3D-DIC system. Except for the pole (R = 0) and the clamped boundary (R = 1), all material points involve a heterogeneous strain state. As expected, the circumferential stretch λ_c tends to one, i.e., a pure planar stretching behavior, towards the clamped boundary (R → 1). However, most of the hard phase deformation is in the circumferential direction, since λ_c is less than 1.2 even for the maximal pressure level. The principal curvatures (κ_m, κ_c) and principal stresses (σ_m, σ_c) were computed as explained in [START_REF] Machado | Membrane curvatures and stress-strain full fields of axisymmetric bulge tests from 3D-DIC measurements. Theory and validation on virtual and experimental results[END_REF]. Fig. 7 shows the experimental curves. With respect to the principal curvature distributions, note that the equibiaxial membrane deformations near the membrane pole (R = 0) are associated with an approximately spherical geometry, i.e., κ_m ≈ κ_c. The small difference may be explained by the fact that the umbilical point may not lie exactly on the Z-direction axis. Note that for all pressure levels, the meridional curvature κ_m presents an inflection point, representing a change from convex to concave curvature, at the soft-hard interface (R = 0.27).
With regard to the stress plots in Figs. 7c and 7d, the stress state can be assumed to be equibiaxial at the pole (R = 0). Both stresses, σ_m and σ_c, experience an increasingly upward turn for R below about 0.27 as the pressure increases. Two inflection points can be observed around R = 0.27 and R = 0.36 in all directions for all pressure loads.
Finite element simulations
In the previous section, the bulge test was used to obtain the stress and strain fields for a non-homogeneous material. Stresses were calculated without explicitly specifying a constitutive relation for the material. The aim of this section is to compare these results with a classical finite element analysis using the Mooney-Rivlin hyperelastic constitutive equation. Stress-softening and hysteresis are not considered; thus only the first loading behavior was investigated.
Hyperelastic fitting using the Mooney-Rivlin hyperelastic model
Assuming an incompressible isotropic hyperelastic material behavior, the two-parameter Mooney-Rivlin model is expressed as
W(Ī_1, Ī_2) = C_10 (Ī_1 − 3) + C_01 (Ī_2 − 3)    (3)
where C_10 and C_01 are material parameters that must be identified, and Ī_1 and Ī_2 are the first and second strain invariants of the isochoric elastic response. Due to its mathematical simplicity as well as its prediction accuracy in the range of moderately large strains, the Mooney-Rivlin model has been widely employed in the description of the behavior of rubbery materials. It is known that different deformation modes are required to obtain the parameters that define the stress-strain relationship accurately. The uniaxial, planar and equibiaxial data acquired in Section 3 are therefore simultaneously involved in a least-squares minimization in order to extract the set of material parameters for each material phase. Table 2 summarizes the two sets of C_10 and C_01 parameters. Fig. 8 shows the stress-strain curves of the first-loading experimental data and the Mooney-Rivlin model fitting for each deformation mode. The adopted fitting procedure allows a material model description that is valid for a general deformation state. As expected, the model shows a good agreement with the uniaxial and planar tensile test data up to 100% of strain, i.e., λ = 2.0.
Moreover, the Mooney-Rivlin model starts to fail to account for strains larger than λ = 1.3 for the equibiaxial tensile test, in particular for the hard phase. Nevertheless, as pointed out in [START_REF] Marckmann | Comparison of hyperelastic models for rubberlike materials[END_REF], there are very few hyperelastic constitutive models able to simultaneously reproduce multi-dimensional data with a unique set of material parameters.
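The simultaneous identification described above can be sketched as follows, using the standard incompressible Mooney-Rivlin nominal-stress expressions for the three homogeneous deformation modes. The data arrays below are placeholders for the measured first-loading curves, not the actual experimental values.

```python
# Sketch: simultaneous least-squares fit of C10, C01 on uniaxial, planar and
# equibiaxial first-loading data (standard incompressible Mooney-Rivlin
# nominal-stress expressions for homogeneous deformation modes).
import numpy as np
from scipy.optimize import least_squares

def P_uniaxial(lam, C10, C01):
    return 2.0 * (lam - lam**-2) * (C10 + C01 / lam)

def P_planar(lam, C10, C01):
    return 2.0 * (lam - lam**-3) * (C10 + C01)

def P_equibiaxial(lam, C10, C01):
    return 2.0 * (lam - lam**-5) * (C10 + C01 * lam**2)

def residuals(params, data):
    C10, C01 = params
    res = [model(lam, C10, C01) - P_exp for model, (lam, P_exp) in data.items()]
    return np.concatenate(res)

# Placeholder data: {model: (stretch array, nominal stress array in MPa)}
data = {
    P_uniaxial:    (np.array([1.1, 1.5, 2.0]), np.array([0.24, 0.90, 1.43])),
    P_planar:      (np.array([1.1, 1.3]),      np.array([0.31, 0.80])),
    P_equibiaxial: (np.array([1.1, 1.3]),      np.array([0.45, 1.10])),
}
fit = least_squares(residuals, x0=[0.3, 0.1], args=(data,))
C10, C01 = fit.x
print(C10, C01)
```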
Experimental and numerical comparison of bulge test results
The non-homogeneous bulge test was simulated with an axisymmetric model using the Abaqus commercial finite element code. Continuum eight-node biquadratic hybrid fully integrated elements (CAX8H) were used. Based on the result of a mesh sensitivity study, the optimal global element size for the membrane mesh was h = 35 µm. At the soft-hard interface, a non-adaptive h-refinement was used to improve the mesh quality, employing a mesh size gradient only in the R direction and resulting in an element edge size of about h/6 over the interface neighborhood. Over the membrane thickness, 30 Gauss integration points were used.
Results of the FEA using the Mooney-Rivlin model are superposed with the experimental fields (principal stretches and principal stresses) in Fig. 9. Two pressure levels were chosen to present the results: 6 and 25 kPa. All numerical predictions qualitatively follow the trends of the experimental data. It is possible to observe a discontinuity in the model response at R = 0.27, even though an h-refinement was used in this zone.
Considering the principal stress plots in Figs. 9c and 9d, the numerical simulations do not correspond well with the experimental ones in either load case. This result can be related to the limitations of the Mooney-Rivlin model in fitting complex stress-strain states at large strain levels.
Fig. 10a presents the FEA errors (e_m, e_c) with respect to the principal stresses over the bulge profile in both deformed configurations. Using a confidence interval of 95%, the mean errors are ê_m = 31%, ê_c = 30% for the lower pressure level, and ê_m = 35%, ê_c = 29% for the highest pressure level. Regardless of the soft-hard interface, the stress discrepancies are independent of the deformation level. This fact is also observed in Fig. 10b, where the deviations of the stress ratio σ_m/σ_c are very close for both load levels.
Modeling analysis
The presented results show the limitations of the classical finite element method in tackling heterogeneous systems with a moderate modulus mismatch across the material interface undergoing large strains, with an incompressible non-linear hyperelastic material behavior. The results reveal that a more sophisticated representation of the soft-hard interface must be taken into account in the numerical modeling. For example, [START_REF] Srinivasan | Generalized finite element method for modeling nearly incompressible bimaterial hyperelastic solids[END_REF] proposed an extension of the generalized finite element method to tackle heterogeneous systems with non-linear hyperelastic materials. However, it must be recognized that this is outside the scope of this study. Independently of the numerical treatment, a more detailed knowledge of the influence of the material interface on the macroscopic mechanical behavior is necessary. For this purpose, the Mooney-Rivlin model of Eq. 3 is written in terms of the principal stretches as
σ_m = 2 [ C_10 ( λ_m² − 1/(λ_m² λ_c²) ) + C_01 ( λ_m² λ_c² − 1/λ_m² ) ]    (4)

σ_c = 2 [ C_10 ( λ_c² − 1/(λ_m² λ_c²) ) + C_01 ( λ_m² λ_c² − 1/λ_c² ) ]    (5)
Replacing the measured principal stretches and the previously identified parameters in Eqs. 4 and 5 reproduces the experimental stresses well for both inflation states (Figs. 11a and 11b). Finally, keeping in mind the experimental spatial resolution of 15 µm, the phase transition is estimated to extend over about 3.15 mm, i.e., 17% of the total sample radius. Within the tested loading range, the size of the soft-to-hard transition can be assumed independent of the stress-strain level.
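At each radius R, Eqs. (4)-(5) are linear in C_10 and C_01 once the stretches are known, so the local parameters can be recovered from the measured stresses by solving a 2×2 system. The sketch below assumes the notation of Eqs. (4)-(5); the sample values are illustrative.

```python
# Sketch: recover local Mooney-Rivlin parameters C10(R), C01(R) from measured
# principal stretches and Cauchy stresses using Eqs. (4)-(5), which are linear
# in (C10, C01) at fixed stretches.
import numpy as np

def local_parameters(lam_m, lam_c, sig_m, sig_c):
    lm2, lc2 = lam_m**2, lam_c**2
    l3inv2 = (lam_m * lam_c) ** 2        # 1 / lambda_3^2 for an incompressible membrane
    l32 = 1.0 / l3inv2                   # lambda_3^2
    A = 2.0 * np.array([[lm2 - l32, l3inv2 - 1.0 / lm2],
                        [lc2 - l32, l3inv2 - 1.0 / lc2]])
    b = np.array([sig_m, sig_c])
    # Note: near the pole (lam_m ~ lam_c) the system becomes ill-conditioned.
    return np.linalg.solve(A, b)         # returns [C10, C01]

# Example point (illustrative values of stretches in -, stresses in MPa):
print(local_parameters(1.4, 1.3, 0.9, 0.8))
```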
Conclusions
The results show the mechanical behavior of photosensitive silicone membranes with a variable set of mechanical properties within the same material. With a reversible in-plane stretchability up to 200%, the soft-to-hard transition was expressed by a factor of 3.57 in the Young's modulus within a single continuous silicone membrane, combined with a mean stress factor of about 2.5.
The approach was tested using the bulge test, and the presented results using a bimaterial are distinct from previous investigations of the classic circular homogeneous membrane inflation problem. The mechanical response of the soft-hard interface was observed through inflections in the principal curvature fields along the meridional and circumferential directions. Analysis of the stress distribution throughout the meridional section of the membrane revealed low stress peaks at the soft-to-hard transition. The results demonstrate that under high strain levels no macroscopic damage was detected. The local cross-linking control eliminates the interfaces between different materials, leading to a heterogeneous membrane with efficient stress transfer throughout the structure.
The numerical investigation provided information on the respective contributions of each material phase to the effective behavior under inflation. The presented results also show the limitations of the classical finite element method in tackling heterogeneous systems with a moderate modulus mismatch across the material interface undergoing large strains, with an incompressible non-linear hyperelastic material behavior. Using the experimental local stress-strain values, it was possible to characterize the macroscopic influence of the soft-to-hard interface with a spatial resolution of 15 µm. Later, a more sophisticated numerical strategy can be used to describe the soft-hard interface and then the global behavior of the graded membrane, based on the presented results. Further work to create, test and optimize more complex architectures is ongoing, using the experimental approaches described in the present paper.
Figure 1: Nominal stress-strain curves resulting from the cyclic loading-unloading tensile test at an elongation rate of 3.0 × 10⁻² s⁻¹ for the soft and hard phases.
Figure 2: Nominal stress-strain curves resulting from the cyclic loading-unloading planar tension test at an elongation rate of 1.0 × 10⁻² s⁻¹ for the soft and hard phases.
Figure 3: Nominal stress-strain curves resulting from the cyclic loading-unloading equibiaxial test: (a) hard phase; (b) soft phase.
Figure 4: The photosensitive material sample and the bulge test configuration. (See online version for color figure.)
Figure 5: Bulge test setup using the 3D-DIC technique. (a) Experimental image superposed with the Green-Lagrange major principal strain field (meridional strain); (b) profiles of the inflated membrane composed of soft and hard phases for different pressure loads. (See online version for color figure.)
Figure 6: The strain distribution of the deformed foil vs. normalized radius of the circular membrane: (a) meridional direction λ_m; (b) circumferential direction λ_c.
Figure 7: Distributions of the experimental fields along the principal directions for three different inflation states: (a) meridional curvature κ_m; (b) circumferential curvature κ_c; (c) meridional Cauchy stress σ_m; (d) circumferential Cauchy stress σ_c.
Figure 8: Experimental data for the soft and hard phases and the hyperelastic fitting using the Mooney-Rivlin (MR) model: (a) uniaxial, (b) planar and (c) equibiaxial tensile tests.
Figure 9: Principal stretches (λ_m, λ_c) and Cauchy stresses (σ_m, σ_c) confronted with the finite element analysis (FEA), corresponding to the 6 kPa and 25 kPa inflation states.
Figure 10: (a) FEA errors with respect to the principal stresses in both deformed configurations; (b) principal stress ratio (σ_m/σ_c) confronted with the finite element results (FEA).
Figure 11: Figs. 11a and 11b show a good agreement between the experimental (Exp) and Mooney-Rivlin (MR) stresses for both inflation states. In order to determine the macroscopic influence of the soft-to-hard transition, the parameters C_10 and C_01 were evaluated from Eqs. 4 and 5 using the measured principal stretches and the experimental stresses obtained from Eqs. 1 and 2, combining the different directions and load levels. The same ratios C_10/C_01 of the soft and hard phases from the previous identification (Table 2) were kept. Thus, one obtains a description of the spatial distribution of these parameters, as presented in Fig. 12a. It is possible to observe that the soft-to-hard transition is almost symmetric with respect to the position R = 0.27 and that the material parameter gradient extends over the interval R = [0.21, 0.38]. The Mooney-Rivlin (MR) stresses in Eqs. 4 and 5 were then recalculated using the functions C_10(R) and C_01(R), fitted on the experimental results using a sigmoid function; the corresponding errors are presented in Fig. 12b.
Figure 12: (a) Mooney-Rivlin parameter gradient obtained using the experimental local stress-strain states. (b) Errors with respect to the principal stresses in both deformed configurations using the Mooney-Rivlin model with a material parameter gradient over the R = [0.21, 0.38] interval.
Table 1: A comparison of the soft and hard phases for the three different experimental loading cases.

Parameter                        | Unit | Uniaxial | Plane shear | Equibiaxial
Hard phase elastic modulus E_H   | MPa  | 2.50     | 3.50        | 4.90
Soft phase elastic modulus E_S   | MPa  | 0.70     | 1.40        | 1.40
Elastic modulus ratio E_H/S      | -    | 3.57     | 2.50        | 3.50
Hard stress at 10%, P_H(10%)     | MPa  | 0.24     | 0.31        | 0.45
Soft stress at 10%, P_S(10%)     | MPa  | 0.08     | 0.13        | 0.13
Hard stress at 100%, P_H(100%)   | MPa  | 1.43     | 1.67        | 1.93
Soft stress at 100%, P_S(100%)   | MPa  | 0.61     | 0.68        | 0.85
Mean stress ratio                | -    | 2.44     | 2.44        | 2.90
Table 2: Fitted parameters (in MPa) of the Mooney-Rivlin constitutive equation for the soft and hard phases.

Parameters | hard | soft
C_10       | 0.35 | 0.18
C_01       | 0.10 | 0.01
Acknowledgment
The authors wish to acknowledge the financial support of the French ANR research program SAMBA: Silicone Architectured Membranes for Biomedical Applications (Blanc SIMI 9 2012).
We thank Laurent Chazeau, Renaud Rinaldi and François Ganachaud for fruitful discussions.
01760135 | en | ["shs.eco"] | 2024/03/05 22:32:13 | 2018 | https://hal.science/hal-01760135/file/Lindseyetal2018.pdf | Robin Lindsey
email: robin.lindsey@sauder.ubc.ca
André De Palma
email: andre.depalma@ens-cachan.fr
Hugo E Silva
email: husilva@uc.cl
Equilibrium in a dynamic model of congestion with large and small users
Keywords: departure-time decisions, bottleneck model, congestion, schedule delay costs, large users, user heterogeneity, existence of Nash equilibrium. JEL classifications: C61, C62, D43, D62, R41.
Individual users often control a significant share of total traffic flows. Examples include airlines, rail and maritime freight shippers, urban goods delivery companies and passenger transportation network companies. These users have an incentive to internalize the congestion delays their own vehicles impose on each other by adjusting the timing of their trips.
for the case of symmetric large users. We also develop some examples to identify under what conditions a PSNE exists. The examples illustrate how self-internalization of congestion by a large user can affect the nature of equilibrium and the travel costs that it and other users incur.
Introduction
Transportation congestion has been a growing problem for many years, and road traffic congestion is now a blight in most large cities worldwide. [START_REF] Couture | [END_REF] estimate that the deadweight loss from congestion is about US$30 billion per year in large US cities. 1 [START_REF] Hymel | Does traffic congestion reduce employment growth[END_REF] shows that high levels of congestion dampen employment growth, and that congestion pricing could yield substantial returns in restoring growth. Congestion delays are also a problem at airports, on rail lines, at seaports and in the hinterland of major transportation hubs. [START_REF] Ball | Total delay impact study: a comprehensive assessment of the costs and impacts of flight delay in the united states[END_REF] estimate that in 2007 air transportation delays in the US imposed a cost of US$25 billion on passengers and airlines.
Research on congestion dates back to [START_REF] Pigou | The Economics of Welfare[END_REF]. Yet most economic and engineering models of congestible transportation facilities still assume that users are small in the sense that each one controls a negligible fraction of total traffic (see, e.g., [START_REF] Melo | Price competition, free entry, and welfare in congested markets[END_REF]). This is a realistic assumption for passenger trips in private vehicles. Yet large users are prevalent in all modes of transport. They include major airlines at their hub airports, railways, maritime freight shippers, urban goods delivery companies, large taxi fleets and postal services. In some cases large users account for an appreciable fraction of traffic. 2 Furthermore, major employers such as government departments, large corporations, and transportation service providers can add substantially to traffic on certain roads at peak times. 3 So can large shopping centres, hotels, and major sporting events. 4 Unlike small users, large users have an incentive to internalize the congestion delays their own vehicles impose on each other. This so-called "self-internalization" incentive can affect large users' decisions 5 and raises a number of interesting questions, some of which are discussed further in the conclusions. One is how much a large user gains from self-internalization. Can it backfire and leave the large user worse off after other users respond?

1 Methods of estimating the costs of congestion differ, and results vary widely. The Texas Transportation Institute estimated that in 2014, congestion in 471 urban areas of the US caused approximately 6.9 billion hours of travel delay and 3.1 billion gallons of extra fuel consumption with an estimated total cost of US$160 billion [START_REF] Schrank | 2015 urban mobility scorecard[END_REF]. It is unclear how institutional and technological innovations such as ridesharing, on-line shopping, electric vehicles, and automated vehicles will affect traffic volumes. The possibility that automated vehicles will increase congestion is raised in National Academies of Sciences, Engineering, and Medicine (2017) and The Economist (2018).

2 For example, the world market for shipping is relatively concentrated. According to [START_REF] Statista | Leading ship operator's share of the world liner fleet as of december 31[END_REF], as of December 31, 2017, the top five shipping operators accounted for 61.5% of the world liner fleet. The top ten accounted for 77.7%, and the top 15 for 85.5%. The top five port operators had a 29.9% global market share (Port Technology, 2014). The aviation industry is another example. The average market share of the largest firm in 59 major US airports during the period 2002-2012 was 42% [START_REF] Choo | Factors affecting aeronautical charges at major us airports[END_REF]. Similar shares exist in Europe.

3 For example, [START_REF] Ghosal | Advanced manufacturing plant location and its effects on economic development, transportation network, and congestion[END_REF] describe how the Kia Motors Manufacturing plant, a large automobile assembler in West Point, Georgia, affects inbound and outbound transportation flows on highway and rail networks, and at seaports.

4 Using data from US metropolitan areas with Major League Baseball (MLB) teams, [START_REF] Humphreys | Professional sporting events and traffic: Evidence from us cities[END_REF] estimate that attendance at MLB games increases average daily vehicle-miles traveled by about 6.9%, and traffic congestion by 2%.
Second, do other users gain or lose when one or more large users self-internalize? Does it depend on the size of the large users and when they prefer to schedule their traffic? Are mergers between large users welfare-improving? What about unions of small users that create a large user?
There is now a growing literature on large users and self-internalization -notably on large airlines and airport congestion. Nevertheless, this body of work is limited in two respects. First, as described in more detail below, most studies have used static models.
Second, much of the theoretical literature has restricted attention to large users. In most settings, however, small users are also present. Automobile drivers and most other road users are small. Most airports serve not only scheduled commercial service, but also general aviation movements by recreational private aircraft and other non-scheduled users. Low-cost carriers with small market shares serve airports where large legacy airlines control much of the overall traffic. 6
More specifically, we use the Vickrey bottleneck model to study how large users schedule departure times for their vehicle fleets when small users use the facility too. As we explain in the literature review below, to the best of our knowledge, we are the first to study trip-timing decisions in markets with a mix of large and small users.
Several branches of literature have developed on large users of congestible facilities. 7
They include studies of route-choice decisions on road networks and flight scheduling at congested airports. There is also a literature directed to computer and telecommunications 5 For example, some seaports alleviate congestion by extending operating hours at truck gates, and using truck reservation systems at their container facilities (Weisbrod and Fitzroy, 2011). Cities and travel companies are also attempting to spread tourist traffic by making off-peak visits more attractive and staggering the arrivals of cruise ships [START_REF] Sheahan | Europe works to cope with overtourism[END_REF]. Airports, especially in Europe, restrict the number of landings and takeoffs during specific periods of time called slot windows (see [START_REF] Daniel | The untolled problems with airport slot constraints[END_REF] for a discussion of this practice). 6 Using data from Madrid and Barcelona, Fageda and Fernandez-Villadangos (2009) report that the market share of low-cost carriers is generally low (3-5 carriers with 3-18% of market share). Legacy carriers themselves sometimes operate only a few flights out of airports where another legacy carrier has a hub. For example, at Hartsfield-Jackson Atlanta International (ATL) American Airlines has a 3% market share while Delta's is 73% (Bureau of Transportation Statistics, 2017). 7 See [START_REF] Silva | On the existence and uniqueness of equilibrium in the bottleneck model with atomic users[END_REF] for a brief review.
networks on atomic congestion games. However, most studies have adopted static models that disregard the timing decisions of users despite the fact that congestion delays tend to be highly concentrated at peak times (see, e.g., [START_REF] Naroditskiy | Maximizing social welfare in congestion games via redistribution[END_REF]. The relatively small body of work that does address the temporal dimension of congestion has taken three approaches to incorporate dynamics. One approach has used dynamic stochastic models designed specifically to describe airport congestion (see, e.g., [START_REF] Daniel | Congestion pricing and capacity of large hub airports: A bottleneck model with stochastic queues[END_REF]. A second approach, also directed at studying airport congestion, features deterministic congestion and a sequential decision-making structure in which an airline with market power acts as a Stackelberg leader and schedules its flights before other airlines in a competitive fringe (see [START_REF] Daniel | Distributional consequences of airport congestion pricing[END_REF][START_REF] Silva | Airlines' strategic interactions and airport pricing in a dynamic bottleneck model of congestion[END_REF]. As [START_REF] Daniel | The untolled problems with airport slot constraints[END_REF] discusses, the presence of slot constraints at airports makes the Stackelberg approach relevant. The slots are allocated twice a year with priority for the incumbent airlines; slots allocation for new entrants, which are modeled as followers in this approach, occur only after the incumbents have committed to a slot schedule, and normally come from new airport capacity. In these cases, adopting a sequential decision-making structure seems to be accurate. Nevertheless, at most US airports, the capacity is assigned in a first-come, first-served basis, which makes the simultaneous structure, and Nash as an equilibrium concept, more relevant.
These two approaches lead to outcomes broadly consistent with those of static models.
Two results stand out. First, self-internalization of congestion by large users tends to result in less concentration of traffic at peak times, and consequently lower total costs for users in aggregate. Second, the presence of small users limits the ability of large users to reduce congestion. This is because reductions in the amount of traffic scheduled by large users, either at peak times or overall, are partially offset by increases in traffic by small users.
The Stackelberg equilibrium concept adopted in the second approach rests on the assumptions that the leader can schedule its traffic before other agents, and also commit itself to abide by its choices after other agents have made theirs. These assumptions are plausible in some institutional settings (e.g., Stackelberg leadership by legacy airlines at hub airports), but by no means in all settings. The third approach to incorporating trip-timing decisions, which we adopt, instead takes Nash equilibrium as the solution concept so that all users make decisions simultaneously.
Our paper follows up on recent work by Verhoef and Silva (2017) and [START_REF] Silva | On the existence and uniqueness of equilibrium in the bottleneck model with atomic users[END_REF] who focus on determining under what conditions a Pure Strategy Nash Equilibrium (PSNE) in departure-time decisions exists. These two studies employ different deterministic congestion models that are best suited to describe road traffic congestion. Verhoef and Silva (2017) use the flow congestion model developed by [START_REF] Henderson | Road congestion: a reconsideration of pricing theory[END_REF], and modified by [START_REF] Chu | Endogenous trip scheduling: the henderson approach reformulated and compared with the vickrey approach[END_REF]. In this model, vehicles travel at a constant speed throughout their trips with the speed determined by the density of vehicles prevailing when their trip ends. Verhoef and [START_REF] Silva | On the existence and uniqueness of equilibrium in the bottleneck model with atomic users[END_REF] show that, if there are two or more large users and no small users, a PSNE always exists. Self-internalization of congestion by the large users results in less concentration of trips at peak times and, not surprisingly, higher efficiency compared to the equilibrium without large users. However, this result is tempered by two well-known drawbacks of the [START_REF] Lindsey | Congestion modelling[END_REF]. [START_REF] Henderson | Road congestion: a reconsideration of pricing theory[END_REF] originally assumed that vehicle speed is determined by the density of traffic encountered when a vehicle starts its trip. This formulation has the additional disadvantage that a vehicle departing when density is low may overtake a vehicle that departed earlier when density was higher. As [START_REF] Lindsey | Congestion modelling[END_REF] explain, overtaking has no behavioral basis if drivers and vehicles are identical, and it is physically impossible under heavily congested conditions. By contrast, in [START_REF] Chu | Endogenous trip scheduling: the henderson approach reformulated and compared with the vickrey approach[END_REF] reformulated model overtaking does not occur in equilibrium. 9 A few experimental economics studies have tested the theoretical predictions of the bottleneck model; see [START_REF] Dixit | Understanding transportation systems through the lenses of experimental economics: A review[END_REF] for a review. The studies used a variant of the bottleneck model in which vehicles and departure times are both discrete. In all but one study, players controlled a single vehicle. The exception is [START_REF] Schneider | Against all odds: Nash equilibria in a road pricing experiment[END_REF] who ran two sets of experiments. In the first experiment each player controlled one vehicle, and in the second experiment each player controlled 10 vehicles which were referred to as trucks. Compared to the first experiment, the aggregate departure-time profile in the second experiment was further from the theoretical Nash equilibrium and closer to the system optimum. Schneider and Weimann conclude (p.151) that "players with 10 trucks internalize some of the congestion externality".
In this paper we extend [START_REF] Silva | On the existence and uniqueness of equilibrium in the bottleneck model with atomic users[END_REF] by investigating the existence and nature of PSNE in the bottleneck model for a wider range of market structures and under more general assumptions about trip-timing preferences. Unlike both Verhoef and [START_REF] Silva | On the existence and uniqueness of equilibrium in the bottleneck model with atomic users[END_REF] and [START_REF] Silva | On the existence and uniqueness of equilibrium in the bottleneck model with atomic users[END_REF], we allow for the presence of small users as well as large users.
As in the standard bottleneck model, small users each control a single vehicle and seek to minimize their individual trip cost. Each large user operates a vehicle fleet that comprises a positive fraction or measure of total traffic, and seeks to minimize the aggregate trip costs of its fleet. 10 Each vehicle has trip-timing preferences described by a trip-cost function C(t, a), where t denotes departure time and a denotes arrival time. Trip cost functions can differ for small and large users, and they can also differ for vehicles in a large user's fleet.
Our analysis consists of several parts. After introducing the basic model and assumptions in Section 2, in Section 3 we use optimal control theory to derive a large user's optimal fleet departure schedule as a best response to the aggregate departure rate profile of other users. We show that the optimal response can be indeterminate, and the second-order condition for an interior solution is generally violated. Consequently, a candidate PSNE departure schedule may exist in which a large user cannot gain by rescheduling any single vehicle in its fleet, yet it can gain by rescheduling a positive measure of vehicles. These difficulties underlie the non-existence of a PSNE in [START_REF] Silva | On the existence and uniqueness of equilibrium in the bottleneck model with atomic users[END_REF]. We then show in Section 4 that if vehicles in the large user's fleet have sufficiently diverse trip-timing preferences, a PSNE may exist in which some -or even all -of the large user's vehicles do queue. The fact that a PSNE exists given sufficient user heterogeneity parallels the existence of equilibrium in the Hotelling model of location choice given sufficient preference heterogeneity [START_REF] De Palma | The principle of minimum differentiation holds under sufficient heterogeneity[END_REF].
Next, in Section 5 we revisit the case of symmetric large users that Silva et al. (2017) consider, and derive the minimum degree of preference heterogeneity required to support a PSNE. We show that, relative to the PSNE in which large users disregard self-imposed congestion, self-internalization results in substantial efficiency gains from reduced queuing delays even when the number of users is fairly large. Then, in Section 6 we modify the example of symmetric large users by assuming that part of the traffic is controlled by a single large user, and the rest by a continuum of small users. We derive conditions for existence of a PSNE and show how the order in which users depart depends on the flexibility implied by the trip-timing preferences of large and small users. We also show that self-internalization of congestion can have no effect on the PSNE at all.
Unfortunately, they do not provide information on how the players distributed their vehicles over departure-time slots. Thus, it is not possible to compare their results with the predictions of our model as far as when large users choose to depart.
10 In game theory, small agents or players are sometimes called "non-atomic" and large agents "atomic". In economics, the corresponding terms are "atomistic" and "non-atomistic". To avoid confusion, we do not use these terms. However, we do refer to the PSNE in which large users do not internalize their self-imposed congestion externalities as an "atomistic" PSNE.
The model
The model is a variant of the classical bottleneck model.11 Total traffic of measure N travels from a common origin to a common destination through a bottleneck with a fixed flow capacity of s vehicles per unit of time. If the aggregate departure rate exceeds s, a queue develops. Let R(t) denote cumulative departures from the origin by time t, Q(t) the number of vehicles queuing at time t, and t̃ the most recent time at which there was no queue. Queuing time for a vehicle departing at time t is

q(t) = Q(t)/s,  or  q(t) = t̃ − t + s⁻¹ [R(t) − R(t̃)].   (1)

A user departing at time t arrives at time a = t + q(t).
The cost of a trip is described by a function C (t, a, k), where k denotes a user's index or type. 12 Function C (t, a, k) is assumed to have the following properties:
Assumption 1: C(t, a, k) is differentiable almost everywhere with derivatives C_t < 0, C_a > 0, C_tt ≥ 0, C_tt + C_aa > 0, C_ta = C_at = 0, C_tk ≤ 0, C_ak ≤ 0, C_tkk = 0, and C_akk = 0.
The assumption C_t < 0 implies that a user prefers time spent at the origin to time spent in transit. Similarly, the assumption C_a > 0 implies that a user prefers time spent at the destination to time spent in transit. User types can be defined in various ways. For much of the analysis, type is assumed to denote a user's preferred time to travel if a trip could be made instantaneously (i.e., with a = t). For type k, the preferred time is t*_k = arg min_t C(t, t, k). Given Assumption 1, t*_k is unique. Types are ordered so that if k > j, then t*_k ≥ t*_j.
As explained in the Appendix, Assumption 1 is satisfied for various specifications of the cost function including the piecewise linear form introduced by Vickrey (1969):
11 The bottleneck model is reviewed in [START_REF] Arnott | Recent developments in the bottleneck model[END_REF], [START_REF] Small | The Economics of Urban Transportation[END_REF], de Palma andFosgerau (2011), and[START_REF] Small | The bottleneck model: An assessment and interpretation[END_REF]. The exposition in this section draws heavily from [START_REF] Silva | On the existence and uniqueness of equilibrium in the bottleneck model with atomic users[END_REF].
Literal excerpts are not marked as such and are taken to be acknowledged by this footnote.
12 As explained in the Appendix, the trip cost function can be derived from functions specifying the flow of utility or payoff received at the origin, at the destination, and while in transit.
C(t, a, k) = α_k (a − t) + β_k (t*_k − a)   for a < t*_k,
C(t, a, k) = α_k (a − t) + γ_k (a − t*_k)   for a > t*_k.   (2)
In (2), parameter α k is the unit cost of travel time, β k < α k is the unit cost of arriving early, and γ k is the unit cost of arriving late. The first term in each branch of (2) denotes travel time costs, and the second term denotes schedule delay costs. We refer to this specification of costs as "step preferences". 13
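To make the specification concrete, the following minimal Python sketch evaluates the step-preference trip cost in (2) for a single hypothetical vehicle. It is an illustration added here rather than part of the original analysis; the parameter values are the benchmark ratios adopted below and the desired arrival time is made up.

```python
def step_cost(t, a, alpha, beta, gamma, t_star):
    """Trip cost under step preferences, Eq. (2):
    travel-time cost plus an early- or late-arrival penalty."""
    travel = alpha * (a - t)
    if a < t_star:
        return travel + beta * (t_star - a)
    return travel + gamma * (a - t_star)

# Illustrative values only (benchmark ratios beta:alpha:gamma = 1:2:4).
alpha, beta, gamma = 2.0, 1.0, 4.0
t_star = 9.0  # hypothetical desired arrival time (e.g., 9:00)
print(step_cost(t=8.0, a=8.5, alpha=alpha, beta=beta, gamma=gamma, t_star=t_star))  # early arrival: 1.5
print(step_cost(t=8.0, a=9.5, alpha=alpha, beta=beta, gamma=gamma, t_star=t_star))  # late arrival: 5.0
```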
Because step preferences have a kink at t * k , the derivative C a is discontinuous at a = t * k . This turns out to affect some of the results in this paper, and makes step preferences an exception to some of the propositions. It is therefore useful to know whether step preferences are reasonably descriptive of reality. Most studies that have used the bottleneck model have adopted step preferences, but this may be driven in part by analytical tractability and convention. Empirical evidence on the shape of the cost function is varied. [START_REF] Small | The scheduling of consumer activities: work trips[END_REF] found that step preferences describe morning commuter behaviour fairly well, but he did find evidence of discrete penalties for arriving late beyond a "margin of safety."
Nonconvexities in schedule delay costs have been documented (e.g., Matsumoto, 1988), and there is some empirical evidence that the marginal cost of arriving early can exceed the marginal cost of travel time (Abkowitz, 1981a,b; [START_REF] Hendrickson | The flexibility of departure times for work trips[END_REF]; Tseng and Verhoef, 2008), which violates the assumption β_k < α_k.
The paper features examples with step preferences where the results depend on the relative magnitudes of parameters α, β, and γ. Estimates in the literature differ, but most studies of automobile trips find that β < α < γ. Small (1982, Table 2, Model 1) estimates ratios of β:α:γ = 1:1.64:3.9. These ratios are representative of average estimates in later studies. 14 For benchmark values we adopt β:α:γ = 1:2:4.
13 These preferences have also been called "α − β − γ" preferences.
14 Estimates of the ratio γ/β vary widely. It is of the order of 8 in Geneva (Switzerland) and 4 in Brussels (Belgium), where tolerance for late arrival is much larger (see [START_REF] De Palma | Impact of adverse weather conditions on travel decisions: Experience from a behavioral survey in geneva[END_REF]; [START_REF] Khattak | The impact of adverse weather conditions on the propensity to change travel decisions: a survey of brussels commuters[END_REF]). Tseng et al. (2005) obtain a ratio of 3.11 for the Netherlands. [START_REF] Peer | Long-run versus short-run perspectives on consumer scheduling: Evidence from a revealed-preference experiment among peak-hour road commuters[END_REF] show that estimates derived from travel choices made in the short run can differ significantly from estimates derived from long-run choices when travelers have more flexibility to adjust their schedules. Most studies of trip-timing preferences have considered passenger trips. Many large users transport freight rather than people. Trip-timing preferences for freight transport can be governed by the shipper, the receiver, the transportation service provider, or some combination of agents. There is little empirical evidence for freight transportation on the relative values of α, β, and γ. The values are likely to depend on the type of commodity being transported, the importance of reliability in the supply chain, and other factors. Thus, it is wise to allow for a wide range of possible parameter values.
In the standard bottleneck model, each user controls a single vehicle of measure zero and decides when it departs. A Pure Strategy Nash Equilibrium (PSNE) is a set of departure times for all users such that no user can benefit (i.e., reduce trip cost) by unilaterally changing departure time while taking other users' departure times as given. For brevity, the equilibrium will be called an "atomistic PSNE".
If small users are homogeneous (i.e., they all have the same type), then in a PSNE they depart during periods of queuing when the cost of a trip is constant. Their departure rate will be called their atomistic equilibrium departure rate, or "atomistic rate" for short. The atomistic rate for type k is derived from the condition that C (t, a, k) is constant. Using subscripts to denote derivatives, this implies
C_t(t, a, k) + C_a(t, a, k) [1 + dq(t)/dt] = 0.
Given (1), the atomistic rate is
r̃(t, a, k) = −[C_t(t, a, k) / C_a(t, a, k)] s.   (3)

Since C_t < 0 and C_a > 0, r̃(t, a, k) > 0. Using Assumption 1, it is straightforward to establish the following properties of r̃(t, a, k): 15

∂r̃(t, a, k)/∂k ≥ 0,   ∂²r̃(t, a, k)/∂k² ≥ 0,   Sgn[∂r̃(t, a, k)/∂a] = −C_aa.   (4)
For given values of t and a, the atomistic rate increases with a user's type, and at an increasing rate. In addition, the atomistic rate is increasing with arrival time if C aa < 0, and decreasing if C aa > 0. With step preferences, C aa = 0 except at t * k , and the atomistic rate is:
r̃(t, a, k) = [α_k/(α_k − β_k)] s   for a < t*_k,
r̃(t, a, k) = [α_k/(α_k + γ_k)] s   for a > t*_k.   (5)
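As a quick numerical check (an added sketch, not part of the paper), the function below evaluates the atomistic rate (5) under step preferences. With the benchmark ratios β:α:γ = 1:2:4 the rate equals 2s before the desired arrival time and s/3 after it; the capacity value is hypothetical.

```python
def atomistic_rate(alpha, beta, gamma, s, early):
    """Atomistic departure rate under step preferences, Eq. (5)."""
    if early:                                # arrival before the desired time t*_k
        return alpha / (alpha - beta) * s
    return alpha / (alpha + gamma) * s       # arrival after t*_k

s = 1.0                                      # bottleneck capacity (illustrative)
alpha, beta, gamma = 2.0, 1.0, 4.0
print(atomistic_rate(alpha, beta, gamma, s, early=True))   # 2.0  = 2s
print(atomistic_rate(alpha, beta, gamma, s, early=False))  # 0.333... = s/3
```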
Silva et al. (2017) consider a variant of the standard bottleneck model in which users are "large". A large user controls a vehicle fleet of positive measure, and internalizes the congestion costs its vehicles impose on each other. A PSNE entails a departure schedule for each user such that no user can reduce the total trip costs of its fleet by unilaterally changing its departure schedule while taking other users' departure schedules as given. We will call the equilibrium the "internalized PSNE". Silva et al. (2017) focus on the case of two large users with step preferences. In the next section we derive the departure schedule of a large user with general trip-timing preferences when other large users and/or small users may be departing too.
15 See the Appendix.
Fleet departure scheduling and equilibrium conditions
Optimal departure schedule for a large user
This section uses optimal control theory to derive and characterize the optimal departure schedule of a large user with a fleet of vehicles. Call the large user "user A", and let N A be the measure of vehicles in its fleet. Vehicle k has a cost function C A (t, a, k). Vehicles are indexed in order of increasing preferred arrival times so that t * k ≥ t * j if k > j. It is assumed that, regardless of the queuing profile, it is optimal for user A to schedule vehicles in order of increasing k. The departure schedule for user A can then be written r A (t) with
argument k suppressed. If R A (t) denotes cumulative departures of A, vehicle k = R A (t)
departs at time t.
User A chooses r A (t) to minimize the total trip costs of its fleet while taking as given the aggregate departure rate of other users, r -A (t). Trips are assumed to be splittable: they can be scheduled at disjoint times (e.g., some vehicles can travel early in the morning while others travel at midday). Let t As and t Ae denote the first and last departure times chosen by user A. User A's optimization problem can be stated as:
Min_{t_As, t_Ae, r_A(t)}  ∫_{t_As}^{t_Ae} r_A(t) C^A(t, t + q(t), R_A(t)) dt,   (6)
subject to the equations of motion:
dq(t)/dt⁺ = s⁻¹ (r_{−A}(t) + r_A(t)) − 1   if q(t) > 0 or r_{−A}(t) + r_A(t) > s,
dq(t)/dt⁺ = 0   otherwise,   (7)
(costate variable λ(t) ≥ 0), and
dR_A(t)/dt = r_A(t)   (costate variable µ(t)),   (8)
and the following constraints: 16
r_A(t) ≥ 0   (multiplier ξ(t) ≥ 0),   (9a)
R_A(t_As) = 0,  R_A(t_Ae) = N_A,   (9b)
q(t_As) equal to the queuing time prevailing at t_As given other users' departures   (multiplier φ),   (9c)
t_As, t_Ae chosen freely.   (9d)
Costate variable λ (t) for Eq. ( 7) measures the shadow cost to user A of queuing time.
Eq. ( 8) governs how many vehicles in user A's fleet have left the origin. Costate variable µ (t) measures the shadow cost of increasing the number of vehicles in the fleet that have started their trips. Condition (9a) stipulates that the departure rate cannot be negative.
Condition (9b) specifies initial and terminal values for cumulative departures. Condition (9c) describes how queuing time is evolving when departures begin. Finally, (9d) indicates that the choice of departure period is unconstrained.
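The state equations and the objective can be illustrated with a small discrete-time simulation. The sketch below is added here for illustration only: the departure-rate profiles, cost parameters, and time grid are all hypothetical, and the code simply integrates (7)-(8) forward and accumulates the fleet cost in (6).

```python
import numpy as np

def fleet_cost(r_A, r_other, s, dt, cost):
    """Integrate the queue (Eq. 7) and cumulative fleet departures (Eq. 8)
    forward in time, accumulating user A's fleet costs (Eq. 6).
    cost(t, a, k) plays the role of C_A(t, a, k)."""
    q, R_A, total = 0.0, 0.0, 0.0
    for i in range(len(r_A)):
        t = i * dt
        total += r_A[i] * cost(t, t + q, R_A) * dt      # objective (6)
        R_A += r_A[i] * dt                               # Eq. (8)
        inflow = r_A[i] + r_other[i]
        if q > 0.0 or inflow > s:
            q = max(0.0, q + (inflow / s - 1.0) * dt)    # Eq. (7), clamped at zero
    return total

# Hypothetical example: step preferences with t* = 1.0 for every fleet vehicle.
def cost(t, a, k, alpha=2.0, beta=1.0, gamma=4.0, t_star=1.0):
    sched = beta * (t_star - a) if a < t_star else gamma * (a - t_star)
    return alpha * (a - t) + sched

dt, s = 0.01, 1.0
grid = np.arange(0.0, 2.0, dt)
r_other = np.full_like(grid, 1.2)          # other users depart above capacity
r_A = np.where(grid < 0.5, 0.4, 0.0)       # user A departs early, then stops
print(round(fleet_cost(r_A, r_other, s, dt, cost), 3))
```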
The Hamiltonian for the optimization problem is
H(t) = r_A(t) C^A(t, t + q(t), R_A(t)) + µ(t) dR_A(t)/dt + λ(t) dq(t)/dt⁺,   (10)
and the Lagrangian is
L(t) = H(t) + r_A(t) ξ(t).   (11)
Costate variable λ(t) for queuing time evolves according to the equation of motion
dλ(t)/dt = −∂H/∂q = −r_A(t) C^A_a(t, t + q(t), R_A(t)) ≤ 0.   (12)
Variable λ (t) decreases as successive vehicles in the fleet depart because fewer vehicles remain that can be delayed by queuing.
Costate variable µ(t) for cumulative departures evolves according to the equation of motion
dµ(t)/dt = −∂H/∂R_A = −r_A(t) C^A_k(t, t + q(t), R_A(t)) ≥ 0.   (13)
If vehicles in the fleet are homogeneous, µ is independent of time.
With t_Ae chosen freely, the transversality conditions at t_Ae are:
λ(t_Ae) = 0,   (14)
H(t_Ae) = 0.   (15)
According to condition (14), the shadow cost of queuing time drops to zero when the last vehicle departs. Condition (15) dictates that the net flow of cost is zero when the last vehicle departs. Substituting (14) into (10), and applying (15), yields
µ(t_Ae) = −C^A(t_Ae, t_Ae + q(t_Ae), N_A).   (16)
Condition ( 16) states that the benefit from dispatching the last vehicle in the fleet is the cost of its trip that has now been incurred, and is no longer a pending liability.
With t As chosen freely, a transversality condition also applies at t As . Following Theorem 7.8.1 in [START_REF] Leonard | Optimal control theory and static optimization in economics[END_REF], the transversality condition is:
H(t_As) − φ [dq(t)/dt]_{t=t_As} = 0,   (17)
where φ is a multiplier on the constraint (9c). By continuity, φ = λ(t_As). Using (10) and (8), condition (17) reduces to
r_A(t_As) [C^A(t_As, t_As + q(t_As), 0) + µ(t_As)] = 0.   (18)
It remains to determine the optimal path of r A (t). The optimality conditions governing r A (t) depend on whether or not there is a queue. Attention is limited here to the case with a queue.17 If q (t) > 0, the optimal departure rate is governed by the conditions
∂L/∂r_A(t) = C^A(t, t + q(t), R_A(t)) + ξ(t) + µ(t) + λ(t)/s = 0,   (19)
ξ(t) r_A(t) = 0.
If r A (t) is positive and finite during an open time interval containing t, then ξ (t) = 0 and ( 19) can be differentiated with respect to t:
d/dt [∂L/∂r_A(t)] = C^A_t(t, t + q(t), R_A(t)) + C^A_a(t, t + q(t), R_A(t)) [1 + dq(t)/dt⁺]
   + C^A_k(t, t + q(t), R_A(t)) r_A(t) + dµ(t)/dt + (1/s) dλ(t)/dt = 0.
Using Eqs. (7) and (12), this condition simplifies to
C^A_t(t, t + q(t), R_A(t)) + C^A_a(t, t + q(t), R_A(t)) [r_{−A}(t)/s] = 0.   (20)
The left-hand-side of (20) depends on the aggregate departure rate of other users, r -A (t), but not on r A (t) itself. In general, derivatives C A t (t, t + q (t) , R A (t)) and C A a (t, t + q (t) , R A (t)) depend on the value of q (t), and hence the value of R (t), but not directly on r A (t). Condition (20) will therefore not, in general, be satisfied regardless of user A's choice of r A (t). This implies that the optimal departure rate may follow a bang-bang solution between zero flow and a mass departure.18 This is confirmed by inspecting the Hessian matrix of the Hamiltonian:
⎡ ∂²H/∂r_A(t)²        ∂²H/∂r_A(t)∂q(t)     ∂²H/∂r_A(t)∂R_A(t) ⎤
⎢ ∂²H/∂r_A(t)∂q(t)    ∂²H/∂q(t)²           ∂²H/∂q(t)∂R_A(t)   ⎥
⎣ ∂²H/∂r_A(t)∂R_A(t)  ∂²H/∂q(t)∂R_A(t)     ∂²H/∂R_A(t)²       ⎦
=
⎡ 0       C^A_a           C^A_k          ⎤
⎢ C^A_a   r_A(t) C^A_aa   r_A(t) C^A_ak  ⎥
⎣ C^A_k   r_A(t) C^A_ak   r_A(t) C^A_kk  ⎦,
where each derivative of C^A is evaluated at (t, t + q(t), R_A(t)).
Since the Hessian is not positive definite, the second-order sufficient conditions for a local minimum are not satisfied. As we will show, if users are homogeneous the necessary condition (20) cannot describe the optimal schedule unless C A aa = 0. In summary, user A will not, in general, depart at a positive and finite rate when a queue exists. To understand why, consider condition (20). Given C A t < 0 and C A a > 0, if r -A (t) is "small" the left-hand side of (20) is negative. The net cost of a trip is decreasing over time, and user A is better off scheduling the next vehicle in its fleet later. Contrarily, if r -A (t) is "large", the left-hand side of ( 20) is positive. Trip cost is increasing, and user A should dispatch a mass of vehicles immediately if it has not already done so. In either case, the optimal departure rate is not positive and finite.
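The scheduling logic just described can be summarized in a small decision rule. The sketch below is an added illustration, not the authors' procedure: it evaluates the sign of the left-hand side of (20) for hypothetical derivative values and reports the implied best response.

```python
def schedule_signal(C_t, C_a, r_other, s):
    """Sign of the left-hand side of Eq. (20): C^A_t + C^A_a * r_{-A}/s.
    Negative -> trip cost is falling over time: postpone the next vehicle.
    Positive -> trip cost is rising: dispatch a mass immediately.
    Zero     -> only then is a positive, finite departure rate possible."""
    lhs = C_t + C_a * r_other / s
    if lhs < 0:
        return "postpone"
    if lhs > 0:
        return "mass departure"
    return "finite rate possible"

# Hypothetical derivatives for an early-arriving fleet vehicle (C_t < 0 < C_a).
print(schedule_signal(C_t=-2.0, C_a=1.0, r_other=1.0, s=1.0))  # postpone
print(schedule_signal(C_t=-2.0, C_a=1.0, r_other=3.0, s=1.0))  # mass departure
print(schedule_signal(C_t=-2.0, C_a=1.0, r_other=2.0, s=1.0))  # finite rate possible
```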
In certain cases, described in the next section, condition (20) will be satisfied. The condition can then be written as a formula for the departure rate of other users:
r_{−A}(t) = −[C^A_t(t, t + q(t), R_A(t)) / C^A_a(t, t + q(t), R_A(t))] s ≡ r̃^A(t, t + q(t), R_A(t)).   (21)
Condition ( 21) has the same functional form as Eq. ( 3) for the atomistic rate of small users.
Thus, with step preferences, the right-hand side exceeds s for early arrival and is less than s for late arrival. Moreover, the condition depends only on the aggregate departure rate of other users, and not their composition (e.g., whether the other users who are departing are large or small). However, condition (21) is only necessary, not sufficient, to have r_A(t) > 0 because the second-order conditions are not satisfied. This leads to:
Lemma 1. Assume that a queue exists at time t. A large user will not depart at a positive and finite rate at time t unless the aggregate departure rate of other users equals the large user's atomistic rate given in Eq. ( 21).
Lemma 1 requires qualification in the case of step preferences because the atomistic rate is discontinuous at the preferred arrival time. If vehicles in a large user's fleet differ sufficiently in their individual t * k , it is possible to have a PSNE in which the fleet departs at a positive and finite rate with each vehicle arriving exactly on time. The aggregate departure rate of other users falls short of the atomistic rate of each vehicle in the fleet just before it departs, and exceeds it just after it departs. This is illustrated using an example in Section 6.
Equilibrium conditions with large users
We now explore the implications of Lemma 1 for the existence of a PSNE in which a large user departs when there is a queue and the atomistic rates of all users are continuous.
Conditions for a PSNE depend on whether or not small users are present, and the two cases are considered separately below.
Multiple large users and no small users
Suppose there are m ≥ 2 large users and no small users. User i has an atomistic rate r̃_i(t, t + q(t), R_i(t)). For brevity, we write this as r̃_i(t) with arrival time and the index k for vehicles both suppressed. Suppose that a queue exists at time t, and user i departs at rate r_i(t) > 0, i = 1...m. 19 Necessary conditions for a PSNE to exist are
r_{−i}(t) = r̃_i(t),   i = 1...m.   (22)
This system of m equations has a solution
r_i(t) = [1/(m − 1)] Σ_{j≠i} r̃_j(t) − [(m − 2)/(m − 1)] r̃_i(t) = [1/(m − 1)] Σ_j r̃_j(t) − r̃_i(t),   i = 1...m.   (23)
With m = 2, the solution is r_1(t) = r̃_2(t) and r_2(t) = r̃_1(t). With m > 2, the solution is feasible only if all departure rates are nonnegative. A necessary and sufficient condition for this to hold at time t is
Max_i r̃_i(t) ≤ [1/(m − 2)] Σ_{j≠i} r̃_j(t).   (24)
Condition ( 24) is satisfied if large users have sufficiently similar atomistic rates.
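For concreteness, the following sketch (an added illustration with made-up atomistic rates) computes the candidate departure rates in (23) and checks the feasibility condition (24).

```python
def equilibrium_rates(r_tilde):
    """Solve Eq. (22): r_{-i} = r~_i for all i, giving Eq. (23)."""
    m = len(r_tilde)
    total = sum(r_tilde) / (m - 1)
    return [total - ri for ri in r_tilde]

def feasible(r_tilde):
    """Check Eq. (24): every departure rate in (23) is nonnegative."""
    return all(r >= 0 for r in equilibrium_rates(r_tilde))

r_tilde = [2.0, 2.2, 1.9]          # hypothetical atomistic rates, m = 3
print(equilibrium_rates(r_tilde))  # candidate rates r_i(t): [1.05, 0.85, 1.15]
print(feasible(r_tilde))           # True: rates are similar enough
print(feasible([2.0, 2.2, 6.0]))   # False: one user's atomistic rate is too high
```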
Multiple large users and small users
Assume now that, in addition to m ≥ 1 large users, there is a group of homogeneous small users comprising a positive measure of total traffic with an atomistic rate r̃_o(t).
Suppose that large user i departs at rate r i (t) > 0, i = 1...m, and small users depart at an aggregate rate r o (t) > 0. If a queue exists at time t, necessary conditions for a PSNE are
r_{−i}(t) = r̃_i(t),   i = 1...m,   (25)
Σ_j r_j(t) + r_o(t) = r̃_o(t).   (26)
The solution to this system of m + 1 equations is
r_i(t) = r̃_o(t) − r̃_i(t),   i = 1...m,   (27)
r_o(t) = Σ_j r̃_j(t) − (m − 1) r̃_o(t).   (28)
The solution is feasible only if all departure rates are nonnegative. With m = 1, the necessary and sufficient condition is r̃_1(t) < r̃_o(t). With m > 1, necessary and sufficient conditions for nonnegativity are
[1/(m − 1)] Σ_j r̃_j(t) > r̃_o(t),   (29)
r̃_i(t) < r̃_o(t),   i = 1...m.   (30)
19 If a user does not depart at time t, it can be omitted from the set of m "active" users at t.
Condition ( 30) requires that all large users have lower atomistic rates than the small users.
However, condition (29) dictates that the average atomistic rate for large users be close enough to the atomistic rate of small users. Together, (29) and (30) impose relatively tight bounds on the r̃_i(t).
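The bounds can be illustrated numerically. The sketch below is added here with entirely hypothetical atomistic rates: Eqs. (27)-(28) give the candidate rates, and nonnegativity corresponds to conditions (29)-(30).

```python
def mixed_rates(r_tilde_large, r_tilde_small):
    """Candidate departure rates with m large users and a group of small users,
    Eqs. (27)-(28)."""
    m = len(r_tilde_large)
    r_large = [r_tilde_small - ri for ri in r_tilde_large]   # Eq. (27)
    r_small = sum(r_tilde_large) - (m - 1) * r_tilde_small   # Eq. (28)
    return r_large, r_small

# Hypothetical atomistic rates: two large users and the small-user group.
print(mixed_rates([1.8, 1.9], 2.0))  # ([0.2, 0.1], 1.7): all nonnegative, (29)-(30) hold
print(mixed_rates([0.9, 0.8], 2.0))  # ([1.1, 1.2], -0.3): condition (29) fails
```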
Existence of PSNE with queuing by a large user
Silva et al. (2017) consider two identical large users with homogeneous vehicle fleets and step preferences. They show that a PSNE with queuing does not exist. In addition, they show that if γ > α, a PSNE without queuing does not exist either, so that no PSNE exists. In this section we build on their results in two directions. First, we prove that if a large user has a homogeneous vehicle fleet, and C^A_aa ≠ 0 at any time when the large user's vehicles arrive, a PSNE in which the large user queues does not exist for any market structure. Second, we show that if a large user has a heterogeneous vehicle fleet, and the derivative C^A_ak is sufficiently large in magnitude, a PSNE in which the large user queues is possible. We illustrate the second result in Section 5. Consider a large user, "user A", and a candidate PSNE in which queuing time is q̄(t) > 0 at time t. (A bar denotes quantities in the candidate PSNE.) User A never departs alone when there is a queue because it can reduce its fleet costs by postponing departures. Thus, if r̄_A(t) is positive and finite, other users must also be departing. The aggregate departure rate of other users must equal user A's atomistic rate as per Eq. (22) or (25): r̄_{−A}(t) = r̃^A(t, t + q̄(t), R̄_A(t)).
In addition, user A must depart at a rate r̄_A(t) consistent with equilibrium for other users as per Eq. (23), or Eqs. (27) and (28). Figure 1 depicts such a candidate PSNE. Cumulative departures of user A, R̄_A(t) = R̄(t) − R̄_{−A}(t), are measured by the distance between the two curves.
Suppose that user A deviates from the candidate PSNE during the interval (t_A, t_B) by dispatching its vehicles slightly later, so that section ADB of R̄(t) shifts rightwards to R′(t). Vehicle k = R̄_A(t_D), originally scheduled to depart at point D and time t_D, is rescheduled to depart at point E and time t_E. It experiences a change in costs of
ΔC_A(k) = C^A(t_E, t_E + q′(t_E), k) − C^A(t_D, t_D + q̄(t_D), k),
where q̄(t_D) is queuing time at t_D with the candidate equilibrium departure schedule R̄(t), and q′(t_E) is queuing time at t_E with the deviated schedule R′(t). The path from point D to point E can be traversed along the dashed blue curve running parallel to R̄_{−A}(t) between points y and z. Let q̃(t) denote queuing time along this path. The change in cost can then be written
ΔC_A(k) = ∫_{t_D}^{t_E} { C^A_t(t, t + q̃(t), k) + C^A_a(t, t + q̃(t), k) [1 + dq̃(t)/dt] } dt
  = ∫_{t_D}^{t_E} { C^A_t(t, t + q̃(t), k) + C^A_a(t, t + q̃(t), k) r̃^A(t, t + q̄(t), R̄_A(t)) / s } dt
  = (1/s) ∫_{t_D}^{t_E} C^A_a(t, t + q̃(t), k) [ r̃^A(t, t + q̄(t), R̄_A(t)) − r̃^A(t, t + q̃(t), k) ] dt
  = (1/s) ∫_{t_D}^{t_E} C^A_a(t, t + q̃(t), k) { [ r̃^A(t, t + q̄(t), R̄_A(t)) − r̃^A(t, t + q̄(t), k) ]
      − [ r̃^A(t, t + q̃(t), k) − r̃^A(t, t + q̄(t), k) ] } dt.   (31)
The sign of this expression depends on how r̃^A varies with arrival time and vehicle index. We begin by showing that, if vehicles are homogeneous, (31) is negative so that ΔC_A(k) < 0 and the candidate is not a PSNE.
Homogeneous vehicle fleets
If user A has a homogeneous fleet, the first line in braces in (31) is zero. Given C^A_aa > 0 and q̃(t) < q̄(t) for t ∈ (t_D, t_E), r̃^A(t, t + q̃(t), k) > r̃^A(t, t + q̄(t), k) and the second line in braces is negative. Hence ΔC_A(k) < 0, and rescheduling the vehicle from D to E reduces its trip cost. Since point D is representative of all points between A and B, all the rescheduled vehicles except those at the endpoints, A and B, experience a reduction in costs. User A therefore gains from the deviation, and the candidate schedule is not a PSNE.
In the Appendix we show that if C^A_aa < 0, user A can benefit by accelerating departures of its fleet. Deviation is therefore beneficial both when C^A_aa > 0 and when C^A_aa < 0. This result is formalized in:
Lemma 2. Consider large user A with a homogeneous vehicle fleet. If a queue exists at time t, and C^A_aa(t, t + q(t)) ≠ 0, user A will not depart at a positive and finite rate at time t.
Lemma 2 shows that although the candidate PSNE is robust to deviations in the departure time of a single vehicle, it is not robust to deviations by a positive measure of the fleet. If C A aa > 0, the departure rate of other users must decrease over time in order for user A to maintain a positive and finite departure rate. By delaying departures, user A enables vehicles in its fleet to benefit from shorter queuing delays. Conversely, if C A aa < 0, the departure rate of other users must increase over time in a PSNE, and user A can benefit by accelerating departures of its fleet.
Lemma 2 contrasts sharply with the results of Verhoef and Silva (2017) who show that, given a set of large users with homogeneous vehicle fleets, a PSNE always exists in the Henderson-Chu model. As noted in the introduction, in the Henderson-Chu model vehicles that arrive (or depart) at different times do not interact with each other. In particular, a cohort of vehicles departing at time t is unaffected by the number or density of vehicles that departed before t. Thus, if a large user increases or decreases the departure rate of its fleet at time t, it does not affect the costs incurred by other vehicles in the fleet that are scheduled after t. Equilibrium is determined on a point-by-point basis, and there is no state variable analogous to the queue in the bottleneck model that creates intertemporal dependence in costs.
Heterogeneous vehicle fleets
Suppose now that user A has a heterogeneous fleet. By (4), ∂ rA (t, a, k) /∂k ≥ 0 so that the first line in braces in (31) is positive. Expression ( 31) is then positive if the first line outweighs the second line. We show that this is indeed the case under plausible assumptions. Towards this, we introduce the following two-part assumption:
Assumption 2: (i) The trip cost function depends only on the difference between actual arrival time and desired arrival time, and thus can be written C^A(t, a, k) = C^A(t, a − t*_k). (ii) t*_k is distributed continuously over an interval [t*_s, t*_e] with density f(t*_k).
Theorem 1. Let Assumptions 1 and 2 hold. If the density of desired arrival times in user A's fleet never exceeds bottleneck capacity (i.e., f(t*_k) ≤ s ∀ t*_k ∈ [t*_s, t*_e]), a PSNE in which user A queues may exist.
Theorem 1 identifies necessary conditions such that a large user may queue in a PSNE.
In light of Lemma 2 the key requirement is evidently sufficient heterogeneity in the triptiming preferences of vehicles in the large user's fleet. Condition f (t * k ) ≤ s stipulates that the desired arrival rate of vehicles in the fleet never exceeds bottleneck capacity. Put another way, if user A were the only user of the bottleneck, it could schedule its fleet so that every vehicle arrived precisely on time without queuing delay.
The assumption f (t * k ) ≤ s is plausible for road transport. Freight shippers such as Fedex or UPS operate large vehicle fleets out of airports and central warehouses, and they can make hundreds of daily trips on highways and connecting roads in an urban area. Nevertheless, deliveries are typically made to geographically dispersed customers throughout the day so that the fleet rarely comprises more than a small fraction of total traffic on a link at any given time. Thus, for any t * k , f (t * k ) is likely to be only a modest fraction of s.
In concluding this section it should be emphasized that Theorem 1 only states that a PSNE in which a large user queues may exist. A large user may prefer to avoid queuing by traveling at off-peak times. To determine whether this is the case, it is necessary to consider the trip-timing preferences of all users. We do so in Section 5 for the case of large users studied by [START_REF] Silva | On the existence and uniqueness of equilibrium in the bottleneck model with atomic users[END_REF]. Section 6 examines a variant with both large users and small users.
Existence of PSNE and self-internalization: multiple large users
In this section we analyze the existence of PSNE with m ≥ 2 symmetric large users.
We begin with m = 2: the case considered by Silva et al. (2017). Consider two symmetric large users, A and B, that each controls N/2 vehicles with step preferences. Such a market setting might arise with two airlines that operate all (or most of) the flights at a congested airport. This section revisits Proposition 1 in Silva et al. (2017), which states that a PSNE does not exist with homogeneous vehicles when γ > α. Their proof entails showing that with γ > α, a PSNE without queuing does not exist. The proof that a PSNE with queuing does not exist either follows the general reasoning used to prove Lemma 2 above. Here we relax the assumption that vehicles are homogeneous, and suppose that in each vehicle fleet, t*_k is uniformly distributed with density f(t*_k) = N/(2∆) over an interval [t*_s, t*_e] of width ∆ ≡ t*_e − t*_s.
In the candidate PSNE with queuing, the two users schedule vehicles at the same rate. During the initial interval (t_s, t_q), both users depart at an aggregate rate of s without creating a queue. Queuing begins at time t_q, and ends at t_e when the last vehicle in each fleet departs. Queuing time reaches a maximum at t̂ for a vehicle arriving at t* = [β/(β + γ)] t*_s + [γ/(β + γ)] t*_e. Total departures after time t_q are shown by the piecewise linear curve ALC. Cumulative departures by user B starting at t_q are given by the piecewise linear curve APE, and cumulative departures by user A are measured by the distance between APE and ALC. If a PSNE exists, total costs in the internalized PSNE, TC_i, are lower than total costs in the atomistic PSNE, TC_n. As shown in the Appendix, the total cost saving from internalization with m users is
TC_n − TC_i = { [(m − 1) β (α + γ) + m α γ] / [2m (m − 1) β γ + 2m α γ] } TC_nH ≡ Ψ TC_nH,
where TC_nH = [βγ/(β + γ)] N²/s denotes total costs in the atomistic PSNE with homogeneous vehicles. The composite parameter Ψ depends on parameters α, β, and γ only through the ratios β/α and γ/α. Given the benchmark ratios of β:α:γ = 1:2:4, Ψ = (7m − 3)/[4m(1 + m)], which varies with m as shown in Table 1. With two users (m = 2) the saving is nearly as great as with a single user. Even with 10 users the saving is over 15 percent of the atomistic costs TC_nH. These results are similar to those obtained by Verhoef and Silva (2017) with the Henderson-Chu model of congestion and a single desired arrival time, as they also find significant savings from self-internalization. Moreover, with heterogeneity in t*, total costs in the atomistic PSNE are less than TC_nH so that the proportional cost saving from internalization is actually larger than shown in Table 1. The example shows that self-internalization of congestion can boost efficiency appreciably even if no user controls a large fraction of total traffic. This is consistent with [START_REF] Brueckner | Airport congestion when carriers have market power[END_REF] who showed, using a Cournot oligopoly model, that internalization of self-imposed delays leads to an equilibrium that is more efficient than the atomistic equilibrium, and correspondingly offers smaller potential efficiency gains from congestion pricing.
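The proportional saving can be checked directly. The short sketch below is an added illustration: it evaluates the general cost-saving factor and the benchmark closed form for a few values of m.

```python
def psi_general(alpha, beta, gamma, m):
    """Cost-saving factor (TC_n - TC_i)/TC_nH for m symmetric large users."""
    num = (m - 1) * beta * (alpha + gamma) + m * alpha * gamma
    den = 2 * m * (m - 1) * beta * gamma + 2 * m * alpha * gamma
    return num / den

def psi_benchmark(m):
    """Closed form for the benchmark ratios beta:alpha:gamma = 1:2:4."""
    return (7 * m - 3) / (4 * m * (1 + m))

for m in (1, 2, 5, 10):
    print(m, round(psi_general(2.0, 1.0, 4.0, m), 3), round(psi_benchmark(m), 3))
# m = 2 gives about 0.458, nearly the single-user value of 0.5;
# m = 10 still gives about 0.152, i.e., over 15 percent of TC_nH.
```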
Existence of PSNE and self-internalization: large and small users
In this section we modify the example in Section 5. We now assume that traffic is controlled by one large user, user A, with a vehicle fleet of measure N A , and a group of homogeneous small users with a measure N o . For ease of reference, vehicles in user A's fleet are called "large vehicles" and vehicles driven by small users are called "small vehicles".
Large vehicles have the same trip-timing preferences as in Section 5. Their unit costs are denoted by α A , β A , and γ A . Their desired arrival times are uniformly distributed over the interval [t * s , t * e ] with a range of ∆ ≡ t * e -t * s . For future use we define δ ≡ s∆/N A . The existence and nature of PSNE depend on how the trip-timing preferences of small vehicles compare with those of large vehicles. We adopt a specification that allows the preferences to be either the same, or different in a plausible and interesting way. Small vehicles have step preferences with unit costs of α, β, and γ. The cost of late arrival relative to early arrival is assumed to be the same as for large vehicles so that γ/β = γ A /β A . The distribution of desired arrival times is also the same as for large vehicles. 22 Small vehicles and large vehicles are allowed to differ in the values of β/α and β A /α A .
The ratio β_A/α_A measures the cost of schedule delay relative to queuing time delay for large vehicles. It determines their flexibility with respect to arrival time, and hence their willingness to queue to arrive closer to their desired time. If β_A/α_A is small, large vehicles are flexible in the sense that they are willing to reschedule trips in order to avoid queuing delay. Conversely, if β_A/α_A is big, large vehicles are inflexible. Ratio β/α has an analogous interpretation for small vehicles. To economize on writing, we use the composite parameter θ ≡ (β_A/α_A)/(β/α) to measure the relative flexibility of the two types. We consider two cases. In Case 1, θ ≤ 1 so that large vehicles are (weakly) more flexible than small vehicles. To fix ideas, small vehicles can be thought of as morning commuters with fixed work hours and relatively rigid schedules. Large vehicles are small trucks or vans that can make deliveries within a broad time window during the day. We show below that for a range of parameter values, a PSNE exists in which large vehicles depart at the beginning and end of the travel period without queuing. Small vehicles queue in the middle of the travel period in the same way as if large vehicles were absent.
In Case 2, θ > 1 so that large vehicles are less flexible than small vehicles. This would be the case if large vehicles are part of a just-in-time supply chain, or have to deliver products to receivers within narrow time windows. 23 We show that for a range of parameter values a PSNE exists in which large vehicles depart simultaneously with small vehicles and encounter queuing delays. The PSNE is identical to the atomistic PSNE in which user A disregards the congestion externalities that its vehicles impose on each other. Cases 1 and 2 are analyzed in the following two subsections.
Case 1: Large vehicles more flexible than small vehicles
In Case 1, large vehicles are more flexible than small vehicles. In the atomistic PSNE, large vehicles depart at the beginning and end of the travel period, and small vehicles travel in the middle. A queue exists throughout the travel period, but it rises and falls more slowly while large vehicles are departing than when small vehicles are departing just before and after the peak. 24 One might expect the same departure order to prevail with self-internalization, but with user A restricting its departure rate to match capacity so that queuing does not occur. The candidate PSNE with this pattern is shown in Figure 3. Large vehicles depart during the intervals (t_As, t_os) and (t_oe, t_Ae). 25 Small vehicles depart during the central interval (t_os, t_oe). The departure schedule for small vehicles and the resulting queue are the same as if user A were absent.
23 Another possibility is that large vehicles are commercial aircraft operated by airlines with scheduled service, while small vehicles are private aircraft used mainly for recreational purposes.
24 This departure pattern was studied by [START_REF] Arnott | Schedule delay and departure time decisions with heterogeneous commuters[END_REF] and [START_REF] Arnott | The welfare effects of congestion tolls with heterogeneous commuters[END_REF].
If the candidate departure schedule in Figure 3 is a PSNE, neither small vehicles nor any subset of large vehicles can reduce their travel costs by deviating. The requisite conditions are identified in the two-part assumption:
Assumption 3: (i) θ ≤ 1. (ii) α A ≥ (β A + γ A ) (1 -δ).
The following proposition identifies necessary and sufficient conditions for the pattern in Figure 3 to be a PSNE.
Proposition 2. Let Assumption 3 hold. Then the departure pattern in Figure 3 is a PSNE. The key to the proof of Proposition 2 is to show that user A cannot profitably deviate from the candidate PSNE by rescheduling vehicles departing after t oe to a mass departure at t os . Forcing vehicles into the bottleneck as a mass just as small vehicles are beginning to depart allows user A to reduce the total schedule delay costs incurred by its fleet. Doing so at t os is preferable to later because, with θ ≤ 1, large vehicles have a lesser willingness to queue than small vehicles. Queuing delay is nevertheless unavoidable because vehicles that depart later in the mass have to wait their turn. This trade-off is evident in the
condition α A ≥ (β A + γ A ) (1 -δ).
Moreover, the more dispersed desired arrival times are, the lower the fleet's costs in the candidate PSNE, and hence the less user A stands to gain from rescheduling. If δ > 1, rescheduling vehicles actually increases their schedule delay costs because they arrive too quickly relative to their desired arrival times. Rescheduling then cannot possibly be beneficial. Given the benchmark parameter ratios β:α:γ = 1:2:4,
condition α A ≥ (β A + γ A ) (1 -δ) simplifies to δ ≥ 3/5, or ∆ ≥ (3/5) (N A /s).
In words:
the range of desired arrival times for vehicles in the fleet must be at least 60 percent of the aggregate time required for them to traverse the bottleneck. This condition is plausible, at least for road users.
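The simplification can be verified directly. The following added sketch checks Assumption 3(ii) at the benchmark ratios for a few values of δ; the fleet size and capacity used to translate δ into a spread of desired arrival times are hypothetical.

```python
def no_mass_deviation(alpha_A, beta_A, gamma_A, delta):
    """Assumption 3(ii): alpha_A >= (beta_A + gamma_A) * (1 - delta)."""
    return alpha_A >= (beta_A + gamma_A) * (1 - delta)

# Benchmark ratios beta_A:alpha_A:gamma_A = 1:2:4, so the condition is delta >= 3/5.
for delta in (0.5, 0.6, 0.8):
    print(delta, no_mass_deviation(alpha_A=2.0, beta_A=1.0, gamma_A=4.0, delta=delta))
# 0.5 -> False; 0.6 and 0.8 -> True.

# Example: with N_A = 100 vehicles and s = 50 vehicles/hour (hypothetical values),
# delta = s*Delta/N_A >= 3/5 requires Delta >= 1.2 hours of desired-arrival spread.
```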
As noted above, the atomistic PSNE features the same order of departures and arrivals as the internalized PSNE, but with queuing by large vehicles as well as small vehicles. It is easy to show that both large vehicles and small vehicles incur lower travel costs with self-internalization. Thus, self-internalization achieves a Pareto improvement. Silva et al. (2017) show that a PSNE without queuing exists for a symmetric duopoly and homogeneous users if α ≥ γ. We have effectively replaced one of the duopolists with a continuum of small users. The condition for a PSNE here (with δ = 0) is α_A ≥ β_A + γ_A. This is more stringent than for the duopoly with the same unit costs. Hence, counterintuitively, the mixed market with a large user and small users may not have a PSNE even if a PSNE exists for both the less concentrated atomistic market and the more concentrated duopoly.
While this nonmonotonic variation in behavior is intriguing, it complicates the analysis of equilibrium with large users.
Case 2: Large vehicles less flexible than small vehicles
In Case 2, θ > 1 so that large vehicles are less flexible than small vehicles. Large vehicles prefer to travel in the middle of the travel period to reduce their schedule delay costs. However, queuing will be inevitable because small vehicles prefer the same range of arrival times. To meet the requirements of Theorem 1 for an internalized PSNE with queuing, it is necessary to assume that ∆ > N A /s. Given an additional assumption identified in Assumption 4 below, the internalized PSNE turns out to be identical to the atomistic PSNE. All large vehicles thus travel during a queuing period, and depart at the same time as in the atomistic PSNE.28 Thus, in contrast to Case 1, the large user's incentive to internalize self-congestion has no effect on either its fleet or small users. The candidate departure schedule in Figure 4 is an internalized PSNE if and only if neither small vehicles nor any subset of large vehicles can reduce their travel costs by deviating. The three requisite conditions are identified in Assumption 4:
Assumption 4: (i) θ > 1. (ii) ∆ > N_A/s. (iii)
N_A/s < (θ − 1) [βγ/(α(β + γ))] [(N_A + N_o)/s − ∆].   (32)
Using Assumption 4, the internalized PSNE is stated as: Proposition 3. Let Assumption 4 hold. Then the departure pattern in Figure 4 is a PSNE.
Large vehicles depart during the queuing period and all arrive on time. Small vehicles arrive at a complementary rate so that the bottleneck is fully utilized. The aggregate departure rate and queuing time are the same as if all vehicles were small.
Proof: See the Appendix.
The roles of Conditions (i) and (ii) in Assumption 4 were explained above. Condition (iii) assures that user A's fleet is small enough that it prefers to schedule all its vehicles on-time during the queuing period, rather than scheduling some vehicles before queuing begins at t os .29
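Condition (32) can be checked numerically. The sketch below is an added illustration only: all parameter values are hypothetical and chosen merely to show when a fleet is "small enough".

```python
def fleet_small_enough(theta, alpha, beta, gamma, N_A, N_o, s, Delta):
    """Assumption 4(iii), Eq. (32):
    N_A/s < (theta - 1) * beta*gamma/(alpha*(beta+gamma)) * ((N_A + N_o)/s - Delta)."""
    lhs = N_A / s
    rhs = (theta - 1) * beta * gamma / (alpha * (beta + gamma)) * ((N_A + N_o) / s - Delta)
    return lhs < rhs

# Hypothetical values: theta = 2 (large vehicles less flexible), benchmark ratios,
# s = 100 vehicles/hour, Delta = 2 hours of desired-arrival spread.
print(fleet_small_enough(theta=2.0, alpha=2.0, beta=1.0, gamma=4.0,
                         N_A=50, N_o=950, s=100, Delta=2.0))   # True: small fleet
print(fleet_small_enough(theta=2.0, alpha=2.0, beta=1.0, gamma=4.0,
                         N_A=400, N_o=600, s=100, Delta=2.0))  # False: fleet too large
```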
Conclusions
In this paper we have studied trip-timing decisions by large users in the Vickrey bottleneck model of congestion. We believe that the model is representative of many transportation settings including airlines scheduling flights at airports, rail companies operating on rail networks, and freight shippers using congested roads. We build on previous studies of trip-timing decisions by large users in three ways: (i) we allow for the presence of small users; (ii) we consider general trip-timing preferences; and (iii) we allow for heterogeneity of trip-timing preferences within a large user's fleet as well as between large and small users.
Our paper makes two main contributions. First and foremost, it identifies conditions under which a Nash equilibrium in pure strategies exists in a setting in which large users make trip-timing decisions simultaneously and queue in a dynamic model of congestion with realistic propagation of delays. More specifically, we show that if vehicles in a large user's fleet have sufficiently diverse trip-timing preferences, a PSNE in which the large user queues may exist. We also provide an example in which the conditions for existence of a PSNE become less stringent as the number of large users increases.
Second, we illustrate how self-internalization can affect equilibrium travel costs. In two of the three examples presented, self-internalization reduces costs for all users. In the first example with symmetric large users (Section 5), the cost savings are substantial and can be nearly as large as for a monopolistic user that controls all the traffic. In the second example with one large user and a group of small users, all parties also gain if the large user schedules its fleet during the off-peak period without queuing. However, in the third example in which the large user travels during the peak, the equilibrium is identical to the atomistic PSNE so that no one benefits. The three examples illustrate that the effects of self-internalization depend on both market structure and the trip-timing preferences of users.
The analysis of this paper can be extended in various directions. One is congestion pricing: either in the form of an optimal fine (i.e., continuously time-varying) toll that eliminates queuing, or a more practically feasible step-tolling scheme. Although the gains from self-internalization can be substantial, there is still scope to improve welfare by implementing congestion pricing. Indeed, this is what Verhoef and Silva (2017) find using the Henderson-Chu model for the case of large users with homogenous trip-timing preferences.
A second topic is mergers or other measures to enable users to coordinate their trip-timing decisions gainfully without intervention by an external authority using either tolls or direct traffic control measures. It is not obvious from our preliminary results which users, if any, stand to gain by merging, how a merger would affect other users, and whether there is a case for regulation.
A third extension is to explore more complex market structures and different types of user heterogeneity. Ride-sharing companies or so-called Transportation Network Companies (TNCs) have become a major mode of passenger transportation in some cities, and evidence is emerging that they are contributing to an increase in vehicle-km and congestion ([START_REF] Clewlow | Disruptive transportation: the adoption, utilization, and impacts of ride-hailing in the united states[END_REF]; The New York Times, 2017). In Manhattan, the number of TNCs exceeds the number of taxis. Transportation services are offered by six types of operators in all:
yellow cabs that must be hailed from the street, for-hire vehicles or black cars that must be booked, and four TNC companies: Uber, Lyft, Via, and Juno [START_REF] Schaller | Empty seats, full streets: Fixing manhattan's traffic problem[END_REF]. 30 The firms differ in their operations and fare structures. Their trip-timing preferences are also dictated by those of their customers. The size of a firm's fleet is not fixed, but varies by time of day and day of week according to when drivers choose to be in service. The simple Vickrey model would have to be modified to incorporate these user characteristics.
A fourth topic that we are studying is whether self-internalization by a large user can make other users worse off, or even leave the large user itself worse off. Such a result is of policy interest because it suggests that the welfare gains from congestion pricing of roads, airports and other facilities in which large users operate could be larger than previously thought.
Figure A.5: Candidate PSNE with C^A_aa < 0

∂²r̃(t, a, k)/∂k² = [C_a (C_t C_akk − C_a C_tkk) + 2 C_ak (C_a C_tk − C_t C_ak)] / C_a³ ≥ 0,
∂r̃(t, a, k)/∂a = (s/C_a²)(C_t C_aa − C_a C_ta) = (s/C_a²) C_t C_aa, which is identical in sign to −C_aa.
Appendix A.3. Proof of Lemma 2 with C^A_aa < 0

Consider Figure A.5, which depicts a candidate PSNE similar to that in Figure 1, but with C^A_aa < 0 so that curve R̄_{−A}(t) is convex rather than concave. Suppose that user A deviates from the candidate PSNE during the interval (t_A, t_B) by dispatching its vehicles earlier so that section ADB of R̄(t) shifts leftwards to R′(t). Vehicle k = R̄_A(t_D), originally scheduled to depart at point D and time t_D, is rescheduled earlier to point E and time t_E such that distance Ey equals distance Dz. Vehicle k experiences a change in costs of
ΔC_A(k) = C^A(t_E, t_E + q′(t_E), k) − C^A(t_D, t_D + q̄(t_D), k).
Let q̃(t) denote queuing time along the path from point D to point E shown by the dashed blue curve that runs parallel to R̄_{−A}(t) between points y and z. The change in cost can be written
ΔC_A(k) = −∫_{t_E}^{t_D} { C^A_t(t, t + q̃(t), k) + C^A_a(t, t + q̃(t), k) [1 + dq̃(t)/dt] } dt
  = −∫_{t_E}^{t_D} { C^A_t(t, t + q̃(t), k) + C^A_a(t, t + q̃(t), k) r̃^A(t, t + q̄(t), R̄_A(t)) / s } dt
  = −(1/s) ∫_{t_E}^{t_D} C^A_a(t, t + q̃(t), k) [ r̃^A(t, t + q̄(t), R̄_A(t)) − r̃^A(t, t + q̃(t), k) ] dt.
Since q̃(t) > q̄(t) for t ∈ (t_A, t_B), with C^A_aa < 0 and for any j, r̃^A(t, t + q̄(t), j) < r̃^A(t, t + q̃(t), j). If user A's fleet is homogeneous, the expression in braces is negative, ΔC_A(k) < 0, and rescheduling the vehicle from D to E reduces its trip cost.
Appendix A.4. Proof of Theorem 1 (Section 4) Using Eq. ( 1), the term in braces in (31) can be written
Z = ∫_{j=k}^{R̄_A(t)} { ∂r̃^A(t, t + q̄(t), j)/∂j + (1/s) ∂r̃^A(t, t + q̄(t) − (j − k)/s, k)/∂a } dj.   (A.6)
A sufficient condition for Z to be positive is that the integrand be positive for all values of j. Given Assumption 2, there is a one-to-one monotonic correspondence between j and t * j . The integrand in (A.6), z, can therefore be written
z = [∂r̃^A(t, t + q̄(t), j)/∂t*_j] [1/f(t*_j)] − (1/s) [∂r̃^A(t, t + q̄(t) − (j − k)/s, k)/∂t*_j].   (A.7)
Now
∂r̃^A(t, t + q̄(t), j)/∂t*_j − ∂r̃^A(t, t + q̄(t) − (j − k)/s, k)/∂t*_j
  = ∫_{n=k}^{j} { [∂²r̃^A(t, t + q̄(t) − (n − k)/s, n)/∂(t*_n)²] [1/f(t*_n)] + (1/s) ∂²r̃^A(t, t + q̄(t) − (n − k)/s, n)/∂a∂t*_n } dn
  = ∫_{n=k}^{j} [∂²r̃^A(t, t + q̄(t) − (n − k)/s, n)/∂(t*_n)²] [1/f(t*_n) − 1/s] dn.   (A.8)
By (4), the second derivative is positive, and by assumption, f (t * n ) ≤ s for all t * n . Hence (A.8) is positive. Using this result in (A.7) we have
z ≥ [∂r̃^A(t, t + q̄(t) − (j − k)/s, k)/∂t*_j] [1/f(t*_j) − 1/s] > 0.
This establishes that Z > 0 in (A.6), and hence that ∆C A (k) > 0.
s (t_q − t_s) + [m/(m − 1)] [α/(α − β)] s (t̂ − t_q) = s (t* − t_s).   (A.12)
Eq. (A.9) stipulates that all vehicles complete their trips. Eq. (A.10) states that the first and last vehicles incur the same private cost. Eq. (A.11) stipulates that cumulative departures equal N. Finally, according to Eq. (A.12), total departures from t_s to t̂ equal the number of vehicles that arrive early.
Solving (A.9)-(A.12), it is possible to show after considerable algebra that total costs in the candidate PSNE are
TC_i = { [(m − 1)(2m − 1)βγ + mαγ − (m − 1)αβ] / [2mγ(α + (m − 1)β)] } [βγ/(β + γ)] (N²/s).
Total costs in the atomistic PSNE are
TC_n = [βγ/(β + γ)] (N²/s).
When vehicles differ in their desired arrival times, schedule delay costs are reduced by the same amount in the two PSNE. The departure rate is unchanged in the candidate PSNE with internalization. The difference in total costs is thus the same with and without heterogeneity so that, as stated in the text
TC_n − TC_i = { [(m − 1)β(α + γ) + mαγ] / [2m(m − 1)βγ + 2mαγ] } [βγ/(β + γ)] (N²/s).

Appendix A.5.3. Proof of Proposition 2
It is necessary to show that neither user A nor a small user can gain by deviating from the candidate PSNE. In all, seven types of deviations need to be considered.
Deviation 1. A small user cannot gain by deviating.
Small users incur the same cost throughout the candidate departure interval (t os , t oe ).
Hence, they cannot gain by retiming their trips within this interval. Rescheduling a trip either before t os or after t oe would clearly increase their cost. Thus, no small user can benefit by deviating. During the no-queuing period, the bottleneck is used to capacity. It is therefore necessary to distinguish between the cost that user A saves by removing a vehicle from the departure schedule (which does not affect the costs of other vehicles in the fleet) and the cost user A incurs by adding a vehicle (which creates a queue unless the vehicle is added at t Ae ). The respective costs are 33 :
C^-_A(t) = β_A (t* − t) for t ∈ [t_As, t_os], and C^-_A(t) = γ_A (t − t*) for t ∈ [t_oe, t_Ae];
C^+_A(t) = β_A (t* − t) + [(α_A − β_A)/s] ∫_t^{t_os} r_A(u) du + [(α_A + γ_A)/s] ∫_{t_oe}^{t_Ae} r_A(u) du for t ∈ [t_As, t_oe], and
C^+_A(t) = γ_A (t − t*) + [(α_A + γ_A)/s] ∫_t^{t_Ae} r_A(u) du for t ∈ [t_oe, t_Ae].
(i) Rescheduling late to late: Consider removing a vehicle from a time t ∈ (t_oe, t_Ae) and inserting it at an earlier time t′ ∈ (t_oe, t). The queue created at t′ dissipates when the vehicle's original slot at t is vacated, so only fleet vehicles departing in (t′, t) are delayed. The change in fleet costs is
ΔC_A = −C^-_A(t) + C^+_A(t′) = −γ_A (t − t′) + [(α_A + γ_A)/s] ∫_{t′}^{t} r_A(u) du = −γ_A (t − t′) + (α_A + γ_A)(t − t′) = α_A (t − t′) > 0.
Since fleet costs increase, the deviation is not gainful.
(ii). Rescheduling late to early: The best time to reschedule a vehicle is t os because this minimizes the vehicle's early-arrival cost as well as the queuing delay imposed on the rest of the fleet. But rescheduling the vehicle to t os is no better (or worse) than rescheduling it to t oe , which is not beneficial as per case (i).
(iii). Rescheduling early to late: The best option in this case is to reschedule a vehicle from t As . However, the gain is the same as (or worse than) from rescheduling a vehicle from t Ae , and this is not beneficial as per case (i). Rescheduling early to late therefore cannot be beneficial.
(iv). Rescheduling early to early: The best option in this case is to reschedule a vehicle from t As to t os . Again, this is not beneficial for the same reason as in case (iii).
Deviation 4. User A cannot gain by rescheduling a single vehicle to a time within the queuing period, (t os , t oe ).
For any vehicle in user A's fleet that is scheduled to depart early at t, there is another vehicle scheduled to depart late at t that incurs the same cost (this follows from symmetry of the t * A distribution). Removing either vehicle saves the same cost: C - A (t) = C - A (t ). However, removing the early vehicle and inserting it at any time during the queuing period creates a (small) queue that persists until t Ae . Removing the late vehicle creates a queue only until t because the queue disappears during the departure-time slot opened up by the rescheduled vehicle. Rescheduling a late vehicle is therefore preferred. The best choice is to reschedule the first late-arriving vehicle at t oe so that no later vehicles in the fleet are delayed. Rescheduling a vehicle from t > t oe would reduce that vehicle's cost by more, but a queue would persist from t oe until t . The fleet's schedule delay costs would therefore not be reduced, and a greater queuing cost would be incurred as well.
Given θ ≤ 1, rescheduling a vehicle from t_oe to any time t ∈ (t_os, t_oe) will (weakly) increase its cost, so rescheduling is not gainful. But if θ > 1, the vehicle will benefit.
Hence the candidate can be a PSNE only if θ ≤ 1 as per Proposition 2. Deviation 5. User A cannot gain by rescheduling a positive measure of its fleet (i.e., a mass of vehicles) to times within the departure period when there is no queue.
If user A reschedules a positive measure of vehicles to depart during (t As , t os )∪(t oe , t Ae ), queuing will occur during some nondegenerate time interval. By Lemma 1, user A is willing to depart at a positive and finite rate during early arrivals only if r -A (t) = rA = α A • s/ (α A -β A ) > 0. Since no other users depart at t, r -A (t) = 0 and user A is better off scheduling vehicles later. Similarly, for late arrivals user A is willing to depart at a positive and finite rate only if r -A (t) = α A • s/ (α A + γ A ). Since r -A (t) = 0, user A is again better off scheduling vehicles later. Deviation 6. Any deviation by user A involving multiple mass departures is dominated by a deviation with a single mass departure. Suppose that user A deviates from the candidate PSNE by scheduling multiple mass departures. All vehicles in the fleet are assumed to depart in order of their index, including vehicles within the same mass. (This assures that fleet costs in the deviation cannot be reduced by reordering vehicles.) We show that such a deviation is dominated by a single mass departure. The proof involves establishing three results: (i) Fleet costs can be reduced by rescheduling any vehicles that are not part of a mass, but suffer queuing delay, to a period without queuing. (ii) Fleet costs can be reduced by rescheduling any vehicles in a mass departure after t to a period without queuing. (iii) Any deviation with multiple mass departures launched before t entails higher fleet costs than a deviation with a single mass departure at t os . These three results show that the candidate PSNE need only be tested against a single mass departure launched at t os .
Result (i): When a queue exists, user A is willing to depart at a positive and finite rate only if condition (21) is satisfied; i.e., r_{−A}(t) = r̃_A(t). For any vehicle that arrives early this requires r_{−A}(t) = r̃_AE = α s/(α − θβ) > 0, and for any vehicle that arrives late, r_{−A}(t) = r̃_AL = α s/(α + θγ) > 0. During the departure period (t_As, t_os), r_{−A}(t) = 0, so user A is better off scheduling all vehicles in the mass later. During the departure period (t_os, t̂), r_{−A}(t) = α s/(α − β). Since θ ≤ 1, r_{−A}(t) ≥ r̃_AE > r̃_AL and user A is (weakly) better off scheduling all vehicles in the mass earlier. During the departure period (t̂, t_oe), r_{−A}(t) = α s/(α + γ) ≤ r̃_AL < r̃_AE. User A is (weakly) better off scheduling all vehicles later. Finally, during the departure period (t_oe, t_Ae), r_{−A}(t) = 0 and user A is again better off scheduling vehicles later.
Result (ii): Assume that user A launches the last mass departure after t̂. We show that user A can reduce its fleet costs by rescheduling vehicles in the mass to a later period in which they avoid queuing delay. This is true whether each vehicle in the mass is destined to arrive early or late relative to its individual t*. By induction, it follows that all mass departures launched after t̂ can be gainfully rescheduled. In what follows it is convenient to use the auxiliary variable λ_{−A}(t, t′) ≡ ∫_t^{t′} r_{−A}(u) du / [s (t′ − t)], which denotes average departures of small users as a fraction of capacity during the period [t, t′].
Suppose user A launches the last mass departure at time t L with M vehicles. Assume first that at t L there is a queue with queuing time q (t L ). We show that postponing the mass departure to a later time when a queue still exists reduces user A's fleet costs. By induction, it follows that postponing the mass until the queue disappears is gainful. Let j be the vehicle that departs in position m of the mass, m ∈ [0, M ]. Let D j [•] be the schedule delay cost function of vehicle j, and c (j, t) its trip cost if the mass departs at time t.
If the mass departs at time
t_L, vehicle j incurs a cost of
c(j, t_L) = α [q(t_L) + m/s] + D_j[t_L + q(t_L) + m/s].   (A.13)
If the mass departure is postponed to time t_L′ > t_L, and a queue still exists at t_L′, vehicle j incurs a cost of
c(j, t_L′) = α [q(t_L′) + m/s] + D_j[t_L′ + q(t_L′) + m/s].   (A.14)
By Result (i), user A does not depart during (t_L, t_L′) because a queue persists during this period. Hence
q(t_L′) = q(t_L) + ∫_{t_L}^{t_L′} [(r_{−A}(u) − s)/s] du = q(t_L) − (t_L′ − t_L) [1 − λ_{−A}(t_L, t_L′)].   (A.15)
Substituting (A.15) into (A.14), and using (A.13), one obtains
c(j, t_L′) − c(j, t_L) = −α [1 − λ_{−A}(t_L, t_L′)] (t_L′ − t_L)
  + D_j[t_L + q(t_L) + m/s + λ_{−A}(t_L, t_L′) (t_L′ − t_L)] − D_j[t_L + q(t_L) + m/s].   (A.16)
The value of λ_{−A}(t_L, t_L′) depends on when the postponement interval ends. If t_L′ ≤ t_oe, small users depart during (t_L, t_L′) at rate α s/(α + γ), so that λ_{−A}(t_L, t_L′) = α/(α + γ). If t_L′ > t_oe, λ_{−A}(t_L, t_L′) < α/(α + γ). Hence, λ_{−A}(t_L, t_L′) ≤ α/(α + γ) for all values of t_L′ and the first line of (A.16) is negative. For the second line there are three possibilities to consider according to when vehicle j arrives:
(a) early both before and after the mass is postponed, (b) early before postponement and late after, and (c) late both before and after postponement. In case (a), the second line of (A.16) is negative, in case (c) it is positive, and in case (b) the sign is a priori ambiguous.
To show that (A.16) is negative it suffices to show this for case (c). The second line is an
increasing function of λ -A t L ,t L . Using λ -A t L ,t L ≤ α α+γ and D j [x] = γx for x > 0, (A.16) yields c j, t L -c (j, t L ) ≤ -α • γ α + γ t L -t L + θγ • α α + γ t L -t L < 0.
This proves that postponing the mass departure (weakly) reduces the cost for every vehicle in the mass. We conclude that if there is a queue when the last mass departs, user A can (weakly) reduce its fleet costs by postponing the mass departure to the time when the queue just disappears (user A's later vehicles are not affected by postponing the mass). Once no queue remains at the launch time, the mass itself can be dispersed at rate s without raising costs. To see this, let j be the index of the vehicle that departs in position m in the mass. In the mass departure, vehicle j incurs a cost of

c(j, t_L) = D_j[t_L + m/s] + α·m/s.

In the deviation where vehicle j delays departure until t_L′ = t_L + m/s, it incurs a cost of

c(j, t_L′) = D_j[t_L + m/s].

Its cost changes by

c(j, t_L′) − c(j, t_L) = −α·m/s < 0.

The remaining vehicles in the first mass also incur lower queuing costs since they no longer queue between t_E and t_E′. Vehicles in the second mass that departs at t_E′ still depart and arrive at the same time because the same number of vehicles depart before them, and the bottleneck operates at capacity throughout.
Case 2: t_E < t_os < t_E′. The second mass is scheduled after small users start to depart. If the queue from the first mass disappears before t_os, the reasoning for Case 1 applies. If the queue from the first mass does not disappear before t_os, the queue will not dissipate until after small users have stopped departing at t_oe. However, user A can still reduce its fleet costs by rescheduling some of the M vehicles in the first mass to t_os, and rescheduling the remainder to the head of the second mass at t_E′.

Case 3: t_os ≤ t_E < t_E′. The first mass departs when, or after, small users begin to depart. In this case, user A can reduce its fleet costs by rescheduling the second mass to depart immediately after the first mass. To show this, let q(t), t ≥ t_E, denote queuing time after the first mass of M vehicles departs. Let j be the index of the vehicle that departs in position m of the second mass, where m ∈ [0, M′]. Vehicle j arrives at time a_j′ = t_E′ + q(t_E′) + m/s and incurs a cost of

c(j, t_E′) = α·[q(t_E′) + m/s] + D_j[a_j′].

If the second mass is instead dispatched immediately after the first mass at t_E, vehicle j arrives at time a_j = t_E + q(t_E) + m/s and incurs a cost of

c(j, t_E) = α·[q(t_E) + m/s] + D_j[a_j].

The cost saving is

c(j, t_E′) − c(j, t_E) = α·[q(t_E′) − q(t_E)] + D_j[a_j′] − D_j[a_j]. (A.17)

Now

a_j′ = a_j + (t_E′ − t_E) + q(t_E′) − q(t_E), (A.18)

q(t_E′) − q(t_E) = [β/(α − β)]·(t_E′ − t_E), (A.19)

and (A.20), where ∆q_{-A} ≡ [α/(α − β)]·(t_E′ − t_E) is the gross contribution of small users to queuing time during the period (t_E, t_E′). The weak inequality in (A.20) holds as an equality if vehicle j arrives early when the second mass departs at t_E′. The inequality is strict if vehicle j arrives late. Since this conclusion holds for all vehicles in the second mass, user A can reduce its costs by merging the later mass with the earlier mass.
By induction, all but one of any mass departures launched before t can be eliminated in a way that decreases user A's fleet costs. Using similar logic, it is straightforward to show that user A can do no better than to schedule the single mass at t os rather than later. In summary, results (i)-(iii) show that, of all deviations from the candidate PSNE entailing mass departures, a deviation with a single mass departure launched at t os is the most viable. Deviation 7. User A cannot gain by rescheduling a positive measure of its fleet to times during the queuing period (t os , t oe ) .
To prove that Deviation 7 is not gainful, we must determine whether total fleet costs can be reduced by deviating from the candidate PSNE. Since user A has weaker preferences for on-time arrival than small users, user A prefers not to schedule departures in the interior of (t_os, t_oe). User A's best deviation is to schedule a mass departure at t_os. Let N_Am be the measure of vehicles in the mass. If N_Am is small, the best choice is to reschedule the first vehicles departing late during the interval (t_oe, t_oe + N_Am/s). (As explained in proving that Deviation 4 is not beneficial, this strategy avoids queuing delay for large vehicles that are not part of the mass.) The first of the rescheduled vehicles has a preferred arrival time of t*. In the candidate PSNE, this vehicle incurs a cost C_A(t_oe, t_oe, t*), given in (A.21) below. The deviation is unprofitable if TC^d_dev ≥ TC^c_dev; that is, if

α_A ≥ (β_A + γ_A)·(1 − δ). (A.28)

When condition (A.28) is met, user A cannot profit by rescheduling some vehicles from the early-departure interval (t_As, t_os) in addition to all large vehicles from the late-departure interval (t_oe, t_Ae). To see why, note that the net benefit from rescheduling the vehicle at t_As is the same as the net benefit from rescheduling the vehicle at t_Ae. The benefit from rescheduling vehicles after t_os is lower.
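As a purely illustrative check, condition (A.28) can be evaluated directly; in the sketch below the function name and all parameter values are invented for illustration and are not estimates from the paper.

```python
# Illustrative check of condition (A.28): alpha_A >= (beta_A + gamma_A) * (1 - delta).
# All numbers are hypothetical; delta is the spacing parameter appearing in (A.28).

def deviation_unprofitable(alpha_A, beta_A, gamma_A, delta):
    """True when a mass departure rescheduled to t_os cannot lower user A's fleet cost."""
    return alpha_A >= (beta_A + gamma_A) * (1.0 - delta)

alpha_A, beta_A, gamma_A = 10.0, 5.0, 20.0
for delta in (0.0, 0.5, 0.8):
    print(delta, deviation_unprofitable(alpha_A, beta_A, gamma_A, delta))
```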
Appendix A.5.4. Proof of Proposition 3
The aggregate departure rate r(t) is given by Eq. (5). The last large vehicle imposes no delay on others in the fleet, whereas the first large vehicle imposes a delay of 1/s on all the others. The first vehicle can be rescheduled to just before the travel period at a lower cost than the other vehicles. Thus, if deviation from the candidate PSNE is profitable, it must be profitable to reschedule the vehicle departing at t_As to t_os. It is straightforward to show that user A can retime departures of the remaining large vehicles so that they continue to arrive on time and incur no schedule delay cost. The net gain to the other large vehicles is therefore α_A·N_A/s. The first vehicle incurs the cost given in (A.29) in the candidate PSNE, and a cost of (β_A·β/α)·(t*_s − t_os) if it is rescheduled. The net change in costs for the fleet is

∆TC_A = [β_A − α_A·β/α]·[γ/(β + γ)]·[(N_A + N_o)/s − ∆] − α_A·N_A/s
 = (θ − 1)·[α_A·βγ/(α(β + γ))]·[(N_A + N_o)/s − ∆] − α_A·N_A/s.
Deviation is not profitable if this difference is positive, which is assured by condition (32).
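The sign test invoked here is straightforward to verify numerically. The sketch below evaluates the expression for ∆TC_A derived above at one hypothetical parameter set; θ is treated as a free input, and none of the numbers are calibrated to anything in the paper.

```python
# Numerical illustration of the fleet-cost change Delta TC_A derived above.
# theta is the relative flexibility parameter used in the text; all values are hypothetical.

def delta_TC_A(theta, alpha_A, alpha, beta, gamma, N_A, N_o, s, Delta):
    window = (N_A + N_o) / s - Delta
    first = (theta - 1.0) * alpha_A * beta * gamma / (alpha * (beta + gamma)) * window
    return first - alpha_A * N_A / s

value = delta_TC_A(theta=2.0, alpha_A=12.0, alpha=6.0, beta=3.0, gamma=12.0,
                   N_A=1000.0, N_o=4000.0, s=2000.0, Delta=0.5)
print(value, "deviation unprofitable" if value >= 0 else "deviation profitable")
```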
Figure 1 depicts a candidate PSNE on the assumption that C_{A,aa} > 0. (The case C_{A,aa} < 0 is considered below.) Cumulative departures of other users, R_{-A}(t), are shown by the blue curve passing through points y and z. Cumulative total departures, R(t), are shown by the black curve passing through points A, D and B.
Figure 1: Candidate PSNE with C_{A,aa} > 0.
Figure 2: PSNE with two large users.
Figure 3: PSNE in which large user does not queue (Case 1).
Figure 4: PSNE in which large user does not queue (Case 2).
The average cost of the rescheduled vehicles is the unweighted mean of eqs. (A.24) and (A.25). Total costs for the rescheduled vehicles are therefore

TC^d_dev = {β_A·(t* − t_os) + [α_A − β_A·(1 − δ)]·N_Am/(2s)}·N_Am, (A.26)

where superscript d denotes the deviation. Given (A.23) and (A.26), the change in total costs is

TC^d_dev − TC^c_dev = [α_A − (β_A + γ_A)·(1 − δ)]·N_Am²/(2s).
Henderson-Chu model. First, vehicles departing at any given time never interact with vehicles departing at other times. 8 Second, compared to the bottleneck model discussed below, the Henderson-Chu model is less analytically tractable, and for most functional forms it can only be solved numerically. The second paper to adopt Nash equilibrium, by Silva et al. (2017), uses the Vickrey (1969) bottleneck model in which congestion takes the form of queuing behind a bottleneck with a fixed flow capacity. Silva et al. consider two large users controlling identical vehicles with linear trip-timing preferences. In contrast to Verhoef and Silva (2017), Silva et al.
find that under plausible parameter assumptions a PSNE in departure times does not exist. They also prove that a PSNE never exists in which large users queue. These results readily generalize to oligopolistic markets with more than two large users.
Silva et al. also
show that more than one PSNE may exist in which no queuing occurs, and that ex ante identical users can incur substantially different equilibrium costs. These results are disturbing given the fundamental importance of existence and uniqueness of equilibrium for equilibrium models. The unease is heightened by the facts that the bottleneck model is widely used, and that when all users are small a unique PSNE with a deterministic and finite departure rate exists under relatively unrestrictive assumptions. 9
8 In essence, this means that every infinitesimal cohort of vehicles travels independently of other cohorts and is unaffected by the volume of traffic that has departed earlier -contrary to what is observed in practice. The Henderson-Chu model is a special case of the Lighthill-Whitham-Richards hydrodynamic model in which shock waves travel at the same speed as vehicles and therefore never influence other vehicles (see
Vehicles' desired arrival times are distributed according to a density function f(t*_k) over a range [t*_s, t*_e]. The following result is proved in the Appendix:

Theorem 1. Consider large user A with a heterogeneous vehicle fleet that satisfies Assumption 2. If the density of desired arrival times in user A's fleet never exceeds bottleneck capacity (i.e., f(t*_k) ≤ s), then a PSNE with queuing exists.

Suppose desired arrival times are uniformly distributed over the interval [t*_s, t*_e], where ∆ ≡ t*_e − t*_s. It can be shown that introducing heterogeneity in this way does not upset the proof in Silva et al. (2017) that a PSNE without queuing does not exist. However, a PSNE with queuing does exist if the conditions of Theorem 1 are met. Both conditions of Assumption 2 are satisfied. The remaining condition, f(t*_k) ≤ s, is also met if N/(2∆) ≤ s, or ∆ ≥ N/(2s). The candidate PSNE with queuing is shown in Figure 2. The cumulative distribution of desired arrival times for users A and B together is shown by the straight line W, with domain [t*_s, t*_e].
Table 1: Proportional cost saving from internalization as a function of m.
Within limits, this assumption can be relaxed. Suppose that t* is uniformly distributed over the interval [t*_so, t*_eo]. The existence and nature of PSNE with self-internalization are unaffected if two conditions are satisfied. First, t*_eo − t*_so ≤ N_o/s. This condition assures that small vehicles queue in the PSNE. Second, [β/(β+γ)]·t*_so + [γ/(β+γ)]·t*_eo = [β/(β+γ)]·t*_s + [γ/(β+γ)]·t*_e. This condition assures that small vehicles and large vehicles adopt the same queuing pattern in the atomistic PSNE.
Given this assumption, the atomistic PSNE is as shown in Figure 4. Large vehicles depart during the interval (t As , t Ae ) and arrive at rate N A /∆ over the interval [t * s , t * e ]. Each large vehicle arrives on time. Small vehicles arrive at rate s -N A /∆ during this interval, and at rate s during the rest of the interval [t os , t oe ]. The aggregate departure rate and queuing time are the same as if all vehicles were small. 27
Deviation 2. User A cannot gain by rescheduling vehicles outside the departure period (t_As, t_Ae). User A does not queue in the candidate PSNE. Large vehicles therefore do not delay each other. Moreover, the highest costs are borne by the first and last vehicles departing at t_As and t_Ae, respectively. Rescheduling any vehicles either before t_As or after t_Ae would increase user A's fleet cost. Deviation 3. User A cannot gain by rescheduling a single vehicle to another time within the departure period when there is no queue; i.e. to any time t ∈ (t_As, t_os) ∪ (t_oe, t_Ae). (Much of the following text is drawn, verbatim, from Silva et al. (2017).)
du, t ∈ [t_oe, t_Ae].

A vehicle can be rescheduled in four ways: (i) late to late, (ii) late to early, (iii) early to late, and (iv) early to early. Consider each possibility in turn.

(i) Rescheduling late to late: Rescheduling a late vehicle to a later time is never beneficial because the vehicle's trip cost increases, and other vehicles in the fleet do not benefit. Suppose instead that a vehicle is rescheduled earlier from t to t′, where t_oe ≤ t′ < t. User A's fleet costs change by an amount:
-A t L ,t L depends on the timing of t L and t L . If t L ≤ t oe , small users depart at rate α α+γ • s throughout the interval (t L , t L
Every vehicle enjoys a reduction in queuing time cost with no change in schedule delay cost. Hence, in any deviation from the candidate PSNE, fleet costs can be reduced by eliminating the last mass departure. By induction, any mass departure launched after t̂ can be rescheduled without increasing fleet costs.

Result (iii): Next, we show that any deviation entailing multiple mass departures before t̂ is dominated by scheduling a single mass departure at t_os. Suppose that more than one mass departure is scheduled before t̂. Assume the first mass is launched at time t_E with M vehicles, and the second mass is launched at time t_E′ with M′ vehicles. There are three cases to consider depending on the timing of t_E and t_E′.

Case 1: t_E < t_E′ ≤ t_os. Both masses are scheduled before small users start to depart. Since r_{-A}(t) = 0 for t < t_E, by Result (i), there is no queue at t_E. If the queue from the first mass disappears before t_E′, as in the proof of Result (ii), user A can reduce its fleet costs simply by rescheduling vehicles in the first mass to depart at a rate of s during t ∈ (t_E, t_E′). Since user A does not depart in the original deviation until the first queue has dissipated, the rescheduled vehicles in the alternative deviation avoid queuing and arrive at the same time, thereby reducing their queuing delay costs without affecting their schedule delay costs. If the queue from the first mass does not disappear before t_E′, user A can still reduce its fleet costs by rescheduling s·(t_E′ − t_E) vehicles at a rate s during (t_E, t_E′), and letting the remaining M − s·(t_E′ − t_E) vehicles join the head of the second mass at t_E′. The first set of vehicles in the first mass avoid queuing and incur the same schedule delay costs.
C_A(t_oe, t_oe, t*) = γ_A·(t_oe − t*). (A.21)

The last of the rescheduled vehicles has a preferred arrival time of t* + δ·N_Am/s. It incurs a cost

C_A(t_oe + N_Am/s, t_oe + N_Am/s, t* + δ·N_Am/s) = γ_A·[(t_oe − t*) + (1 − δ)·N_Am/s]. (A.22)

The average cost of the rescheduled vehicles is the unweighted mean of eqs. (A.21) and (A.22). Total costs for the N_Am vehicles before they are displaced are therefore

TC^c_dev = [γ_A·(t_oe − t*) + γ_A·(1 − δ)·N_Am/(2s)]·N_Am, (A.23)

where superscript c denotes the candidate PSNE. The first of the rescheduled vehicles departs at t_os and incurs a cost

C_A(t_os, t_os, t*) = β_A·(t* − t_os). (A.24)

The last of the rescheduled vehicles incurs a cost of

C_A(t_os, t_os + N_Am/s, t* + δ·N_Am/s) = β_A·[t* + δ·N_Am/s − (t_os + N_Am/s)] + α_A·N_Am/s
 = β_A·[(t* − t_os) + δ·N_Am/s] + (α_A − β_A)·N_Am/s. (A.25)
Clearly, user A cannot reduce the cost for any single vehicle in its fleet by rescheduling it to another time. It is necessary to check that user A cannot reduce its fleet cost by rescheduling a positive measure of vehicles. The first and last large vehicles to depart incur the same travel cost of C_A(t*_s, ·, ·).

The aggregate departure rate is

r(t) = α·s/(α − β) for t_os < t < t̂, and r(t) = α·s/(α + γ) for t̂ < t < t_oe.

Large vehicles depart at rate

r_A(t) = 0 for t < t_As; [α/(α − β)]·(N_A/∆) for t_As < t < t̂; [α/(α + γ)]·(N_A/∆) for t̂ < t < t_Ae; and 0 for t > t_Ae.

Critical travel times are

t_os = t* − [γ/(β + γ)]·(N_A + N_o)/s,
t_As = t*_s − [βγ/(α(β + γ))]·[(N_A + N_o)/s − ∆],
t̂ = t* − [βγ/(α(β + γ))]·(N_A + N_o)/s,
t* = [β/(β + γ)]·t*_s + [γ/(β + γ)]·t*_e,
t_Ae = t*_e − [βγ/(α(β + γ))]·[(N_A + N_o)/s − ∆],
t_oe = t* + [β/(β + γ)]·(N_A + N_o)/s.
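Because the critical times above are closed-form expressions, they are easy to evaluate; the sketch below does so for an arbitrary, hypothetical parameter set (the numbers are illustrative only and are not calibrated to anything in the paper).

```python
# Evaluate the critical travel times listed above for one hypothetical parameter set.

def critical_times(alpha, beta, gamma, N_A, N_o, s, t_star_s, t_star_e):
    Delta = t_star_e - t_star_s
    N = N_A + N_o
    bg = beta * gamma / (alpha * (beta + gamma))
    t_star = (beta * t_star_s + gamma * t_star_e) / (beta + gamma)
    return {
        "t_os": t_star - gamma / (beta + gamma) * N / s,
        "t_As": t_star_s - bg * (N / s - Delta),
        "t_hat": t_star - bg * N / s,
        "t_star": t_star,
        "t_Ae": t_star_e - bg * (N / s - Delta),
        "t_oe": t_star + beta / (beta + gamma) * N / s,
    }

times = critical_times(alpha=6.0, beta=3.0, gamma=12.0, N_A=2000.0, N_o=6000.0,
                       s=4000.0, t_star_s=8.0, t_star_e=8.5)  # clock hours, hypothetical
for name, value in times.items():
    print(f"{name} = {value:.3f}")
```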
The nonnegativity constraint on queuing time, q (t) ≥ 0, is guaranteed by (7).
The optimality conditions with no queue, which involve multiple cases, are not very instructive.
See Leonard and Long (1992, Chapter 8).
If vehicles are homogeneous, the order of departure does not matter. The assumption that they depart in the same order is useful for accounting purposes.
Figure 2 is a variant of Figure 2 in [START_REF] Silva | On the existence and uniqueness of equilibrium in the bottleneck model with atomic users[END_REF]. The main difference is that desired arrival times have a nondegenerate distribution rather than being the same for all vehicles.
Recall that γA/βA = γ/β.
In the candidate PSNE, large vehicles travel in the tails of the departure period. In the system optimum there is no queuing, and the optimal order of departure depends on the ranking of βA and β. If βA < β, large vehicles still travel in the tails, but if βA > β they would travel in the middle. Hence the PSNE may be inefficient not only because queuing occurs, but also because total schedule delay costs are excessive.
[START_REF] Newell | The morning commute for nonidentical travelers[END_REF] analyzed a more general version of this arrival pattern in the bottleneck model with small users. See also[START_REF] De Palma | Comparison of morning and evening commutes in the vickrey bottleneck model[END_REF].
Recall Condition (30) which requires that in a PSNE with queuing, all large users have lower atomistic rates than the small users. This condition is satisfied in Case 2 because each large vehicle has a lower atomistic rate than small users after its preferred arrival time.
In the candidate PSNE, large vehicles arrive at their individually preferred arrival times because they are less flexible than small vehicles. In the system optimum there is no queuing and, as in Case 1, the optimal order of departure depends on the ranking of βA and β. If βA > β, large vehicles would still be scheduled at their individually preferred arrival times, but if βA < β they would travel in the tails.
In addition, in 2013 Street Hail Liveries, or green taxis, began providing service in Northern Manhattan, the Bronx, Brooklyn, Queens, and Staten Island (Taxi and Limousine Commission, 2016).
The world's top 5 terminal operators. December 4, https://www.porttechnology.org/news/the worlds top 5 terminal operators.
The New York Times (2017). Your uber car creates congestion. should you pay a fee to ride? December 26 (by Winnie Hu), https://www.nytimes.com/2017/12/26/nyregion/ubercar-congestion-pricing-nyc.html?smid tw-nytimes&smtypcur.
This formulation of scheduling preferences is due toVickrey (1969Vickrey ( , 1973) ) and has been used in several studies since; see de[START_REF] De Palma | Dynamic traffic modeling[END_REF]. Defining preferences in terms of utility is appropriate for commuting and certain other types of trips. For trips involving freight transport, the utility function can be interpreted as profit or some other form of payoff or performance metric.
i (t) can be derived by integrating (12) and applying transversality condition (14).
$ This research was partially funded by FONDECYT project No 11160294, the Complex Engineering Systems Institute (CONICYT - PIA - FB0816) and the Social Sciences and Humanities Research Council of Canada (Grant 435-2014-2050).
Tseng, Y., Ubbels, B., and Verhoef, E. (2005). Value of time, schedule delay, and reliabilityestimation results of a stated choice experiment among dutch commuters facing congestion. In Department of Spatial Economics, Free University of Amsterdam. (1973). Pricing, metering, and efficiently using urban transportation facilities. Highway Research Record, 476:36-48. Weisbrod, G. and Fitzroy, S. (2011). Traffic congestion effects on supply chains: Accounting for behavioral elements in planning and economic impact models. In Renko, S., editor, Supply Chain Management-New Perspectives. INTECH Open Access Publisher.
where t h and t w are such that all trips take place within the interval [t h , t w ]. Function u h (•) > 0 denotes the flow of utility at the origin (e.g., home), and function u w (•) > 0 denotes utility at the destination (e.g., work). It is assumed that u h (•) and u w (•) are continuously differentiable almost everywhere with derivatives u h ≤ 0 and u w ≥ 0, and
) for some time t * . Utility from time spent traveling is normalized to zero.
The cost of a trip is the difference between actual utility and utility from an idealized instantaneous trip at time t*.

Various specifications are possible for the flow-of-utility functions. Vickrey (1969) adopted a piecewise constant form:

where u_h > 0, 0 < u_w^E < u_h, and u_w^L > u_h. The cost function corresponding to (A.2) is:
where α = u_h, β = u_h − u_w^E, and γ = u_w^L − u_h. Another specification adopted by [START_REF] Fosgerau | The value of travel time variance[END_REF], and called the "slope" model by [START_REF] Börjesson | Valuations of travel time variability in scheduling versus mean-variance models[END_REF], features linear flow-of-utility functions:

Preferred travel time is t* = (u_h0 − u_w0)/(u_h1 + u_w1), and the cost function is

where α = u_h0 − u_h1·t*. To assure that the model is well-behaved, departure and arrival times are restricted to values such that u_h(t) > 0 and u_w(a) > 0.
A third specification -used in early studies by Vickrey (1973), [START_REF] Fargier | Effects of the choice of departure time on road traffic congestion: theoretical approach[END_REF], and [START_REF] Hendrickson | Characteristics of travel time and dynamic user equilibrium for travel-to-work[END_REF] -is a variant of (A.4) with u h1 = 0:
In (A.5), utility at the origin is constant and schedule delay costs depend on arrival time but not departure time. Cost functions (A.3), (A.4), and (A.5) all satisfy Assumption 1 in the text (with t * in place of k).
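For concreteness, a minimal sketch of the trip cost implied by the piecewise-constant specification (A.2)-(A.3) is given below; it assumes the usual form in which α prices travel time, β early arrival and γ late arrival relative to t*, and all numerical values are hypothetical.

```python
# Minimal sketch of the alpha-beta-gamma trip cost implied by (A.2)-(A.3).

def trip_cost(t_depart, t_arrive, t_star, alpha, beta, gamma):
    travel_time = t_arrive - t_depart
    early = max(0.0, t_star - t_arrive)
    late = max(0.0, t_arrive - t_star)
    return alpha * travel_time + beta * early + gamma * late

# Hypothetical example: depart 8.00, arrive 8.20, preferred arrival 8.50 (decimal hours).
print(trip_cost(t_depart=8.0, t_arrive=8.2, t_star=8.5, alpha=6.0, beta=3.0, gamma=12.0))
```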
Appendix A.2. Atomistic departure rates (Section 2)
The atomistic rate for a user of type k is given by Eq. (3):
Derivatives of specific interest are (with arguments suppressed to economize on notation)
Vehicle k departs at time
and arrives at time
As shown by Silva et al. (2017, Eq. (24c)),
If user A deviates from the candidate PSNE so that vehicle k departs at t rather than t k , vehicle k arrives at
Vehicle k can benefit from deviation only if a k < t * k : a condition which reduces to ∆ < N A /s. Deviation is not profitable if ∆ > N A /s, or equivalently f = N A /∆ < s as per Theorem 1.
Appendix A.5.2. Gain from internalization with m users With m large users the aggregate equilibrium departure rate in the candidate PSNE during the period of queuing is given by eq. ( 23):
When all vehicles have the same desired arrival time, t * , the critical times t s , t q , t, and t e are determined by the following four equations: t e -t s = N/s, (A.9) β (t * -t s ) = γ (t e -t * ) , (A.10) (A.11) |
00176014 | en | [
"sdv.ee.ieo"
] | 2024/03/05 22:32:13 | 2007 | https://hal-bioemco.ccsd.cnrs.fr/bioemco-00176014/file/Hauzy_et_al_.pdf | Celine Hauzy
Florence D Hulot
Audrey Gins
Michel Loreau
INTRA-AND INTERSPECIFIC DENSITY-DEPENDENT DISPERSAL IN AN AQUATIC PREY-PREDATOR SYSTEM
Dispersal intensity is a key process for the persistence of prey-predator metacommunities. Consequently, knowledge of the ecological mechanisms of dispersal is fundamental to understanding the dynamics of these communities. Dispersal is often considered to occur at a constant per capita rate; however, some experiments demonstrated that dispersal may be a function of local species density.
Here we use aquatic experimental microcosms under controlled conditions to explore intra-and interspecific density-dependent dispersal in two protists, a prey Tetrahymena pyriformis and its predator Dileptus sp.
We observed intraspecific density-dependent dispersal for the prey and interspecific density-dependent dispersal for both the prey and the predator. Decreased prey density lead to an increase in predator dispersal, while prey dispersal increased with predator density.
Additional experiments suggest that the prey is able to detect its predator through chemical cues and to modify its dispersal behaviour accordingly.
Density-dependent dispersal suggests that regional processes depend on local community dynamics. We discuss the potential consequences of density-dependent dispersal on metacommunity dynamics and stability.
Knowledge of dispersal mechanisms is crucial to understanding the dynamics of spatially structured populations and metacommunities [START_REF] Leibold | The metacommunity concept : a framework for multi-scale community ecology[END_REF]. Such knowledge may also be useful for explaining the response of communities to fragmentation and climate change. Metacommunity dynamics can be influenced by local processes such as intra-and interspecific interactions (Lotka 1925;[START_REF] Rosenzweig | Graphical representation and stability conditions of predator-prey interactions[END_REF][START_REF] Volterra | Variations and fluctuations in the numbers of individuals in animal species living together[END_REF] and regional processes such as dispersal, that link the dynamics of several local communities [START_REF] Cadotte | Metacommunity influences on community richness at multiple spatial scales: A microcosm experiment[END_REF]. Dispersal is the movement of individuals from one patch (emigration) to another (immigration). Intermediate intensities of dispersal can increase the persistence of prey-predator metacommunities [START_REF] Crowley | Dispersal and the stability of predator-prey interactions[END_REF]Holyoak & Lawler 1996a, b;[START_REF] Huffaker | Experimental studies on predation : dispersion factors and predatorprey oscillations[END_REF]Nachman 1987a;[START_REF] Reeve | Environmental variability, migration, and persistence in host-parasitoid systems[END_REF][START_REF] Zeigler | Persistence and patchiness of predator-prey systems induced by discrete event population exchange mechanisms[END_REF]. Dispersal rate is often considered a constant trait of species, but it may be condition-dependent. In particular, it may depend on the density of species in the local community. Density-dependent dispersal implies a direct interaction between local (population dynamics) and regional (dispersal) processes, which could influence metacommunity dynamics and stability.
Many studies have explored dispersal in the context of a single species. They have shown that dispersal often depends upon a species' own local density [START_REF] Diffendorfer | Testing models of source-sink dynamics and balanced dispersal[END_REF].
We call this effect intraspecific density-dependent dispersal. Dispersal may either increase (positive density-dependent dispersal) or decrease (negative density-dependent dispersal) as population density increases. Positive and negative intraspecific density-dependent dispersal has been observed in mites [START_REF] Bernstein | Prey and predator emigration responses in the acarine system Tetranychus urticae-Phytoseiulus persimilis[END_REF], insects [START_REF] Fonseca | Density-dependent dispersal of black fly neonates is by flow[END_REF] and vertebrates (French & Travis 2001;[START_REF] Galliard | Mother-offspring interactions affect natal dispersal in a lizard[END_REF][START_REF] Matthysen | Density-dependent dispersal in birds and mammals[END_REF]; see for review [START_REF] Matthysen | Density-dependent dispersal in birds and mammals[END_REF], but not in protists (Holyoak & Lawler 1996a, b). In a mite preypredator system, [START_REF] Bernstein | Prey and predator emigration responses in the acarine system Tetranychus urticae-Phytoseiulus persimilis[END_REF] found positive intraspecific density-dependent dispersal
INTRODUCTION
in the prey, but not in the predator. Conversely, French & Travis (2001) observed densityindependent prey dispersal but density-dependent parasitoid dispersal in a beetle-wasp system.
A few studies have experimentally explored how dispersal of one species is affected by the density of another species. We refer to this type of dispersal as interspecific density-dependent dispersal. The presence of a predator or parasitoid has enhanced prey dispersal in some insect communities [START_REF] Holler | Enemy-induced dispersal in a parasitic wasp[END_REF][START_REF] Kratz | Effects of Stoneflies on Local Prey Populations: Mechanisms of Impact Across Prey Density[END_REF][START_REF] Wiskerke | Larval parasitoid uses aggregation pheromone of adult hosts in foraging behavior -a solution to the reliability-detectability problem[END_REF]. By contrast, in aquatic ciliates, dispersal of the prey (Colpidium striatum) was not affected by the presence of the predator (Didinium nasutum) (Holyoak, personal communication; Holyoak & Lawler 1996a, b). However, these studies considered predator presence or absence and not predator density. [START_REF] Bernstein | Prey and predator emigration responses in the acarine system Tetranychus urticae-Phytoseiulus persimilis[END_REF] showed with terrestrial mites that prey emigration had a positive relationship with predator density and that predator emigration had a negative relationship with prey density. Similarly, [START_REF] Kratz | Effects of Stoneflies on Local Prey Populations: Mechanisms of Impact Across Prey Density[END_REF] found that a decrease in prey density enhanced predator emigration in aquatic insect larvae. French & Travis (2001) observed a decrease in parasitoid wasp dispersal as prey dispersal increased but no interspecific density-dependent dispersal for the prey. Thus, overall, dispersal seems to be a function of local densities in several experimental models. However, only two studies ([START_REF] Bernstein | Prey and predator emigration responses in the acarine system Tetranychus urticae-Phytoseiulus persimilis[END_REF]; French & Travis 2001) have considered the full set of intra- and interspecific effects of density on dispersal in prey-predator systems, in spite of their great interest in the perspective of metacommunity theory.
Interspecific density-dependent dispersal in prey may be considered as a predatorinduced defence [START_REF] Lima | Behaviorial decisions made under the risk of predation : a review and prospectus[END_REF]. Other predator-induced responses include morphological changes in vertebrates [START_REF] Kishida | Flexible architecture of inducible morphological plasticity[END_REF] and invertebrates [START_REF] Kuhlmann | The ecology and evolution of inducible defenses[END_REF][START_REF] Tollrian | Inducible defences in cladocera: constraints, costs, and multipredator environments[END_REF]. Predator-induced dispersal suggests that the prey is able to assess the presence of its predator. Several experiments in aquatic systems showed that prey may detect their predator because of organic compounds they release in the medium, for instance Daphnia [START_REF] Lampert | Chemical induction of colony formation in a green alga (Scenedesmus acutus) by grazers (Daphnia)[END_REF][START_REF] Stibor | Predator-induced phenotypic variation in the pattern of growth and reproduction in Daphnia hyalina (Crustacea; Cladocera)[END_REF] and ciliates [START_REF] Karpenko | Feeding Behavior of Unicellular Animals. II. The Role of Prey Mobility in the Feeding Behavior of Protozoa[END_REF][START_REF] Kuhlmann | The ecology and evolution of inducible defenses[END_REF]. By contrast, perception in ciliates may require encounter between individuals: two mechanisms have been reported in ciliates: (1) detection of their predators by direct membrane contact [START_REF] Kuhlmann | Escape response of Euplotes octocarinatus to turbellarian predators[END_REF][START_REF] Kusch | Behavioural and morphological changes in ciliates induced by the predator Amoeba proteus[END_REF]), and (2) detection of local hydrodynamic disturbances created by the motion of cilia [START_REF] Karpenko | Feeding Behavior of Unicellular Animals. II. The Role of Prey Mobility in the Feeding Behavior of Protozoa[END_REF].
Consequently, interspecific density-dependent dispersal in ciliates may occur through waterborn chemical cues or may require direct contact.
Here we explore intra-and interspecific density-dependent dispersal in freshwater protists. These organisms are often patchily distributed in ponds and lakes at the scale of millimetres or centimetres [START_REF] Arlt | Vertical and horizontal microdistribution of the meifauna in the Greifswalder Bodden[END_REF][START_REF] Smirnov | Spatial distribution of gymnamoebae (Rhizopoda, Lobosea) in brackish-water sediments at the scale of centimeters and millimeters[END_REF][START_REF] Taylor | Microspatial heterogeneity in the distribution of ciliates in a small pond[END_REF][START_REF] Wiackowski | Small-scale distribution of psammophilic ciliates[END_REF]). We use a prey-predator couple, in aquatic experimental microcosms under controlled conditions and investigate the effects of population density on dispersal, and address three questions. First, does a species' own density affect its dispersal (intraspecific density-dependent dispersal)? We test this hypothesis for the prey and the predator separately. Second, does prey density affect predator dispersal, and does predator density affect prey dispersal (interspecific density-dependent dispersal)? If prey dispersal is positively related to predator density, our third question investigates the effects of predator organic compounds on prey dispersal. In addition, we explore these effects at low and high initial prey density to assess the interaction between prey and predator densities on prey dispersal.
Study Organisms
Tetrahymena pyriformis Ehrenberg, a bacterivorous protist, and its protist predator Dileptus sp. were obtained from Carolina Biological Supply (Burlington, NC, USA). Prey and predator were cultured in 50 mL microcosms containing medium inoculated with a mixed bacterial suspension. The medium was prepared by sterilizing mineral water with 0•75 g.L -1 of Protozoan Pellet (Carolina Biological Supply). Cultures were maintained at 18•0 ± 0•5 °C under controlled light (14:10 h light:dark cycle). One day after bacterial inoculation, each culture was inoculated with 1 mL of T. pyriformis to give about 240 cells.mL -1 . Three days later, T. pyriformis cultures reached a stationary phase; they were then used to feed Dileptus sp. The same culturing method was used in all experiments.
Under our standard culture conditions, the minimal generation times of T. pyriformis and Dileptus sp. were 8,18 h and c. 24 h, respectively (Hauzy C. & Hulot F.D., unpublished data).
Experimental design
To measure dispersal, we used microcosms made of two 100 mL bottles (55 mm internal diameter) connected by a 10 cm tube (5 mm internal diameter). We defined dispersal as migration from a bottle initially containing organisms (donor patch) to a bottle free of organisms (recipient patch). We conducted six independent experiments according to the following design. The tube of each microcosm was initially clamped and donor patches were assigned randomly.
MATERIALS AND METHODS
Initial densities in all experiments were adjusted by serial dilution in 1-day-old bacterial culture after counting the 3-day-old T. pyriformis and the 1-day-old Dileptus sp. cultures.
Counts were done under a binocular microscope in 10 µL drops for T. pyriformis, and 100 µL drops for Dileptus sp. Several drops were examined until a minimum number of 400 individuals was counted. The donor patch received 50 mL of the experimental treatment culture. The recipient patch received 50 mL of standardized 1-day-old bacterial culture. The experiments were initiated by releasing the clamp off the tube. Organisms dispersed freely during a time that was shorter than the generation time of the species studied. Treatments were replicated five times, except experiment 5, which was replicated four times.
At the end of the experiment, the content of each bottle was fixed with formaldehyde at a final concentration of 0•2%. Because the recipient patches did not contain high population densities, they were concentrated by centrifugation (5 min, 2000 r.p.m., 425 g).
Organisms were counted under a binocular microscope in 10 µL drops for T. pyriformis, and 100 µL drops for Dileptus sp. Several drops were examined in accordance with the following two procedures: (1) in experiments 1-4 and 6 (see below) up to 100 or 400 individuals were counted, respectively, and (2) in experiment 5, individuals were counted in 800 µL. Dispersal was measured by the dispersal rate per capita and per generation, and was calculated as the ratio of the density of the focal species in the recipient patch at the end of the experiment to its initial density in the donor patch. Initial, not final, density in the donor patch was used to avoid the potentially confounding factor of prey depletion in experiments testing prey dispersal in the presence of the predator (see experiments 4, 5 and 6 below).
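To illustrate the dispersal-rate definition used here, the short sketch below computes the per-capita rate from a donor-patch starting density and a recipient-patch final density; the numbers are invented for illustration and are not data from these experiments.

```python
# Sketch of the dispersal-rate calculation described above:
# rate = final density in the recipient patch / initial density in the donor patch.

def dispersal_rate(recipient_final_density, donor_initial_density):
    """Per-capita dispersal rate over one run (densities in cells per mL)."""
    return recipient_final_density / donor_initial_density

# Hypothetical replicate: donor patch started at 1120 cells/mL and 25 cells/mL
# were found in the recipient patch at the end of the dispersal period.
print(dispersal_rate(recipient_final_density=25.0, donor_initial_density=1120.0))
```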
In experiment 1 we tested the effect of T. pyriformis density on its own dispersal in the absence of Dileptus sp. Density treatments corresponded to cultures with 12 700 cells.mL -1 , 1270 cells.mL -1 and 43.1 cells mL -1 . The dispersal time was 4 h.
In experiment 2 we tested the effect of Dileptus sp. density on its own dispersal.
Treatments correspond to three levels of density: 61.3 cells mL -1 , 30.6 cells.mL -1 and 15.3 cells.mL -1 . T. pyriformis density was adjusted to 3.3 cells.mL -1 in all treatments. The dispersal time was 18 h.
Interspecific density-dependent dispersal
Experiment 3 tested the effect of T. pyriformis density on Dileptus sp. dispersal. A Dileptus sp. culture was mixed 50:50 with a T. pyriformis culture of varying density. We obtained three treatments with the same initial Dileptus sp. density (20.8 cells.mL -1 ) but different initial T. pyriformis densities: 5400 cells.mL -1 , 540 cells.mL -1 and 54.0 cells.mL -1 .
The dispersal time was 18 h. Experiment 4 tested the effect of Dileptus sp. density on T. pyriformis dispersal.
Cultures with different Dileptus sp. densities were mixed 50:50 with a T. pyriformis culture.
T. pyriformis initial density was 1120 cells.mL-1 in all treatments, and Dileptus sp. densities were 37.5 cells.mL -1 , 18.8 cells.mL -1 and 9.4 cells.mL -1 . The dispersal time was 5 h.
Mechanism of detection
In order to test whether T. pyriformis is able to detect Dileptus sp. via a chemical signal, we compared prey dispersal rate in the presence of the predator (treatment «with»), in a filtered medium of predator culture (treatment «filtered») and in the absence of predator (treatment «without»). This hypothesis was tested independently for two initial T. pyriformis densities (experiment 5: 550 cells.mL -1 ; experiment 6: 6600 cells.mL -1 ). In the treatment «with», we added the Dileptus sp. culture to the T. pyriformis culture (initial density of Dileptus sp. in experiment 5: 63.5 cells mL -1 ; in experiment 6: 22.1 cells mL -1 ). In the treatment «filtered», we replaced the Dileptus sp. culture of the treatment «with» with the same Dileptus sp. filtered with a 1.2 µm Whatman GF/C filter permeable to chemical compounds and bacteria. In the treatment «without», the T. pyriformis culture was diluted with a 1-day-old bacterial culture. Each treatment was replicated five and four times in experiments 5 and 6, respectively. The dispersal time in both experiments was 8 h.
Statistical Analysis
Data were analysed with linear (LM) or linear mixed effects models in R vs. 2•2•0.
For experiments 1-4, data were considered as continuous variables whereas data of experiments 5 and 6 were considered categorical. When homoscedasticity of variances (Bartlett's test) was satisfied (experiments 2, 3, 5 and 6), we used the LM procedure. When variances were heteroscedastic (experiments 1 and 4), we used the Generalized Least Squares procedure of the linear mixed effects model, which accounts for heteroscedasticity.
The Generalized Least Squares procedure gave the same qualitative results as the LM procedure. Tukey's post hoc tests were used to determine the differences between treatments and groups of treatments.
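In outline, the analysis can be reproduced with standard routines. The sketch below is written in Python rather than the R environment actually used, with invented replicate values; it shows a Bartlett test for homoscedasticity followed by a one-way ANOVA of the density effect, as a rough stand-in for the LM/GLS procedures described above.

```python
# Outline of the analysis: Bartlett's test for equal variances, then a one-way ANOVA
# of dispersal rate against density treatment. Replicate values below are invented.
from scipy import stats

low    = [0.00, 0.00, 0.01, 0.02, 0.01]   # hypothetical dispersal rates, low density
medium = [0.02, 0.03, 0.02, 0.04, 0.03]
high   = [0.10, 0.12, 0.09, 0.11, 0.13]

b_stat, b_p = stats.bartlett(low, medium, high)
f_stat, f_p = stats.f_oneway(low, medium, high)
print(f"Bartlett: p = {b_p:.3f} (small p suggests heteroscedastic variances)")
print(f"One-way ANOVA: F = {f_stat:.2f}, p = {f_p:.4f}")
```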
In experiment 1, no T. pyriformis individuals could be detected in the recipient patch for three of five replicates of the low density treatment. T. pyriformis density had a strong significant effect on its own dispersal rate (Fig. 1a; t = 4.17, d.f. = 13, P = 0.001). The treatment with the highest density (12 700 cells.mL -1 ), which corresponded to the beginning of the stationary phase, was significantly different (P < 0.001) from the lower density treatments (1270 and 43.1 cells.mL -1 ).
Experiment 2 (Fig. 1d) showed no significant effect of Dileptus sp. density on its per capita dispersal rate ( F = 2•45, d.f. = 14, P = 0•141).
Interspecific density-dependent dispersal
In experiment 3 (Fig. 1c), T. pyriformis density had a strong significant effect on Dileptus sp. dispersal rate (F = 7.07, d.f. = 14, P = 0.019). The average Dileptus sp. dispersal rate was significantly higher at the lowest prey density (54.0 cells mL -1 ) than at higher prey densities (5400.0 cells.mL -1 and 540.0 cells.mL -1 ) (P < 0.0001).
In experiment 4 (Fig. 1b), the initial T. pyriformis density (1120 cells.mL -1 ) was chosen such that it does not affect its own dispersal rate (see Results of experiment 1). Dileptus sp. density had a strong significant effect on the dispersal rate of its prey (F = 22.28, d.f. = 14, P < 0.001) and the dispersal rate of T. pyriformis was significantly higher at the two highest Dileptus sp. densities (37.5 cells.mL -1 and 18.8 cells.mL -1 ) than at the lowest density (9.8 cells mL -1 ) (P < 0.0001).
RESULTS
Mechanism of detection
Experiments 5 and 6 were conducted at a predator density that induces prey dispersal (see Results of experiment 4). When the density of T. pyriformis was low (experiment 5), the differences among treatments on T. pyriformis dispersal rate were significant (Fig. 2a; F = 165.4, d.f. = 12, P < 0.001). Tukey's post hoc test indicated that prey dispersal rate in the treatments «filtered» and «with» were significantly higher than in the treatment «without» (P < 0.001). Prey dispersal rate was also significantly higher in the treatment «filtered» than in the treatment «with» (P < 0.005). When initial T. pyriformis density was high (experiment 6), the effects of the treatments «without», «filtered» and «with» on T. pyriformis dispersal rate were marginally significant (Fig. 2b; F = 3.623, d.f. = 9, P = 0.070). Tukey's post hoc test shows that the prey dispersal rate in the treatment «filtered» was marginally higher than in the treatment «without» (P = 0•060). The results of our study suggest that in aquatic prey-predator systems, the dispersal of a species can be a plastic trait that depends on population densities. We observed intraspecific density dependence in dispersal for the prey T. pyriformis. By contrast, there was no significant intraspecific density dependence in dispersal for the predator Dileptus sp.
Interspecific density-dependent dispersal was observed for both the prey and the predator. A decrease in T. pyriformis density led to a significant increase in Dileptus sp. dispersal rate, while T. pyriformis dispersal was higher when Dileptus sp. density was higher.
The two previous studies [START_REF] Bernstein | Prey and predator emigration responses in the acarine system Tetranychus urticae-Phytoseiulus persimilis[END_REF]French & Travis 2001) that have exhaustively explored density-dependent dispersal in a prey-predator system revealed two different patterns (Fig. 3). French & Travis (2001) observed that predator dispersal depended on its own density and on prey density, but prey dispersal was densityindependent. By contrast, [START_REF] Bernstein | Prey and predator emigration responses in the acarine system Tetranychus urticae-Phytoseiulus persimilis[END_REF] showed interspecific density-dependent dispersal for both the prey and the predator, and intraspecific density-dependent dispersal for the prey only. Our results follow the same pattern as [START_REF] Bernstein | Prey and predator emigration responses in the acarine system Tetranychus urticae-Phytoseiulus persimilis[END_REF]. Thus, only two patterns of density-dependent dispersal in prey-predator systems have received experimental support. An increase in the prey dispersal rate when predator density increases, suggests that the prey is able to detect its predator and avoid it. Studies on ciliates' perception have shown that two different detection mechanisms are possible:
recognition through chemical cues released in the medium [START_REF] Karpenko | Feeding Behavior of Unicellular Animals. II. The Role of Prey Mobility in the Feeding Behavior of Protozoa[END_REF][START_REF] Kuhlmann | The ecology and evolution of inducible defenses[END_REF][START_REF] Seravin | Feeding Behavior of Unicellular Animals. I. The Main Role of Chemoreception in the Food Choise of Carnivorous Protozoa[END_REF] and recognition that requires direct contact [START_REF] Karpenko | Feeding Behavior of Unicellular Animals. II. The Role of Prey Mobility in the Feeding Behavior of Protozoa[END_REF][START_REF] Kuhlmann | Escape response of Euplotes octocarinatus to turbellarian predators[END_REF][START_REF] Kusch | Behavioural and morphological changes in ciliates induced by the predator Amoeba proteus[END_REF]). Our results suggest that the prey is able to detect its predator through chemical cues. At a low initial prey density, prey dispersal was significantly higher when prey was in the presence of predators or in the presence of a filtered medium of predator cultures than in the control. At a high initial prey density, prey dispersal was marginally higher when prey was in the presence of a predatorfiltered culture than in the control or in the presence of the predator. The difference in prey dispersal between the predator-filtered culture and the predator culture may be a result of prey depletion by the predator in the latter treatment.
Two hypotheses may explain the discrepancy between the experiments at low and high densities. First, at a low initial prey density (550 cells.mL -1 ), there is no effect of prey density on its own dispersal (see experiment 1, Fig. 1a). The dispersal observed in the presence or simulated presence of the predator is only due to the predator. By contrast, at a high initial prey density (6600 cells.mL -1 ), prey density may have an effect on its own dispersal. Therefore, in the absence of the predator, prey dispersal is high and the effect of a predator (whether real or simulated) on dispersal is reduced in comparison with the prey's intraspecific density effect. This result suggests an upper bound on prey dispersal. Second, the discrepancy between the two experiments might be a consequence of different predator densities in experiments 5 (63.5 cells.mL -1 ) and 6 (22.1 cells.mL -1 ). However, these two densities are both in the range of predator densities that induce prey dispersal (see experiment 4, Fig. 1b). Therefore the latter hypothesis is not supported by our data.
Implications for prey-predator metacommunities
In a seminal paper, [START_REF] Huffaker | Experimental studies on predation : dispersion factors and predatorprey oscillations[END_REF] showed that prey-predator interactions persist longer in a large fragmented landscape than in a small fragmented landscape or isolated patches. His experiment stimulated theoretical studies that have explicitly addressed the role of spatial heterogeneity in the persistence of prey-predator interactions that are prone to extinction when isolated (de Roos 1991; Hassell 1991; Sabelis 1988, 1991). Several experimental studies showed that individuals' migration between local communities allows regional persistence because of the asynchrony of local dynamics (Holyoak & Lawler 1996a; [START_REF] Janssen | Metapopulation dynamics of a persisting predator-prey system in the laboratory: Time series analysis[END_REF][START_REF] Taylor | Metapopulations, Dispersal, and Predator-Prey Dynamics: An Overview[END_REF][START_REF] Van De Klashorst | A demonstration of asynchronous local cycles in an acarine predator-prey system[END_REF]).
Theoretical studies focused on the essential role of dispersal intensity in prey-predator metacommunities [START_REF] Crowley | Dispersal and the stability of predator-prey interactions[END_REF]Nachman 1987a, b;[START_REF] Reeve | Environmental variability, migration, and persistence in host-parasitoid systems[END_REF][START_REF] Zeigler | Persistence and patchiness of predator-prey systems induced by discrete event population exchange mechanisms[END_REF]. These models (reviewed in (Holyoak & Lawler 1996a, b) predict that an intermediate dispersal level of prey and predator enables metacommunity persistence. A low dispersal rate reduces the probability of recolonization of locally extinct patches and cannot prevent local extinctions, whereas a high dispersal rate tends to synchronize local dynamics [START_REF] Brown | Turnover Rates in Insular Biogeography : Effect of Immigration on Extinction[END_REF][START_REF] Levins | Extinction. In Some mathematical questions in biology[END_REF][START_REF] Yodzis | The Indeterminacy of Ecological Interactions as Perceived through Perturbation Experiments[END_REF]. Experiments have confirmed that moderate dispersal extends the persistence of prey-predator systems (Holyoak & Lawler 1996a, b). However, in these theoretical studies the dispersal ability of species from one patch to another is regarded as an unconditional process described by a single parameter.
Our results add to the body of experiments [START_REF] Bernstein | Prey and predator emigration responses in the acarine system Tetranychus urticae-Phytoseiulus persimilis[END_REF]French & Travis 2001; for review see Matthysen 2005) that show that dispersal is density-dependent, and hence that regional processes depend upon local population dynamics. This strong interaction between local and regional processes is likely to affect the dynamics and stability of communities and metacommunities.
Recent models that incorporate density-dependent dispersal behaviour show different community-level effects of dispersal (reviewed in [START_REF] Bowler | Causes consequences of animal dispersal strategies: relating individual behavior to spatial dynamics[END_REF]. Most of these models explored the effects of intraspecific density-dependent dispersal on the stability of single-species metapopulations. Models that incorporate positive densitydependent dispersal behaviour, as we showed here with T. pyriformis, have found a stabilizing effect of dispersal on population dynamics, whereas models that have simpler dispersal rules do not observe stabilizing effects [START_REF] Janosi | On the Evolution of Density Dependent Dispersal in a Spatially Structured Population Model[END_REF][START_REF] Ruxton | Fitness-dependent dispersal in metapopulations and its consequences for persistence and synchrony[END_REF]; but see [START_REF] Ruxton | Density-dependent migration and stability in a system of linked populations[END_REF]. Other models have shown that the form of the relationship between dispersal and density is important for predicting its consequences for stability [START_REF] Amarasekare | Interactions between local dynamics and dispersal: Insights from single species models[END_REF][START_REF] Ruxton | Density-dependent migration and stability in a system of linked populations[END_REF][START_REF] Ylikarjula | Effects of Patch Number and Dispersal Patterns on Population Dynamics and Synchrony[END_REF].
The effects of interspecific density-dependent dispersal on the stability of preypredator metacommunities are still unclear. French & Travis (2001) parameterized a model and found no differences in species persistence and community dynamics between a fixed mean dispersal and interspecific density-dependent dispersal for the predator (parasitoid).
By contrast, taking into account intra-and interspecific density-dependent dispersal improves the ability of prey-predator metacommunity models to predict metacommunity dynamics in experiments [START_REF] Bernstein | A Simulation Model for an Acarine Predator-Prey System (Phytoseiulus persimilis-tetranychus urticae)[END_REF][START_REF] Ellner | Habitat structure and population persistence in an experimental community[END_REF]Nachman 1987a, b). Thus, density-dependent dispersal may be fundamental for our understanding of prey-predator metacommunity dynamics. At present, several questions remain unanswered. Is there an interaction between the effects of intra-and interspecific density-dependent dispersal on prey-predator metacommunities? Do different density-dependent dispersal patterns (Fig. 3)
have different effects at the metacommunity level? What are the implications of the interaction between local and regional processes for conservation and biological control?
Our microcosm experiments demonstrate that the dispersal of prey and predator protists can depend on both intra-and interspecific density. Our results may be fundamental and general because they were obtained with relatively simple organisms (unicellular eukaryotes). We further show that prey can detect predator presence through organic compounds that the predator releases in the medium. Therefore chemical signals among organisms may play an important role in species dispersal, and density-dependent dispersal may be a pivotal process in metacommunity dynamics. Understanding and testing the effects of density-dependent dispersal on metacommunity dynamics, is a challenge for future studies.
Figure 1. Effects of (a) Tetrahymena pyriformis density and (b) Dileptus sp. density on T. pyriformis dispersal rate, and effects of (c) T. pyriformis density and (d) Dileptus sp. density on Dileptus sp. dispersal (mean ± 1 SE). Letters indicate significant differences in dispersal rate among density treatments.

Figure 2. Tetrahymena pyriformis detects Dileptus sp. presence through chemical cues (mean ± 1 SE). (a) Low initial density of T. pyriformis; (b) high initial density of T. pyriformis. Letters indicate significant differences in dispersal rate among treatments.

Figure 3. Density-dependent dispersal patterns in prey-predator systems. Arrows indicate positive (+) or negative (-) significant effect of density on dispersal observed in (a) French & Travis (2001), and in (b) [START_REF] Bernstein | Prey and predator emigration responses in the acarine system Tetranychus urticae-Phytoseiulus persimilis[END_REF] and present experiments.
ACKNOWLEDGEMENTS

We thank M. Huet for her help in maintaining the protist cultures and T. Tully for advice on the statistical analysis. We thank G. Lacroix, S. Leroux and anonymous reviewers for their remarks, which helped improve the manuscript. C.H. thanks B. Boublil for his constant support.
00153592 | en | [
"phys.cond.cm-sce"
] | 2024/03/05 22:32:13 | 2007 | https://hal.science/hal-00153592v2/file/chpcr.pdf | Sylvain Landron
Marie-Bernadette Lepetit
The crucial importance of the t 2g -e g hybridization in transition metal oxides
We studied the influence of the trigonal distortion of the regular octahedron along the (111) direction, found in the CoO2 layers. Under such a distortion the t2g orbitals split into one a1g and two degenerated e ′ g orbitals. We focused on the relative order of these orbitals. Using quantum chemical calculations of embedded clusters at different levels of theory, we analyzed the influence of the different effects not taken into account in the crystalline field theory ; that is metal-ligand hybridization, long-range crystalline field, screening effects and orbital relaxation. We found that none of them are responsible for the relative order of the t2g orbitals. In fact, the trigonal distortion allows a mixing of the t2g and eg orbitals of the metallic atom. This hybridization is at the origin of the a1g-e ′ g relative order and of the incorrect prediction of the crystalline field theory.
I. INTRODUCTION
Since the discovery of super-conductivity in the hydrated Na 0.35 CoO 2 -1.3H 2 O 1 compound and of the very large thermopower in the Na 0.7±δ CoO 2 2 members of the same family, the interest of the community in systems built from CoO 2 layers has exploded. The first step in the understanding of the electronic properties of transition metal oxides, such as the CoO 2 -based compounds, is the analysis of the crystalline field splitting of the d orbitals of the transition metal atom. Indeed, depending on this splitting, the spin state of the atom, the nature of the Fermi level orbitals, and thus the Fermi level properties will differ.
The CoO2 layers are built from edge-sharing CoO6 octahedra (see figure 1). In these layers, the first coordination shell of the metal atom differs from the regular octahedron by a trigonal distortion along the three-fold (111) axis (see figure 6). In all known materials (whether cobalt oxides or other metal oxides such as LiVO2, NaTiO2, NaCrO2, etc.), this distortion is in fact a compression. The local symmetry group of the metal atom is lowered from Oh to D3d. The T2g irreducible representation of the Oh group is thus split into one Eg and one A1g representation. The relative energies of the resulting e′g and a1g orbitals (see figure 6) have been a subject of controversy in the recent literature, as far as the low-spin Co4+ ion is concerned. At this point let us point out the crucial importance of the knowledge of this energetic order for the understanding of the low-energy properties of the CoO2 layers. Indeed, the possible existence of an orbital order, as well as the minimal model pertinent for the description of these systems, depend on this order.
Authors such as Maekawa 3, following the crystalline field theory, support that the a1g orbital is of lower energy than the two degenerate e′g ones, leading to an orbital degeneracy for the Co4+ ion. On the contrary, ab initio calculations, both using periodic density functional methods 4 and local quantum chemical methods for strongly correlated systems 5, yield an a1g orbital of higher energy than the e′g ones, and a non-degenerate Fermi level of the Co4+ ion. Angle Resolved Photoemission Spectroscopy (ARPES) experiments were performed on several CoO2 compounds 6. This technique probes the Fermi surface and clearly shows that the Fermi surface of the CoO2 layers is issued from the a1g orbitals, and not at all from the e′g orbitals (orbitals of Eg symmetry, issued from the former t2g orbitals), supporting the ab initio results.
In the present work, we will try to understand the reasons why the crystalline field model is unable to find the correct energetic order of the t2g orbitals under such trigonal distortions. Several hypotheses can be made to explain the orbital order: the delocalization of the metal 3d orbitals toward the ligands, the fact that the electrostatic potential of the whole crystal differs from the one assumed in the crystalline field model, the correlation effects within the 3d shell, the screening effects, etc. All these hypotheses will be specifically tested on the Co4+ (3d5) ion, which is studied more thoroughly in this work than the other metal fillings. Nevertheless, other metal fillings (3d1 to 3d3, as found in vanadium, titanium, chromium, ... oxides) will also be studied. We will see the crucial importance of the band filling for the t2g orbital order. In this work we will focus only on the Oh to D3d trigonal distortion, the subject of the controversy.
The next section presents the method used in this work, sections three and four report the calculations and analyze them, and the last section is devoted to the conclusion.
II. COMPUTATIONAL METHOD AND DETAILS
The energy of the atomic 3d orbitals is an essentially local quantity, as assumed in the crystalline field model. However, its analysis exhibits some non-local contributions. Indeed, the orbital energies can be seen as resulting from the following terms:
• the electrostatic potential due to the first coordination shell -in the present case, the six oxygen atoms of the octahedron, further referred as nearest neighbor oxygens (NNO) -,
• the electrostatic potential due to the rest of the crystal,
• the kinetic energy that includes the hybridization of the metal orbitals with nearest neighbor ligands,
• the Coulomb and exchange contributions within the 3d shell,
• the radial relaxation of the 3d orbitals,
• and finally the virtual excitations from the other orbitals that are responsible for the screening effects.
All these contributions, except for the electrostatic potential due to the rest of the crystal (nucleus attractions and Coulomb interactions), are essentially local contributions 7 and are known to decrease very rapidly with the distance to the metal atom. In fact, they are mostly restricted to the first coordination shell of the cobalt. On the contrary, the Madelung potential retains the resulting non-local contributions from the nucleus attraction and the Coulomb electron-electron repulsion. It is known to converge very slowly with distance. We thus made calculations at different levels, first including all the above effects, and then excluding them one at a time, in order to end up with the sole effects included in the crystalline field model. The calculations will thus be done on CoO6 or Co fragments. Different embeddings and different levels of calculation will be used. The Co-O distance will be fixed to the value of the super-conducting compound, i.e. R Co-O = 1.855 Å. The angle θ between the Co-O direction and the z axis (see figure 6) will be varied from 0 to 90°. The calculations will be done at the Complete Active Space Self Consistent Field + Difference Dedicated Configurations Interaction 8,9 (CASSCF+DDCI, see subsection II A) level for the most involved case, using the core pseudopotential and basis set of Barandiaran et al. 10. The fragment used will include all the first coordination oxygens in addition to the cobalt atom. The embedding will be designed so as to properly represent the full Madelung potential of the super-conducting material, and the exclusion effects of the rest of the crystal on the computed fragment electrons (see reference 5 for further details). For the simplest case a minimal basis set derived from the preceding one will be used and only the cobalt atom will be included in the computed fragment. The effect of the crystalline field will be described by -2 point charges located at the positions of the first coordination shell oxygens. The calculations will be done at the CASSCF level only. Between these two extreme cases, several intermediate ones will be considered, in order to check the previously enumerated points.
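To make the geometrical setup concrete, the short Python sketch below builds the Cartesian coordinates of the six oxygen ligands of a CoO6 octahedron distorted along the trigonal axis, taking the (111) direction as z and parameterizing the ligands by the Co-O distance R and the angle θ defined above; θ0 = arccos(1/√3) recovers the regular octahedron. The staggered two-triangle layout of the ligands is an assumption consistent with the D3d symmetry described in the text, not the authors' actual input-generation script.

import numpy as np

def coo6_geometry(r_co_o=1.855, theta_deg=61.5):
    """Cartesian positions (angstrom) of the six O ligands of a CoO6
    octahedron distorted along the trigonal (111) axis, taken here as z.
    theta is the angle between the Co-O bond and z; theta0 = arccos(1/sqrt(3))
    (about 54.74 deg) gives back the regular octahedron."""
    theta = np.radians(theta_deg)
    positions = []
    for k in range(3):                       # upper triangle of oxygens
        phi = np.radians(120.0 * k)
        positions.append(r_co_o * np.array([np.sin(theta) * np.cos(phi),
                                            np.sin(theta) * np.sin(phi),
                                            np.cos(theta)]))
    for k in range(3):                       # lower triangle, rotated by 60 deg
        phi = np.radians(60.0 + 120.0 * k)
        positions.append(r_co_o * np.array([np.sin(theta) * np.cos(phi),
                                            np.sin(theta) * np.sin(phi),
                                            -np.cos(theta)]))
    return np.array(positions)

if __name__ == "__main__":
    theta0 = np.degrees(np.arccos(1.0 / np.sqrt(3.0)))   # regular octahedron
    for th in (theta0, 61.5):                             # 61.5 deg: superconducting compound
        xyz = coo6_geometry(theta_deg=th)
        print(f"theta = {th:5.2f} deg, O-O distances from O1:",
              np.round(np.linalg.norm(xyz[0] - xyz[1:], axis=1), 3))

For the regular octahedron this reproduces the expected O-O distances R√2 and 2R, and changing θ then sweeps the trigonal compression or elongation discussed in the text.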
The electrostatic potential due to the cobalt first oxygen neighbors (NNO), as well as the unscreened Coulomb and exchange contributions within the 3d shell, are included in all calculations. The electrostatic potential is treated either through the inclusion of the NNO in the computed fragment or through -2 point charges. The Coulomb and exchange contributions are treated through the CASSCF calculation. The electrostatic contribution of the rest of the crystal is included only in the most involved calculations, using an appropriate embedding of point charges and Total Ions pseudo-Potential 11. The hybridization of the metal 3d orbitals is treated by explicitly including the NNO in the considered fragment (CoO6). The radial relaxation of the 3d orbitals is treated when extended basis sets are used. When a minimal basis set is used, the radial part of the orbitals is frozen as in the high-spin state of the isolated Co4+ ion. Finally, the screening effects are treated only when the calculation is performed at the CASSCF+DDCI level.
A. The CASSCF and DDCI methods
Let us now briefly describe the CASSCF and DDCI ab initio methods. These are configuration interaction (CI) methods, that is, exact diagonalization methods within a selected set of Slater determinants. They were specifically designed to treat strongly correlated systems, for which there is no qualitative single-determinant description. The CASSCF method treats exactly all correlation and exchange effects within a selected set of orbitals (here the 3d shell of the cobalt atom). The DDCI method treats in addition the excitations responsible for the screening effects on the exchange, repulsion, hopping, etc. integrals. These methods are based on the partitioning of the fragment orbitals into three sets:
• the occupied orbitals, which are always doubly occupied in all determinants of the Complete Active Space or CAS (here the cobalt inner electrons and the NNO ones),
• the active orbitals, which can have all possible occupations and spins in the CAS (here the cobalt 3d orbitals),
• the virtual orbitals, which are always empty in the CAS.
The CASCI method is the exact diagonalization within the above-defined Complete Active Space. The CASSCF method in addition optimizes the fragment orbitals in order to minimize the CASCI wave function energy. This is a mean-field method for the occupied orbitals, but all the correlation effects within the active orbitals are taken into account. Finally, the DDCI method uses a diagonalization space that includes the CAS and all single and double excitations on all determinants of the CAS, except the ones that excite two occupied orbitals into two virtual orbitals. Indeed, such excitations can be shown not to contribute, at the second order of perturbation, to the energy differences between states that differ essentially by their CAS wave function. Therefore, they have little importance for the present work. The DDCI method thus accurately treats both the correlation within the CAS and the screening effects.
Compared to the very popular density functional methods, the CAS+DDCI method presents the advantage of treating exactly the correlation effects within the 3d shell. This is an important point for strongly correlated materials such as the present ones. Indeed, even if the DFT methods should be exact provided the correct exchange-correlation functional is known, the present functionals work very well for weakly correlated systems but encounter more difficulties with strong correlation effects. For instance, the LDA approximation finds most of the sodium cobaltite compounds ferromagnetic 4, in contradiction with experimental results. LDA+U functionals try to correct these problems by using an ad hoc on-site repulsion, U, within the strongly correlated shells. This correction yields better results; however, it treats the effect of the repulsion within a mean-field approximation, still lacking a proper treatment of the strong correlation. The drawbacks of the CAS+DDCI method compared to the DFT methods are its cost in terms of CPU time and the necessity to work on formally finite and relatively small systems. In the present case, however, this drawback appears to be an advantage since it decouples the local quantities under consideration from the dispersion problem.
III. RESULTS AND ANALYSIS
Let us first draw the reader's attention to what should be understood as the energy difference between the e′g and a1g orbitals of the Co4+ ion in an effective model. The pertinent parameters of an effective model should be such that one can reproduce, by their means, the exact energies or, in the present case, the ab initio energies of the different Co4+ atomic states. It results that, within a Hubbard-type model, the pertinent effective orbital energies should obey the following set of equations
E(|a1g⟩) = 4 ε(e′g) + ε(a1g) + 2U + 8U′ − 4J_H
E(|e′g⟩) = 3 ε(e′g) + 2 ε(a1g) + 2U + 8U′ − 4J_H
∆E = E(|e′g⟩) − E(|a1g⟩) = ε(a1g) − ε(e′g)
where the schematic picture of the |e′g⟩ and |a1g⟩ states is given in figure 3, ε(e′g) and ε(a1g) are the effective orbital energies of the e′g and a1g atomic orbitals, U is the effective repulsion between two electrons in the same cobalt 3d orbital, U′ the effective repulsion between two electrons in different cobalt 3d orbitals, and J_H the effective atomic Hund exchange integral within the cobalt 3d shell. The |e′g⟩ state is doubly degenerate, the hole being located either on the e′g1 or on the e′g2 orbital.
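As an illustration of why ∆E reduces to the bare orbital-energy difference, the Python sketch below encodes the two configurations and the interaction counting stated above (2U + 8U′ − 4J_H, taken as identical for both states, as written in the equations). The numerical parameter values are hypothetical and only serve to show that U, U′ and J_H cancel in the difference.

def state_energy(eps_eg_prime, eps_a1g, n_eg_prime, n_a1g, U, Up, JH):
    """Energy of a 3d^5 low-spin configuration in a Hubbard-type model,
    using the pair counting of the text: 2 doubly occupied orbitals (2U),
    8 inter-orbital pairs (8U') and 4 Hund exchange pairs (-4JH)."""
    one_electron = n_eg_prime * eps_eg_prime + n_a1g * eps_a1g
    two_electron = 2 * U + 8 * Up - 4 * JH      # identical for both states
    return one_electron + two_electron

# hypothetical parameter values (eV), for illustration only
eps_eg_prime, eps_a1g = 0.0, 0.315
U, Up, JH = 4.0, 3.0, 0.8

E_hole_in_a1g = state_energy(eps_eg_prime, eps_a1g, 4, 1, U, Up, JH)  # |a1g> state
E_hole_in_eg  = state_energy(eps_eg_prime, eps_a1g, 3, 2, U, Up, JH)  # |e'g> state
print(E_hole_in_eg - E_hole_in_a1g)   # equals eps_a1g - eps_eg_prime: U, U', JH cancel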
A. The reference calculation
The reference calculation includes all effects detailed in the preceding section. For the super-conducting compound the effective t2g splitting was reported in reference 5 to be
∆E = ε(a 1g ) -ε(e ′ g ) = 315 meV
This point corresponds to θ ≃ 61.5 • (that is a value of θ larger than the one of the regular octahedron θ 0 ≃ 54.74 • ) where the crystalline field theory predicts a reverse order between the t 2g orbitals.
B. Screening effects
The effect of the screening on the t 2g orbital splitting can be evaluated by doing a simple CASCI calculation using the same fragment, embedding, basis set and orbitals as the preceding calculation. Without the screening effects, one finds a t 2g splitting of
∆E = ε(a 1g ) -ε(e ′ g ) = 428 meV
Obviously the screening effects cannot be taken as responsible for the qualitative energetic order between the a 1g and e ′ g orbitals.
C. Cobalt 3d -oxygen hybridization
The effect of the hybridization of the cobalt 3d orbitals with the neighboring oxygen ligands can be evaluated by taking the oxygen atoms out of the quantum cluster and treating them as simple -2 point charges at the atomic locations. The other parameters of the calculation are kept as in the preceding case. The new orbitals are optimized at the average-CASSCF level between the two |e′g⟩ states and the |a1g⟩ state. It results in a t2g splitting of ∆E = ε(a1g) − ε(e′g) = 40 meV for the super-conducting compound. Again, the hybridization of the cobalt 3d orbitals with the neighboring oxygens cannot be taken as responsible for the inversion of the splitting between the a1g and e′g orbitals.
D. Long-range electrostatic potential
The effect of the long-range electrostatic potential can be evaluated by restricting the embedding to the NNO point charges only, that is to the electrostatic potential considered in the crystalline field method. One finds a t 2g splitting of
∆E = ε(a 1g ) -ε(e ′ g ) = 124 meV
Once again the result is positive and thus the long-range electrostatic potential is not the cause of the crystalline field inversion of the t2g splitting.
E. Orbital radial relaxation
At this point only a few effects beyond the crystalline field theory are still treated in the calculation. One of them is the radial polarization of the 3d orbitals, which allows their adaptation to the different occupations in the specific |a1g⟩ and |e′g⟩ states. This polarization is due to the use of an extended basis set. We thus reduce the basis set to a minimal basis set (only one orbital degree of freedom per occupied or partially occupied (n, l) atomic shell). The minimal basis set was obtained by contraction of the extended one, the radial part of the orbitals being frozen as that of the isolated Co4+ high-spin state. This choice was made in order to keep a basis set as close as possible to the extended one, and because only for the isolated atom are all 3d orbitals equivalent and thus have the same radial part. One obtains in this minimal basis set a t2g splitting of
∆E = ε(a 1g ) -ε(e ′ g ) = 41 meV
At this point we have computed the effective orbital energies under the sole crystalline field conditions; however, the result is still reversed compared to what is usually assumed within this approximation. Indeed, the Co4+ ion was computed in the sole electrostatic field of the NNO, treated as -2 point charges, the calculation was done within a minimal basis set, and at the average-CASSCF level.
F. Further analysis
In order to understand this puzzling result, we plotted the whole curve ∆E(θ) (see figure 4) at this level of calculation and analyzed separately all energetic terms involved in this effective orbital energy difference.
One sees on figure 4 that the ∆E(θ) curve is not monotonic, as expected from the crystalline field theory. Indeed, while for θ = 0 the relative order between the a 1g and e ′ g orbitals is in agreement with the crystalline field predictions, for θ = 90 • the order is reversed. One should also notice that, in addition to the θ 0 value of the regular octahedron, there is another value of θ for which the three t 2g orbitals are degenerated. In the physically realistic region of the trigonal distortion (around the regular octahedron θ 0 value) the relative order between the a 1g and e ′ g orbitals is reversed compared to the crystalline field predictions.
Let us now decompose ∆E(θ) into
• its two-electron part within the 3d shell -∆E 2 (θ) -
• and the rest, referred to as the 3d single-electron part, ∆E1(θ). ∆E1 includes the kinetic energy, the electron-nucleus and electron-charge interaction, and the interaction of the 3d electrons with the inner-shell electrons.
FIG. 4: Orbital splitting between the a1g and e′g orbitals when only the nearest neighbor ligands electrostatic field is included. The dotted red curve corresponds to the single-electron part of the orbital energy difference, ∆E1: the kinetic energy (equation (1)), the electron-charge interaction (equation (2)) and the interaction with the core electrons (equation (3)). The dashed green curve corresponds to the two-electron part, ∆E2: the repulsion and exchange terms within the 3d shell (equation (4)). The solid vertical line points out the regular octahedron θ value and the dashed vertical line the θ value of the super-conducting compound.
One thus has ∆E = ∆E1 + ∆E2 = ε(a1g) − ε(e′g1) = ε(a1g) − ε(e′g2)
with
∆E1 = ⟨a1g| −∇²/2 |a1g⟩ − ⟨e′g| −∇²/2 |e′g⟩    (1)
    + ⟨a1g| Σ_N (−Z_N/R_N) |a1g⟩ − ⟨e′g| Σ_N (−Z_N/R_N) |e′g⟩    (2)
    + Σ_{χ: occ} [ 2⟨a1g χ| 1/r12 |a1g χ⟩ − ⟨a1g χ| 1/r12 |χ a1g⟩ ] − Σ_{χ: occ} [ 2⟨e′g χ| 1/r12 |e′g χ⟩ − ⟨e′g χ| 1/r12 |χ e′g⟩ ]    (3)

and

∆E2 = ⟨a1g a1g| 1/r12 |a1g a1g⟩ − ⟨e′g e′g| 1/r12 |e′g e′g⟩ + 2 [ ⟨a1g e′g| 1/r12 |a1g e′g⟩ − ⟨a1g e′g| 1/r12 |e′g a1g⟩ ] − ⋯    (4)

where the equations are given in atomic units. Z_N refers to the nuclear charge of the cobalt atom and to the −2 point charges located at the NNO positions, R_N being the associated electron-charge distance. The sum over χ runs over all the orbitals of the cobalt inner shells.
Let us now examine the dependence on θ of each of the terms of ∆E 1 and ∆E 2 .
Kinetic energy: the radial part of each of the 3d orbitals being identical due to the minimal basis set restriction, the kinetic part is identical for all 3d orbitals and thus its contribution to ∆E1 (term (1) of ∆E1) vanishes.
Nuclear interaction : obviously this contribution to ∆E 1 (terms labeled 2 of ∆E 1 ) strongly depends on θ through the position of the -2 charges.
Interaction with the inner-shells electrons : this term (terms labeled 3 of ∆E 1 ) depends only on the shape of the t 2g and inner-shells orbitals. However, the minimal basis set does not leave any degree of freedom for the relaxation of the inner-shells orbital whose shapes are thus independent of θ.
Similarly, the radial part of the 3d orbitals is totally frozen.
∆E 2 : finally, the dependence of ∆E 2 can only go through the shape of the a 1g and e ′ g orbitals whose radial part is totally frozen due to the use of a minimal basis set.
If one accepts that the a 1g and e ′ g orbitals are issued from the t 2g orbitals of the regular octahedron, their angular form is totally given by the symmetry (see eq. 5, 6) and both ∆E 2 and the third contribution of ∆E 1 should be independent of θ.
e_g :   e°g1 = (1/√3) d_xy + (√2/√3) d_xz ,   e°g2 = (1/√3) d_x²−y² + (√2/√3) d_yz    (5)
t_2g :  a°1g = d_z² ,   e°′g1 = (√2/√3) d_xy − (1/√3) d_xz ,   e°′g2 = (√2/√3) d_x²−y² − (1/√3) d_yz    (6)
where the x, y and z coordinates are respectively associated with the a, b and c crystallographic axes. Figure 4 displays both the ∆E1 (dotted red curve) and ∆E2 (dashed green curve) contributions to ∆E. One sees immediately that ∆E2 is not at all independent of θ but rather monotonically increasing with θ. It results that the above hypothesis of an exclusive t2g origin for the e′g orbitals is not valid. Indeed, away from the θ = θ0 point, the only orbital perfectly defined by symmetry is the a1g orbital. The e′g and eg orbitals belong to the same irreducible representation (Eg) and can thus mix despite the large t2g-eg energy difference. Let us call this mixing angle α. Figure 5 displays α as a function of θ. One sees that the t2g-eg hybridization angle α is non-zero (except for the regular octahedron) and a monotonically increasing function of θ. Even if very small (±0.6°), this t2g-eg hybridization has an important energetic effect, since it lowers the e′g orbital energy while increasing the eg one. α is very small, but it modulates large energetic factors in ∆E2: the on-site Coulomb repulsions of two electrons in the 3d orbitals. The result is a monotonically increasing variation of ∆E2 as a function of θ. The variation of the ∆E1 term is dominated by its nuclear interaction part and exhibits a monotonically decreasing variation as a function of θ, as expected from the crystalline field theory. The nuclear interaction and the t2g-eg hybridization thus have opposite effects on the a1g-e′g splitting. The failure of the crystalline field theory thus comes from not considering the t2g-eg hybridization.
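The symmetry-adapted combinations of equations (5) and (6), and the t2g-eg mixing they allow, can be checked with a few lines of linear algebra. In the sketch below the reference combinations are written as vectors in the (d_xy, d_yz, d_z², d_xz, d_x²−y²) basis and the two Eg partners of the same row are mixed by a 2×2 rotation of angle α; this cos/sin parameterization of the mixing is an assumed standard convention, since the defining relation is not spelled out in the text.

import numpy as np

# basis ordering: (d_xy, d_yz, d_z2, d_xz, d_x2-y2)
s2, s3 = np.sqrt(2.0), np.sqrt(3.0)
eg1  = np.array([1/s3, 0, 0,  s2/s3, 0])     # e_g combination, eq. (5)
eg2  = np.array([0,  s2/s3, 0, 0, 1/s3])     # e_g combination, eq. (5)
a1g  = np.array([0, 0, 1.0, 0, 0])           # a_1g, eq. (6)
egp1 = np.array([s2/s3, 0, 0, -1/s3, 0])     # e'_g combination, eq. (6)
egp2 = np.array([0, -1/s3, 0, 0, s2/s3])     # e'_g combination, eq. (6)

basis = np.vstack([a1g, egp1, egp2, eg1, eg2])
print(np.round(basis @ basis.T, 10))         # identity matrix -> orthonormal set

def mix(alpha_deg, v_t2g, v_eg):
    """Assumed 2x2 rotation mixing the t2g- and eg-derived partners of the
    same E_g row; the convention is hypothetical, not quoted from the text."""
    a = np.radians(alpha_deg)
    return (np.cos(a) * v_t2g + np.sin(a) * v_eg,
            -np.sin(a) * v_t2g + np.cos(a) * v_eg)

egp1_mixed, eg1_mixed = mix(13.0, egp1, eg1)  # alpha ~ 13 deg quoted for the SC compound
print(np.round(egp1_mixed, 3))                # hybridized e'_g orbital in the d basis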
In the calculations presented in figures 4 and 5, the screening effects on the on-site Coulomb repulsions and exchange integrals were not taken into account. Thus, the absolute value of ∆E 2 as a function of the hybridization α, is very large and α is very small. When the screening effects are properly taken into account, the absolute value of ∆E 2 as a function of α is reduced by a factor about 6, and the t 2g -e g hybridization is much larger than the values presented in figure 5. Indeed, in the superconducting compound, for a realistic calculation including all effects, one finds α ≃ 13 • (θ = 61.5 • ).
At this point we would like to compare the a1g-e′g splitting found in the present calculations with the one found using DFT methods. Indeed, our splitting (315 meV for the superconducting compound) is larger than the DFT evaluations (always below 150 meV). This point can be easily understood using the single-electron and two-electron part analysis presented above. Indeed, while the single-electron part is perfectly treated in DFT calculations, the two-electron part is treated within the exchange-correlation kernel. However, these kernels are well known to fail to properly reproduce the strong correlation effects present in the open 3d shells of transition metals. One thus expects that while the single-electron part of the atomic orbital energies is well treated, the two-electron part is underestimated, resulting in an under-evaluation of the a1g-e′g splitting, as can be clearly seen from figure 4.
IV. OTHER CASES
We considered up to now a Co4+ ion, that is five electrons in the 3d shell, and a fixed metal-ligand distance R M-O. Let us now examine the effect of the distance R M-O and of the band filling on the a1g-e′g splitting. The calculations presented in this section follow the same procedure as in sections III E and III F. For each filling, a typical example in the transition metal oxide family was used to define the type of metallic atom and the metal-oxygen distance. Minimal basis sets issued from the full contraction of the basis set given in reference 10 were used. Figure 6 displays the a1g-e′g energy splitting as a function of the distortion angle θ for different distances; one sees immediately that despite the large variation of the metal-ligand distance, the relative order of the a1g and e′g orbitals remains identical. The main effect of R M-O is thus to renormalize the amplitude of the splitting, lowering it for larger distances and increasing it for smaller ones.
A. The effect of the Co-O distance
B. 3d 1
The simplest filling case corresponds to only one electron in the 3d shell. This is, for instance, the case of the NaTiO 2 compound. The calculations were done using the average Ti-O distance found in NaTiO 2 12 : R Ti-O = 2.0749 Å.
In this case, ∆E 2 = 0 and ∆E(θ) = ∆E 1 (θ) behaves as pictured in figure 4. The a 1g orbital is of lower energy than the e ′ g for θ > θ 0 and of higher energy for θ < θ 0 . This result is in perfect agreement with the crystalline field theory.
C. 3d 2
A simple example of the 3d2 filling in transition metal oxides is the LiVO2 compound. Indeed, the vanadium atom is in the V3+ ionization state. We thus used a metal-oxygen distance of R V-O = 1.9787 Å 13. Figure 7 displays the a1g-e′g splitting as well as its decomposition into the single-electron and two-electron parts.
FIG. 7: Orbital splitting between the a1g and e′g orbitals for a 3d2 transition metal. Only the nearest neighbor ligands electrostatic field is included in the calculation. The dotted red curve corresponds to the single-electron part of the orbital energy difference, ∆E1: the kinetic energy (equation (1)), the electron-charge interaction (equation (2)) and the interaction with the core electrons (equation (3)). The dashed green curve corresponds to the two-electron part, ∆E2: the repulsion and exchange terms within the 3d shell (equation (4)).
As in the 3d5 case (figure 4), the single-electron and two-electron parts behave in a monotonic way as a function of θ, and in an opposite manner. In the present case, however, the two-electron part always dominates over the one-electron part and the a1g-e′g orbital splitting is always reversed compared to the crystalline field predictions. As for the 3d5 system, there is a slight e′g-eg hybridization that is responsible for the t2g orbital order.
D. 3d 3
Examples of 3d3 transition metal oxides are found easily in the chromium compounds. Let us take for instance the NaCrO2 system 14. The metal-oxygen distance is thus R Cr-O ≃ 1.901 Å. Figure 8 displays the a1g-e′g orbital splitting as well as its decomposition into single- and two-electron parts.
FIG. 8: Orbital splitting between the a1g and e′g orbitals for a 3d3 transition metal. Only the nearest neighbor ligands electrostatic field is included in the calculation. The dotted red curve corresponds to the single-electron part of the orbital energy difference, ∆E1: the kinetic energy (equation (1)), the electron-charge interaction (equation (2)) and the interaction with the core electrons (equation (3)). The dashed green curve corresponds to the two-electron part, ∆E2: the repulsion and exchange terms within the 3d shell (equation (4)).
As usual the single-electron part
and the two-electron part are monotonic as a function of θ but with slopes of opposite signs. This case is quite similar to the 3d5 case since neither the single- nor the two-electron part dominates the t2g orbital splitting over the whole range. Indeed, for small values of θ, the crystalline field effect dominates and the a1g orbital is above the e′g ones while, for large values of θ, the two-electron part dominates and the a1g orbital is again above the e′g ones. In a small intermediate region the order is reversed. In the realistic range of θ (θ ≃ θ0) there is a strong competition between the two effects (quasi-degeneracy of the a1g and e′g orbitals) and no simple theoretical prediction can be made. The crystalline field theory is not predictive, but the present calculations cannot be considered as predictive either, since all the neglected effects may reverse the a1g-e′g order.
V. DISCUSSION AND CONCLUSION
In the present work we studied the validity of the crystalline field theory under a trigonal distortion of the regular octahedron. Under such a distortion, the T2g irreducible representation (irrep) of the Oh group splits into A1g and Eg irreps (T2g → A1g ⊕ Eg), while the Eg irrep remains untouched (Eg → Eg). The hybridization between the t2g and eg orbitals thus becomes symmetry allowed, even if hindered by energetic factors. This hybridization is not taken into account in the crystalline field theory. It is, however, of crucial importance for the relative order of the former t2g orbitals and is the reason for the failure of the crystalline field theory to be predictive. Indeed, due to the t2g-eg orbital hybridization, the two-electron part of the e′g orbital energy becomes dependent on the amplitude of the distortion, with an effect opposite to that of the single-electron part. The relative order of the t2g orbitals thus depends on the competition between these two effects and, as a consequence, on the band filling.
In this work we studied the O h to D 3d distortion, however one can expect similar effects to take place for other distortions of the regular octahedron. The condition for these effects to take place is that the T 2g irreducible representation splits into a one-dimensional irrep (A) and the same two-dimensional irrep (E) as the one the e g orbitals are transformed to
T2g → A ⊕ E        Eg → E
Indeed, under such a distortion, t2g-eg hybridization phenomena are allowed. The distortion should thus transform Oh into sub-groups that keep the C3 (111) symmetry axis: C3, C3v, D3, S6 and D3d. Examples of such deformations are the elongation of the metal-ligand distance for one of the sets of three symmetry-related ligands, or the rotation of such a set of three ligands around the (111) symmetry axis. For instance, one can expect that t2g-eg hybridization will also take place in trigonal prismatic coordination.
However, in real systems like the sodium cobaltites, these distortions do not usually appear alone but rather coupled. For instance, in the squeezing of the metal layer between the two oxygen layers observed as a function of the sodium content in NaxCoO2, the Co-O bond length and the three-fold trigonal distortion are coupled. Since this combined distortion belongs to the above-cited class, the t2g-eg hybridization will take place and the relative orbital order between the a1g and e′g orbitals will be qualitatively the same as in figure 4. The bond-length modification at equal distortion angle θ will only change the quantitative value of the orbital splitting, not its sign: a bond elongation reduces the splitting, a bond compression increases it. One can thus expect in sodium cobaltites that the a1g-e′g orbital energy splitting will decrease with increasing sodium content. The reader should, however, bear in mind that the effects of this splitting reduction remain relatively small compared to the band width, as clearly seen in reference 17. In fact, one can expect a larger effect to be the modification of the band dispersion due not only to the bond-length modification, but also to the t2g-eg hybridization.
FIG. 1: Schematic representation of the CoO2 layers.
FIG. 2: Schematic representation of the cobalt 3d splitting. θ represents the angle between the z axis (the 3-fold (111) axis of the CoO6 octahedron) and the Co-O direction. θ0 = arccos(1/√3) ≃ 54.74° is the θ angle of the regular octahedron.
FIG. 3: Schematic representation of the Co4+ states of interest. Note that |e′g⟩ is doubly degenerate, the hole being located either on the e′g1 or on the e′g2 orbital.
FIG. 5: t2g-eg hybridization angle under the trigonal distortion.
FIG. 6: The a1g-e′g energy splitting as a function of the distortion angle θ for different Co-O distances; the range from 1.8 Å to 1.95 Å covers all physically observed distances in CoO2 layers.
Acknowledgments
The authors thank Jean-Pierre Doumerc and Michel Pouchard for helpful discussions and Daniel Maynau for providing us with the CASDI suite of programs. These calculations were done using the CNRS IDRIS computational facilities under project no. 1842. |
01760174 | en | [
"shs.sport.ps"
] | 2024/03/05 22:32:13 | 2006 | https://insep.hal.science//hal-01760174/file/143-%20Hausswirth-Supplementation-ScienceSports2006-21-1-8-12.pdf | C Hausswirth
C Caillaud
R Lepers
J Brisswalter
email: brisswalter@univ-tln.fr
Influence d'une supplémentation en vitamines sur le rendement de la locomotion après une épreuve d'ultratrail Influence of a vitamin supplementation on locomotion gross efficiency after an ultra-trail race
Keywords: Gross efficiency, Long-duration exercise, Vitamins, Muscle damage
Aims. - The aim of this work was to assess the magnitude of the change in locomotion gross efficiency following an ultra-trail race. The second objective was to study the effect on efficiency of a pre-exercise vitamin supplementation strategy, with doses and composition corresponding to the recommended dietary allowances (RDA).
Subjects and methods. - Twenty-two well-trained endurance subjects performed four efficiency tests, before and 24, 48 and 72 hours after an ultra-trail race (3000 m of ascent followed by 3000 m of descent), as well as four maximal voluntary force tests at the same time points. The subjects were divided, using a double-blind procedure, into two experimental groups (with or without a nutritional supplement of vitamins and micronutrients, Isoxan Endurance®).
Results. - In both groups, a decrease in locomotion efficiency was observed 24 and 48 hours after the race (pre-test vs 24 hours after: 20.02 ± 0.2 vs 19.4 ± 0.1%, respectively, p < 0.05), together with a decrease in maximal voluntary force immediately after the race. The decrease in efficiency 24 hours after the race was significantly smaller in the supplemented group.
Conclusion. - The results of this study confirm the decrease in efficiency following long-duration exercise classically reported in the literature. In our study, the vitamin and micronutrient supplementation was associated with a smaller post-exercise decrease in efficiency and in maximal voluntary force, suggesting a possible effect of this supplementation on muscle function. Future work should test the effect of this type of supplementation on the attenuation of muscle impairment, notably after eccentric exercise.
Introduction
The last decade has seen the emergence and development, among athletes of various training levels, of very long duration endurance events (over five hours) run over varied terrain and elevation profiles ("ultra-trails"). As for any long-duration activity, the athlete's ability to spend the least energy for a given power output (efficiency) is a determinant of sporting performance [START_REF] Prampero | Energetics of muscular exercise[END_REF][START_REF] Hausswirth | Le coût énergétique de la course de durée prolongée : étude des paramètres d'influence[END_REF]. The change in locomotion efficiency with exercise duration has been well described in the literature. For exercise lasting more than one hour, with the onset of central and peripheral fatigue, a decrease in efficiency is systematically reported [e.g., 8]. Several factors are cited as responsible for this impairment, such as changes in the mobilisation of energy substrates, thermal stress and the regulation of body electrolytes, impairment of muscle function related to the workload (notably eccentric work), or changes in the locomotor pattern.
The large energy expenditure (over 3000 kcal/day) during this type of event requires the athlete to combine an exogenous energy intake strategy with control of the macro- and micronutrient composition of this intake [START_REF] Bigard | Nutrition du sportif[END_REF]. Moreover, in trail events, changes in gradient and terrain increase the share of eccentric contractions and the risk of muscle micro-lesions. It is now well established that, in endurance events, the increase in oxygen consumption and muscle damage leads to oxidative stress that is harmful to the body, particularly in poorly trained subjects. Recent work has examined the influence of certain vitamins on this oxidative stress; the results suggest a positive influence of several vitamins (E and C) on antioxidant capacity and a possible effect of this supplementation on muscle impairment during eccentric work [START_REF] Maxwell | Changes in plasma antioxydant status during eccentric exercise and the effect of vitamin supplementation[END_REF].
In this context, the first objective of this work was to quantify the change in locomotion efficiency during this particular type of ultra-trail event. The second objective was to study a possible beneficial effect on this change of a pre-exercise vitamin supplementation strategy, with doses and composition corresponding to the recommended dietary allowances for the athletic population [START_REF] Martin | Apports nutritionnels conseillés pour la population française[END_REF].
Methods
Subjects
Twenty-two well-trained endurance subjects took part in this study (age: 40 ± 1.9 years, height: 177 ± 1.3 cm, body mass: 70.4 ± 1 kg). During the two months preceding the tests, their weekly training volume averaged 76 km. All subjects were accustomed to laboratory tests on a cycle ergometer. They gave written informed consent after being informed in detail of the experimental procedures, and the study was approved by the ethics committee for the protection of individuals (Saint-Germain-en-Laye, France).
Experimental protocol 2.2.1. Maximal incremental test
The first test performed by all subjects was a maximal incremental test on a cycle ergometer for the determination of maximal oxygen uptake (VO2max), carried out one month before the ultra-trail race. After a six-minute warm-up at 100 W, the mechanical intensity was increased by 30 W every minute until the subject could no longer maintain the required power. The criteria for attainment of VO2max were: a plateau in VO2 despite the increase in power, a heart rate (HR) above 90% of the theoretical maximal HR, and a respiratory exchange ratio (RER) above 1.15 [START_REF] Howley | Criteria for maximal oxygen uptake: review and commentary[END_REF]. From the values of minute ventilation (VE), oxygen uptake (VO2) and carbon dioxide production (VCO2), the ventilatory threshold (VT) was determined according to the method described by [START_REF] Wasserman | Anaerobic threshold and respiratory gas exchange during exercise[END_REF]. During this first session the subjects were also familiarised with a test assessing the maximal isometric voluntary force of the lower limbs (MVIF). For this test the knee flexion angle was set at 100 degrees. Each maximal contraction was held for two to three seconds.
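As an illustration of how a ventilatory threshold can be extracted from incremental-test data, the Python sketch below fits two straight lines to VCO2 versus VO2 and picks the breakpoint minimising the residual error. This is only a simplified reading of the V-slope approach of Wasserman et al.; the exact procedure and software used by the authors are not described here, and the data in the example are synthetic.

import numpy as np

def v_slope_threshold(vo2, vco2, min_pts=3):
    """Crude V-slope sketch: fit two lines to VCO2 vs VO2 and return the VO2
    value at the breakpoint that minimises the total squared residual."""
    vo2, vco2 = np.asarray(vo2, float), np.asarray(vco2, float)
    order = np.argsort(vo2)
    vo2, vco2 = vo2[order], vco2[order]
    best_sse, best_vo2 = np.inf, None
    for i in range(min_pts, len(vo2) - min_pts):
        sse = 0.0
        for x, y in ((vo2[:i], vco2[:i]), (vo2[i:], vco2[i:])):
            coef = np.polyfit(x, y, 1)                  # straight-line fit
            sse += np.sum((np.polyval(coef, x) - y) ** 2)
        if sse < best_sse:
            best_sse, best_vo2 = sse, vo2[i]
    return best_vo2

# synthetic incremental-test data, purely illustrative (threshold near 3.0 L/min)
vo2  = np.linspace(1.0, 4.5, 25)
vco2 = np.where(vo2 < 3.0, 0.9 * vo2, 0.9 * 3.0 + 1.4 * (vo2 - 3.0))
print(round(v_slope_threshold(vo2, vco2), 2))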
Supplementation protocol
After the first test, the subjects were divided into two groups of identical aerobic fitness, and the vitamin and micronutrient supplementation (Isoxan Endurance®, NHS, Rungis, France) was randomised according to a double-blind procedure, with a supplemented group (Iso) and a placebo group (Pla). The treatment began 21 days before the event and ended two days after the race. The composition and doses of Isoxan Endurance® complied with the recommended dietary allowances for athletes.
Submaximal tests for the assessment of locomotion efficiency
Locomotion efficiency was assessed during a six-minute cycling exercise on the ergometer at an intensity of 100 W (below the ventilatory threshold for all subjects), followed by ten minutes at the intensity corresponding to the ventilatory threshold. These tests were carried out during four experimental sessions: before (pre-exercise) and 24, 48 and 72 hours after the race (post-24, post-48, post-72). Ten minutes after each session, the subjects performed an MVIF test.
Description of the race
The race took place in La Plagne on a course totalling 3000 m of ascent followed by 3000 m of descent. The mean finishing time of the subjects was 6 h 34 min ± 49 min, i.e. a mean speed of 8.4 km/h. Gross mechanical efficiency of cycling (in percent) was calculated as the ratio of the mechanical work performed per minute to the metabolic energy expended per minute [START_REF] Chavarren | Cycling efficiency and pedalling frequency in road cyclists[END_REF].
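For illustration, gross efficiency can be computed as the ratio of mechanical power to metabolic power, the latter estimated from VO2 and an energetic equivalent of oxygen that depends on the respiratory exchange ratio. The linear interpolation of that equivalent in the Python sketch below is an assumption for illustration purposes, not the exact table used by the authors; with plausible numbers it yields efficiencies close to the ~20% reported in Table 1.

def gross_efficiency(power_w, vo2_l_min, rer=0.96):
    """Gross efficiency (%) = mechanical power / metabolic power.
    Metabolic power is estimated from VO2 with an RER-dependent energetic
    equivalent (roughly 19.6 kJ/L O2 at RER 0.70 and 21.1 kJ/L at RER 1.00);
    the linear interpolation is an assumed simplification."""
    kj_per_l_o2 = 19.6 + (rer - 0.70) / 0.30 * (21.1 - 19.6)
    metabolic_power_w = vo2_l_min * kj_per_l_o2 * 1000.0 / 60.0
    return 100.0 * power_w / metabolic_power_w

# illustrative numbers: 100 W ridden at ~1.45 L/min gives roughly 20 % efficiency
print(round(gross_efficiency(100.0, 1.45, rer=0.92), 1))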
Equipment and measurements
Pedalling cadence
All cycling tests were performed on an electromagnetically braked SRM ergometer (Jülich, Welldorf, Germany). The ergometer could be precisely adjusted to the subjects' anthropometric characteristics through horizontal and vertical adjustment of the saddle and handlebars. Its operating mode allowed a constant power output to be delivered regardless of the pedalling cadence naturally adopted by the subjects [START_REF] Jones | The dynamic calibration of bicycle power measuring cranks[END_REF][START_REF] Jones | Experimental human muscle damge: morphological changes in relation with other indices of damage[END_REF]. Pedalling cadence (rev/min) was recorded continuously throughout the tests.
Statistical analysis
For each variable, the mean and standard deviation were calculated. The effect of measurement period and of supplementation group on all measured variables was analysed with a two-factor analysis of variance (MANOVA). For this analysis the values were expressed relative to the pre-exercise value. Differences between experimental conditions were then determined with Newman-Keuls post-hoc tests. The significance level was set at p < 0.05. (Table note: differences from pre-race values are significant at p < 0.05.)
Results
Locomotion efficiency
Efficiency, ventilation and pedalling cadence values are presented in Table 1. In all subjects, locomotion efficiency decreased and ventilation increased 24 and 48 hours after the race. In contrast, no significant difference was observed between the efficiency values measured pre-exercise and 72 hours after the race (Fig. 1). Finally, a significant decrease in pedalling cadence was observed 24 hours after the race. When the two experimental groups are compared, the decrease in efficiency (delta efficiency) was significantly smaller in the supplemented group (Iso) than in the placebo group (Pla) 24 hours after the race (Fig. 2).
Maximal voluntary force
Maximal voluntary force decreased significantly after the race in both groups (Iso and Pla: -36.5 ± 3% and -36.9 ± 2%, respectively). A significant correlation was observed between the decrease in efficiency and the decrease in maximal isometric force (r = 0.978, p < 0.05). The MVIF values of the Iso group returned to resting values more rapidly than those of the Pla group.
Discussion
The first important result of this study is the impairment of locomotion efficiency observed 24 and 48 hours after a long-duration ultra-trail event. These results correspond to those classically reported in the literature over the past decade [START_REF] Brisswalter | In: Énergie et performance[END_REF]. Several factors are put forward to explain this change: a shift in substrate utilisation towards greater lipid metabolism, the effect of thermal stress and the associated dehydration, and an alteration of contractile properties, notably during exercise involving a large amount of eccentric work. In our study, of average duration 6 h 34 min ± 49 min, half of the event was run downhill; we could thus have expected a larger impairment of efficiency compared with shorter events run on flat terrain. Paradoxically, we observed a smaller impairment (about 3%) than reported in the literature (5-7%) (for a review, [START_REF] Hausswirth | Le coût énergétique de la course de durée prolongée : étude des paramètres d'influence[END_REF]). Several methodological factors may explain this difference, in particular the exercise intensity, which here corresponds to about 40% of VO2max, or the timing of the first efficiency measurement, performed 24 hours after the race whereas in other studies it is measured immediately afterwards. Moreover, in this work we observed no change in the respiratory exchange ratio between the pre-exercise test and the post-race tests. We can therefore hypothesise that the decrease in efficiency observed here is mainly related to a residual alteration of the contractile properties of the muscle, which in our study disappeared 72 hours after the event. Performing a physical test immediately after a race of this type remains difficult or impossible under real race conditions; future work should attempt to analyse the effects of the change in contractile properties following eccentric work on efficiency immediately after exercise.
The second interesting result of this work is the significant beneficial effect of vitamin and micronutrient supplementation on the impairment of efficiency and of maximal voluntary force after the event. To our knowledge, no study has examined the effects of vitamin and micronutrient intake on the metabolic aspects of locomotion, most studies having investigated the effects of such intake on muscle function [e.g. 13]. To date, the results remain unclear. Nevertheless, the literature classically reports damage to the muscle fibre during eccentric exercise, associated with a loss of force [e.g. 7]. The decrease in force can reach 50% and the return to normal values can take several days after exercise [START_REF] Mackey | Skeletal muscle collagen contents in humans following high force eccentric contractions[END_REF]. Several factors seem to be involved in this muscle impairment during prolonged exercise, notably the production of free radicals (oxidative stress) related, on the one hand, to the high oxygen consumption and, on the other hand, to the muscle micro-lesions induced by exercise, especially eccentric exercise [for a review, 1]. In this context, a beneficial action of vitamin intake (notably C and E) on this oxidative stress has been proposed. Although the experimental results attempting to validate these hypotheses remain contradictory, and despite the real-race conditions of our study, which limited us to a descriptive approach, we can hypothesise that in the subjects of the supplemented group the smaller impairment of muscle function also made it possible to minimise the decrease in locomotion efficiency.
Conclusion
The results of this study confirm and refine the findings reported in the literature concerning the decrease in efficiency following long-duration exercise. An interesting result of this work is the significant relationship between the decrease in maximal voluntary force observed after the event and the decrease in locomotion efficiency. Within this descriptive study, we observed an effect of vitamin and micronutrient supplementation on this relationship. Further work on the nature of this effect, notably taking into account a possible effect on oxidative stress, is needed to clarify the interest of vitamin supplementation for physiological adaptation in this type of event.
2.3.1. Measurement of ventilatory and gas exchange parameters
Heart rate was recorded continuously during the race with a heart rate monitor (Polar Vantage, Finland). During the cycling tests, oxygen uptake (VO2), heart rate (HR) and the respiratory parameters (minute ventilation VE, respiratory frequency FR) were recorded continuously with a telemetric breath-by-breath analyser (Cosmed K4b2, Rome, Italy), validated by McLaughlin et al. (2001) [9]. For each parameter, the mean and standard deviation were calculated between the third and tenth minute of exercise.
Fig. 1. Change in locomotion efficiency after the ultra-trail race.
Fig. 2. Comparison of the change in efficiency (delta efficiency) between the two experimental groups (Iso vs Pla).
Acknowledgements
This study was supported by the NHS laboratories (Rungis, France). We also thank Drs P. Le Van, J.M. Vallier and E. Joussellin, as well as C. Bernard, for their help in carrying out this project.
|
01760198 | en | [
"shs.sport.ps"
] | 2024/03/05 22:32:13 | 2001 | https://insep.hal.science//hal-01760198/file/145-%20Effect%20of%20pedalling%20rates.pdf | R Lepers
G Y Millet
N A Maffiuletti
C Hausswirth
J Brisswalter
Effect of pedalling rates on physiological response during endurance cycling
Keywords: Cadence, Oxygen uptake, Triathletes, Fatigue
This study was undertaken to examine the effect of different pedalling cadences upon various physiological responses during endurance cycling exercise. Eight well-trained triathletes cycled three times for 30 min each at an intensity corresponding to 80% of their maximal aerobic power output. The first test was performed at a freely chosen cadence (FCC); two others at FCC-20% and FCC +20%, which corresponded approximately to the range of cadences habitually used by road racing cyclists. The mean (SD) FCC, FCC-20% and FCC + 20% were equal to 86 (4), 69 (3) and 103 (5) rpm respectively. Heart rate (HR), oxygen uptake (VO2), minute ventilation (VE) and respiratory exchange ratio (R) were analysed during three periods: between the 4th and 5th, 14th and 15th, and 29th and 30th min. A significant effect of time (P < 0.01) was found at the three cadences for HR, VO 2 . The V E and R were significantly (P < 0.05) greater at FCC + 20% compared to FCC-20% at the 5th and 15th min but not at the 30th min. Nevertheless, no significant effect of cadence was observed in HR and VO 2 . These results suggest that, during high intensity exercise such as that encountered during a time-trial race, well-trained triathletes can easily adapt to the changes in cadence allowed by the classical gear ratios used in practice.
Introduction
During training or racing, experienced cyclists or triathletes usually select a relative high pedalling cadence, close to 80-90 rpm. The reasons behind the choice of such a cadence are still controversial and are certainly multi-factorial. Several assumptions relating to neuromuscular, biomechanical or physiological parameters have previously been proposed.
The concept of a most economical cadence is generally supported by experiments where cadences have been varied from the lowest to the highest rates and a parabolic oxygen uptake (V0 2 )-cadence relationship has been obtained. Nevertheless in reality, extreme cadences such as 50 or 110 rpm are very rarely used by road cyclists or triathletes. Simple observations have shown for example that on a flat road at 40 km•h-1 cadences ranged from 67 rpm with a 53:11 gear ratio (GR) to 103 rpm with a 53:17 GR. During an up-hill climb at 20 km•h-1 , cadences ranged from 70 rpm with a 39:17 GR to 103 rpm with a 39:25 GR. Thus, the range of cadences adopted by cyclists using these common GR may vary from 70 to 100 rpm, which corresponds to approximately 85 rpm ± 20%.
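The cadences quoted above follow from simple gearing arithmetic: cadence equals road speed divided by the distance covered per crank revolution (gear ratio times wheel circumference). The Python sketch below reproduces figures close to those quoted, assuming a typical 700c wheel circumference of about 2.10 m, a value not taken from the article.

def cadence_rpm(speed_kmh, chainring, sprocket, wheel_circumference_m=2.10):
    """Pedalling cadence implied by road speed and gear ratio, assuming no
    freewheeling; the wheel circumference is an assumed typical value."""
    metres_per_min = speed_kmh * 1000.0 / 60.0
    metres_per_crank_rev = (chainring / sprocket) * wheel_circumference_m
    return metres_per_min / metres_per_crank_rev

print(round(cadence_rpm(40, 53, 11)))   # ~66 rpm on the flat with a 53:11 gear
print(round(cadence_rpm(40, 53, 17)))   # ~102 rpm with a 53:17 gear
print(round(cadence_rpm(20, 39, 25)))   # ~102 rpm climbing with a 39:25 gear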
The effect of exercise duration upon cycling cadence has not been well studied. The freely chosen cadence (FCC) has seemed to be relatively stable during high intensity cycling exercise of 30 min duration [START_REF] Brisswalter | Energetically optimal cadence vs. freely-chosen cadence during cycling : effect of exercise duration[END_REF] but the FCC was found to decrease during 2 h of cycling at submaximal intensity [START_REF] Lepers | Evidence of neuromuscular fatigue after prolonged cycling exercise[END_REF]. In a non-fatiguing situation, the FCC is known to be higher than the most economical cadence. However, a shift in the energetically optimal rate during exercise towards the FCC has recently been reported by [START_REF] Brisswalter | Energetically optimal cadence vs. freely-chosen cadence during cycling : effect of exercise duration[END_REF]. These observations raise a question concerning the choice of any particular GR by road racing cyclists, and thus of a pedalling rate, and the physiological consequences of this choice for exercise duration.
Therefore, the purpose of this study was to investigate whether the use of cadences 20% lower or higher than the freely chosen one during high intensity endurance exercise induced different changes in metabolic parameters as fatigue occurred.
Methods
Subjects
Eight well-trained male triathletes volunteered to participate in this study. The physical characteristics of the subjects are given in Table 1. They were informed in detail of the experiment and gave written consent prior to all tests.
Experiment procedures
Each subject completed four tests during a 3 week period. Each session was separated by at least 72 h. All experiments were conducted using an electromagnetically braked cycle ergometer (Type Excalibur, Lode, Groningen, The Netherlands), of which the seat and handlebars are fully adjustable to the subject's dimensions. The ergometer was also equipped with racing pedals and toe clips, allowing the subjects to wear cycling shoes. The first session was used to determine the maximal oxygen uptake (VO2max) of the subjects. The VO2max test began with a warm-up at 100 W lasting 6 min, after which the power output was increased by 25 W every 2 min until the subjects were exhausted. The three other sessions were composed of a 10 min warm-up ride followed by a 30 min submaximal test at 80% of the highest power sustained for 2 min (Pmax). The first of these three sessions was performed at the FCC, which corresponded to the cadence that the subjects spontaneously adopted within the first 5 min. During the last 25 min of this test, subjects were asked to maintain a similar cadence. For the two other tests, subjects rode in a random order at FCC-20% or FCC+20%. The heart rate (HR) was monitored continuously, and gas exchanges were collected at three periods: between the 4th-5th (period 1), the 14th-15th (period 2), and 29th-30th (period 3) min. The HR, VO2, minute ventilation (VE) and respiratory exchange ratio (R) for these three periods were analysed.
Statistical analysis
A two-way ANOVA (time × cadence) was performed using HR, VO2, VE and R as dependent variables. When a significance of P < 0.05 was obtained using the ANOVA, Tukey post-hoc multiple comparisons were made to determine differences either among pedal rates or among periods.
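A possible way to reproduce this kind of analysis on a tidy data table is sketched below in Python with statsmodels: a two-way (cadence × period) repeated-measures ANOVA followed by a pairwise comparison among cadences. The data generated here are synthetic, and the Tukey HSD call is only a stand-in applied at a single period, not the exact post-hoc procedure used on the repeated-measures design in the article.

import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Illustrative layout only: 8 subjects x 3 cadences x 3 periods, synthetic VE values
rng = np.random.default_rng(0)
rows = [{"subject": s, "cadence": c, "period": p,
         "VE": rng.normal(80 + 5 * p, 4)}
        for s in range(8) for c in ("FCC-20%", "FCC", "FCC+20%") for p in (1, 2, 3)]
df = pd.DataFrame(rows)

# Two-way (cadence x period) repeated-measures ANOVA on VE
res = AnovaRM(df, depvar="VE", subject="subject", within=["cadence", "period"]).fit()
print(res.anova_table)

# Pairwise comparison among cadences at one period (Tukey HSD as a stand-in)
p1 = df[df["period"] == 1]
print(pairwise_tukeyhsd(p1["VE"], p1["cadence"]))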
Results
Mean (SD) FCC were 86 (4) rpm, therefore FCC-20% and FCC + 20% corresponded to 69 (3) and 103 (5) rpm, respectively (Table 1).
A significant time effect (P <0.01) was found at the three cadences in HR, VE (Table 2). The rise in VO 2 between the 5th and the 30th min corresponded to 11.0 (7.4)%, 10.3 (6.9)% and 9.9 (3.7)% at FCC-20%, FCC and FCC + 20%, respectively. Between the 5th and the 30th min VE increased by 35.4 (17.4)%, 28.7 (10.9)% and 21.2 (5.2)% at FCC-20%, FCC and FCC +20%, respectively. No significant differences appeared among the three cadences.
A significant effect of cadence was found in VE and R in the first part of the exercise (Table 2). Post-hoc tests showed that VE was significantly greater at FCC+20% compared to FCC-20% at the 5th and 15th min but not at the 30th min. Similarly, R was significantly greater at FCC+20% in comparison to FCC-20% and FCC at the 5th and 15th min but not at the 30th min. In VO2 and HR, no significant effect of cadence was observed.
Discussion
The main finding of this study was the absence of significant differences in physiological parameters among the three different pedalling rates (FCC, FCC-20% and FCC + 20%) despite a significant effect of exercise duration.
Increases in HR, VO2 and VE at the end of 30 min of cycling at 80% of Pmax observed in this study were similar to previous observations made in well-trained triathletes [START_REF] Brisswalter | Energetically optimal cadence vs. freely-chosen cadence during cycling : effect of exercise duration[END_REF]. Several hypotheses have been proposed to explain the so-called drift in VO2 at high power outputs, such as an additional oxygen cost of higher rates of VE, increasing muscle and body temperatures, and/or changes in muscle activity patterns and/or in fibre type recruitment (for a review, see [START_REF] Whipp | The slow component of 0 2 uptake kinetics during heavy exercise[END_REF]). [START_REF] Barstow | Influence of muscle fiber type and pedal frequency on oxygen uptake kinetics of heavy exercise[END_REF] examined the physiological responses of subjects to intense exercise (halfway between the estimated blood lactate threshold and VO2max) lasting 8 min over a range of pedalling frequencies between 45 and 90 rpm. Their results showed that the slow component of VO2 was significantly affected by fibre type distribution but not by pedalling rate. Similarly, [START_REF] Billat | The role of cadence on the slow VO 2 component in cycling and running exercise[END_REF] have shown that for high intensity exercise (95% VO2max), a pedalling rate 10% lower than the freely chosen one induced the same VO2 slow component.
In the present study, in the range of pedalling rates habitually used by road cyclists (from 70 to 100 rpm), no significant effects of cadence were found upon the rises in VO2 during 30 min of endurance exercise. Also, our data are quite different from those of [START_REF] Brisswalter | Energetically optimal cadence vs. freely-chosen cadence during cycling : effect of exercise duration[END_REF] who examined cadences between 50 and 110 rpm. However, such a discrepancy could be explained by the relatively small range of cadences used in the present study. The only difference observed between cadences in this study occurred in VE and R in the first part of the exercise. High pedalling rates induced greater VE at the 5th and 15th min of exercise, which was associated with higher R values (> 1.0). These data suggest a higher contribution of anaerobic metabolism to power production in the first 15 min at FCC + 20%. Moreover, they corroborate those of [START_REF] Zoladz | Human muscle power generating capability during cycling at different pedalling rates[END_REF] who showed that beyond 100 rpm there is a decrease in the external power that can be delivered at a given VO2, with an associated earlier onset of metabolic acidosis. Importantly, this could be disadvantageous for sustaining high intensity exercise. However, in the present study, such a specificity at the highest pedalling rates did not affect the continuation of the exercise, since similar values of VE and R were found at the end of the exercise at all three cadences.
The mean cadence spontaneously adopted by the triathletes in this study [86 (4) rpm] corroborated previous results obtained from trained cyclists or triathletes [START_REF] Brisswalter | Energetically optimal cadence vs. freely-chosen cadence during cycling : effect of exercise duration[END_REF][START_REF] Lepers | Evidence of neuromuscular fatigue after prolonged cycling exercise[END_REF]. Although it has been shown that pedalling rates could affect:
1. The maximal power during a 10 s sprint [START_REF] Zoladz | Human muscle power generating capability during cycling at different pedalling rates[END_REF]
2. The power generating capabilities following high intensity cycling exercise close to 90% VO2max [START_REF] Beelen | Effect of prior exercise at different pedalling frequencies on maximal power in humans[END_REF]
the reasons behind the choice of a particular cadence, and of the corresponding GR, by cyclists during endurance cycling remain unclear.
We recently showed that cycling exercise at different pedalling rates induced changes in the neural and contractile properties of the quadriceps muscle but no significant effects of cadence were found when considering a range of FCC ± 20% (Lepers et al., in press). Moreover, in the present study the FCC did not appear to be more energetically optimal than FCC-20% or FCC + 20%, either at the beginning or at the end of the exercise. [START_REF] Brisswalter | Energetically optimal cadence vs. freely-chosen cadence during cycling : effect of exercise duration[END_REF] have recently shown that the theoretical energetically optimal pedalling rate, corresponding to the lowest point of the parabolic VO2-cadence relationship, shifted progressively over the duration of exercise towards a higher pedalling rate (from 70 to 86 rpm) which was closer to the freely chosen one. Therefore, a minimisation of energy cost seems not to be a relevant parameter for the choice of cadence, at least in a non-fatigued state. Actually, the choice of cadence adopted by cyclists during endurance exercise seems dependent upon factors other than the metabolic cost. Biomechanical and neuromuscular hypotheses have already been proposed to explain the choice of the pedalling rate during short-term high intensity exercise.
However, such interesting hypotheses need to be explored during prolonged exercise.
In conclusion, the results of the present study showed that, for high intensity endurance exercise corresponding to a time trial race for example, the use of cadences in a range corresponding to the classical GR induced similar physiological effects. These data suggest that well-trained triathletes can easily adapt to the changes in cadence used habitually during racing. Further investigations are necessary to target the mechanisms involved in the choice of pedalling rate during prolonged cycling. |
01760210 | en | [
"shs.sport.ps"
] | 2024/03/05 22:32:13 | 2003 | https://insep.hal.science//hal-01760210/file/146-%20Influence%20of%20drafting%20%20during%20swimming.pdf | A Delextrat
V Tricot
C Hausswirth
T Bernard
F Vercruyssen
J Brisswalter
Influence of drafting during swimming on ratings of perceived exertion during a swim-to-cycle transition in well-trained triathletes
Numerous physiological and psychological factors have been suggested to account for successful performance during endurance events. Among psychological parameters, the perception of workload represents a major determinant of performance [START_REF] Russell | On the current status of rated perceived exertion[END_REF], as a positive or negative evaluation of the exercise's constraints could lead to either the continuation or the cessation of the competitive task. The technique most commonly used for the measurement of perceived exertion is the Rating Scale of Perceived Exertion (RPE, from 6 to 20) described by [START_REF] Borg | Perceived exertion as an indicator of somatic stress[END_REF]. This parameter is classically suggested to be a good indicator of physical stress and provides a complementary tool for training prescription, especially during long duration exercises where fatigue is likely to occur (Williams & Eston, 1989). In this context, most studies have been conducted during continuous unimodal events. During the last decade, several multimodal events involving successive locomotion modes, such as triathlon, have attracted increased attention from scientists. Further, the introduction of drafting, i.e., swimming or cycling directly behind a competitor, has considerably modified the race strategy, as triathletes attempt to swim or cycle as fast as possible to stay in the leading group. During these events it has been shown that the succession of different exercises or different drafting strategies leads to specific physiological adaptation when compared with a unimodal sport (Hausswirth, Vallier, Lehenaff, Brisswalter, Smith, Millet & Dreano, 2001). However, to our knowledge, relatively little information is available on the effect of successive exercises on RPE scores. Therefore, the aim of the present study was to investigate whether RPE responses measured during a cycling session are affected by a prior swimming bout performed at a competition pace. Eight well-trained triathletes competing at national level were tested during four experimental sessions. The first test was always a 750-m swim performed alone at a competition pace (A: Swimming Alone). It was used to set the swimming intensity for each subject. During the three other tests, presented in counterbalanced order, subjects undertook a 15-min ride on a bicycle ergometer at 75% of maximal aerobic power (MAP) and at a freely chosen cadence (FCC). This test was preceded by either a 750-m swim performed alone at the pace adopted during Swimming Alone (SAC trial), a 750-m swim in drafting position at the pace adopted during Swimming Alone (SDC trial), or a cycling warm-up (same duration as the swimming tests) at a power representing 30% of maximal aerobic power (MAP, C trial).
The subjects were asked to rate their perceived exertion (RPE 6-20 scale, Borg, 1970) immediately after the cessation of the swimming and cycling bouts. Moreover, blood lactate concentration was assessed immediately after swimming and after 3 and 15 min of cycling, and oxygen uptake (VO2) was collected continuously during cycling. The RPE responses and physiological parameters measured during the cycling trials are presented in Table 1. Analysis showed that prior swimming alone led to significantly higher VO2, lactate, and RPE values during subsequent cycling when compared with cycling alone (p < .05). Further, swimming in drafting position yielded significantly lower blood lactate concentration and RPE values measured during subsequent cycling in comparison with swimming alone (p < .05).
The main result was that RPE during cycling at a constant power output was significantly higher after a swimming bout. The similar evolution of RPE and physiological parameters confirms the hypothesis that RPE is a good indicator of exercise metabolic load (Williams & Eston, 1989), even during multimodal events, and could therefore be a useful tool for triathletes' training, especially to prescribe exercise intensity during combined swimming and cycling exercises. Moreover, the lower RPE responses obtained during the cycling session when preceded by a swimming bout performed in drafting position, in comparison with an isolated swimming bout, indicated that drafting strategies during competition lead, on the one hand, to a significant reduction in the energy cost of locomotion, i.e., lower VO2 and lactate values, and, on the other hand, to a lower perceived workload. Therefore, we suggest that drafting during swimming improves both physiological and psychological factors of triathlon performance. Further studies are still needed to validate this hypothesis during a triathlon competition.
01738307 | en | [
"shs.gestion"
] | 2024/03/05 22:32:13 | 2018 | https://audencia.hal.science/hal-01738307/file/Radu%20Lefebvre%20%26%20al%2C%20IJEBR%2C%202017.pdf Étienne St-Jean
Miruna Radu-Lefebvre
Cynthia Mathieu
Can Less be More? Mentoring Functions, Learning Goal Orientation, and Novice Entrepreneurs Self-Efficacy
Purpose
One of the main goals of entrepreneurial mentoring programs is to strengthen mentees' self-efficacy. However, the conditions under which entrepreneurial self-efficacy develops through mentoring have not yet been fully explored. This article tests the combined effects of the mentee's learning goal orientation and perceived similarity with the mentor, and demonstrates the role of these two variables in mentoring relationships.
Design
The current study is based on a sample of three hundred and sixty (360) novice Canadian entrepreneurs who completed an online questionnaire. We used a cross-sectional research design.
Findings
Findings indicate that the development of entrepreneurial self-efficacy (ESE) is optimal when mentees present low levels of learning goal orientation (LGO) and perceive high similarity between their mentor and themselves. Mentees with high LGO showed a decrease in ESE as more in-depth mentoring was received.
Limitation
This study investigated a formal mentoring program with volunteer (unpaid) mentors. Generalization to informal mentoring relationships needs to be tested.
Practical implication/value
The study shows that, in order to effectively develop self-efficacy in a mentoring situation, learning goal orientation (LGO) should be taken into account. Mentors can be trained to modify mentees'
LGO to increase their impact on this mindset and mentees' entrepreneurial self-efficacy.
Originality/value
This is the first empirical study that demonstrates the effects of mentoring on entrepreneurial self-efficacy and reveals a triple moderating effect of LGO and perceived similarity in mentoring relationships.
Introduction
In recent decades, countries all over the world have implemented support programs contributing to the development of entrepreneurial activity as part of the entrepreneurial ecosystem [START_REF] Spigel | The Relational Organization of Entrepreneurial Ecosystems[END_REF]. Among these initiatives, the mentoring of novice entrepreneurs was emphasized as highly beneficial for enhancing entrepreneurial self-efficacy (ESE) and entrepreneurial skills (e.g. [START_REF] Crompton | The effect of business coaching and mentoring on small-to-medium enterprise performance and growth[END_REF][START_REF] Gravells | Mentoring start-up entrepreneurs in the East Midlands -Troubleshooters and trusted friends[END_REF][START_REF] Radu Lefebvre | How to Do Things with Words": The Discursive Dimension of Experiential Learning in Entrepreneurial Mentoring Dyads[END_REF][START_REF] St-Jean | The Effect of Mentor Intervention Style in Novice Entrepreneur Mentoring Relationships[END_REF]. Extensive empirical research [START_REF] Ozgen | Social sources of information in opportunity recognition: Effects of mentors, industry networks, and professional forums[END_REF][START_REF] Sullivan | Entrepreneurial learning and mentoring[END_REF][START_REF] Ucbasaran | Opportunity identification and pursuit: does an entrepreneur's human capital matter?[END_REF] confirmed the positive impact of mentoring relationships on both mentees' cognitions (improving opportunity identification, clarifying business vision) and emotions (reducing stress and feelings of being isolated, establishing more ambitious goals). However, there is limited knowledge of how mentoring relationships produce these outcomes. We thus know little about the individual and relational variables moderating the impact of mentoring relationships. This article makes a theoretical and practical contribution to our understanding of how, and under what conditions, mentor input (mentor functions), along with a mentee variable (mentee's learning goal orientation; LGO) and a mentoring relationship variable (perceived similarity with the mentor) combine to develop novice entrepreneurs' ESE. This, in turn, will enable entrepreneurial support programs to better match and support mentoring dyads.
Despite their potential effects on mentees' ESE [START_REF] Egan | The Impact of Learning Goal Orientation Similarity on Formal Mentoring Relationship Outcomes[END_REF][START_REF] Mitchell | My mentor, my self: Antecedents and outcomes of perceived similarity in mentoring relationships[END_REF], research dedicated to the study of ESE development while simultaneously taking into account mentor functions, perceived similarity with the mentor, and mentees' LGO is scarce. Studies based on goal orientation theory [START_REF] Dweck | Mindset: The new psychology of success[END_REF][START_REF] Dweck | A social-cognitive approach to motivation and personality[END_REF], social learning theory (Bandura, 1986[START_REF] Bandura | Self-efficacy : the exercise of control[END_REF] and social comparison theory [START_REF] Festinger | A Theory of Social Comparison Processes[END_REF] generated consistent evidence related to the development of ESE through supportive relationships such as mentoring.
Goal orientation theory emphasizes the role of LGO in producing positive effects on mentees' ESE [START_REF] Godshalk | Aiming for career success: The role of learning goal orientation in mentoring relationships[END_REF][START_REF] Kim | Learning goal orientation, formal mentoring, and leadership competence in HRD: A conceptual model[END_REF], whereas social learning theory and social comparison theory focus on the importance of perceived similarity in producing positive ESE outcomes at the mentee level [START_REF] Ensher | Effects of Race, Gender, Perceived Similarity, and Contact on Mentor Relationships[END_REF][START_REF] Mitchell | My mentor, my self: Antecedents and outcomes of perceived similarity in mentoring relationships[END_REF]. The present article builds on these three streams of literature to test the combined effects of perceived similarity with the mentor and mentees' LGO on mentees' ESE. Moreover, we build on previous mentoring research in entrepreneurship that has established that the input mentors bring in mentoring relationships can be effectively operationalized as a set of mentoring functions. These mentoring functions can be related to career development whereas others are more focused on the mentees' attitude change and skills development [START_REF] St-Jean | Mentor functions for novice entrepreneurs[END_REF][START_REF] St-Jean | The Effect of Mentor Intervention Style in Novice Entrepreneur Mentoring Relationships[END_REF].
The aim of the present study is to demonstrate that the impact of mentoring functions on mentees' ESE is moderated by the mentee's LGO and perceived similarity with the mentor. The reason for combining these three streams of literature to test our moderating model is that together they contribute to our understanding of the impact of mentoring relationships on novice entrepreneurs. First, the social comparison perspective within mentoring relationships is considered by testing the moderating effect of perceived similarity with the mentor on mentees' ESE development. Second, goal orientation is taken into account as part of novice entrepreneurs' psychological disposition upon entering a mentoring relationship, and how these relationships can have an impact on their ESE. Third, we highlight the potential combined effect of mentees' LGO and perceived similarity with the mentor in explaining the conditions in which mentees' ESE could develop to allow them to reach their full potential.
The article is structured as follows: first, we present the theoretical background and the main hypotheses. Then we focus on our empirical study and the methods used to test the hypotheses.
Based on a sample of 360 entrepreneurs supported by a mentoring program in Canada, the study shows that mentoring functions foster ESE under certain conditions, which supports the hypotheses concerning the moderating role of mentees' LGO and perceived similarity with the mentor. We demonstrate that high perceived similarity with the mentor increases mentees' ESE and we show that mentoring functions increase mentees' ESE, particularly when mentees have low levels of
LGO. We discuss these findings and highlight their theoretical and practical implications for entrepreneurial research and policy.
Theoretical background
This section first presents the notion of ESE and its relevance in the context of mentoring for entrepreneurs. We then focus on the issue of the mentor's input and show the importance of mentor functions and mentees' perceived similarity with the mentor for mentees' ESE development.
Mentees'
LGO is also introduced and we highlight its direct and moderating effects on mentees' ESE enhancement. Finally, the combined effect of mentees' LGO, mentor functions and perceived similarity with the mentor is examined to explore how these variables may influence the development of mentees' ESE as a result of involvement in mentoring relationships.
ESE refers to the subjective perception of one's ability to successfully accomplish a specific task or behavior [START_REF] Bandura | Self-efficacy : the exercise of control[END_REF]. According to Bandura (1997, p. 77), ESE beliefs are constructed through four main sources of information: 1/ enactive mastery experiences that serve as indicators of capability; 2/ vicarious experiences that alter efficacy beliefs through transmission of competencies and comparison with the attainments of others; 3/ verbal persuasion and allied types of social influence that may persuade the individuals that they possess certain capabilities; and 4/ physiological and affective states from which people partly judge their capability, strength, and vulnerability to dysfunction. Although mentoring may not support ESE development through enactive mastery experiences, indirect evidence obtained from previous studies (ref. [START_REF] Bandura | Self-efficacy : the exercise of control[END_REF] suggests that mentoring can develop ESE through the three other processes (vicarious learning, verbal persuasion, physiological and emotional states). Mentors may act as role models in a vicarious learning relationship which consists in facilitating mentees' self-evaluation and development of entrepreneurial and business skills through social comparison and imitative behavioral strategies [START_REF] Barnir | Mediation and Moderated Mediation in the Relationship Among Role Models, Self-Efficacy, Entrepreneurial Career Intention, and Gender[END_REF][START_REF] Johannisson | University training for entrepreneurship: a Swedish approach[END_REF][START_REF] Scherer | Role Model Performance Effects on Development of Entrepreneurial Career Preference[END_REF]. Indeed, vicarious learning from mentors was identified as the most significant contribution to mentoring relationships, regardless of the context being studied [START_REF] Barrett | Small business learning through mentoring: evaluating a project[END_REF][START_REF] Crocitto | Global mentoring as a means of career development and knowledge creation: A learning-based framework and agenda for future research[END_REF][START_REF] D'abate | Mentoring as a learning tool: enhancing the effectiveness of an undergraduate business mentoring program[END_REF][START_REF] Gordon | Coaching the mentor: Facilitating reflection and change[END_REF][START_REF] Hezlett | Protégés' learning in mentoring relationships: A review of the literature and an exploratory case study[END_REF][START_REF] Lankau | An investigation of personal learning in mentoring relationships: content, antecedents, and consequences[END_REF][START_REF] St-Jean | The role of mentoring in the learning development of the novice entrepreneur[END_REF]. Furthermore, mentors may use verbal persuasion strategies to help mentees explore and sometimes change their attitudes and beliefs [START_REF] Marlow | Analyzing the influence of gender upon high-technology venturing within the context of business incubation[END_REF][START_REF] Radu Lefebvre | How to Do Things with Words": The Discursive Dimension of Experiential Learning in Entrepreneurial Mentoring Dyads[END_REF][START_REF] St-Jean | The Effect of Mentor Intervention Style in Novice Entrepreneur Mentoring Relationships[END_REF]. 
Finally, mentors may influence mentees' emotional states by reducing their levels of stress related to perceived uncertainty and future challenges [START_REF] Kram | Mentoring as an antidote to stress during corporate trauma[END_REF][START_REF] Sosik | Leadership styles, mentoring functions received, and jobrelated stress: a conceptual model and preliminary study[END_REF].
It is, however, important to note that not all mentors are equally invested in mentoring relationships; some may only provide marginal mentoring [START_REF] Ragins | Marginal mentoring: The effects of type of mentor, quality of relationship, and program design on work and career attitudes[END_REF] or worse, harmful mentoring experiences [START_REF] Eby | Protégés' negative mentoring experiences: Construct development and nomological validation[END_REF][START_REF] Eby | The Protege's Perspective Regarding Negative Mentoring Experiences: The Development of a Taxonomy[END_REF][START_REF] Simon | A typology of negative mentoring experiences: A multidimensional scaling study[END_REF].
The quality and depth of mentoring relationships can be assessed by mentor functions [START_REF] Kram | Mentoring at work : Developmental relationships in organizational life Scott Foresman[END_REF] that allow mentees to benefit from the mentoring relationship in various ways, particularly in terms of positive changes regarding their ESE [START_REF] Day | The relationship between career motivation and self-efficacy with protégé career success[END_REF][START_REF] Powers | An Exploratory, Randomized Study of the Impact of Mentoring on the Self-Efficacy and Community-Based Knowledge of Adolescents with Severe Physical Challenges[END_REF][START_REF] Wanberg | Mentoring research: A review and dynamic process model[END_REF]. Mentor functions studied in large organizations, as well as in entrepreneurship, refer to three categories of support a mentee can receive: psychological, career-related, and role modeling [START_REF] Bouquillon | It's only a phase': examining trust, identification and mentoring functions received accross the mentoring phases[END_REF][START_REF] Pellegrini | Construct equivalence across groups: an unexplored issue in mentoring research[END_REF][START_REF] St-Jean | Mentor functions for novice entrepreneurs[END_REF][START_REF] Waters | The role of formal mentoring on business success and self-esteem in participants of a new business start-up program[END_REF]. Mentor functions can act as an indicator of the quality of the mentoring provided or received [START_REF] Hayes | Mentoring and nurse practitioner student self-efficacy[END_REF]. These functions influence the mentoring process, more specifically the development of mentees' ESE; prior research has demonstrated that higher levels of psychological support improve mentees' ESE [START_REF] Kram | Mentoring at work : Developmental relationships in organizational life Scott Foresman[END_REF]. As a result of their focus on providing challenging tasks to the mentee or guiding them throughout the decision-making process, career-related functions also play a significant role in the development of mentees' ESE [START_REF] Kram | Mentoring at work : Developmental relationships in organizational life Scott Foresman[END_REF][START_REF] St-Jean | Mentor functions for novice entrepreneurs[END_REF]. To sum up, there is consistent evidence that mentor functions have a direct impact on mentees' ESE. Our goal is to demonstrate the contribution of two moderating variables that may enhance or diminish the impact of mentoring functions on mentees' ESE development: perceived similarity with the mentor and mentees' LGO, as indicated in Figure 1.
The role of perceived similarity with the mentor in supporting mentees' ESE development
The notion of "perceived similarity" was introduced by [START_REF] Festinger | A Theory of Social Comparison Processes[END_REF], who stressed that when individuals evaluate their own opinions and abilities, there is a tendency to look to external sources of information such as role models. Social comparison theory [START_REF] Festinger | A Theory of Social Comparison Processes[END_REF] complements Bandura's social cognitive learning theory in suggesting that the greater the perceived similarity to the role model, the greater the impact of that role model on the observer's ESE [START_REF] Bandura | Self-efficacy : the exercise of control[END_REF]. Social comparison theory highlights that the observer's identification with the role model is crucial for maintaining the social comparison process. Perceived similarity regarding age, gender, background [START_REF] Wheeler | Self-Schema matching and attitude change: Situational and dispositional determinants of message elaboration[END_REF], values and goals [START_REF] Filstad | How newcomers use role models in organizational socialization[END_REF] reinforces identification to the role model. Individuals tend to compare themselves with people they perceive as similar to themselves, and avoid comparing themselves with people perceived as too different [START_REF] Festinger | A Theory of Social Comparison Processes[END_REF]. Mentoring relationships with low levels of perceived similarity are thus likely to reduce the social comparison process and generate a negative impact on vicarious learning; this decrease in vicarious learning would negatively impact the observer's ESE.
To generate positive outcomes as role models, one condition seems essential: mentors of entrepreneurs must be perceived as similar by their mentees [START_REF] Elam | Gender and entrepreneurship: A multilevel theory and analysis[END_REF][START_REF] Terjesen | The role of developmental relationships in the transition to entrepreneurship: A qualitative study and agenda for future research[END_REF][START_REF] Wilson | An analysis of the role of gender and self-efficacy in developing female entrepreneurial interest and behavior[END_REF]. In three recent meta-analyses in mentoring contexts, [START_REF] Eby | An interdisciplinary meta-analysis of the potential antecedents, correlates, and consequences of protégé perceptions of mentoring[END_REF], [START_REF] Ghosh | Antecedents of mentoring support: a meta-analysis of individual, relational, and structural or organizational factors[END_REF] and [START_REF] Ghosh | Career benefits associated with mentoring for mentors: A metaanalysis[END_REF] demonstrated that perceived similarity with mentors is correlated to positive mentoring outcomes. The process through which perceived similarity influences mentoring outcomes was characterized by [START_REF] Mitchell | My mentor, my self: Antecedents and outcomes of perceived similarity in mentoring relationships[END_REF] as "relational identification" in work relationships (cf. the theory of relational identification; [START_REF] Sluss | Relational identity and identification: Defining ourselves through work relationships[END_REF]. Prior empirical research has shown that entrepreneurs tend to choose role models of the same gender. This tendency is stronger for women entrepreneurs [START_REF] Murrell | The gendered nature of role model status: an empirical study[END_REF], who start a business in what is still perceived as a male dominated social milieu [START_REF] Wilson | Gender, Entrepreneurial Self-Efficacy, and Entrepreneurial Career Intentions: Implications for Entrepreneurship Education[END_REF]. Interestingly, mentoring research has emphasized that perceived similarity is more important than actual similarity [START_REF] Ensher | Effects of Perceived Attitudinal and Demographic Similarity on Protégés' Support and Satisfaction Gained From Their Mentoring Relationships[END_REF]. When identification is effective, mentors share their values and attitudes, and they may model desired entrepreneurial behaviors or attitudes.
Comparing oneself to a mentor is an upward social comparison that can stimulate mentees' motivation to engage in a learning process when perceived similarity with the mentor is high [START_REF] Schunk | Developing children's self-efficacy and skills: The roles of social comparative information and goal setting[END_REF]. On the other hand, upward social comparisons can also reduce mentees' ESE if the mentor's level of proficiency seems unattainable and perceived similarity is low [START_REF] Lockwood | Superstars and me: Predicting the impact of role models on the self[END_REF]. As a consequence, a high level of perceived similarity will facilitate upward social comparison with the mentor and enable mentees to improve their ESE through the mentor function received. These considerations suggest the following hypothesis:
Hypothesis 1: The mentee's perceived similarity with the mentor has a positive moderating effect on the relation between mentor functions and the mentee's ESE.
Mentees' LGO
Learning goal orientation (LGO) (also known as mastery goal-orientation) is a relatively stable psychological disposition that individuals develop through their interpersonal relationships [START_REF] Dweck | Motivational processes affection learning[END_REF]. Individuals with a high LGO tend to perceive their abilities as malleable and subject to change [START_REF] Dupeyrat | Implicit theories of intelligence, goal orientation, cognitive engagement, and achievement: A test of Dweck's model with returning to school adults[END_REF]. These individuals will therefore approach the tasks at hand with self-confidence, and with the intention of developing new skills. They will consequently value hard work and self-improvement and will be constantly looking for new challenges to enhance their skills [START_REF] Dweck | A social-cognitive approach to motivation and personality[END_REF]. By doing so, they engage in new activities, regardless of their difficulty [START_REF] Button | Goal Orientation in Organizational Research: A Conceptual and Empirical Foundation[END_REF]. Conversely, individuals with low levels of LGO tend to see their intelligence and their skills as 'stable' and 'unchangeable', and they tend to have a lower level of ESE than those who perceive their skills as malleable [START_REF] Ames | Classrooms: Goals, structures, and student motivation[END_REF]. Their approach towards, and expectations of, a mentoring relationship will undoubtedly differ from mentees with high levels of LGO.
LGO does not seem to be related to short-term or long-term goal setting [START_REF] Harackiewicz | Short-term and long-term consequences of achievement goals: Predicting interest and performance over time[END_REF]; however, individuals with low LGO and high LGO use different strategies to reach their goals. For instance, given that LGO is related to self-regulated learning, low LGO individuals rely more heavily on external support than individuals with high LGO, who will mobilize external sources of information to learn but will behave more autonomously [START_REF] Wolters | The relation between goal orientation and students' motivational beliefs and self-regulated learning[END_REF]. The notions of 'goal orientation' and 'goal setting' are distinct [START_REF] Phillips | Role of goal orientation, ability, need for achievement, and locus of control in the self-efficacy and goal-setting process[END_REF].
LGO plays a crucial role in understanding how mentees perceive their ability to master a number of skills. From a learning perspective, prior research has shown that mentees enter mentoring relationships either with a desire to grow and improve their current skills [START_REF] Barrett | Small business learning through mentoring: evaluating a project[END_REF][START_REF] Benton | Mentoring women in acquiring small business management skills -Gaining the benefits and avoiding the pitfalls[END_REF] or to receive advice and suggestions on how to improve their entrepreneurial project [START_REF] Gaskill | Qualitative Investigation into Developmental Relationships for Small Business Apparel Retailers: Networks, Mentors and Role Models[END_REF][START_REF] Gibson | Developing the professional self-concept: Role model construals in early, middle, and late career stages[END_REF] without having to change their current skills.
LGO may be related to these mentoring outcomes from the mentees' perspective and thus depend on their motivation to grow/learn or to receive advice/help from their mentors. High LGO mentees could exhibit the first category of motivations whereas low LGO mentees may prefer the second types of motivations.
In a study that investigated children's behavior after a failure in school, [START_REF] Diener | An Analysis of Learned Helplessness: Continuous Changes in Performance, Strategy, and Achievement Cognitions Following Failure[END_REF] found that learning-oriented children make fewer attributions and focus on remedies for failure, while helpless children (i.e., low LGO) focus on the cause of failure. In school, students who adopt a high LGO engage in more self-regulated learning than the others [START_REF] Ames | Classrooms: Goals, structures, and student motivation[END_REF][START_REF] Pintrich | The role of expectancy and self-efficacy beliefs[END_REF]. Furthermore, a high LGO mindset, also called a growth mindset [START_REF] Dweck | Mindset: The new psychology of success[END_REF], is demonstrated to be related to high intrinsic motivation [START_REF] Haimovitz | Dangerous mindsets: How beliefs about intelligence predict motivational change[END_REF], goal achievement [START_REF] Burnette | Mind-sets matter: A meta-analytic review of implicit theories and self-regulation[END_REF] and ESE [START_REF] Ames | Classrooms: Goals, structures, and student motivation[END_REF]. Therefore, we assume that mentees with a high level of LGO will also have a high level of ESE, based on the influence the former has on the latter. These considerations lead us to the following hypothesis: Hypothesis 2: Mentee's LGO is positively related to his/her ESE.
As we mentioned earlier, mentees can enter mentoring relationships harboring different motivations: to learn and to improve their skills or to receive advice and suggestions on how to manage their business. Who would benefit most from mentoring relationships with regard to ESE development? There is evidence that LGO is associated with feedback seeking behaviors [START_REF] Tuckey | The influence of motives and goal orientation on feedback seeking[END_REF][START_REF] Vandewalle | A goal orientation model of feedback-seeking behavior[END_REF][START_REF] Vandewalle | A test of the influence of goal orientation on the feedback-seeking process[END_REF]; entrepreneurs with high LGO should thus be attracted to mentoring, as it procures feedback in a career setting where there are no hierarchical superiors for assessing one's skills and performance.
Additionally, entrepreneurs with high LGO should be stimulated by mentoring relationships and consider their mentors as a potential learning source [START_REF] St-Jean | The role of mentoring in the learning development of the novice entrepreneur[END_REF][START_REF] Sullivan | Entrepreneurial learning and mentoring[END_REF] to develop their intelligence and skills [START_REF] Ames | Achievement goals in the classroom: Students' learning strategies and motivation processes[END_REF]. On the other hand, low LGO entrepreneurs would prefer situations in which they can perform well (performance goal orientation) [START_REF] Dweck | Mindset: The new psychology of success[END_REF]. Given that they perceive their intelligence as fixed in time, when facing a difficult task or receiving a bad performance, they will seek help or try to avoid the task at hand rather than try to learn new skills that could allow them to face a similar challenge in the future.
As previously mentioned, individuals with high LGO tend to exhibit a higher level of ESE.
Despite the fact that mentoring can be a source of learning for them, it is unlikely that they will significantly improve their ESE. As mentioned by [START_REF] Bandura | Self-efficacy : the exercise of control[END_REF], vicarious experience (i.e., observing someone similar to oneself succeeding in a particular task will improve the observer's beliefs that he/she can also master the task) as well as verbal persuasion allow individuals to adjust their ESE to a more realistic level, either upward or downward. Thus, considering the high level of ESE of mentees with high LGO, it is highly probable that, at best, they will maintain their high ESE, or experience a decrease in ESE to a more realistic level.
The picture is quite different for low LGO mentees. They believe their intelligence to be stable and immovable. When facing a difficult task or receiving negative performance feedback, they will either seek help to accomplish the task or try to avoid it in the future [START_REF] Dweck | Mindset: The new psychology of success[END_REF].
Novice entrepreneurs, despite feeling incompetent at performing certain tasks, are often required to complete these tasks because they often do not have the resources to hire qualified individuals to help them. Under these conditions, external support may become the preferred way to overcome this personal limitation as it may help them feel more effective in their management decisions. Given that low LGO entrepreneurs do not believe their intelligence is malleable, they are not likely to work on developing new skills to face challenging situations. Consequently, mentoring can help them feel more confident about their efficacy in managing their business (i.e., ESE). However, the increase of their ESE is dependent on the mentor functions received, and therefore it may only last as long as they stay in the mentoring relationship.
To sum up, mentoring may have less of an effect on high LGO novice entrepreneurs' ESE.
For these entrepreneurs, mentoring may represent a source of learning (along with formal education, entrepreneurs' clubs, media, learning through action, etc.). Mentoring will thus keep their ESE high or slightly readjust it to a more realistic level. On the other hand, low LGO novice entrepreneurs may view mentoring as a significant source of help to overcome their perceived inability to deal with career-related goals and tasks. With the support of a mentor, the latter type of mentee should consequently perceive themselves as more suited to accomplish the tasks related to their entrepreneurial career, and thus experience an improvement of their ESE. These considerations suggest the following hypothesis: Hypothesis 3: Mentee's LGO has a negative moderating effect on the relationship between the mentor functions and the mentee's ESE, such that the relationship would be stronger for low LGO mentees.
As previously mentioned, low LGO mentees do not think that they are able to significantly improve their abilities. Thus, they will seek advice, support and help from mentors to compensate for their perceived weaknesses. Given that mentoring offers an opportunity to compare with others and because low LGO mentees may not believe they can change their abilities, perceived similarity with the mentor may act as a moderator of the relationship between mentor functions and mentees' ESE. Indeed, mentees would probably be more willing to accept advice and support from a mentor if the former is perceived as highly similar to the latter, causing in turn the mentor functions to improve ESE to a greater extent. Furthermore, throughout social comparison processes [START_REF] Corcoran | Social comparison: Motives, standards, and mechanisms[END_REF][START_REF] Festinger | A Theory of Social Comparison Processes[END_REF], the more the mentor exerts his/her functions, the more adapted the mentee will feel toward his/her entrepreneurial career, which, in turn, will have a positive influence on his/her ESE. However, when the mentee perceives himself/herself as not being very similar to the mentor, social comparison processes will stop [START_REF] Festinger | A Theory of Social Comparison Processes[END_REF]. Therefore, mentor functions would have less effect in improving the mentee's ESE as the mentee would feel less adapted to an entrepreneurial career [START_REF] Lockwood | Superstars and me: Predicting the impact of role models on the self[END_REF]. This suggests the following hypothesis:
Hypothesis 4: The impact of the mentor functions on the mentee's ESE is enhanced when the mentor is perceived as highly similar and when the mentee's LGO is low.
Methodology
We conducted a study of mentoring relationships within Réseau M, a mentoring network run by the Fondation de l'entrepreneurship in Quebec (Canada). Before the first pairing, every mentor receives a mandatory three-hour training session on the mission of mentoring and the main guidelines to follow. Novice entrepreneurs benefit from mentor support for a minimal cost: a few hundred dollars per year, and in some cases, for free. The program is available to every novice entrepreneur who wants to be supported by a mentor. Mentees seek career-related support (e.g. advice, a sounding board for decision-making, expertise, etc.), as well as psychological support (e.g. to ease loneliness, to be reassured or encouraged, etc.) from their mentors. Each mentor acts as a volunteer to help novice entrepreneurs in their entrepreneurial journey. Most of them are retired, experienced entrepreneurs who want to stay active by supporting those less experienced, and a few of them are still working in the business world (e.g. bankers, practitioners, etc.). To ensure the coordination of the mentoring cells, the Fondation organizes workshops dedicated to the development of mentor-mentee relationships. Réseau M provides a Code of Ethics and a standard mentoring contract signed by mentors and mentees at the beginning of their interaction.
Sample
The sample for this study was composed of mentored entrepreneurs from Réseau M of the Fondation de l'entrepreneurship, who had attended at least three meetings with their mentor or were still in a mentoring relationship, and whose email addresses were valid at the time of the survey. In 2008, mentees were invited to participate in the study by email, and two follow-ups were conducted with non-respondents, resulting in a total of 360 respondents (a response rate of 36.9%). Given that the Fondation was not able at that time to provide information concerning the demographic characteristics of the sample, we decided to compare early respondents (who answered the first time) and later respondents (who answered after follow-ups), as suggested by [START_REF] Armstrong | Estimating nonresponse bias in mail surveys[END_REF]. There are no significant differences between the two groups in terms of demographic variables, business-related variables, and the variables measured in the study. The respondents are thus representative of the studied population. Table 1 shows the characteristics of the sample.
Measures
Entrepreneurial self-efficacy (ESE). The measure of ESE was adapted from De Noble et al. (1999). This allowed us to measure several perceived abilities such as: defining strategic objectives (3 items), coping with unexpected challenges (3 items) [START_REF] De Noble | Entrepreneurial self-efficacy: The development of a measure and its relationship to entrepreneurial action[END_REF], recognizing opportunities (3 items), engaging in action planning (3 items), supervising human resources (3 items), and managing finance issues (3 items) [START_REF] Anna | Women business owners in traditional and non-traditional industries[END_REF]. These items are similar to those suggested by other authors [START_REF] Mcgee | Entrepreneurial Self Efficacy: Refining the Measure[END_REF].
Seven-point Likert scales were used. The Cronbach's alpha was 0.936, which is well above the average [START_REF] Cronbach | Coefficient alpha and the internal structure of tests[END_REF]. A mean score of all the items was calculated.
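As an aside, the internal-consistency and mean-score computations reported for this scale (and for the other scales below) can be reproduced with a few lines of Python; the file name and item column names in this sketch are hypothetical.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of Likert items (one column per item)."""
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

# Hypothetical responses to the 18 seven-point ESE items (columns ese_1 ... ese_18).
responses = pd.read_csv("survey.csv")
ese_items = responses[[f"ese_{i}" for i in range(1, 19)]]

print("Cronbach's alpha:", round(cronbach_alpha(ese_items), 3))
responses["ESE"] = ese_items.mean(axis=1)  # mean score used in the analyses
```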
Mentor functions. The measure of mentor functions was developed by St-Jean (2011), and includes 9 items assessed on a seven-point Likert scale. This scale provides an assessment of the depth of mentoring provided. The Cronbach's alpha was 0.898, which is well above average. A mean score of all the items was calculated.
Perceived similarity. We used the measure developed by [START_REF] Allen | Relationship effectiveness for mentors: Factors associated with learning and quality[END_REF], which includes similarity in values, interests, personality, and those suggested by [START_REF] Ensher | Effects of Race, Gender, Perceived Similarity, and Contact on Mentor Relationships[END_REF], including similarity in worldview. Seven-point Likert scales were used and the Cronbach's alpha was 0.897, which is well above average. A mean of all the items was calculated.
Learning goal orientation (LGO).
The study used a measure developed by [START_REF] Button | Goal Orientation in Organizational Research: A Conceptual and Empirical Foundation[END_REF], which includes 8 items. Seven-point Likert scales were used. The Cronbach's alpha was 0.927, which is well above the average suggested. A mean score of all the items was calculated.
Control variables. There are certain exogenous variables that may impact ESE, such as the respondents' gender [START_REF] Mueller | Gender-role orientation as a determinant of entrepreneurial self-efficacy[END_REF][START_REF] Wilson | An analysis of the role of gender and self-efficacy in developing female entrepreneurial interest and behavior[END_REF], age [START_REF] Maurer | Career-relevant learning and development, worker age, and beliefs about selfefficacy for development[END_REF], education level and management experience. They were all included in the analysis.
The research was conducted in French. Thus, all the items have been translated into English and proofread by a professional translator, to ensure the validity of measures.
Common method bias
Using self-reported data and measuring both predictors and dependent variables may result in common method variance (CMV) [START_REF] Lindell | Accounting for common method variance in crosssectional research designs[END_REF][START_REF] Podsakoff | Common method biases in behavioral research: A critical review of the literature and recommended remedies[END_REF]. To reduce the possibility of CMV, we first ensured confidentiality for each respondent in order to reduce social desirability, respondent leniency, and taking on perceptions consistent with the researchers' objectives [START_REF] Podsakoff | Common method biases in behavioral research: A critical review of the literature and recommended remedies[END_REF]. We also performed Harman's single factor test as a post-hoc test. This procedure involved conducting an unrotated exploratory factor analysis on all of the items collected for this study. Results indicate that data converge into four factors, with the first factor explaining 26.87% of the variance. Furthermore, data show negative correlation or no correlation between the main variables (Table 1 shows no significant correlation between LGO and perceived similarity or mentor functions), which is unlikely to appear in data contaminated with CMV.
Moreover, when the variables are too complex and cannot be anticipated by the respondent, as observed in this study, this reduces the potential effects of social desirability and therefore reduces CMV [START_REF] Podsakoff | Common method biases in behavioral research: A critical review of the literature and recommended remedies[END_REF]. Given that personality is usually measured through self-report instruments, the fact that we used a self-report questionnaire for LGO does not constitute a limitation of the current study [START_REF] Spector | Method variance in organizational research -Truth or urban legend?[END_REF]. We thus believe that the risk of CMV with the data used for the present study is relatively low.
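To make the post-hoc check concrete, a minimal sketch of a Harman-type single-factor test is given below. It approximates the unrotated solution with the eigenvalues of the item correlation matrix; the column naming and this particular implementation are assumptions for illustration, not the exact procedure used by the authors.

```python
import numpy as np
import pandas as pd

# Hypothetical data frame with all survey items used in the study
# (ESE, mentor functions, perceived similarity, LGO), one column per item.
items = pd.read_csv("survey.csv").filter(regex=r"^(ese|mf|sim|lgo)_")

# Eigenvalues of the correlation matrix indicate how much variance each
# unrotated factor captures.
corr = items.corr().to_numpy()
eigenvalues = np.sort(np.linalg.eigvalsh(corr))[::-1]

n_factors = int((eigenvalues > 1.0).sum())            # Kaiser criterion
first_factor_share = eigenvalues[0] / len(eigenvalues)

print(f"Factors with eigenvalue > 1: {n_factors}")
print(f"Variance explained by the first factor: {first_factor_share:.1%}")
```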
Data analysis
A hierarchical regression analysis of ESE was conducted to test the hypotheses. We started by entering the control variables, then added the main effects of mentees' LGO, perceived similarity with the mentor and mentor functions. Lastly, we entered the two-way interactions between the independent variables and ended with the three-way interaction. To compute the interaction terms while limiting collinearity, the relevant variables were mean-centered before being multiplied. After removing questionnaires with missing answers, the remaining sample was composed of 314 respondents.
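A minimal sketch of such a hierarchical moderated regression in Python (statsmodels) is shown below; the variable names, the coding of the control variables and the exact composition of each step are assumptions made for the example.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("mentees.csv")  # hypothetical file, one row per mentee

# Mean-center the predictors before building the interaction terms.
for var in ["mentor_functions", "similarity", "lgo"]:
    df[var + "_c"] = df[var] - df[var].mean()

controls = "gender + age + education + mgmt_experience"
steps = [
    f"ESE ~ {controls}",
    f"ESE ~ {controls} + mentor_functions_c + similarity_c + lgo_c",
    f"ESE ~ {controls} + mentor_functions_c*similarity_c + mentor_functions_c*lgo_c",
    f"ESE ~ {controls} + mentor_functions_c*similarity_c*lgo_c",
]

for i, formula in enumerate(steps, start=1):
    fit = smf.ols(formula, data=df).fit()
    print(f"Model {i}: R2 = {fit.rsquared:.3f}, adj. R2 = {fit.rsquared_adj:.3f}")

print(smf.ols(steps[-1], data=df).fit().summary())  # coefficients of the full model
```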
Results
Means, standard deviations and correlations between variables are shown in Table 2. Model 3 takes into consideration the moderators (R² = 0.268), and Model 4 adds the three-way interaction between the independent variables (R² = 0.284). The hypotheses were validated with Model 4. Indeed, Model 4 shows that age has a negative effect on ESE, whereas the level of education and prior management experience produced a positive impact on ESE (p = 0.073).
LGO is related to the ESE level (β = 0.344, p = 0.000), which confirms H2. The moderating effects of LGO (H3) and of perceived similarity (H1) on the relationship between mentor functions and ESE are also confirmed (β = -0.357, p = 0.000 and β = 0.205, p = 0.008, respectively).
Finally, the three combined independent variables simultaneously influence ESE, which confirms H4 (β = -0.160, p = 0.023). Overall, the two-way and three-way interactions explain an additional 9.9% of the variance in ESE (Δ adj. R² = 0.099). Figure 2 shows that perceived similarity positively moderates the relationship between mentor functions and ESE. Thus, when mentees perceive little similarity with their mentor, there is no shift in their ESE. Yet, in dyads where mentees perceive their mentor as highly similar, an increase in mentor functions increases mentees' ESE as well. Figure 4 illustrates the three-way interaction between variables. When a mentee has a high
LGO, the mentor functions lower his/her ESE, no matter the level of perceived similarity. For mentees with low LGO, mentor functions increase their ESE level. This effect is most pronounced when mentees perceive their mentors as similar, which indicates that mentoring relationships are most effective at enhancing mentees' ESE when mentees have a low LGO and a high level of perceived similarity with their mentor.
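For readers who want to probe such interactions, the sketch below predicts ESE at low and high (±1 SD) levels of the centered predictors from the hypothetical full model of the earlier snippet; the variable names and the coding of the controls are again assumptions for illustration.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("mentees.csv")  # same hypothetical data as above
for var in ["mentor_functions", "similarity", "lgo"]:
    df[var + "_c"] = df[var] - df[var].mean()

model = smf.ols(
    "ESE ~ gender + age + education + mgmt_experience"
    " + mentor_functions_c * similarity_c * lgo_c",
    data=df,
).fit()

# Build a grid of low/high (+/- 1 SD) combinations of the three predictors,
# holding the control variables at typical values.
rows = []
for mf in (-1, 1):
    for sim in (-1, 1):
        for lgo in (-1, 1):
            rows.append({
                "gender": 0,                      # hypothetical reference coding
                "age": df["age"].mean(),
                "education": df["education"].mean(),
                "mgmt_experience": df["mgmt_experience"].mean(),
                "mentor_functions_c": mf * df["mentor_functions_c"].std(),
                "similarity_c": sim * df["similarity_c"].std(),
                "lgo_c": lgo * df["lgo_c"].std(),
            })

grid = pd.DataFrame(rows)
grid["predicted_ESE"] = model.predict(grid)
print(grid)
```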
Implications
The present research results show the positive effects of mentor functions on mentees' ESE when perceived similarity with the mentor is high. This suggests that entrepreneurial role models may play a similar role in improving ESE as found with other types of support relationships, such as entrepreneur-in-residence programs and business incubators [START_REF] Christina | The Role of Entrepreneur in Residence towards the Students' Entrepreneurial Performance: A Study of Entrepreneurship Learning Process at Ciputra University, Indonesia[END_REF][START_REF] George | What is (the point of) an entrepreneur in residence? The Lancaster University experience, with some worldwide comparisons[END_REF], peer learning networks [START_REF] Kempster | Learning to lead in the entrepreneurial context[END_REF][START_REF] Kutzhanova | Skill-based development of entrepreneurs and the role of personal and peer group coaching in enterprise development[END_REF] and, more generally, in the context of public support for entrepreneurs [START_REF] Delanoë | From intention to start-up: the effect of professional support[END_REF][START_REF] Robinson | Supporting black businesses: narratives of support providers in London[END_REF].
Findings suggest that high and low LGO mentees do not share the same motivations when entering mentoring relationships. Mentees with low levels of LGO are looking for advice and approval relative to their entrepreneurial skills (reassurance motivation) because external feedback may enable them to go beyond their perceived abilities (guidance motivation). On the other hand, mentees with high LGO levels are probably looking for a mentoring relationship that may enable them to improve their skills by learning from their mentor's experience, a support relationship that may stimulate them in terms of new ideas and practices (motivation to be challenged). The present research also demonstrates that low LGO mentees benefit most from mentors' help in improving their ESE. High LGO mentees experienced a higher ESE when mentor functions were lower; conversely, when mentor functions were fully exercised, these mentees' ESE had a tendency to decrease to the same ESE level as that of low LGO mentees. In other words, in an intense mentoring context (high mentor functions), mentees reported a similar level of ESE, regardless of their LGO levels. At first glance, one would be tempted to prevent high LGO novice entrepreneurs from being accompanied by a mentor, as it seems to lead to a reduction in their level of ESE. However, previous studies have demonstrated that some entrepreneurs are overly optimistic, and this has a negative effect on the survival of their business [START_REF] Lowe | Overoptimism and the performance of entrepreneurial firms[END_REF]. Moreover, [START_REF] Hmieleski | When does entrepreneurial self-efficacy enhance versus reduce firm performance?[END_REF] demonstrated that a high ESE has a negative effect on business performance when the entrepreneurs' optimism is high. In this perspective, mentoring could be useful for these entrepreneurs because it brings ESE to a level closer to the reality of the entrepreneurs' abilities, which could reduce errors committed due to overconfidence in their skills.
Finally, our findings suggest that the positive effect of mentoring on mentees' ESE may be limited to the duration of the mentoring relationship for low LGO novice entrepreneurs. In other words, as long as low LGO mentees are involved in a mentoring relationship, they will probably feel more self-confident. However, once the mentoring relationship ends, they may experience a decrease in their ESE because of their need for constant external reassurance and support. This suggests that LGO is an important personal variable to consider in researching entrepreneurship support outcomes. In this regard, [START_REF] Dweck | Motivational effects on attention, cognition, and performance[END_REF] demonstrated that it is possible to develop specific training and support that effectively enhances the participants' LGO, which, in turn, has an important effect on their motivational processes, attention, cognition, and performance.
Thus, an important practical implication of our findings is that mentors could learn how to counsel novice entrepreneurs with low levels of ESE and LGO, and help them not only improve their ESE level but also their LGO, thus securing an enduring increase in their ESE once the mentoring relationship ends.
Discussion
The present study has three main theoretical contributions. First, we demonstrate that the impact of mentors on mentees' ESE is moderated by the perceived similarity with the mentor, as previously assessed in entrepreneurial education contexts [START_REF] Laviolette | The impact of story bound entrepreneurial role models on self-efficacy and entrepreneurial intention[END_REF][START_REF] Lockwood | Superstars and me: Predicting the impact of role models on the self[END_REF][START_REF] Schunk | Developing children's self-efficacy and skills: The roles of social comparative information and goal setting[END_REF]. Prior research has stressed the positive effect of mentoring on mentees' ESE [START_REF] Gravells | Mentoring start-up entrepreneurs in the East Midlands -Troubleshooters and trusted friends[END_REF][START_REF] Kent | An evaluation of mentoring for SME retailers[END_REF][START_REF] St-Jean | The role of mentoring in the learning development of the novice entrepreneur[END_REF][START_REF] Sullivan | Entrepreneurial learning and mentoring[END_REF] and the fact that mentors act as role models [START_REF] Barnir | Mediation and Moderated Mediation in the Relationship Among Role Models, Self-Efficacy, Entrepreneurial Career Intention, and Gender[END_REF]. We introduce the notion of upward comparison with the mentor to explain the importance of mentees' perceived similarity with the mentor, based on social comparison theory [START_REF] Festinger | A Theory of Social Comparison Processes[END_REF][START_REF] Gibson | Role models in career development: New directions for theory and research[END_REF].
Second, our study demonstrates the importance of mentees' LGO in entrepreneurial mentoring relationships, because of its relationship with mentees' ESE. Prior research based on goal-orientation theory documented the relationship between LGO and ESE in other contexts [START_REF] Phillips | Role of goal orientation, ability, need for achievement, and locus of control in the self-efficacy and goal-setting process[END_REF]. Our findings suggest that there is a strong relationship between LGO and the need for feedback [START_REF] Tuckey | The influence of motives and goal orientation on feedback seeking[END_REF][START_REF] Vandewalle | A goal orientation model of feedback-seeking behavior[END_REF][START_REF] Vandewalle | A test of the influence of goal orientation on the feedback-seeking process[END_REF], as the mean score for the level of mentees' LGO in our study is 6.24 (on 7). However, another explanation for this high level of LGO may be that entrepreneurship, being a career with many challenges and difficulties [START_REF] Aspray | Positive illusions, motivations, management style, stress and psychological traits : A review of research literature on women's entrepreneurship in the information technology field[END_REF][START_REF] Grant | On being entrepreneurial: the highs and lows of entrepreneurship[END_REF], attracts individuals interested in learning and with a desire to improve their abilities. This latter explanation is probably more plausible, as previous research on LGO in a mentoring context found a mean score of mentees' LGO of 4.35 (on 7) [START_REF] Egan | The Impact of Learning Goal Orientation Similarity on Formal Mentoring Relationship Outcomes[END_REF]) and a study measuring the impact of LGO on entrepreneurial intentions found an LGO score of 5.198 (on 7) [START_REF] De Clercq | The roles of learning orientation and passion for work in the formation of entrepreneurial intention[END_REF]. Additionally, prior research has shown that a high level of LGO combined with a high level of ESE is likely to lead to choosing entrepreneurship as a career choice [START_REF] Culbertson | Enhancing entrepreneurship: The role of goal orientation and self-efficacy[END_REF]. In fact, a recent study indicated that LGO strengthens the relationship between ESE and entrepreneurial intention [START_REF] De Clercq | The roles of learning orientation and passion for work in the formation of entrepreneurial intention[END_REF]. Thus, LGO may be an important mindset that attracts and retains individuals in an entrepreneurial career, which suggests new research directions.
Finally, the third contribution of the present study is that it provides evidence concerning the combined effects of mentor functions, mentees' LGO and perceived similarity with the mentor on mentees' ESE. We confirmed the fourth hypothesis relative to the positive impact of the mentor functions on the mentee's ESE when the mentor is perceived as highly similar and when the mentee's LGO is low. The research model explains 15.1% of the variance when considering main effects only (adjusted R 2 ). Adding the interaction effects explains an additional 9.9% of the variance, yielding a final adjusted R 2 of 0.25. Findings confirm previous research relative to the positive correlation between the mentees' LGO, level of education, prior management experience, and ESE [START_REF] Bell | Goal orientation and ability: Interactive effects on selfefficacy, performance, and knowledge[END_REF][START_REF] Phillips | Role of goal orientation, ability, need for achievement, and locus of control in the self-efficacy and goal-setting process[END_REF]. We found that a low level of LGO combined with a high level of perceived similarity significantly contributed to reinforcing novice entrepreneurs' ESE in a mentoring context.
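As an illustration, a moderated hierarchical regression of this kind can be sketched as follows. This is a minimal sketch, not the specification used in the study: the column names, the mean-centring choice, and the ordering of the interaction terms across models are assumptions made for the example.

```python
# Sketch of a hierarchical regression with two- and three-way interactions,
# assuming a pandas DataFrame `df` with hypothetical columns:
# ese, mentor_functions, lgo, similarity, gender, age, education, mgmt_experience.
import pandas as pd
import statsmodels.formula.api as smf

def run_hierarchical_models(df: pd.DataFrame):
    # Mean-centre the predictors involved in interactions before forming products.
    for col in ["mentor_functions", "lgo", "similarity"]:
        df[col + "_c"] = df[col] - df[col].mean()

    controls = "gender + age + education + mgmt_experience"
    main = controls + " + mentor_functions_c + lgo_c + similarity_c"
    twoway = main + (" + mentor_functions_c:lgo_c"
                     " + mentor_functions_c:similarity_c"
                     " + lgo_c:similarity_c")
    threeway = twoway + " + mentor_functions_c:lgo_c:similarity_c"

    models = {}
    for name, rhs in [("Model 1", controls), ("Model 2", main),
                      ("Model 3", twoway), ("Model 4", threeway)]:
        fit = smf.ols(f"ese ~ {rhs}", data=df).fit()
        models[name] = fit
        print(name, "adjusted R2 =", round(fit.rsquared_adj, 3))
    return models
```

Comparing the adjusted R 2 values across the successive models reproduces the logic of reporting the incremental variance explained by the main effects and then by the interaction terms.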
Our study has, however, several limitations. First, although LGO is highlighted as an important moderator to consider in the study of mentoring for entrepreneurs, we cannot confirm without a doubt that low/high LGO mentees have different motivations for entering a mentoring relationship. Our reasoning was guided by the theoretical framework of LGO and social comparison theory; however, further investigation of the reasons underlying the need for a mentor could bring additional confirmation of the underlying processes at play. Second, the present research assessed the impact of mentoring on mentees' ESE. However, not every entrepreneur has the desire to improve his/her ESE and novice entrepreneurs may seek mentoring for other cognitive or affective reasons. Thus, our final sample may include mentees who did not seek ESE development.
Nevertheless, the reader should keep in mind that many other outcomes could be reached through mentoring and, as such, focusing on ESE development, despite highlighting specific processes at play, suggests a limited view of the potential effects of mentoring on the entrepreneurial process.
The roles of mentoring in improving opportunity identification, reducing the loneliness and stress of novice entrepreneurs, and developing better managerial skills are also important research questions to be further explored. Third, we measured ESE development within a formal mentoring program.
Given that mentors are trained and aware of the many aspects that could foster or hinder the effectiveness of mentoring, our findings cannot be extended to informal mentoring settings. Indeed, because informal mentors are generally well-known by their mentees before the beginning of the mentoring relationship, the former may be selected based on perceived similarity with the latter. Thus, our findings are most relevant for formal mentoring programs. Fourth, the study was not longitudinal, making it difficult to assess the mentoring effects on the development of mentees' ESE over time. Longitudinal research is thus necessary to better evaluate the contribution of personal and relational mentoring variables in terms of impact on mentees' ESE.
Conclusion
For the past decades, many mentoring programs have been launched in developed countries and evidence exists that they may trigger many outcomes [START_REF] Wanberg | Mentoring research: A review and dynamic process model[END_REF]. Prior research has also emphasized mentoring's contribution to novice entrepreneurs' personal development [START_REF] Edwards | Promoting entrepreneurship at the University of Glamorgan through formal and informal learning[END_REF][START_REF] Kent | An evaluation of mentoring for SME retailers[END_REF][START_REF] St-Jean | The role of mentoring in the learning development of the novice entrepreneur[END_REF][START_REF] Sullivan | Turning experience into learning[END_REF] and business success in terms of startup launching, fundraising and business growth [START_REF] Mcadam | Building futures or stealing secrets? Entrepreneurial cooperation and conflict within business incubators[END_REF][START_REF] Radu Lefebvre | How to Do Things with Words": The Discursive Dimension of Experiential Learning in Entrepreneurial Mentoring Dyads[END_REF][START_REF] Styles | Using SME intelligence in mentoring science and technology students[END_REF][START_REF] Sullivan | Entrepreneurial learning and mentoring[END_REF]. These programs invest time and energy into identifying mentees and mentors potentially interested in developing mentoring relationships. However, little attention is being paid to the matching process of mentors and mentees in terms of perceived similarity and the training of mentors that could be offered.
The present research demonstrates that role-model identification needs to be secured by mentoring programs so as to ensure that novice entrepreneurs perceive their mentor as someone who is relevant, inspiring, and accessible. Mentoring programs could consider the similarity of mentors and mentees before making proposals concerning the composition of mentoring dyads. Also, mentors could be informed of the importance of perceived similarity in mentoring relationships. Moreover, the predominance of male mentors may become an issue as more women entrepreneurs enter the market.
Research indicates that gender matching of mentors and mentees is especially important for women [START_REF] Quimby | The influence of role models on women's career choices[END_REF]. Social identity theory [START_REF] Tajfel | Differentiation between social groups: Studies in the social psychology of intergroup relations[END_REF] and the similarity-attraction paradigm [START_REF] Byrne | The attraction paradigm[END_REF]) predict more perceived similarity and identification in same-gender relationships.
Another practical implication related to these findings is that more attention should be paid to the matching process of mentoring dyads in terms of learning motivations and learning orientation.
Complementary mentoring relationships may thus develop, with the help of a program manager, who could assist mentors in the identification of mentees' learning needs so as to ensure more effective mentoring relationships with regard to their potential impact on mentees' ESE. Training should be provided to mentors in order to help them identify their mentees' needs and personal profile more accurately in order to adapt the rendering of mentoring functions while taking into account mentees' needs and motivations. Given that LGO can be enhanced through training, mentors may play a significant role in developing mentees' LGO and in fostering mentees' ESE by the same token.
Figure 1. Tested theoretical model
Réseau M was launched in 2000 by the Fondation de l'entrepreneurship, an organization dedicated to Quebec's economic development. It provides mentoring support to novice entrepreneurs through a network of 70 mentoring cells implemented across the province of Quebec (Canada). These cells are generally supported by various economic development organizations such as local development centres (LDCs), Community Future Development Corporations (CFDCs), and local chambers of commerce. These organizations ensure the program's local and regional development, while subscribing to the mentoring model provided by the Fondation de l'entrepreneurship. Local organizations have cell coordinators in charge of recruiting mentors, organizing their training, promoting the program to novice entrepreneurs, and pairing and guiding mentor-mentee dyads.
Figure 2. Moderating effect of perceived similarity on the interaction between mentor functions and ESE
Figure 3. Moderating effects of LGO on the interaction between mentor functions and ESE
Figure 4. Three-way interaction between mentor functions, LGO, and perceived similarity for the development of ESE
Table 1. Sample characteristics
Mentoring relationship characteristics
Male mentees: 162 (51.6%)
Female mentees: 152 (48.4%)
Paired with male mentors: 275 (81.4%)
Paired with female mentors: 63 (18.6%)
Mean mentoring relationships length: 16.07 months (SD=14.4)
Mean meeting length: 68.52 minutes (SD=14.4)
Median meeting frequency: Each month
Mentee characteristics
Mean age: 39.8 years old (SD=8.97)
Mentees with university degree: 173 (55%)
Experience in industry before startup: Less than 5 years: 61.6%
Experience in entrepreneurship: Less than 5 years: 82.9%
Firm characteristics
Mean number of employees: 4.48 (SD=9.69)
Annual turnover: Less than $100,000CAD: 62.8%
Annual gross profit: Less than $25,000CAD: 68.1%
Professional services: 23.0%
Manufacturing: 14.4%
Retailing: 11.9%
Others: 50.7%
Measures
Entrepreneurial self-efficacy (ESE). To gain better insight into the dimensions of ESE, we combined the scales developed by [START_REF] Anna | Women business owners in traditional and non-traditional industries[END_REF] and De Noble et al.
Table 2. Means, Standard Deviations and Correlations of Variables
Mean S.D. 1 2 3 4 5 6 7
1-Gender 0.48 0.50 1.00
2-Age 39.81 8.97 -0.01 1.00
3-Education 2.53 0.94 0.12* 0.08 1.00
4-Managerial experience 2.29 1.56 -0.13* 0.25* -0.09 1.00
5-LGO 6.24 0.88 0.12* -0.05 -0.02 0.04 1.00
6-Perceived Similarity 4.71 1.40 0.01 -0.14* -0.09 -0.01 -0.00 1.00
7-Mentor Functions 5.39 1.15 0.06 -0.14* -0.00 -0.03 0.01 0.61* 1.00
8-Ent. Self-efficacy (ESE) 5.89 0.76 0.01 -0.21* 0.05 0.08 0.33* 0.16* 0.16*
(dependent variable)
*=p≤0.05
Table 3 illustrates the results of the hierarchical regression of ESE. As expected, Model 1 takes into account control variables (R 2 =0.069), Model 2 adds the main effects (R 2 =0.175), while Models 3 and 4 add the interaction effects (final adjusted R 2 =0.25).
Table 3. Entrepreneurial Self-Efficacy Hierarchical Regression
Model 1 Model 2 Model 3 Model 4
Std. β Std. β Std. β Std. β
01760249 | en | ["shs.sport.ps"] | 2024/03/05 22:32:13 | 2005 | https://insep.hal.science//hal-01760249/file/CadenceSelectionBrJSportsMed-2005-Vercruyssen-267-72.pdf
Dr Vercruyssen
email: vercruyssen@univ-tln.fr
Objectives: To investigate the effect of cadence selection during the final minutes of cycling on metabolic responses, stride pattern, and subsequent running time to fatigue. Methods: Eight triathletes performed, in a laboratory setting, two incremental tests (running and cycling) to determine peak oxygen uptake (VO 2 PEAK) and the lactate threshold (LT), and three cycle-run combinations. During the cycle-run sessions, subjects completed a 30 minute cycling bout (90% of LT) at (a) the freely chosen cadence (FCC, 94 (5) rpm), (b) the FCC during the first 20 minutes and FCC−20% during the last 10 minutes (FCC−20%, 74 (3) rpm), or (c) the FCC during the first 20 minutes and FCC+20% during the last 10 minutes (FCC+20%, 109 (5) rpm). After each cycling bout, running time to fatigue (T max ) was determined at 85% of maximal velocity. Results: A significant increase in T max was found after FCC−20% (894 (199) seconds) compared with FCC and FCC+20% (651 (212) and 624 (214) seconds respectively). VO 2 , ventilation, heart rate, and blood lactate concentrations were significantly reduced after 30 minutes of cycling at FCC−20% compared with FCC+20%. A significant increase in VO 2 was reported between the 3rd and 10th minute of all T max sessions, without any significant differences between sessions. Stride pattern and metabolic variables were not significantly different between T max sessions. Conclusions: The increase in T max after FCC−20% may be associated with the lower metabolic load during the final minutes of cycling compared with the other sessions. However, the lack of significant differences in metabolic responses and stride pattern between the run sessions suggests that other mechanisms, such as changes in muscular activity, probably contribute to the effects of cadence variation on T max .
During triathlon racing (swim/cycle/run), the most critical and strategic aspect affecting overall performance is the change from cycling to running. [START_REF] Bentley | Specific aspects of contemporary triathlon: implications for physiological analysis and performance[END_REF][START_REF] Gottshall | The acute effects of prior cycling cadence on running performance and kinematics[END_REF][START_REF] Hausswirth | Relationships between mechanics and energy cost of running at the end of a triathlon and a marathon[END_REF][START_REF] Hausswirth | Effects of cycling alone or in a sheltered position on subsequent running performance during a triathlon[END_REF][START_REF] Vercruyssen | Influence of cycling cadence on subsequent running performance in triathletes[END_REF][START_REF] Bernard | Effect of cycling cadence on subsequent 3-km running performance in well-trained triathletes[END_REF] These studies have attempted to identify aspects of cycling that may improve running performance in triathletes. Drafting has been shown to be a beneficial cycling strategy which results in an improved subsequent running performance in elite triathletes. [START_REF] Hausswirth | Effects of cycling alone or in a sheltered position on subsequent running performance during a triathlon[END_REF] More recently, the selection of cycling cadence during a cycle-run combination has been identified by researchers as an important variable that may affect overall performance. 1 2 5 6 Cadence selection has been reported to influence metabolic responses, kinematic variables, and performance during a cycle-run session. However, the extent to which the cadence selection affects subsequent maximal running performance during a cycle-run combination remains unclear.
In a laboratory setting, Vercruyssen et al [START_REF] Vercruyssen | Influence of cycling cadence on subsequent running performance in triathletes[END_REF] have shown that the adoption of a low cadence (73 rpm), corresponding to the energetically optimal cadence, reduced oxygen uptake (VO 2 ) during a cycle-run session, compared with the selection of higher cadences (80-90 rpm). These authors suggested that the choice of a low cadence (<80 rpm) before the cycle-run transition may be advantageous for the subsequent run. However, during field based investigations, Gottshall and Palmer 2 found an improved 3200 m track running performance after 30 minutes of cycling conducted at a high cadence (>100 rpm) compared with lower cadences (70-90 rpm) for a group of triathletes. It was suggested that the selection of a high cadence improved running performance through increased stride rate and running speed during the subsequent run. In contrast, Bernard et al [START_REF] Bernard | Effect of cycling cadence on subsequent 3-km running performance in well-trained triathletes[END_REF] showed no effect of cycling cadence (60-100 rpm) and stride rate on a subsequent 3000 m running performance. These conflicting results indicate the difficulty of predicting the optimal cadence selection for a cycle-run session in trained triathletes.
In most of the above experiments, the triathletes were required to cycle at either an imposed cadence (range 60-110 rpm) or a freely chosen cadence (range 80-90 rpm) which remained constant for the entire 30 minutes of the cycle bout. This lack of cadence variation does not reproduce race situations, during which the cadence may vary considerably especially before the cycle-run transition. [START_REF] Bentley | Specific aspects of contemporary triathlon: implications for physiological analysis and performance[END_REF] Many triathletes attempt to optimise the change from cycling to running by selecting high cadences (>100 rpm) during the final kilometres of cycling. 1 2 6 Another strategy, however, may be the selection of a low cadence (<75 rpm) before the cycle-run transition, in order to conserve energy for the subsequent run. 4 5 To our knowledge, no data are available on cadence changes during the last few minutes before the cycle-run transition and its effects on subsequent running performance.
Therefore the aim of this investigation was to examine, in a laboratory setting, the effect of cadence variations during the final 10 minutes of cycling on metabolic responses, stride pattern, and subsequent running time to fatigue in triathletes.
METHODS
Participants
Eight experienced male triathletes currently in training volunteered to take part in this experiment. All had regularly competed in triathlon racing at either sprint (0.750 km swim/20 km cycle/5 km run) or Olympic distances (1.5 km swim/40 km cycle/10 km run).
Maximal tests
Two incremental tests were used to determine peak oxygen uptake (VO 2 PEAK), maximal power output (P max ), maximal running speed (V max ), and lactate threshold (LT). Subjects performed cycling bouts on a racing bicycle mounted on a stationary turbo-trainer system. Variations in power output were measured using a ''professional'' SRM crankset system (Schoberer Rad Messtechnick, Fuchsend, Germany) previously validated in a protocol comparison using a motor driven friction brake. [START_REF] Jones | The dynamic calibration of bicycle power measuring cranks[END_REF] Running bouts were performed on a motorised treadmill situated next to the cycle turbo-trainer.
For cycling, the test bout began at an initial workload of 100 W for three minutes, after which the power output was increased by 40 W every three minutes until exhaustion. For the treadmill test, the initial running speed was fixed at 9 kph, with an increase in velocity of 1.5 kph every three minutes. For both cycling and running tests, there was a one minute rest period between each increment for the sampling of capillary blood (35 μl) from a hyperaemic earlobe. Blood samples were collected to determine plasma lactate concentration ([La−]) using a blood gas analyser (ABL 625; Radiometer Medical A/S, Copenhagen, Denmark). During these tests, VO 2 , minute ventilation (VE), and respiratory exchange ratio were continuously recorded every 15 seconds using Ametek gas analysers (SOV S-3A and COV CD3A; Pittsburgh, Pennsylvania, USA). The four highest consecutive VO 2 values were summed to determine VO 2 PEAK. [START_REF] Bishop | The relationship between plasma lactate parameters, Wpeak and endurance cycling performance[END_REF] P max and V max were calculated as the average power output and running speed in the last three minutes completed before exhaustion. Heart rate (HR) was monitored every 10 seconds during each experimental session using an electronic HR device with a chest electrode (Polar Vantage NV; Polar Electro Oy, Kempele, Finland). The LT, calculated by the modified D max method, was determined by the point on the polynomial regression curve that yielded the maximal perpendicular distance to the straight line formed by the lactate inflection point (first increase in lactate concentration above the resting level) and the final lactate point. [START_REF] Bishop | The relationship between plasma lactate parameters, Wpeak and endurance cycling performance[END_REF]

Cycle-run combinations
All triathletes completed, in random order, three cycle-run sessions, each composed of 30 minutes of cycling on a cycle turbo-trainer and a subsequent run to fatigue. A fan was used in front of the subject during these experimental sessions. Before each experimental condition, subjects performed 15 minutes of warm up comprising 13 minutes at a low power output (100-130 W) and the last two minutes at the individual workload required during the cycle bout of the cycle-run sessions. After two minutes of rest, each triathlete completed a cycle bout at (a) the freely chosen cadence (FCC), (b) the FCC during the first 20 minutes and FCC−20% during the last 10 minutes (FCC−20%), or (c) the FCC during the first 20 minutes and FCC+20% during the last 10 minutes (FCC+20%). The FCC±20% range has previously been used during a 30 minute cycling exercise in triathletes. 9 10 Cycling bouts were performed at a power output corresponding to 90% of LT (266 (28) W), an intensity close to that reported in previous studies of the relation between cycling cadence and running performance. 5 6 FCC−20% was chosen to replicate cadence values close to the energetically optimal cadence previously noted in triathletes, [START_REF] Vercruyssen | Influence of cycling cadence on subsequent running performance in triathletes[END_REF] and FCC+20% allowed us to reproduce cadence values close to those reported during cycling strategies before running. 1 2 6 Cadence and power output were monitored using the SRM power meter during all cycling bouts. No feedback was given to the subjects on their FCC over the three conditions.

After each cycling bout, running time to fatigue (T max ) was determined on the treadmill at a running speed corresponding to 85% of V max (>LT) for each athlete (16.7 (0.7) kph). 11 12 On the basis of previous experiments and the completion of pilot tests, this running intensity was chosen to induce fatigue in less than 20 minutes. All subjects were given verbal encouragement throughout each trial. T max was taken as the time at which the subject's feet left the treadmill as he placed his hands on the guardrails. The transition time between cycling and running was fixed at 45 seconds to reproduce the racing context. 1 6

Measurement of metabolic variables
VO 2 , VE, and HR were monitored and analysed during the following intervals: 3rd-5th minute of the cycling bout (3-5 min), 20th-22nd minute (20-22 min), 28th-30th minute (28-30 min), and every minute during the running sessions. Five blood samples were collected at the following intervals: before the warm up, at 5, 20, and 30 minutes during cycling, and at the end of T max .
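As an illustration of the modified D max procedure described under the maximal tests, the calculation can be sketched as follows. This is a minimal numerical sketch, not the software used in the study; the polynomial degree and the commented sample values are assumptions.

```python
# Modified Dmax: fit a polynomial to the lactate-workload data, then find the
# point on the fitted curve with the maximal perpendicular distance to the
# straight line joining the lactate inflection point and the final point.
import numpy as np

def modified_dmax(power, lactate, inflection_index, degree=3):
    power, lactate = np.asarray(power, float), np.asarray(lactate, float)
    coeffs = np.polyfit(power, lactate, degree)

    # Dense grid of curve points between the inflection point and the last stage.
    x = np.linspace(power[inflection_index], power[-1], 1000)
    y = np.polyval(coeffs, x)

    # Line through (x1, y1) = inflection point and (x2, y2) = final point.
    x1, y1 = power[inflection_index], lactate[inflection_index]
    x2, y2 = power[-1], lactate[-1]
    # Perpendicular distance from each curve point to that line.
    dist = np.abs((y2 - y1) * x - (x2 - x1) * y + x2 * y1 - y2 * x1)
    dist /= np.hypot(y2 - y1, x2 - x1)

    return x[np.argmax(dist)]   # workload (or speed) at the lactate threshold

# Hypothetical example: stages of 100-340 W with lactate values in mmol/l.
# lt = modified_dmax([100, 140, 180, 220, 260, 300, 340],
#                    [1.1, 1.2, 1.3, 1.8, 2.9, 4.6, 7.0], inflection_index=2)
```

The same routine applies to the running test by replacing workload with running speed.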
Measurement of kinematic variables
Power output and cycling cadence were continuously recorded during the cycling bouts. For each running session, a 50 Hz digital camera was mounted on a tripod 4 m away from the motorised treadmill. Subsequently, the treadmill speed and period between two ground contacts for the same foot were determined using a kinematic video analysis system (SiliconCoach Pro Version 6, Dunedin, New Zealand). From these values, stride pattern characteristics-that is, stride rate (Hz) and stride length (m)-were calculated every 30 seconds during the first five minutes and the last two minutes of the T max sessions.
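For illustration, the stride variables can be derived from the foot-contact times along the following lines; the helper function and the sample values are hypothetical, not the SiliconCoach output.

```python
# Stride rate and stride length from successive ground-contact times of the
# same foot, given a known treadmill speed.
def stride_characteristics(contact_times_s, treadmill_speed_kph):
    """contact_times_s: successive ground-contact times (s) of the same foot."""
    periods = [t2 - t1 for t1, t2 in zip(contact_times_s, contact_times_s[1:])]
    mean_period = sum(periods) / len(periods)          # s per stride
    stride_rate = 1.0 / mean_period                    # Hz
    speed_ms = treadmill_speed_kph / 3.6               # m/s
    stride_length = speed_ms * mean_period             # m per stride
    return stride_rate, stride_length

# e.g. stride_characteristics([0.00, 0.67, 1.34, 2.01], 16.7)
# gives roughly 1.49 Hz and 3.11 m, of the order of the values reported below.
```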
Statistical analysis
All data are expressed as mean (SD). A two way variance analysis plan with repeated measures was performed to analyse the effects of cadence selection (FCC, FCC−20%, FCC+20%) and time during the cycle-run sessions, using VO 2 , VE, HR, [La−], stride rate, stride length, cadence, and power output as dependent variables. A Tukey post hoc test was used to determine any differences between the cycle-run combinations. Differences in T max obtained between the three experimental conditions were analysed by one way analysis of variance. A paired t test was used to analyse differences in VO 2 PEAK, HR peak , and VO 2 at LT between the two maximal tests. Statistical significance was set at p<0.05.
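As an illustration, this statistical treatment could be reproduced along the following lines. This is a sketch only; the long-format data layout and the column names are assumptions, not the original analysis scripts.

```python
# Two-way repeated measures ANOVA (condition x period) plus post hoc and
# paired comparisons, assuming a DataFrame `df` with hypothetical columns:
# subject, condition ("FCC", "FCC-20", "FCC+20"), period ("3-5", "20-22", "28-30"), vo2.
import pandas as pd
from statsmodels.stats.anova import AnovaRM
from statsmodels.stats.multicomp import pairwise_tukeyhsd
from scipy import stats

def analyse_cycling_variable(df: pd.DataFrame, depvar: str = "vo2"):
    # Repeated measures ANOVA: cadence condition x time period.
    aov = AnovaRM(df, depvar=depvar, subject="subject",
                  within=["condition", "period"]).fit()
    print(aov.anova_table)

    # Post hoc comparison between conditions within the last time period.
    last = df[df["period"] == "28-30"]
    print(pairwise_tukeyhsd(last[depvar], last["condition"], alpha=0.05))

def compare_maximal_tests(vo2peak_cycling, vo2peak_running):
    # Paired t test between the two incremental tests, alpha = 0.05.
    t, p = stats.ttest_rel(vo2peak_cycling, vo2peak_running)
    return t, p
```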
RESULTS

Maximal tests
No significant differences in VO 2 PEAK were observed between the sessions (table 1). However, HR peak and VO 2 at LT were significantly higher during running than during the maximal cycling bout (+2.9% and +15.8% respectively).

Cycling bouts of cycle-run sessions
No significant variation in FCC was observed during the first 20 minutes of the three cycling bouts (table 2). In addition, mean power output values were not significantly different between the cycling bouts (264 (30), 263 (28), and 261 (29) W respectively for FCC, FCC−20%, and FCC+20%). These data show that subjects adhered to the experimental design with respect to the required power output-cadence combination.

A significant effect of exercise duration (between the 3-5 and 28-30 min intervals) was observed on VO 2 , VE, and HR during the FCC and FCC+20% bouts, whereas no significant variation in these metabolic variables was identified with exercise duration during the FCC−20% condition (table 3). Moreover, mean VO 2 , VE, and HR were significantly lower at FCC−20% compared with FCC+20% during the 28-30 min interval (−5.3%, −18.2%, and −6.8% respectively). [La−] was significantly higher during the 28-30 min interval at FCC+20% compared with FCC (+31.2%) or FCC−20% (+55.5%).

Running bouts of cycle-run sessions
A significant increase in T max was observed only after the FCC−20% modality when compared with both the FCC+20% and FCC conditions (+43.3% and +37.3% respectively; fig 1). T max values were 624 (214), 651 (212), and 894 (199) seconds after the FCC+20%, FCC, and FCC−20% modalities respectively. A significant increase in ΔVO 2 -that is, between the 3rd and 10th minute-was found during the T max completed after FCC (+6.1%), FCC+20% (+6.7%), and FCC−20% (+6.5%) (table 4). However, mean VO 2 , VE, HR, and [La−] were not significantly different between the three T max sessions (table 4).

No significant difference in stride pattern was observed during the T max sessions whatever the prior cadence selection (fig 2). Mean stride rate (Hz) and stride length (m) were 1.49 (0.01) and 3.13 (0.02), 1.48 (0.01) and 3.13 (0.03), and 1.49 (0.01) and 3.15 (0.02) during the T max sessions subsequent to the FCC, FCC−20%, and FCC+20% bouts respectively.
DISCUSSION
The main findings of this investigation show a significant increase in T max when the final 10 minutes of cycling are performed at FCC−20% (894 seconds) compared with FCC (651 seconds) and FCC+20% (624 seconds). Several hypotheses are proposed to explain the differences in T max reported during the various cycle-run combinations for the group of triathletes.
A number of studies have analysed characteristics of cycle-run sessions in triathletes, with particular focus on physiological and biomechanical aspects during the subsequent run. [START_REF] Bentley | Specific aspects of contemporary triathlon: implications for physiological analysis and performance[END_REF] For instance, during a running session after cycling, a substantial increase in energy cost, VE, and HR, and differences in muscle blood flow have been observed compared with an isolated run. 1 3 5 6 Moreover, variations in running kinematics such as stride rate, segmental angular position, and joint angle have been shown after a cycle bout. 3 5 These running alterations, which have been linked to the effects of exercise duration and cycle-run transition, were reported during treadmill sessions conducted at a submaximal intensity and not during a high intensity running bout. In this study we investigated these effects at a high intensity close to a running speed previously observed during a short cycle-run combination in triathletes. [START_REF] Bernard | Effect of cycling cadence on subsequent 3-km running performance in well-trained triathletes[END_REF]
Metabolic hypotheses
The T max values of this investigation are comparable to those previously reported during an exhaustive isolated run performed at an intensity corresponding to 85-90% VO 2 MAX. [START_REF] Avogadro | Changes in mechanical work during severe exhausting running[END_REF][START_REF] Billat | The VO2 slow component for severe exercise depends on type of exercise and is not correlated with time to fatigue[END_REF][START_REF] Candau | Energy cost and running mechanics during a treadmill run to voluntary exhaustion in humans[END_REF] It has previously been reported that metabolic and muscular factors are potential determinants of middle distance running performance and/or exhaustive treadmill sessions in trained subjects. [START_REF] Borrani | Is the VO2 slow component dependent on progressive recruitment of fast-twitch fibers in trained runners?[END_REF][START_REF] Brandon | Physiological factors associated with middle distance running performance[END_REF][START_REF] Paavolainen | Neuromuscular characteristics and muscle power as determinants of 5-km running performance[END_REF][START_REF] Paavolainen | Neuromuscular characteristics and fatigue during 10 km running[END_REF][START_REF] Prampero | Factors limiting maximal performance in humans[END_REF] With respect to metabolic factors, the improvement in T max observed after FCC220% may be related to changes in energy contribution. In support of this hypothesis, it has been reported that the determinants of maximal performances in middle distance running may be linked to the energy requirement for a given distance and the maximal rate of metabolic energy output from the integrative contribution of aerobic and anaerobic systems. 15 18 During submaximal and maximal running, the VO 2 variation has been reported to reflect the relative contribution from the aerobic and anaerobic sources. [START_REF] Brandon | Physiological factors associated with middle distance running performance[END_REF] In the context of a cycle-run session, Bernard et al [START_REF] Bernard | Effect of cycling cadence on subsequent 3-km running performance in well-trained triathletes[END_REF] have reported that triathletes were able to sustain a higher fraction of VO 2 MAX during a 3000 m track run performed after cycling at 60 rpm than during cycling at 80 and 100 rpm. These authors suggested that a greater contribution of the aerobic component, during running after the choice of a low cadence, may delay fatigue for longer running distances. In this investigation, the analysis of VO 2 may also provide information on possible changes in aerobic contribution during high intensity running. Given the range of T max values, the metabolic variables were analysed during the first 10 minutes of each running session, corresponding approximately to the mean T max values reported after the FCC and FCC+20% modalities (fig 1). The evaluation of this time interval indicates no significant differences in VO 2 between the T max sessions, suggesting that the determination of T max in this study was not affected by changes in metabolic energy from the aerobic or anaerobic systems.
There was, however, a significant increase in VO 2 between the 3rd and 10th minute (6.1-6.7%) during the three T max sessions, regardless of the prior experimental condition (table 4). During exercise lasting less than 15 minutes, the continual rise in VO 2 beyond the 3rd minute has been termed the VO 2 slow component (VO 2SC ). 5 11 19 20 The occurrence of a VO 2SC is classically observed during heavy running and cycling exercises associated with a sustained lactic acidosis-that is, above the LT. 19 21 22 Postulated mechanisms responsible for this VO 2SC include rising muscle temperature (Q 10 effect), cardiac and ventilatory muscle work, lactate kinetics, catecholamines, and recruitment of less efficient type II muscle fibres. [START_REF] Poole | Determinants of oxygen uptake[END_REF] Within this framework, Yano et al [START_REF] Yano | Relationship between the slow component of oxygen uptake and the potential reduction in maximal power output during constant-load exercise[END_REF] suggested that muscular fatigue may be one of the factors that produce the development of a VO 2SC during high intensity cycling exercise.
However, several investigators have examined the influence of prior exercise on the VO 2 response during subsequent exercise. [START_REF] Burnley | Effects of prior heavy exercise on phase II pulmonary oxygen uptake kinetics during heavy exercise[END_REF][START_REF] Gerbino | Effects of prior exercise on pulmonary gasexchange kinetics during high-intensity exercise in humans[END_REF][START_REF] Scheuermann | The slow component of O2 uptake is not accompanied by changes in muscle EMG during repeated bouts of heavy exercise in humans[END_REF] Burnley et al [START_REF] Burnley | Effects of prior heavy exercise on phase II pulmonary oxygen uptake kinetics during heavy exercise[END_REF] showed that the magnitude of VO 2 kinetics during heavy exercise was affected only by a prior bout of heavy exercise. On the basis of similar results, it has been suggested that, during successive bouts of heavy exercise, muscle perfusion and/or O 2 off loading at the muscle may be improved during the second bout of exercise. 25 26 In addition, changes in the VO 2 response may be accentuated by the manipulation of cadence during an isolated cycling bout. [START_REF] Gotshall | Cycling cadence alters exercise hemodynamics[END_REF] Gotshall et al [START_REF] Gotshall | Cycling cadence alters exercise hemodynamics[END_REF] showed an increase in muscle blood flow and a decrease in systemic vascular resistance with increasing cadence (from 70 to 110 rpm). These previous experimental designs, based on the characteristics of combined and isolated exercises, are similar to the current one and suggest that cadence selection may affect blood flow and hence the VO 2 response during a subsequent run. For instance, the increased muscle blood flow at high cycling cadence [START_REF] Gotshall | Cycling cadence alters exercise hemodynamics[END_REF] during a prior cycle bout could attenuate the magnitude of VO 2SC during subsequent running.
In contrast with these earlier studies, the VO 2SC values of this investigation were not significantly different between the T max sessions during the first 10 minutes of exercise. This was observed despite differences in metabolic load and cadence selection during the previous cycling bouts. These results indicate that the adoption of FCC−20% is associated with a reduction in metabolic load with exercise duration, but does not affect the VO 2SC during the subsequent run. For instance, the selection of FCC−20% is associated with a significant reduction in VO 2 (−5.3%), VE (−18.2%), HR (−6.8%), and [La−] (−55.5%) during the final 10 minutes of cycling compared with FCC+20%, without any significant changes in VO 2SC during subsequent running between the two conditions. This suggests that the chosen cadences do not affect the VO 2 responses during the subsequent run and also that the occurrence of a VO 2SC does not contribute to the differences in T max found in this study. This is consistent with previous research on trained subjects. [START_REF] Billat | The VO2 slow component for severe exercise depends on type of exercise and is not correlated with time to fatigue[END_REF]

Muscular and stride pattern hypotheses
Although we conducted no specific analysis of muscular parameters, an attractive hypothesis to explain the differences in T max between conditions is that they are due to differences in the muscular activity or fatigue state during cycle-run sessions. Muscular contractions differ during cycling and running. Cycling is characterised by longer phases of concentric muscular contraction, whereas running involves successive phases of eccentric-concentric muscular action. [START_REF] Bijker | Differences in leg muscle activity during running and cycling in humans[END_REF] Muscle activity during different modes of contraction can be assessed from the variation in the electromyographic signal. In integrated electromyography based investigations, it has been shown that muscles such as the gastrocnemius, soleus, and vastus lateralis are substantially activated during running. 14 17 28 Any alterations in the contractile capability of these muscles may have affected the ability to complete a longer T max during the cycle-run sessions in this study.
Furthermore, many studies have reported substantial changes in muscular activity during isolated cycling exercises, especially when cadence is increased or decreased. [START_REF] Ericson | Muscular activity during ergometer cycling[END_REF][START_REF] Marsh | The relationship between cadence and lower extremity EMG in cyclists and noncyclists[END_REF][START_REF] Neptune | The effect of pedaling rate on coordination in cycling[END_REF][START_REF] Takaishi | Optimal pedaling rate estimated from neuromuscular fatigue for cyclists[END_REF] With respect to the cycle-run combination, the manipulation of cadence may accentuate modifications in muscular activity during cycling and influence the level of fatigue during a subsequent run. Marsh and Martin [START_REF] Marsh | The relationship between cadence and lower extremity EMG in cyclists and noncyclists[END_REF] showed a linear increase in electromyographic activity of the gastrocnemius and vastus lateralis muscles when cadences increased from 50 to 110 rpm. Although activity of the gastrocnemius muscle has been shown to increase considerably more than the soleus muscle as cadence is increased, 30 31 Ericson et al [START_REF] Ericson | Muscular activity during ergometer cycling[END_REF] have also reported a significant increase in soleus muscle activity with the selection of high cadences. These results from isolated cycling exercises conducted in a state of non-fatigue suggest that, during the last 10 minutes of the cycling bout of our study, there was greater recruitment of the vastus lateralis, gastrocnemius, and soleus muscles after cycling at higher cadences. This may have resulted in an increase in fatigue of these muscles, which are substantially activated during subsequent running. In contrast, the lower activity of the vastus lateralis, gastrocnemius, and soleus muscles after the FCC−20% condition may have reduced the fatigue experienced during cycling and resulted in improved utilisation of these muscles during the subsequent run. This may have contributed to the observed increase in T max for this condition. Nevertheless, Lepers et al 10 suggested that the neuromuscular fatigue observed after 30 minutes of cycling was attributable to both central and peripheral factors but was not influenced by the pedalling rate in the range FCC±20%. In this earlier study, the selected power outputs (>300 W) for all cadence conditions were significantly higher than those used in our experiment (260-265 W). The choice of high power outputs during cycling 10 may result in attenuation of the differentiated effects of extreme pedalling cadences on the development of specific neuromuscular fatigue. Further research is required to analyse the relation between various pedalling strategies and muscular recruitment patterns specific to a short cycle-run session (<1 hour).

What this study adds
This study shows that the choice of a low cadence during the final minutes of cycling improves subsequent running time to fatigue.
The analysis of movement patterns during the cycle-run sessions also indicates that possible changes in muscle activity may be associated with modifications in kinematic variables. [START_REF] Hausswirth | Relationships between mechanics and energy cost of running at the end of a triathlon and a marathon[END_REF] Hausswirth et al 3 reported significant variations in stride rate-stride length combination during a run session subsequent to a cycling bout compared with an isolated run. These modifications were attributed to local muscle fatigue from the preceding cycle. In the present study, the absence of significant differences in stride pattern during running (fig 2), regardless of the prior cadence selection, indicates that there is no relation between stride pattern and running time to fatigue. These results are consistent with previous results from a laboratory setting where the running speed was fixed on a treadmill after various cadence selections. [START_REF] Vercruyssen | Influence of cycling cadence on subsequent running performance in triathletes[END_REF] In contrast, in field based investigations, in which the running speed and stride pattern were freely selected by the athletes, Gottshall and Palmer 2 found that cycling at 109 rpm, compared with 71 and 90 rpm, during a 30 minute cycle session resulted in an increased stride rate and running speed during a 3200 m track session. However, these results are in contrast with those of Bernard et al [START_REF] Bernard | Effect of cycling cadence on subsequent 3-km running performance in well-trained triathletes[END_REF] indicating an effect of the prior cadence on stride pattern only during the first 500 m and not during the overall 3000 m run. The relation between stride pattern, cycling cadence, and running performance is not clear. Further investigation is required to elucidate the mechanisms that affect running performance during a cycle-run session.
What is already known on this topic
Various characteristics of cycle-run sessions in triathletes have been studied, with particular focus on physiological and biomechanical aspects during the subsequent run. During a running session after cycling, a substantial increase in energy cost, minute ventilation, and heart rate, and differences in muscle blood flow have been observed compared with an isolated run. Moreover, variations in running kinematics such as stride rate, segmental angular position, and joint angle have been shown after a cycle bout.

In conclusion, this study shows that the choice of a low cadence during the final minutes of cycling improves subsequent running time to fatigue. The findings suggest that metabolic responses related to VO 2 do not explain the differences in running time to fatigue. However, the effect of cadence selection during the final minutes of cycling on muscular activity requires further investigation. From a practical standpoint, the strategy to adopt a low cadence before running, resulting in a lower metabolic load, may be beneficial during a sprint distance triathlon.
Figure 1 Running time to fatigue after the selection of various cycling cadences. Values are expressed as mean (SD). *Significantly different from the other sessions, p<0.05.
Figure 2 Variations in stride rate during the running time to fatigue after the selection of various cycling cadences. T, stride rate obtained at T max ; T−1, stride rate obtained at T max − 1 min; T−2, stride rate obtained at T max − 2 min.
Table 2 Cadence and power output values during the three cycling bouts at different time periods: 3-5, 20-22, 28-30 min
Cycling bout Cadence (rpm) Power output (W)
FCC−20%
Values are mean (SD).
*Significantly different from the first 20 minutes, p<0.05.
†Significantly different from the other conditions at the same time period, p<0.05.
Table 3 Variations in mean oxygen uptake (VO 2 ), minute ventilation (VE), heart rate (HR), and blood lactate concentration ([La−]) during the three cycling bouts, at different time periods: 3-5, 20-22, 28-30 min
Values are expressed as mean (SD).
*Significantly different from the 3-5 min interval, p<0.05.
†Significantly different from the 20-22 min interval, p<0.05.
‡Significantly different from FCC−20% at the same period, p<0.05.
§Significantly different from FCC+20% at the same period, p<0.05.
Table 4 Variations in mean oxygen uptake (VO 2 ), ΔVO 2 (10-3 min), minute ventilation (VE), heart rate (HR), and blood lactate concentration ([La−]) during the three running sessions performed after cycling
Values are expressed as mean (SD).
*Significantly different between the 3rd and 10th minute of exercise, p<0.05.
ACKNOWLEDGEMENTS
We gratefully acknowledge all the triathletes who took part in the experiment for their great cooperation and motivation.

Authors' affiliations
F Vercruyssen, J Brisswalter, Department of Sport Ergonomics and Performance, University of Toulon-Var, BP 132, 83957 La Garde cedex, France
R Suriano, D Bishop, School of Human Movement and Exercise Science, University of Western Australia, Crawley, WA 6009, Australia
C Hausswirth, Laboratory of Physiology and Biomechanics, National Institute of Sport and Physical Education, 11, avenue du Tremblay, 75012 Paris, France
Competing interests: none declared |
00176025 | en | ["shs.eco", "sde.es", "scco.psyc"] | 2024/03/05 22:32:13 | 2008 | https://shs.hal.science/halshs-00176025/file/Flachaire_Hollard_06c.pdf
Emmanuel Flachaire
Guillaume Hollard
email: hollard@univ-mlv.fr
Individual sensitivity to framing effects
Keywords: starting-point bias, wta-wtp divergence, social representation JEL Classification: C81, C90, H43, Q51
Introduction
It has long been recognized that the design of a survey may influence respondents' answers. In the particular case in which respondents have to estimate numerical values, this implies that two different surveys may lead to two different valuations of the same object. Such a variation of answers, induced by substantively irrelevant changes in the survey design, is called a framing effect. Consequently, surveys are sometimes viewed with suspicion when used to provide economic values, since framing effects may alter the quality of survey-based valuation. The existence of these effects is well documented [START_REF] Levin | All frames are created equal : a typology and critical analysis of framing effects[END_REF]. However, the extent to which they may vary between individuals has received little attention. Are some individuals less sensitive to framing effects than others? How can they be detected? These are the questions addressed in this paper.
Our basic idea is to use the theory of social representation to assign a new variable to each individual. This variable serves as a proxy for the individual's sensitivity to framing effects. Based on this representation variable, we isolate two types of individuals. The first type is shown to be less sensitive to framing effects than the other. We examine two framing effects which are known to have a dramatic effect on valuation, namely starting-point bias and the divergence between willingness to pay (WTP) and willingness to accept (WTA). The results suggest that taking into account heterogeneous sensitivity to framing effects is successful in limiting the impact of biases. Furthermore, they show that the constructed representation variable is not correlated with any of the usual variables. Thus, using the representation variable allows researchers to gather relevant new information.
The paper is organized as follows. Section 2 details how social representation can be used to design a new individual variable. Section 3 presents study of the problem of starting-point bias in contingent valuation surveys. Section 4 deals with WTA and WTP divergence. Section 5 provides a discussion, and Section 6 concludes.
Representation as a source of heterogeneity
Representations are defined in a broad sense by social psychologists as a form of knowledge that serves as a basis for perceiving and interpreting reality, as well as for guiding one's behavior. Representation could concern a specific object, or a more general notion of social interest1 . The founding work [START_REF] Moscovici | La psychanalyse, son image et son public[END_REF] explores the representation of psychoanalysis. In the following decades, various topics have been investigated: representation of different cities, madness, remarkable places, hunting, AIDS, among others (see the different articles presented in Farr and[START_REF] Farr | Social representations[END_REF][START_REF] Moscovici | Psychologie Sociale[END_REF]. The theory of representation has proved useful in the study of economic subjects such as saving and debt [START_REF] Viaud | A positional and representational analysis of consumption: Households when facing debt and credit[END_REF], or the electronic purse [START_REF] Penz | It's practical but no more controllable": Social representation of the electronic purse in Austria[END_REF].
The basic structure of a social representation is composed of a central core and of peripheral elements [START_REF] Abric | Central system, peripheral system : their fonctions and roles in the dynamics of social representation[END_REF]. The central core contains the most obvious elements commonly associated with the object. They can be viewed as stereotypes or common sense. Those elements are not subject to any dispute, as everyone agrees that they are related to the object described. The core in itself does not contain much information and usually is not a surprise to an observer. The peripheral elements, however, contain fewer consensual elements and are less obvious. They represent potential changes in the social representation and indicate new elements that may in the near future become part of the core. They are, somehow, rivals of the core elements.
There are several ways to explore the composition of social representations of particular subjects (namely, ethnography, interviews, focus-groups, the content analysis of the media, questionnaires and experiments). In what follows, we will focus on a particular technique, which is the statistical analysis of word associations. These word associations are gathered through answers to an open-ended question such as "What are the words that come to mind when thinking of [the object]?" or "What does [the object] evoke to you?". Thus, the purpose of such questions is to investigate the words being spontaneously associated with a given object. The next step is thus to determine the core of the social representation, on the basis of those individual answers. Once the core has been found, we sort individuals according to those who refer to the core of the social representation and those who don't. This "aller-retour" between social and individual representations can be compared to an election system where individual opinions are aggregated, using majority voting. Once individuals have voted, it is possible to recognize who belongs to the majority and who doesn't. All in all, the task is to transform representations (i.e. lists of words) into a quantitative and individual variable.
The method consists of four steps, each of which is illustrated with an example, namely the Camargue representation 2 . The Camargue is a major wetland in the delta of the Rhône (south of France) covering 75.000 hectares. Of exceptional biological diversity, it hosts many fragile ecosystems and is inhabited by numerous species. The survey was administered to 218 visitors to the Camargue at the end of their visit 3 . Note that the respondents had therefore spent some time in the Camargue.
Step 1: The data: collecting lists of words
The usual way to collect information on representation is by open-ended questions. More precisely, we use a question such as: "What does [the object] evoke to you?" or "What are the words that come to mind when thinking of [the object]?". Individuals are expected to provide a list of words or expressions. Thus, the data take the form of ordered lists of words. The set of answers typically displays a large number of different words, as each individual provides different answers. Indeed, a great variety of words can be used to describe a given object [START_REF] Vergès | Approche du noyau central: propriétés quantitatives et structurales[END_REF][START_REF] Wagner | Theory and method of social representations[END_REF].
Application: In our questionnaire, respondents were asked: "What words come to your mind when you think about the Camargue?" More than 300 different words or expressions have been obtained.
Step 2: Classification: choosing a typology for words
An individual representation is captured through an ordered list of words. The high number of different words (say 100 to 500) imposes a categorization, i.e. putting together words that are "close" enough. Choosing a particular categorization thus consists in defining a particular typology for the set of words. Empirical applications typically use six to ten categories, which are chosen so as to form homogeneous categories. This step is the only one which leaves the researcher with some degree of freedom, since the notion of proximity is not straightforward.
After categorization, each individual's answer is transformed into an ordered list of categories (rather than a list of words). At the end of this categorization, we are left with individual representations containing doubles, that is, with several attributes belonging to the same category. To obtain transitive individual representations, we suppress the lower-ranking citations belonging to the same category. Such treatment eliminates some information. In our case, the length of individual answers decreased by 20%.
After treatment (i.e. categorization + suppression of doubles), individual representations boil down to an individual ranking of the set of categories.
Application: A basic categorization by frame of reference leads to eight different categories. For instance, the first category is called Fauna and Flora. It contains all the attributes which refer to the animals and local vegetation of the Camargue (fauna, 62 citations, birds, 44, flora, 44, bulls, 37, horses, 53, flamingos, 36, . . . ). The other categories are Landscape, Disorientation, Isolation, Preservation, Human presence and Coast. A particular case is the category Nature which only contains the word nature which can hardly fall into any of the previous categories. There is a ninth category which clusters all attributes which do not refer to any of the categories mentioned above4 .
Step 3: Finding the core
The simplest way of determining the core element is to classify the different categories according to their citation rate. The core is thus composed of the category that is most widely used by individuals. This is in accordance with the definition of the core as the most consensual and widely accepted elements associated with a given object.
Application: After consolidating the data in step 2, we were left with 218 ordered lists of categories. We computed the number of appearances for each category. The results are presented in Table 1.

Step 4: Sorting individuals
We choose to isolate individuals who do not mention the top element of the social representation (i.e. the core of the social representation). This leads to a breakdown of individuals into two sub-samples: one which contains the individuals who used the core element in their representation, and one which contains the individuals who did not.
The main reason for this is that it is remarkable not to mention any of the core elements. It is thus assumed that not mentioning the core is indeed significant. Since it does not conform to most common practice, this group is often referred to as "minority". The other group, which mentions the core element, is referred to as "mainstream".
Application: In the case of the Camargue, the subjects were interviewed at the end of their visit and had seen a lot of animals and plants (they could even see some of them while being interviewed). A small minority of individuals did not refer to Fauna and Flora (18% of the total population, see Table 1).
Given these four steps, we are left with two categories of individuals. This leads to a breakdown of individuals into two sub-samples: those who refer to the core of the social representation (mainstream) and the others (minority). We can define a mainstream dummy variable, which can be used to control the sensitivity to framing effects. To do so, existing models have to be adapted. In the following, we use this new variable with empirical data, considering two standard framing effects, starting point bias and WTA-WTP divergence.
Starting-point bias
In contingent valuation, respondents are asked if they are willing to pay a fixed sum of money for a given policy to be implemented. This discrete choice format is recommended by the NOAA panel over other methods (a panel of experts that sets guidelines to run evaluation surveys, see [START_REF] Arrow | Report of the NOAA panel on contingent valuation[END_REF]. This "take it or leave it" format mimics a market situation which individuals face in everyday market transactions, and it is incentive-compatible. However, a major drawback is that it leads to a qualitative dependent variable (the respondent answers yes or no), which reveals little about the individual's willingness-to-pay (WTP). To gather more information on respondents' WTP, [START_REF] Hanemann | Some issues in continuous and discrete response contingent valuation studies[END_REF] and [START_REF] Carson | Three essays on contingent valuation (welfare economics, non-market goods, water quality[END_REF] proposed adding a follow-up discrete choice question to improve the efficiency of discrete choice questionnaires. This mechanism is known as the double bounded model. It basically consists of proposing a second bid to the respondent, greater than the first bid, if the respondent's first answer is yes, and lower otherwise.
Several studies have found that estimates of the mean of willingness-to-pay are substantially different from estimates based on the first question alone. This is the so-called starting-point bias [START_REF] Herriges | Starting point bias in dichotomous choice valuation with follow-up questioning[END_REF], which can be seen as a particular case of the anchoring bias put forward by [START_REF] Tversky | Judgment under uncertainty: Heuristics and biases[END_REF] 6 . Different models have been proposed in the literature to control for such undesirable effects. However, empirical results suggest that efficiency gains obtained with follow-up questions are lost relative to models using first questions only. All these models assume that all individuals are equally sensitive to starting-point bias. In this section, we consider that some individuals may be more sensitive than others to starting-point bias. We develop a model that handles starting-point bias with heterogeneity in two groups. An application shows that, once this individual sensitivity is taken into account, starting-point bias can be controlled for with efficiency gains.
Model
Different models are proposed in the literature to control for starting-point bias in double-bounded models. All these models assume that the second answer is sensitive to the first bid offer, in the sense that a prior willingness-to-pay W i is used by the respondent i to respond to the first bid offer, b 1i , and an updated willingness-to-pay W ′ i is used to respond to the second bid, b 2i . Each model leads to a specific definition of W ′ i . Whitehead ( 2002) proposes a general model, combining several effects, as follows:
W ′ i = W i + γ (b 1i -W i ) + δ, (1)
where γ and δ are two parameters. If δ = 0 this model corresponds to the Anchoring model proposed by [START_REF] Herriges | Starting point bias in dichotomous choice valuation with follow-up questioning[END_REF], where the respondents combine their prior WTP with the value provided by the first bid, such that the first bid offer plays the role of an anchor. The parameter γ measures the strength of the anchoring effect (0 ≤ γ ≤ 1). If γ = 0 this model corresponds to the Shift model proposed by [START_REF] Alberini | Modeling response incentive effects in dichotomous choice valuation data[END_REF], where the WTP systematically shifts between the two answers. The first bid offer is thus interpreted as providing information about the cost or the quality of the object: a respondent can interpret a higher bid offer as paying more for the same object and a lower bid offer as paying less for a lower quality object. The model (1) combines both Anchoring and Shift effects.
In model (1), all individuals are supposed to be sensitive to the first bid offer in the same manner: the two parameters γ and δ are constant across individuals. If only some respondents are influenced by the first bid (i.e. they combine their prior WTP with the first bid), while the others are not, individual heterogeneity is present. It is well known that econometric estimation of regression models can be seriously misleading if such heterogeneity is not taken into account. Let us assume that we can divide respondents into two distinct groups: one group subject to starting-point bias and another that is not. We define a Heterogeneous model as
W'_i = W_i                                  if I_i = 0
W'_i = W_i + γ (b_1i − W_i) + δ             if I_i = 1        (2)
where I i is a dummy variable which is equal to 1 when individual i belongs to one group and 0 if he belongs to the other group. Note that, if I i = 1 for all respondents, this model reduces to the Anchoring & Shift model ; if I i = 0 for all respondents, it reduces to the standard Double-bounded model.
These models can be estimated with random effect probit models, taking into account the dynamic aspect of follow-up questions [START_REF] Cameron | Estimation using contingent valuation data from a dichotomous choice with follow-up questionnaire[END_REF][START_REF] Alberini | Modeling response incentive effects in dichotomous choice valuation data[END_REF], Whitehead 2004). Estimation requires simulated methods and a formal definition of the probability that the individual i answers yes to the j th question, j = 1, 2. For the heterogenous model (2), we calculate this probability, which is equal to:
P(W_ji > b_ji) = Φ( X_i α − b_ji/σ + θ (b_1i − b_ji) I_i D_j + λ I_i D_j )        (3)
where D_1 = 0 and D_2 = 1, α = β/σ, θ = γ/(σ(1 − γ)) and λ = δ/(σ(1 − γ)). Based on this equation, the parameters are interrelated according to:
β = ασ,   γ = θσ/(1 + θσ)   and   δ = λσ(1 − γ).        (4)
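For completeness, here is a short sketch of how (3) and (4) follow from model (2). It assumes the usual latent-variable specification W_i = X_i β + ε_i with ε_i ~ N(0, σ²), which is implicit in the random effect probit estimation described below (the specification itself is our assumption, not restated in the text). For a respondent with I_i = 1 answering the second question (D_2 = 1):

P(W'_i > b_2i) = P( (1 − γ) W_i + γ b_1i + δ > b_2i )
             = Φ( [X_i β − (b_2i − γ b_1i − δ)/(1 − γ)] / σ )
             = Φ( X_i α − b_2i/σ + [γ/(σ(1 − γ))] (b_1i − b_2i) + δ/(σ(1 − γ)) ),

which is exactly (3) with θ = γ/(σ(1 − γ)) and λ = δ/(σ(1 − γ)); inverting these definitions gives (4). For the first question, or when I_i = 0, the anchoring and shift terms vanish and the probability reduces to Φ(X_i α − b_ji/σ).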
Implementation of the Double-bounded model is obtained with δ = γ = 0, which corresponds to θ = λ = 0 in (3). Implementation of the Anchoring & Shift model is obtained with I i = 1 for i = 1, . . . , n. For a more detailed discussion on the estimation of a random effect probit model, see [START_REF] Alberini | Modeling response incentive effects in dichotomous choice valuation data[END_REF]Whitehead (2004).
Results
We use the dummy variable mainstream, defined in the previous section, and the Camargue survey to conduct an application. In practice, a value of particular interest is the estimate of the WTP mean. Once a model has been estimated, we can obtain fitted values Ŵ_i, for i = 1, ..., n, from which we can calculate the estimate of the mean of WTP: μ̂ = (1/n) Σ_{i=1}^{n} Ŵ_i. We estimate the mean values of WTP from a linear model (McFadden and Leonard 1993) and compute the confidence intervals by simulation with the Krinsky and Robb procedure (see Haab and McConnell 2003, ch. 4). The Single-bounded and Double-bounded models give very different WTP means: 113.5 and 89.8. Their confidence intervals do not overlap. Such inconsistent results suggest that follow-up questions generate starting-point bias in the Double-bounded model.
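As a reference for how these numbers are obtained, a compact sketch (the linear specification and the number of draws below are our own assumptions, not stated in the text): with a linear WTP model, Ŵ_i = X_i β̂ and

μ̂ = (1/n) Σ_i X_i β̂ = x̄'β̂,    μ^(r) = x̄'β^(r)  with  β^(r) ~ N(β̂, V̂(β̂)),  r = 1, ..., R,

where the Krinsky and Robb confidence interval is given by the empirical 2.5% and 97.5% quantiles of the R simulated values μ^(r).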
To control for starting-point bias, we estimate an Anchoring & Shift model. The WTP mean is equal to 158.5. It is still very different from the WTP mean obtained from the Single-bounded model (113.5); however, the two confidence intervals overlap slightly. Note that the confidence interval is very wide and the gain in precision obtained by using follow-up questions is lost, a result suggested by [START_REF] Herriges | Starting point bias in dichotomous choice valuation with follow-up questioning[END_REF].
The Heterogeneous model gives a WTP mean of 110.1. It is very close to the 113.5 obtained from the Single-bounded model. Moreover, the confidence interval obtained from the Heterogeneous model ([99.0; 125.2]) is entirely contained in the confidence interval obtained from the Single-bounded model ([98.1; 138.2]), and thus is narrower. In other words, the Heterogeneous model provides results that are consistent with the Single-bounded model, and is more precise. Table 3 presents the full econometric results, that is, all the parameter estimates, with the standard errors given in italics. It is clear from this table that using follow-up questions (Double-bounded, Anchoring & Shift and Heterogeneous models) provides significantly reduced standard errors, compared to using first answers only (Single-bounded model). Moreover, the precision of the parameter estimates of the regressors is quite similar for the different models using follow-up questions. A likelihood-ratio (LR) test of the null hypothesis γ = 0 shows that the anchoring parameter γ is statistically significant, while the shift parameter δ is not. This suggests that, while the minority group is not sensitive to starting-point bias (γ = 0, see equation 2), the mainstream group is significantly subject to such an effect (γ = 0.26, with 0 ≤ γ ≤ 1).
Finally, the Heterogeneous model performs better than the others: it provides consistent results with the Single-bounded model and greatly improves the precision of the estimation. This suggests that taking into account an individual sensitivity to starting-point bias does indeed matter.
WTA and WTP divergence
Over the past twenty years, a large pattern of empirical evidence has accumulated suggesting a significant divergence between willingness-to-pay (WTP) measures of value, where individuals have to pay for a given object or policy, and willingness-to-accept (WTA) measures of value, where individuals sell the same object or receive money to compensate for the suppression of the same policy [START_REF] Brookshire | Measuring the value of a public good: an empirical comparaison of elicitation procedures[END_REF]. Economic theory suggests that, with small income effects, WTP and WTA should be equivalent. Results from a meta-analysis however prove that the divergence, measured by the ratio WTA/WTP, is often high (i.e. the ratio largely exceeds one) [START_REF] Sayman | Effects of study design characteristics on the wtawtp disparity: a meta analytical framework[END_REF]. Since valuation measures are used for the study of many public-policy questions, these results raise questions about which procedure to use in practice.
The divergence is frequent but can be controlled for. The existence of substitutes has been proved to play an important role [START_REF] Shogren | Resolving differences in willingness to pay and willingness to accept[END_REF]). In the case of private goods the divergence disappears if subjects are recruited among experienced dealers [START_REF] List | Does market experience eliminate market anomalies?[END_REF][START_REF] List | Neoclassical theory versus prospect theory: evidence from the marketplace[END_REF]. This suggests that individuals may learn to avoid the divergence. This intuition is confirmed by the design of an experimental protocol that eliminates the divergence [START_REF] Plott | The willingness to pay/willingness to accept gap, the endowment effect, subject misconceptions and experimental procedures for eliciting valuations[END_REF]. The basic ingredients of this protocol are the existence of training rounds and the use of incentive-compatible mechanisms. Taken together, the previous results suggest that subjects may learn to overcome the divergence within a short period of time. These results, however, apply to private goods. If we consider the valuation of some public policy, the time between the survey and the implementation of the policy is too long to implement training rounds. This is the reason why being able to detect subjects who are prone to framing effects is of particular interest for contingent valuations.
Survey
To measure the discrepancy between WTA and WTP for public goods, we needed to find a public good that can be sold or withdrawn, or bought or provided. Such public goods are not the most common ones. However, we were lucky enough to be offered a golden opportunity. The University of Marne la Vallée (France) was considering changing its Saturday morning classes policy. The growing number of students led to an increasing number of classes on Saturday morning due to the lack of available classrooms. Students started to protest and asked for a clarification of the policy regarding classes on Saturday morning. Two options were considered. Some students were told they would pay lower fees if they accepted classes scheduled on Saturday. The reason for this was that the university could then rent the extra classroom during the week to movie companies to use for filming on location. Other students were offered the option of avoiding Saturday classes by paying higher fees, as the university would have to rent an extra building. So, the trade-off was between paying more to avoid Saturday classes and being paid to attend them. Note that, even though the survey concerned students, it was used to take a real decision. Thus, answers to this particular survey had an impact on the respondents' welfare.
We conducted a contingent valuation survey to evaluate both the willingness to pay to avoid classes on Saturday and the willingness to accept classes on Saturday morning. The survey was given to 359 students at the University of Marne La Vallée: 184 individuals were given the WTP version, 175 the WTA one (subjects were randomly assigned to one version).
Heterogeneity
Gathering information on social representations using an additional open-ended question leads to our four-step methodology. We propose here to simplify this treatment by running this methodology on a sample of subjects, at a pre-test stage, to identify the items that capture most of the opposition between mainstream and minority. This allows us to detect mainstream and minority using a simple discrete choice question. This greatly simplified the exploitation of the data. While the use of an open-ended question implies a specific treatment (categorization and so on), the use of a simple question does away with the need for any treatment.
Prior to the survey, we elicited the representations of "classes on Saturday morning". Quite surprisingly, the survey revealed two groups that differ more broadly in their vision of university than in their vision of Saturday morning (we were expecting more specific reasons, such as the opportunity to have a job on Saturday morning or religious reasons). For the mainstream, the main goal of their studies is to get diplomas, while the minority consider that the most important thing to get from university is skills. Following our method, we then decided to include in the contingent valuation survey an additional question labeled as follows:
In your opinion, the main purpose of your studies is to get:
1. diplomas
2. skills
The two items were presented in a random order. As expected, a large majority, 71% of the 359 respondents, chose the first option (diplomas); only a minority chose the second option (skills). We now propose to explore the impact of this distinction on the WTA-WTP divergence.

Results

If we neglect the distinction among respondents, the WTA and WTP means are very different, respectively equal to 68.7 and 15.3. The WTA/WTP ratio largely exceeds one and is equal to 4.5. Then, we calculate the WTA and WTP means separately for individuals who answered the question with diplomas and with skills. When we consider the mainstream group (Diplomas), the discrepancy between WTA and WTP is wide and the ratio is higher (5.8). However, when we consider the minority group (Skills), the discrepancy and the ratio (2.7) are significantly reduced. Students from the minority group are less sensitive to the WTA-WTP divergence. Even if the discrepancy is not completely removed, the mainstream variable allows us to separate the whole population into two groups that differ widely in their sensitivity to framing effects, since the ratio falls from 5.8 for the mainstream group to 2.7 for the minority.
Further results and discussion
The previous results show that it is possible to extract information on individual representation for a given object, which can be successfully used as a good proxy for the individual sensibility to framing effects. Evidence was presented for two distinct sets of data and two different well-known framing effects. So far, we have basically found a statistically significant relationship between the mainstream variable and the sensitivity to framing effects. The remaining question is thus why does this occur? The first section proves that the representation variable conveys new information, which is not related to other individual characteristics. The second section proposes an interpretation of the link between social representation and framing effects. General considerations on social representation are given in a third section. The last section deals with possible improvements to the proposed approach.
Does representation provide new information ?
Here, we check if the dummy variable, based on social representation, is correlated with some other individual characteristics. First consider the Camargue survey. Table 5 shows the Pearson correlation coefficient ρ between the mainstream dummy variable and the regressors included in the regression model. A P-value is given in parentheses for the null hypothesis ρ = 0. We can see that in all cases the null cannot be rejected (all the P-values are greater than 0.05). This suggests that the dummy variable is not correlated with the regressors.
Secondly, consider the Saturday classes survey. Table 6 shows the Pearson correlation coefficient ρ between the Diplomas/Skills dummy variable and other questions from the questionnaire. A P-value is given in parentheses for the null hypothesis ρ = 0. We can see that in all cases the null cannot be rejected (all the P-values are greater than 0.05). Again, this suggests that the dummy variable is not correlated with the regressors. These results suggest that the information obtained from individual representation cannot be captured by the use of standard individual characteristics. In this sense, it is new information, not related to standard questions in surveys.
From representation to framing effects
So far, we have concentrated on the most technical aspects, based on statistical evidence. Here, we propose an interpretation about why representations can be linked to framing effects. This interpretation relies on three distinct arguments. The first two are nothing more than an application of general ideas developed in psychology and sociology. The key argument is thus the third one.
1. Our use of social representation is very classical on some points and more original on others. Identifying the core and peripheral elements of a social representation on the basis of a statistical analysis of word associations is a classic in social psychology. It is also admitted that peripheral elements are identified by a minority. Our approach thus consists in pooling all minorities in one group.
2. The next step of our reasoning is to assume that these individuals are conscious of not being members of the mainstream, while others may just follow the crowd with no clear consciousness of doing so. The idea that members of the minority have a more accurate perception of their identity is generally accepted in sociology. Thus, we associate a classical sociological argument with a more psychological one.
3. The core idea of our work is that the minority group on a particular subject has a stronger opinion, i.e. a more personal or elaborate point of view 7 . Thus, the minority is more likely to resist outside influences and is therefore less sensitive to framing effects.
Representation as a marker of past experience
If you have never coped with an object or a situation in the past, you are very likely to handle it at first glance in a very predictable way, using common sense or stereotypes. This is what the core represents. But, if for any reason, you have been confronted with this problem in the past, it is very likely that you start recomposing your representation of this object or situation (you don't have the same representation of Paris once you've been there). According to that view, non-mainstream representations are then a consequence of past experiences. Representations can thus be thought of as a fast and frugal way to capture information about past experiences.
If we now concentrate on the problem of eliciting preferences (say for public decision making), representations allow us to isolate individuals that have somehow "invested" in their own preference. We expect them to hold a stronger opinion and have more stable preferences, and thus be less sensitive to framing effects. Such a distinction is similar to the debate on the origin of individual preferences [START_REF] Slovic | The construction of preference[END_REF]. Members of the minority are assumed to be individuals that have set their preferences, while some members of the mainstream population are assumed to construct their preferences through the elicitation process. Our results suggest how to identify individuals that have set their preferences before the elicitation process begins. The existence of such a population is not a surprise since in any experiment that intends to detect biases, a small, but significant, part of the subjects do not exhibit pathological preferences (among many references, see the experiments in [START_REF] Kahneman | Choices, Values and Frames[END_REF]). This paper is a first step towards detecting such individuals.
Criticism, improvements and further research
The proposed method has reached the goal of proving that a substantial heterogeneity relative to the sensitivity to framing effects exists, even in socially very homogeneous populations such as students. The agenda for further research includes the design of more subtle tools to classify individuals. Here, we are able to isolate a population that proves to be much less sensitive to framing effects than the residual population. One can think of a more continuous variable that measures the sensitivity to framing effects.
The proposed methodology is open to criticism at two distinct levels. As we are exploiting an open-ended question, a choice has to be made on how to categorize the answers. A good classification requires the creation of homogeneous categories. Even though our classification 8 tends to demonstrate the presence of individual sensitivity to framing effects, another choice could be considered. A second criticism may concern the way we construct the two subpopulations on the basis of the social representation. Our choice is to put respondents who cite the most cited category in a mainstream group, and the others in the minority group. Other choices and alternative splits (with more than two groups) could be used. Finally, our dichotomous split has done a good job as a first step, but further research may help us to better understand the determinants of individual sensitivity to framing effects.
Conclusion
This paper is a first step towards approaching heterogeneity relative to the sensitivity to framing effects. A simple tool is designed to detect a group of individuals that is proved to be far less sensitive to framing effects than the reference population. This approach is effective on two distinct sets of data concerning different framing effects. This raises important questions at the normative level. How should values be set within heterogeneous groups ? Should the values be computed using only the values of those detected as not sensitive to framing effects ?
Table 1: Citation rate and rank of each category

Category          Citation Rate   Rank
Fauna-Flora       82 %            1
Landscape         74 %            2
Isolation         58 %            3
Preservation      51 %            4
Human presence    34 %            5
Nature            33 %            6
Disorientation    32 %            7
Coast             26 %            8

The top element, Fauna-Flora, is used by a large number of respondents, 82%. Only a minority do not use any element of this category. This is not a big surprise since the main interest of the Camargue (as presented in all related commercial publications, or represented on postcards) is the Fauna and Flora category 5.
Table 2: Estimation of the mean of willingness-to-pay

Table 2 presents estimates of the WTP mean, obtained from the Double-bounded, Anchoring & Shift and Heterogeneous models. We include estimates obtained from the Single-bounded model, taking into account the first answers only. The analysis is based on two criteria: if the mean of WTP is consistent (consistency) and if the standard errors are more precise (efficiency) compared with those obtained from the Single-bounded model.

Model               WTP mean   conf. interval    consistency   efficiency
Single-bounded      113.5      [98.1; 138.2]     -             -
Double-bounded      89.8       [84.4; 96.5]      no            yes
Anchoring & Shift   158.5      [122.6; 210.7]    yes           no
Heterogeneous       110.1      [99.0; 125.2]     yes           yes
Table 3: Random effects probit models (standard errors in italics)
Table 4: WTA/WTP divergence

Table 4 shows the WTP and WTA means for all the students (359: 175 for the WTA version and 184 for the WTP version) and for the two sub-groups (those who answer Diplomas and those who answer Skills). The last line presents the WTA/WTP ratio.

          All     Diplomas   Skills
WTA       68.7    71.9       62.5
WTP       15.3    12.5       23.3
Ratio     4.5     5.8        2.7
Table 5: The Camargue survey: correlation coefficients.
Table 6: The Saturday classes survey: correlation coefficients.
A survey of the theory and methods used to study social representations can be found in[START_REF] Wagner | Theory and method of social representations[END_REF] and[START_REF] Canter | Empirical approaches to social representation[END_REF]
This method was originally developed in[START_REF] Flachaire | A new approach to anchoring: theory and empirical evidence from a contingent valuation survey[END_REF]
See Claeys-Mekdade et al. (1999) for a complete description of the survey
After categorization and deletion of doubles, the average number of attributes evoked by the respondents falls from 5.5 to 4.0.
A quick look at any website about the Camargue is also a way of confirming Fauna-Flora as the obvious aspect of the Camargue. Among many others, see: www.parc-camargue.fr or www.camargue.com
The anchoring bias appears in experimental settings in which subjects are asked to provide numerical estimations (e.g. the height of Mount Everest). Prior to the estimation stage, they are asked to compare their value to an externally provided value (e.g. 20 000 feet). This last value received the name of anchor as it was proved to have a great influence on subjects' valuations (i.e. a different anchor, or starting point, leads to a different valuation)
Note that we do not exclude that some individuals may have a strong point of view which is in accordance with that of the mainstream. We only suggest that we can isolate some individuals holding a strong point of view.
A full description of the one we used is available in[START_REF] Hollard | Théorie du choix social et représentations : analyse d'une enquête sur le tourisme vert en camargue[END_REF]
Whitehead, J. C. (2004). Incentive incompatibility and starting-point bias in iterative valuation questions: reply. Land Economics 80 (2), 316-319.
Acknowledgements
The authors thank Jason Shogren for useful comments and two anonymous referees.
01760260 | en | info.info-ar, info.info-ds, info.info-pf, info.info-ao, info.info-dc, info.info-se, info.info-cv | 2024/03/05 22:32:13 | 2018 | https://hal.science/hal-01760260/file/final-draft.pdf

Florian Lemaitre
email: florian.lemaitre@cern.ch, ben.couturier@cern.ch, lionel.lacassagne@lip6.fr
Benjamin Couturier
Lionel Lacassagne
Small SIMD Matrices for CERN High Throughput Computing
System tracking is an old problem and has been heavily optimized throughout the past. However, in High Energy Physics, many small systems are tracked in real-time using Kalman filtering and no implementation satisfying those constraints currently exists. In this paper, we present a code generator used to speed up Cholesky Factorization and Kalman Filter for small matrices. The generator is easy to use and produces portable and heavily optimized code. We focus on current SIMD architectures (SSE, AVX, AVX512, Neon, SVE, Altivec and VSX). Our Cholesky factorization outperforms any existing libraries: from ×3 to ×10 faster than MKL. The Kalman Filter is also faster than existing implementations, and achieves 4 • 10 9 iter/s on a 2×24C Intel Xeon.
I. INTRODUCTION
The goal of the paper is to present a code generator and optimizations to get a fast reconstruction of a system trajectory (tracking) using Kalman filtering for SIMD multi-core architectures, for which no efficient implementation exists. The constraints are strong: a few milliseconds to track thousands of particles. Right now, the choice was to focus on general-purpose processors (GPP) as SIMD extensions are present in every system (so all CERN researchers could benefit from it). GPUs were not selected when the work started in 2015 as the transfer time (through PCI) between the host and the GPU was longer than the amount of time allocated to the computation. With the rise of the latest generation of GPUs connected to the CPU with a high bandwidth bus, it becomes worth evaluating them.
Even though optimizing Kalman filter tracking is an old problem [START_REF] Palis | Parallel Kalman filtering on the connection machine[END_REF], existing implementations are not efficient for many small systems.
The code generator uses the template engine Jinja2 [START_REF]Python template engine[END_REF] and implements high level and low level optimizations. It is used to produce a fast Cholesky factorization routine and a fast Kalman filter in C. The generated code is completely generic and can be used with any system. It also supports many SIMD architectures: SSE, AVX, AVX512, Neon, SVE, Altivec and VSX. In order to have a representative Kalman filter and validate its implementation, a basic 4×4 system was selected. Depending on the experiment the matrix size can change. Some specific variants can also exist: 5×5 systems for High Energy Physics [START_REF] Fr Ühwirth | Application of Kalman filtering to track and vertex fitting[END_REF], using three steps: forward, backward, smoother.
This work will be used in the next upgrade of the LHCb experiment to achieve real-time event reconstruction.
In this paper, we will first present Cholesky Factorization, the optimizations we applied to it, and their performance impact. Then, we will present Kalman Filter, its optimizations and their performance impact.
II. CHOLESKY FACTORIZATION
A. Algorithm
Cholesky Factorization (also known as Cholesky Decomposition) is a linear algebra algorithm used to express a symmetric positive-definite matrix as the product of a triangular matrix with its transposed matrix: A = L • L T . It can be combined with forward and backward substitutions to solve a linear system (algorithm 1).
Cholesky Factorization of an n×n matrix has a complexity in terms of floating-point operations of n³/3 that is half of the LU one (2n³/3), and is numerically more stable [START_REF] Higham | Accuracy and stability of numerical algorithms[END_REF], [START_REF] Higham | Cholesky factorization[END_REF]. This algorithm is naturally in-place as every input element is accessed only once and before writing the associated element of the output: L and A can use the same storage. It requires n square roots and (n² + 3n)/2 divisions for an n×n matrix, which are slow operations, especially in double precision.
With small matrices, parallelization is not efficient as there is no long dimension. Therefore, matrices are grouped by batches in order to efficiently parallelize along this new dimension. The principle is to have a for-loop iterating over the matrices, and within this loop, compute the factorization of the matrix. This is also the approach used in [START_REF] Dong | LU factorization of small matrices: accelerating batched DGETRF on the GPU[END_REF].
B. Transformations
Improving the performance of software requires transformations of the code, especially High Level Transforms (HLT). For Cholesky, we made the following transforms:
• High Level Transforms: memory layout [START_REF] Allen | Optimizing compilers for modern architectures: a dependence-based approach[END_REF] and fast square root (the latter is detailed in II-C), • loop transforms (loop unwinding [START_REF] Lacassagne | High level transforms for SIMD and low-level computer vision algorithms[END_REF] and unroll&jam),
• Architectural transforms: SIMDization.
1) Memory Layout Transform:
The memory layout transform is the first transform to address as the other ones rely on it. The default memory layout in C is Array of Structure (AoS), but is not suited for SIMD. In order to enable SIMD, the layout should be modified into Structure of Arrays (SoA). A hybrid memory layout (AoSoA) is preferred to avoid systematic cache evictions.
The alignment of the data is also crucial. Aligned memory allocations should be enforced by specific functions like posix_memalign, _mm_malloc or aligned_alloc (in C11). One might also want to align data with the cache line size (usually 64 bytes). This may improve cache hits by avoiding data being split into multiple cache lines when they fit within one cache line and avoid false sharing between threads.
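To make the layout concrete, here is a minimal C sketch of an AoSoA batch of 4×4 systems with cache-line alignment. The packet width, the structure name and the rounding of the allocation size are illustrative choices of ours; the generated code may organise the data differently.

#include <stdlib.h>

#define SIMD_WIDTH 8   /* e.g. 8 floats per AVX register (illustrative choice) */

/* AoSoA packet: SIMD_WIDTH matrices interleaved element-wise, so that
   p->A[i][j][k] is the (i,j) element of the k-th matrix of the packet. */
typedef struct {
    float A[4][4][SIMD_WIDTH];
    float R[4][SIMD_WIDTH];
    float X[4][SIMD_WIDTH];
} packet_t;

packet_t *alloc_batch(size_t n_matrices) {
    size_t n_packets = (n_matrices + SIMD_WIDTH - 1) / SIMD_WIDTH;
    size_t bytes = n_packets * sizeof(packet_t);
    bytes = (bytes + 63) & ~(size_t)63;   /* C11 aligned_alloc wants a size multiple of the alignment */
    return aligned_alloc(64, bytes);      /* 64-byte (cache line) alignment */
}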
2) Loop unwinding: Loop unwinding is the special case of loop unrolling where the loop is entirely unrolled. It has several advantages, especially for small matrices:
• it avoids branching,
• it allows to keep temporaries into registers (scalarization),
• it helps out-of-order processors to efficiently reschedule instructions. This transform is very important as the algorithm is memory bound. One can see that the arithmetic intensity of the scalarized version is higher. This leads to algorithm 2 and reduces the amount of memory accesses (Table I).
The register pressure is higher and the compiler may generate spill code to temporarily store variables into memory.
3) Loop Unroll & Jam: Cholesky Factorization of n×n matrices involves n square roots + n divisions for a total of ∼n³/3 floating-point operations (see Table I). The time between the execution of two data-independent instructions (also known as throughput) is smaller than the latency. The latency of pipelined instructions can be hidden by executing another instruction in the pipeline without any data-dependence with the previous one. The ipc (instructions per cycle) is then limited by the throughput of the instruction and not by its latency.
Current processors are Out-of-Order. But they are limited by the size of their rescheduling window. In order to help the processor to pipeline instructions, it is possible to unroll loops and to interleave instructions of data-independent loops (Unroll&Jam). Here, Unroll&Jam of factor 2, 4 and 8 is applied to the outer loop over the array of matrices. Its efficiency is limited by the throughput of the unrolled loop instructions and the register pressure.

Algorithm 2: Cholesky system solving A · X = R, unwound and scalarized for 4×4 matrices
// Load A into registers
a00 ← A(0,0)
a10 ← A(1,0)   a11 ← A(1,1)
a20 ← A(2,0)   a21 ← A(2,1)   a22 ← A(2,2)
a30 ← A(3,0)   a31 ← A(3,1)   a32 ← A(3,2)   a33 ← A(3,3)
// Load R into registers
r0 ← R(0)   r1 ← R(1)   r2 ← R(2)   r3 ← R(3)
// Factorize A
l00 ← √a00
l10 ← a10 / l00
l20 ← a20 / l00
l30 ← a30 / l00
l11 ← √(a11 − l10²)
l21 ← (a21 − l20·l10) / l11
l31 ← (a31 − l30·l10) / l11
l22 ← √(a22 − l20² − l21²)
l32 ← (a32 − l30·l20 − l31·l21) / l22
l33 ← √(a33 − l30² − l31² − l32²)
// Forward substitution
y0 ← r0 / l00
y1 ← (r1 − l10·y0) / l11
y2 ← (r2 − l20·y0 − l21·y1) / l22
y3 ← (r3 − l30·y0 − l31·y1 − l32·y2) / l33
// Backward substitution
x3 ← y3 / l33
x2 ← (y2 − l32·x3) / l22
x1 ← (y1 − l21·x2 − l31·x3) / l11
x0 ← (y0 − l10·x1 − l20·x2 − l30·x3) / l00
// Store X into memory
X(3) ← x3   X(2) ← x2   X(1) ← x1   X(0) ← x0
C. Precision and Accuracy
Cholesky Factorization requires n square roots and (n² + 3n)/2 divisions for an n×n matrix. But these arithmetic operations are slow, especially for double precision (see [START_REF] Fog | Instruction tables: Lists of instruction latencies, throughputs and micro-operation breakdowns for Intel, AMD and VIA CPUs[END_REF]) and usually not fully pipelined. Thus, square roots and divisions limit the overall Cholesky throughput.
It is possible in hardware to compute them faster with less accuracy [START_REF] Soderquist | Area and performance tradeoffs in floating-point divide and square-root implementations[END_REF]. That is why reciprocal functions are available: they are faster but have a lower accuracy: usually 12 bits for a 23-bit mantissa in single precision.
The accuracy is measured in ulp (Unit in Last Place).
1) Memorization of the reciprocal value:
In the algorithm, a square root is needed to compute L(i, i). But L(i, i) is used in the algorithm only with divisions. The algorithm needs (n² + 3n)/2 of these divisions per n×n matrix. Instead of computing x / L(i, i), one can compute x · L(i, i)⁻¹. The algorithm then needs only n divisions.
2) Fast square root reciprocal estimation: The algorithm performs a division by a square root and therefore needs to compute f(x) = 1/√x.

Listing 1: Simple C loop
for (int i = 0; i < 4; i++) {
    s = B[i] + C[i];
    A[i] = s / 2;
}

Listing 2: Simple Jinja loop
{% for i in range(4) %}
s{{ i }} = B[{{ i }}] + C[{{ i }}];
A[{{ i }}] = s{{ i }} / 2;
{% endfor %}
There are some ways to compute an estimation of this function depending on the precision.
Most of current CPUs have a specific instruction to compute an estimation of the square root reciprocal in single precision. In fact, some ISA (Instruction Set Architecture) like Neon and Altivec VMX do not have any SIMD instruction for the square root and the division, but do have an instruction for a square root reciprocal estimation. On x86, ARM and Power, this instruction is as fast as the multiplication and gives an estimation with 12-bit accuracy. Unlike regular square root and division, this instruction is fully pipelined (throughput = 1) and thus avoids pipeline stall.
3) Accuracy recovering: Depending on the application, the previous techniques might not be accurate enough. The accuracy recovering (if needed) can be done with the Newton-Raphson method or Householder's. All current SIMD architectures have FMA instructions to apply those methods quickly. See [START_REF] Lemaitre | Cholesky factorization on simd multi-core architectures[END_REF] for more details.
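As an illustration of the estimation plus recovering step, here is a minimal SSE/FMA sketch (compile with -mfma); it is not the generated code itself, whose intrinsics are selected per architecture by the generator:

#include <immintrin.h>

/* 1/sqrt(x): ~12-bit hardware estimate refined by one Newton-Raphson step,
   y <- y * (1.5 - 0.5 * x * y * y), which roughly doubles the number of correct bits. */
static inline __m128 rsqrt_nr(__m128 x) {
    const __m128 half       = _mm_set1_ps(0.5f);
    const __m128 three_half = _mm_set1_ps(1.5f);
    __m128 y  = _mm_rsqrt_ps(x);                 /* fully pipelined estimate */
    __m128 y2 = _mm_mul_ps(y, y);
    /* t = 1.5 - (0.5*x) * y*y, computed with one fused multiply-add */
    __m128 t  = _mm_fnmadd_ps(_mm_mul_ps(half, x), y2, three_half);
    return _mm_mul_ps(y, t);
}

Returning y directly, without the refinement, corresponds to the estimate-only ("fastest") variant discussed later.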
D. Code generation
In order to help writing many different versions of the code, we used Jinja2 [START_REF]Python template engine[END_REF]: a template engine in Python. Using this tool, we can easily implement unrolling (both unwinding and unroll&jam) and intrinsics code. The syntax uses custom tags/tokens that control what is being output. As it is text substitution, it is possible to manipulate new identifiers.
The generated code features all transformations and all sizes from 3×3 up to 12×12 for all the architectures supported and all SIMD wrappers. There is no actual limit for the unrolled size, but the bigger the matrices, the longer the compilation. This could be replaced by a C++ template metaprogram like in [START_REF] Masliah | Metaprogramming dense linear algebra solvers applications to multi and many-core architectures[END_REF]. The use of Jinja2 instead of more common metaprogramming methods allows us to have full access and control over the generated code. In some applicative domains, it is crucial to have access to the source code before the compilation in order to quickly track bugs.
1) unrolling: Unwinding can be done in Jinja by replacing the C for-loop (Listing 1) into a Jinja for-loop (Listing 2). The output of the template is the C code the compiler will see (Listing 3).
Unroll&Jam uses a Jinja filter: a filter is a section that is interpreted by Jinja as usual, but the output is then passed to a function to transform it directly in Python. The unrollNjam filter duplicates the lines containing the symbol @, and replaces the @ by 0, 1, 2... The template code in Listing 4 generates the code in Listing 5.

Listing 3: Simple Jinja loop output
s0 = B[0] + C[0];
A[0] = s0 / 2;
s1 = B[1] + C[1];
A[1] = s1 / 2;
s2 = B[2] + C[2];
A[2] = s2 / 2;
s3 = B[3] + C[3];
A[3] = s3 / 2;
2) SIMD:
The SIMD generation is handled via a custom C-like preprocessor written in Python. The interface consists of custom Python objects accessible from Jinja.
When a Python macro is used within Jinja (Listing 6), it is replaced by a unique name that is detected by our preprocessor. It then acts like a regular C-preprocessor and replaces the macro call by its definition from the Python class (Listing 7).
It is important to have a preprocessor as all the architecture intrinsics differ not only by their name, but also by their signature. The Altivec code (Listing 8) looks completely different from the SSE one despite being generated from the same template (with VSX, the output would involve vec_mul instead of vec_madd). This tool can also be used to generate code for C++ SIMD wrappers.
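To illustrate the kind of divergence the preprocessor hides, here is a hand-written sketch of what a multiply-add macro could expand to on SSE and on Altivec. The macro name mul_add and the exact expansions are illustrative assumptions, not the content of Listings 6-8:

#include <immintrin.h>

/* SSE expansion: two instructions when no FMA is available */
static inline __m128 mul_add_sse(__m128 a, __m128 b, __m128 c) {
    return _mm_add_ps(_mm_mul_ps(a, b), c);      /* a*b + c */
}

#ifdef __ALTIVEC__
#include <altivec.h>
/* Altivec expansion: one fused multiply-add, different name and type system */
static inline vector float mul_add_vmx(vector float a, vector float b, vector float c) {
    return vec_madd(a, b, c);                    /* a*b + c */
}
#endif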
3) SIMD wrappers: In order to see if it is worth writing intrinsics, SIMD wrappers have been integrated into the code and compared to the intrinsics and scalar code. The following libraries have been tested: Boost.SIMD [START_REF] Est Érie | SIMD: Generic programming for portable SIMDization[END_REF], libsimdpp [START_REF] Libsimdpp | Header-only zero-overhead c++ wrapper for simd intrinsics of multiple instruction sets[END_REF], MIPP [START_REF] Cassagne | An efficient, portable and generic library for successive cancellation decoding of polar codes[END_REF], UME::SIMD [START_REF] Karpi Ński | A high-performance portable abstract interface for explicit SIMD vectorization[END_REF], vcl [START_REF] Fog | C++ vector class library[END_REF].
Eigen has also been tested but is unable to compile Cholesky Factorization when the element type is an array. It would have been possible to write manually the factorization array element type, but this would defeat the whole point of Eigen.
More libraries and tools exist like CilkPlus, Cyme, ispc [START_REF] Pharr | A SPMD compiler for highperformance CPU programming[END_REF], Sierra or VC [START_REF] Kretz | Vc: A C++ library for explicit vectorization[END_REF]. CilkPlus, Cyme and Sierra appear not to be maintained anymore. VC and ispc did not fit into our test code base without a lot of efforts, thus were not tested.
Listing 4: Unroll&Jam in Jinja
{% filter unrollNjam(range(4)) %}
s@ = B[@] + C[@];
A[@] = s@ / 2;
{% endfilter %}

Listing 5: Unroll&Jam output
s0 = B[0] + C[0];
s1 = B[1] + C[1];
s2 = B[2] + C[2];
s3 = B[3] + C[3];
A[0] = s0 / 2;
A[1] = s1 / 2;
A[2] = s2 / 2;
A[3] = s3 / 2;
2) Incremental speedup: Figure 1 gives the speedup of each transformation in the following order: unwinding, SoA + SIMD, fast square root, unroll&jam. The speedup of a transformation is dependent of the transformations already applied: the order is significant.
If we look at the speedups on HSW (Figure 1a), we can see that unwinding the inner loops improves the performance well: from ×2 to ×3. Unwinding impact decreases when the matrix size increases: the register pressure is higher.
SIMD gives a sub-linear speedup: from ×3.2 to ×6. In fact, SIMD instructions cannot be fully efficient on this function without fast square root (see subsubsection II-C2). With further analysis, we can see that the speedup of SIMD + fast square root is almost constant around ×6. The impact of the fast square root decreases as their number becomes negligible compared to the other floating-point operations. For small matrices, unroll&jam allows to get the last part of the expected SIMD speedup. SIMD + fast square root + unroll&jam: from ×6.5 to ×9. Unroll&jam loses its efficiency for larger matrices: the register pressure is higher.
Speedups on Power8 are similar: Figure 1b.
3) Impact of unrolling: Figure 2 shows the performance for different AVX versions. Without any unrolling, all versions except "legacy" have similar performance: performance seems to be limited by the latency between data-dependent instructions.
Unwinding can help the Out-of-Order engine and thus reduces data dependencies. The performance of the "non-fast" and "legacy" versions is limited by the square root and division instruction throughput. The performance has reached a limit and cannot be improved beyond this limitation, even with unrolling: both unwinding and unroll&jam are inefficient in this case. The "legacy" version is more limited as it requires more divisions. For "fast" versions, both kinds of unrolling are efficient. Unroll&jam achieves a ×3 speedup on regular code and a ×1.5 speedup with unwinding. This transformation reduces pipeline stalls between data-dependent instructions (subsubsection II-B3). We can see that unroll&jam is less efficient when the code is already unwound but keeps improving the performance. Register pressure is higher when unrolling (unwinding or unroll&jam).
The "unwind+fastest" versions give an important benefit. By removing the accuracy recovering instructions, we save many instructions (II-C3, Accuracy recovering).
For such large matrices, unroll&jam slows down the code when it is already unwound because of the register pressure. 4) SIMD wrappers: Figure 3 shows the performance of SIMD wrappers compared to the intrinsics version. Optimizations not related to SIMD are applied the same way on all versions. With the default version, all the wrappers seem to have the same performance up to a point that depends on the wrapper. The drop in performance is a bug of the compiler that stops inlining the wrapper functions when the outer function is too big (unwinding+unroll&jam).
5) Comparison with MKL:
The version 2018 now supports the SoA memory layout. It is designated by compact within the documentation. Figure 4 shows the performance comparison between our implementation and MKL.
The compact layout improved the performance for small matrices, compared to the old functions. However, it is still slower than our version for matrices smaller than 90×90.
First, MKL does not store the reciprocal and has to compute actual divisions during both factorization and substitution. This can be compared to our "legacy" version.
Then, it uses a recursive algorithm for the substitution that has some overhead. 6) Summary: Figure 5 shows the performance of our best SIMD version against scalar versions and libraries (Eigen and MKL) for HSW, SKX, EPYC and Power8. Due to licensing limitations, MKL has only been tested on HSW.
On aarch64, gcc has a performance bug 1 where the instrinsic vmlsq_f32(a,b,c) = a -b • c is compiled into two instructions instead of one. This bug also affects the instrinsic vfmsq_f32. As Cholesky Factorization mainly uses the latter intrinsic, the performance obtained on this machine has no meaning and was not considered here. On SKX, the scalar SoA performance drops from 9×9 matrices. This is due to the compiler icc that stops vectorizing the unwound scalar code from this point.
On all tested machines, the scaling is strong with a parallel efficiency2 above 80%.
III. KALMAN FILTER A. Kalman Filter algorithm
Kalman Filter is a well-known algorithm to estimate the state of a system from noisy and/or incomplete measurements. It is commonly used in High Energy Physics and Computer Vision as a tracking algorithm (reconstruct the trajectory). It is also used for positioning systems like GPS.
Kalman filtering involves few matrix multiplications and a matrix inversion that can be done with Cholesky Factorization (see algorithm 3).
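Since algorithms 3 and 4 are not reproduced here, the textbook linear Kalman recursion is recalled below as a reference for the operations involved; the exact formulation used by the implementations is therefore an assumption, and in practice the gain can be obtained by Cholesky-factorizing the innovation covariance S_k rather than inverting it explicitly:

predict:  x̂_k|k−1 = F x̂_k−1|k−1 + B u_k,          P_k|k−1 = F P_k−1|k−1 Fᵀ + Q
update:   S_k = H P_k|k−1 Hᵀ + R,                 K_k = P_k|k−1 Hᵀ S_k⁻¹
          x̂_k|k = x̂_k|k−1 + K_k (z_k − H x̂_k|k−1),   P_k|k = (I − K_k H) P_k|k−1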
The following versions of the filter are benchmarked:
• v1: classic version of the algorithm (algorithm 3)
• v2: optimized version of the algorithm (algorithm 4)
• triangle: only half of the symmetric matrices is accessed
2) Incremental speedup: Incremental speedups are reported on Figure 6. Like with Cholesky, the speedup comes mainly from unwinding and the SoA memory layout that enables vectorization. The mathematical optimizations (v2+triangle) give a total extra speedup of about +40%.
Unlike with Cholesky, the fast square root and unroll&jam give no benefit except on Power8. Indeed, the proportion of square roots and divisions is much lower in Kalman. Moreover, the operations are more independent from each other (more intrinsic parallelism). Therefore, unroll&jam is not efficient here. However, it is still interesting without unwinding.
The last thing to notice is that writing SIMD intrinsics does not improve the performance, except on Power8 where gcc struggles to optimize for the Power architecture.
3) Overall performance: The machines available for testing at CERN are very different: two high-end bi-socket (Intel, ARM) and two mono-socket (AMD, Power). So in order to provide fair comparisons, we have normalized the results to focus on transform speedups, and not the raw performance.
Looking at Figure 7, it appears clearly that it is not worth writing SIMD as compilers are able to vectorize the code. We still have to supply #pragma omp simd to ensure the vectorization. Otherwise, compiler heuristics would have stopped vectorizing.
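A minimal sketch of the batched driver this refers to is shown below (compile with -fopenmp); the structure layout, the names and the placeholder body are our own, the real per-lane body being the generated, unwound filter:

#include <stddef.h>

#define SIMD_WIDTH 8   /* illustrative packet width, as in the layout sketch above */

typedef struct {
    float A[4][4][SIMD_WIDTH];
    float R[4][SIMD_WIDTH];
    float X[4][SIMD_WIDTH];
} packet_t;

void process_batch(packet_t *p, size_t n_packets) {
    #pragma omp parallel for                 /* threads over packets */
    for (size_t b = 0; b < n_packets; b++) {
        #pragma omp simd                     /* vectorize across the SIMD_WIDTH independent lanes */
        for (int k = 0; k < SIMD_WIDTH; k++) {
            /* placeholder body: lane k is one independent system of packet b */
            p[b].X[0][k] = p[b].R[0][k] / p[b].A[0][0][k];
        }
    }
}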
Doing that, the compiler is even able to provide slightly better code than SIMD code. The instruction scheduling and register allocation might be involved.
On A72, the SIMD code is even slower than vectorized because of the gcc bug.
Like with the Cholesky factorization, the scaling is strong with a parallel efficiency above 80%.
4) State-of-the-art: As previously said, each experiment implements some specific version of Kalman filtering, direct comparison cannot be done. Indeed, the problem dimensionality is different and the steps are different. Moreover, each step of the filter for HEP is lighter than the full Kalman filtering: no control vector, one-dimension measurement space. Nevertheless, the performance of the SIMD implementations for CMS [START_REF] Cerati | Kalman filter tracking on parallel architectures[END_REF], CBM [START_REF] Gorbunov | Fast SIMDized Kalman filter based track fit[END_REF] and LHCb [START_REF] Érez | LHCb Kalman Filter cross architecture studies[END_REF] is between 500 and 1500 cycle/iter (all steps). Our 4×4 implementation achieves 44 cycle/iter (Table III). This is an order of magnitude faster than existing implementations.
As a matter of fact, the SKX machine reaches an overall performance of 4 • 10 9 iter/s.
CONCLUSION
In this paper, we have presented a code generator used to create an efficient and portable SIMD implementation of Cholesky Factorization for small matrices ( 12×12) and Kalman Filter for 4×4 systems. The generated code supports many SIMD architectures, and is AVX512/SVE ready. Being completely general, it can be used with any system and is not limited to 4×4 systems.
Our Cholesky factorization outperforms any existing libraries. Even if there are some improvements with MKL, we are still ×3 up to ×10 faster on small matrices.
Our Kalman filter implementation is not directly comparable to the State-of-the-Art because of its general form, but appears to be one order of magnitude faster. With this, we are able to reach 4 • 10 9 iter/s on a high-end Intel Xeon 2×24C.
To reach such a high level of performance, the proposed implementation combines high level transforms (fast square root and memory layout), low level transforms (loop unrolling and loop unwinding), hardware optimizations (SIMD and OPENMP multithreading) and linear algebra optimizations. The code was automatically generated using Jinja2 to provide strong optimizations with simple source code. SIMD wrappers allow to write portable SIMD code, but require extra optimizations handled by our code generator.
With GPUs directly connected to the main memory, the transfer bandwidth is much higher; thus, it would be worth considering GPUs for future work.
Algorithm 1: Cholesky system solving A · X = R
// Factorization
for j = 0 : n−1 do
  s ← A(j, j)
  for k = 0 : j−1 do
    s ← s − L(j, k)²
  L(j, j) ← √s
  for i = j+1 : n−1 do
    s ← A(i, j)
    for k = 0 : j−1 do
      s ← s − L(i, k) · L(j, k)
    L(i, j) ← s / L(j, j)
// Forward substitution
for i = 0 : n−1 do
  s ← R(i)
  for j = 0 : i−1 do
    s ← s − L(i, j) · Y(j)
  Y(i) ← s / L(i, i)
// Backward substitution
for i = n−1 : 0 do
  s ← Y(i)
  for j = i+1 : n−1 do
    s ← s − L(j, i) · X(j)
  X(i) ← s / L(i, i)
Fig. 1: Speedups of the transformations for Cholesky
Fig. 2 :Fig. 3 :
23 Fig. 2: Performance of loop and square root transforms for the AVX 3×3 version of Cholesky on HSW
Fig. 4 :
4 Fig. 4: Performance comparison between intrinsics code and MKL for Cholesky on HSW inlining the wrapper functions when the outer function is too big (unwinding+unroll&jam).With the "fast" version, most wrappers have similar performance in single precision. However, UME::SIMD does not implement the square root reciprocal approximation (despite being part of the interface). Moreover, only Boost.SIMD supports the fast square root in double precision. In that case, Boost.SIMD is a bit slower than the intrinsics code.
Fig. 5: Performance of Cholesky on SKX, EPYC and Power8 machines, mono-core
Both Eigen and the classic routines of MKL are slower than our scalar AoS code and are barely visible on the plots. The "compact" routines of MKL are faster, but still much slower than the SIMD version. On SKX, the scalar SoA performance drops from 9×9 matrices onwards; this is due to the compiler icc, which stops vectorizing the unwound scalar code from this point. On all tested machines, the scaling is strong, with a parallel efficiency above 80%.
Fig. 6: Incremental speedup of the Kalman filter
TABLE I: Arithmetic Intensity (AI)

| version | flop | load + store | AI |
|---|---|---|---|
| classic | (2n³ + 15n² + 7n)/6 | — | — |
TABLE II: Benchmarked machines

| CPU | full name | ISA | frequency (GHz) | cores/threads | SIMD width | #FMA | SIMD SP parallelism (FLOP/cycle) | L1 (KB/core) | L2 (KB/core) | L3 (KB/CPU) |
|---|---|---|---|---|---|---|---|---|---|---|
| HSW | E5-2683 v3 (Intel) | AVX2 | 2.0 | 2×14/28 | 256 | 2 | 32 | 32 | 256 | 35840 |
| i9 | i9-7900X (Intel) | AVX512 | 3.3 | 10/20 | 512 | 2 | 64 | 32 | 1024 | 14080 |
| SKX | Platinum 8168 (Intel) | AVX512 | 2.7 | 2×24/48 | 512 | 2 | 64 | 32 | 1024 | 33792 |
| EPYC | EPYC 7351P (AMD) | AVX2 | 2.4 | 16/32 | 256 | 1 | 16 | 32 | 512 | 65536 |
| A72 | Cortex A72 (ARM) | Neon | 2.4 | 2×32/32 | 128 | 1 | 8 | 32 | 256 | 32768 |
| Power8 | Power 8 Turismo (IBM) | VSX | 3.0 | 4/32 | 128 | 2 | 16 | 64 | 512 | 8192 |
TABLE III: Rough comparison with state-of-the-art Kalman filters for HEP (timings of the other implementations are estimated from their articles)

| Implementation | steps | cycle/iter |
|---|---|---|
| our code (4×4) | FWD | 44 |
| our code (5×5) | FWD | 74 |
| CMS (5×5) | FWD+BWD+smooth | 520 |
| CBM (5×5) | FWD+BWD+smooth | 550 |
| LHCb (5×5) | FWD+BWD+smooth | 1440 |
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=82074
The parallel efficiency is defined as the speedup of the multi-core code over the single core code divided by the number of cores.
SIMD wrappers in C++ are much longer to compile than plain C with intrinsics. The biggest file took more than 30 hours and required more than 10 GB of memory to compile. Thus, it was decided to stop the generation of unrolled code for matrices bigger than 12×12.
E. Benchmarks
1) Benchmark protocol:
In order to evaluate the impact of the transforms, we used exhaustive benchmarks.
The algorithms were benchmarked on six machines whose specifications are provided in Table II.
On x86, the code has been compiled with Intel icc v18.0 with the following options: -std=c99 -O3 -vec -ansi-alias. The time is measured in cycles with _rdtsc().
On other architectures, gcc 7.2 has been used with the following options: -std=c99 -O3 -ffast-math -fstrict-aliasing.
Time is measured with clock_gettime(CLOCK_MONOTONIC, . . . ).
In all the cases, the code is run multiple times with multiple batch sizes, and the best time is kept.
The plots use the following conventions:
• scalar: scalar code (the SoA versions are nevertheless vectorized by the compiler).
• SIMD: SIMD intrinsics code executed on the machine.
• unwind: inner loops unwound+scalarized (ie: fully unrolled).
• legacy: no reciprocal storing (base version).
• fast: use of fast square root reciprocal estimation (a short intrinsics sketch follows this list).
• fastest: "fast" without any accuracy recovering.
• ×k: order of the outer loop unrolling (unroll&jam)
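As an illustration of the "fast" and "fastest" variants, the following sketch (an assumption about the general approach, not the generated code itself) uses the AVX reciprocal square-root estimate and one Newton-Raphson step to recover accuracy; "fastest" would skip that step.

```c
#include <immintrin.h>

/* Fast 1/sqrt(s) for 8 floats: rsqrt estimate (~12-bit) plus one
 * Newton-Raphson iteration x <- 0.5 * x * (3 - s*x*x). */
static inline __m256 rsqrt_fast(__m256 s)
{
    __m256 x     = _mm256_rsqrt_ps(s);               /* estimate ("fastest") */
    __m256 half  = _mm256_set1_ps(0.5f);
    __m256 three = _mm256_set1_ps(3.0f);
    __m256 sxx   = _mm256_mul_ps(s, _mm256_mul_ps(x, x));
    return _mm256_mul_ps(_mm256_mul_ps(half, x), _mm256_sub_ps(three, sxx));
}
```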
We focus our explanations on the HSW machine and on single precision, whose accuracy is sufficient here. See [START_REF] Lemaitre | Cholesky factorization on simd multi-core architectures[END_REF] for the analysis of the double-precision computation. All the machines have similar behaviors unless explicitly specified otherwise.
We first present the impact of the transforms on performance. Then, we compare our best version written in intrinsics with SIMD wrappers and MKL [START_REF] Mkl | Intel(R) math kernel library[END_REF]. Finally, we show the performance on multiple machines.
We focus on 4×4 Kalman filtering in order to validate the implementation while keeping a representative filter. However, the code is not limited to 4×4 systems and actually supports all sizes. The filtered system is an inertial point in 2D with the following state: (x, y, ẋ, ẏ).
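For concreteness, one plausible state-space model for such a 2D inertial point is the constant-velocity model below; the time step Δt and the exact matrices are assumptions for illustration, since the paper does not spell them out.

$$
x_k = \begin{pmatrix} x \\ y \\ \dot{x} \\ \dot{y} \end{pmatrix},
\qquad
x_{k+1} = F\,x_k + w_k,
\qquad
F = \begin{pmatrix}
1 & 0 & \Delta t & 0 \\
0 & 1 & 0 & \Delta t \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1
\end{pmatrix}.
$$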
B. Transformations
All transformations applied to Cholesky Factorization have been tested on Kalman Filter. A few other optimizations have been implemented and tested: algebraic optimizations and memory access optimizations.
1) Algebraic optimizations: When optimizing an algorithm like Kalman filtering, one can try to optimize the mathematical operations.
The first thing to consider is avoiding the recomputation of temporaries that are used several times. For the Kalman filter of algorithm 3, it is possible to keep the temporary product P·Hᵀ (line 4) and reuse it to compute K (line 5).
It is also possible to keep S in its factorized form and to expand the expression of K inside the expressions of x and P (algorithm 4). This ends up requiring fewer arithmetic operations, as long as matrix-vector products are preferred over matrix-matrix products.
Algorithm 4: Kalman filter (optimized)
in/out : x, P           // state, covariance
input  : u, z           // control, measure
input  : A, B, Q, H, R  // parameters of the Kalman filter
// Predict
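The sketch below illustrates the two algebraic ideas (reuse of the P·Hᵀ temporary, and folding K into the state update so only matrix-vector operations remain). It is a simplification specialized to a scalar measurement, not the paper's 4×4 code; the function name is hypothetical and the covariance update uses the plain (non-Joseph) form.

```c
/* Optimized-style update for a scalar measurement z = h.x + noise.
 * Ph = P*h^T is computed once and reused in S, in the x update and in
 * the P update; S stays a scalar, so no matrix inverse is ever formed. */
void kalman_update_scalar(int n, double x[n], double P[n][n],
                          const double h[n], double r, double z)
{
    double Ph[n];
    double S = r, y = z;
    for (int i = 0; i < n; i++) {
        double s = 0.0;
        for (int k = 0; k < n; k++) s += P[i][k] * h[k];
        Ph[i] = s;                 /* reused temporary */
        S += h[i] * Ph[i];         /* S = h*P*h^T + r */
        y -= h[i] * x[i];          /* innovation y = z - h*x */
    }
    double g = y / S;              /* K*y collapses to Ph * (y/S) */
    for (int i = 0; i < n; i++) {
        x[i] += Ph[i] * g;
        for (int j = 0; j < n; j++)
            P[i][j] -= Ph[i] * Ph[j] / S;   /* P -= K*h*P = Ph*Ph^T/S */
    }
}
```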
2) Memory access of symmetric matrices: One can save many memory loads and stores by accessing only half of the symmetric matrices. Indeed, those matrices are used a lot within Kalman filtering for covariance matrices.
When the matrices are stored in AoS, accessing only half of a symmetric matrix greatly decreases the vectorization efficiency, especially for small matrices: the pattern used to access the near-diagonal elements is not regular.
However, when matrices are in SoA, there is no such penalty as we always load entire registers. Therefore, the vectorization efficiency is the same as for square matrices, except with fewer operations and memory accesses.
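A minimal sketch of such an SoA layout is given below, assuming AVX and a batch size that is a multiple of the SIMD width (both are assumptions for illustration; the actual generated layout may differ). Only the six lower-triangle coefficients of each 3×3 symmetric matrix are stored, each as a contiguous array over the batch, so every load is a full, regular vector.

```c
#include <immintrin.h>

#define BATCH 1024   /* hypothetical batch size, multiple of 8 floats */

typedef struct {                 /* half-stored 3x3 symmetric matrices, SoA */
    float a00[BATCH];
    float a10[BATCH], a11[BATCH];
    float a20[BATCH], a21[BATCH], a22[BATCH];
} sym3_soa;

/* Full, aligned-width vector operations on one coefficient array. */
static inline void vadd(float *c, const float *a, int nbatch)
{
    for (int i = 0; i < nbatch; i += 8)
        _mm256_storeu_ps(c + i, _mm256_add_ps(_mm256_loadu_ps(c + i),
                                              _mm256_loadu_ps(a + i)));
}

/* Element-wise C += A over the batch: 6 coefficients instead of 9,
 * with no irregular near-diagonal access pattern. */
void sym3_add(sym3_soa *C, const sym3_soa *A)
{
    vadd(C->a00, A->a00, BATCH);
    vadd(C->a10, A->a10, BATCH);
    vadd(C->a11, A->a11, BATCH);
    vadd(C->a20, A->a20, BATCH);
    vadd(C->a21, A->a21, BATCH);
    vadd(C->a22, A->a22, BATCH);
}
```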
C. Benchmarks
1) Benchmark protocol: We use essentially the same protocol to test our Kalman filter as for the Cholesky factorization. The Kalman filter considered has a 4-dimensional state space (x, y, ẋ, ẏ). Many of these systems are tracked together. The time is measured per iteration. The plots use the same conventions as for Cholesky, plus these extra:
01760289 | en | ["sdv.eth", "sdv.mhep.geo", "sdv.gen"] | 2024/03/05 22:32:13 | 2012 | https://hal.science/hal-01760289/file/PrenatalInformationNeeds_Preprint.pdf | Caroline Huyard
email: c.huyard@orange.fr
Keywords: prenatal diagnosis, prenatal testing, information needs, information, decision making, Down syndrome, fragile X syndrome
Decision making after prenatal diagnosis of a syndrome predisposing to intellectual disability: What prospective parents need to know and the importance of non-medical information
Caroline Huyard
Introduction
It is acknowledged that women and their partners need to be provided with relevant information when undergoing prenatal testing both for ethical reasons related to informed choice and for obvious practical reasons related to the nature of the decision following a positive diagnosis, for a chromosomal condition such as Down syndrome, or a congenital condition such as spina bifida. The type of information that health professionals should provide to facilitate decision making after such a diagnosis remains largely unknown. Indeed, researchers have predominantly investigated decisional needs before the prenatal tests [START_REF] Durand | Information and decision support needs of parents considering amniocentesis: Interviews with pregnant women and health professionals[END_REF][START_REF] Hsieh | What are pregnant women's information needs and information seeking behaviors prior to their prenatal genetic counseling[END_REF][START_REF] St-Jacques | Decisional needs assessment regarding Down syndrome prenatal testing: A systematic review of the perceptions of women, their partners and health professionals[END_REF], which are complex technical procedures, and therefore difficult to explain to laypersons.
Regarding the final decision (i.e., once testing has yielded a positive result), existing studies have been focused on determining its explanatory factors. Quantitative approaches have highlighted the importance of the condition for which testing occurred [START_REF] Drugan | Determinants of parental decisions to abort for chromosome abnormalities[END_REF]Zlotogora, 2002), maternal age [START_REF] Britt | Determinants of parental decisions after the prenatal diagnosis of Down syndrome: Bringing in context[END_REF][START_REF] Kramer | Maintaining equilibrium: A grounded theory study of the processes involved when women make informed choices during pregnancy[END_REF], the pregnant woman's concern for the unborn child but also for herself [START_REF] Korenromp | Maternal decision to terminate pregnancy in case of Down syndrome[END_REF], the future parents' knowledge of relevant health services, which seems to take precedence over their knowledge of disability (Roberts, Stough, & Parish, 2002). In addition, researchers have relativised cultural differences and highlighted the existence of significant individual differences [START_REF] Hewison | Attitudes to prenatal testing and termination of pregnancy for fetal abnormality: A comparison of white and Pakistani women in the UK[END_REF]. These findings are consistent with the results obtained by qualitative approaches, which point towards emotional dimensions related to a tension between a commitment to the pregnancy and a desire to protect the child, the couple, and the family from the hardship of disability [START_REF] Bijma | Decision-making after ultrasound diagnosis of fetal abnormality[END_REF]Levy, 1999), the impact of a direct experience of the condition [START_REF] France | How personal experiences feature in women's accounts of use of information for decisions about antenatal diagnostic testing for foetal abnormality[END_REF], and a concern for the quality of life of the unborn child [START_REF] Ahmed | Decisions about testing and termination of pregnancy for different fetal conditions: A qualitative study of European white and Pakistani mothers of affected children[END_REF].
Beyond these individual emotional dimensions, the data available about pregnancy termination after a positive diagnosis outline the role of national contexts [START_REF] Boyd | Survey of prenatal screening policies in Europe for structural malformations and chromosome anomalies, and their impact on detection and termination rates for neural tube defects and Down's syndrome[END_REF] and suggest that non-medical, cultural, and institutional aspects may be worth considering.
For instance, in 2006, pregnancy termination following a congenital anomaly diagnosis was more than three times higher in France (33%), which is the second highest level in Europe after Spain, than in Germany (9%), which is the lowest level in Europe among countries where termination is legal, and twice as high as in Belgium (15%), the latter matching the average European level of 18% [START_REF] Ville | Parental decisions to abort or continue a pregnancy with an abnormal finding after an invasive prenatal test[END_REF].
Lastly, the influence of a direct or indirect experience of the corresponding disability, a neglected aspect, was recently the object of promising studies that have investigated the way an "experiential" knowledge of the relevant disability is spontaneously put to use in the context of prenatal testing [START_REF] Etchegary | The influence of experiential knowledge on prenatal screening and testing decisions[END_REF][START_REF] France | How personal experiences feature in women's accounts of use of information for decisions about antenatal diagnostic testing for foetal abnormality[END_REF]. These studies included the utility of women and their partners accessing personal testimonies of people with a disability in order to be able to make an informed choice [START_REF] Ahmed | Balance' is in the eye of the beholder: Providing information to support informed choices in antenatal screening via Antenatal Screening Web Resource[END_REF].
The results of these studies suggested that non-medical information might play a crucial role in parents' decision to continue or terminate a pregnancy after a positive diagnosis. This area of research is worth developing in order to check whether such information could be useful, and what it should comprise.
The inquiry presented here took a specific perspective by interviewing parents who have a child with a congenital syndrome that predisposes to intellectual disability, regardless of whether they underwent prenatal testing. This approach made it possible to address a situation where interviewees have a very direct experience of the consequences of the decision to continue a pregnancy, whether they have actually made such a decision or not.
Thus, interviewees are able to take a different view of the information they believe they would have needed at the time, had they had to make such a decision. The aim of this study was to explore the kind of information parents, whose child has an intellectual disability, considered important and useful to them for making a decision in case of prenatal diagnosis.
Methodology
The inquiry was part of a larger project that investigated the problems and requirements of recognition of people with an intellectual disability, in the sense of the German concept of Anerkennung [START_REF] Honneth | The struggle for recognition: The moral grammar of social conflicts[END_REF], which is actually closer to the ideas of appreciation and approval. In other words, the overarching question of this larger project was: What makes it possible for people to appreciate and support their fellow citizens who have an intellectual disability? Given the contemporary organisation of procreation and birth, this recognition issue had to be addressed in this study for a very early stage of life. This inquiry covered three countries: Belgium, France, and Germany.
The interviewees were recruited through self-help groups of parents whose young or adult children have an intellectual disability, or through professionals working in schools or residential centres for people with intellectual disability, with a total of eight different groups and five different professionals. Each group and professional person was sent a letter describing the research project and interview procedure. Then they circulated the letter with an invitation to potential participants to reply directly to the research investigator, who would conduct the interviews. In this letter, the interviewees were told that they were free to interrupt or terminate the interview whenever they wanted and to not answer certain questions if they did not want to without having to provide any explanation. They were told that their data would be handled confidentially and would remain anonymous. In the quoted interviews, each child's first name was systematically replaced by the most common first name for children of the same sex and age in her country. The interviewees were reminded about these points a second time just prior to the interview and asked if they agreed with them. Their agreement for the procedure was then recorded. None of the institutions, where the research was conducted, required a formal ethical approval. The appropriate procedure to protect the interviewees was instead defined in the French and German law, respectively, in the Code de la Santé Publique, art. L 1122Publique, art. L -1 (2004)), and in the Code Pénal, art. L 223-15-2 (2000), and in the Bundesdatenschutzgesetz, § 3 and § 4 (1990), and the Strafgesetzbuch, § 291 (1997). In Belgium and France the interviews were held in French, and in Germany, in German. According to the interviewees' wishes, interviews were either conducted at their home or by phone. The interviews were semistructured, audio-recorded, and transcribed.
Using a 30-item guide, the interviewer addressed four main topics: (a) discovery of the syndrome, (b) parenting practices, (c) moral feelings regarding the child's behaviour, and (d) personal dimensions of the experience of having such a child. This last topic facilitated investigation of what decision interviewees would have made during the pregnancy if they had received the diagnosis at that time, although, with four exceptions, interviewees were not confronted with such a decision.
The study was based on 33 interviews conducted in Germany (13 interviews), France (12 interviews), and Belgium (eight interviews) between 2008 and 2010 among women, men, or couples who had at least one child with either fragile X syndrome (15 interviews), Down syndrome (15 interviews), Williams syndrome (two interviews), or a congenital diaphragmatic hernia (one interview). Prior to their child's diagnosis, parents of children with fragile X syndrome were not aware of other cases in the family. The interviewees were predominantly women (25 interviewees), but there were eight male interviewees (25% of the interviewees). Four (12%) of the interviewees had experienced a prenatal diagnosis; that is, in three cases the pregnancy was continued, and in one case it was terminated because the couple already had a child with an intellectual disability. The recruitment of the interviewees was carried out sequentially, in order to obtain a sample that was representative of the general population of each of the three countries in terms of the participants' professional occupation. However, unavoidably, the interviewees were somewhat more educated and skilled than the general population.
<Please insert Table 1 about here>
Other important characteristics of the interviewees that could have been expected to play a role in decision making are age of the mother at the time of the birth of her child with a disability [START_REF] Britt | Determinants of parental decisions after the prenatal diagnosis of Down syndrome: Bringing in context[END_REF][START_REF] Kramer | Maintaining equilibrium: A grounded theory study of the processes involved when women make informed choices during pregnancy[END_REF], marital status, and rank of the child amongst siblings (Levy, 1999). These characteristics are summarised in Tables 2, 3, and 4. Data analysis was performed using classical grounded theory methods (Corbin & Strauss, 2008). The coding process started by labelling "a distressing decision" regarding pregnancy continuation or termination, and accordingly investigated what caused this distress. Coding the data with respect to the decision making highlighted the importance of three types of information, namely: (1) the foetus as a future child and individual person; (2) the couple as future parents; (3) the social environment of the future child and her parents, and especially the capacity of this environment to support them. Currently, none of these three types of information is available to parents at the time of the decision.
Findings
The sociodemographic dimensions presented in Table 1, 2, 3, and 4 did not account for differences between the interviewees. Instead, the key reasons for the decision they would have taken related to informational needs and issues. Reflecting on the experience of bringing up a child with intellectual disability, the interviewees concluded that three main types of information would be needed by prospective parents in order to make the decision to continue or to terminate a pregnancy after a prenatal diagnosis: (1) the foetus as a future child and individual person; (2) the couple as future parents; (3) the social environment of the future child and her parents, and especially its capacity to support them. The interviewees' justifications varied, depending on whether they believed they would have continued or terminated the pregnancy or whether this remained an impossible decision for them. These three categories of information are presented in relation to the decision the interviewees retrospectively think they would have made.
Continuing the pregnancy: Parenting first and foremost means accepting the children, whatever their medical characteristics
Thirteen parents stated that, had they known about their child's syndrome at a prenatal stage, they would have continued the pregnancy. Four parents had experienced this situation: their child's syndrome was diagnosed during the pregnancy and they had kept the baby in three cases, and one mother wished she had not terminated a pregnancy.
In most of these interviews (10 of 13), the justifications referred to a personal view of what parenting is and implies, namely, to accept children as they are (five interviews), and to the desire of having a child and accepting him or her however he or she may be (five interviews). Parenting is the key issue. Three interviewees provided slightly different justifications. In two interviews, the parents considered that even though their child has a syndrome that hampers her or his learning ability, this was not serious enough to justify a pregnancy termination. In a third interview, a mother explained that she gradually discovered her own ability to care for her child. Her inability to provide this care would have been the justification for termination of the pregnancy.
The view of parenting that supports pregnancy continuation for these parents is sometimes related to religious belief, as was the case for three interviewees. However, such a view was also expressed without any religious reference in two interviews, as demonstrated by this woman (France) whose 6-year-old son has fragile X syndrome: I cannot say whether I would have let Thomas go away or not, I cannot say that now.
To make a guess, I would say no, no, it would be no. Thomas would have had an ear missing or an arm missing, it would have been no. Thomas is Thomas.
The importance of the decision of having or keeping a child is best illustrated by two interviewees who were told that their child had Down syndrome, as they were themselves in complicated situations (i.e., out of a stable relationship in one case and affected by a still developing but potentially very serious illness in the other case). These interviewees decided to continue the pregnancy. The circumstances of their pregnancy had led them to ask themselves if they wanted to keep the child or not. They had chosen to keep the child, and did not change their mind when they received the diagnosis for Down syndrome. This woman from Germany whose 3-year-old daughter has Down syndrome explained: She was not really a desired child. And so the question arose, before I even knew she had Down syndrome, to keep this child or not. I said yes. Then she was diagnosed with Down syndrome. And this took me back to the same question. And I thought in the end … 1 Who am I to decide if this child will live or not? I'm only a human being.
These justifications for the continuation of the pregnancy do not take into account the medical information that may have been provided through prenatal diagnosis. These interviewees either unconditionally decided to have a child, which many of them expressed by saying "we take children as they come," or they considered that the issues to be taken into account were the "the foetus as an individual child," "being able to cope as a couple and as parents," and "being provided assistance."
The importance of the foetus as an individual child is apparent in the case of a man (Germany) whose 14-year-old daughter was diagnosed with a congenital anomaly during the second trimester of pregnancy. To help this couple in their decision, the question they needed an answer to was the future of this particular child and not the syndrome:
For one thing, a friend […] called my wife and said, 'ask the child in your belly if she wants to live.' And just then, the child […] protested loudly; that is, she kicked and fidgeted thoroughly. Then, the evening before the day when we had to decide if we would continue or interrupt the pregnancy, I said, 'Big Boss, I can't make the decision. I'm lost, so give me a sign.' It had really nothing to do with morals. I said, 'if the child must live, then tomorrow morning, before 7 a.m., send a doctor who will tell my wife that we should continue the pregnancy.' And on the next day, at 8 a.m., I called my wife at the hospital and she told me that at 7 a.m., the doctor had told her, 'forget about the abortion. I've the feeling that it will be all right with the child.' And we said, 'okay, fine. We'll do that and accept it.'
The second crucial theme for the interviewees was their ability to cope as parents. It is highlighted here by a woman (Germany) whose son, born in 1991, was diagnosed very early with fragile X syndrome, at a time when she was pregnant for a second time. Prenatal diagnosis was therefore performed very swiftly:
[The doctors] said that the child would certainly be more seriously affected than Kevin, which was an additional shock. I had to make a decision relatively quickly.
And the father said at the time that he did not want another disabled child, he felt he wouldn't be able to cope. And so, I let myself be influenced, because apart from of Lastly, the interviewees expressed concern for the assistance provided by a range of social institutions for parents of children with disability, especially dedicated schools, homes, and workplaces. This was a major issue for this woman (France) whose 23-year-old son has Williams syndrome:
As I often say, medicine has made progress, and nowadays makes it possible for children with some difficulties, children who would not have lived, to live. But society has made no progress! Because society takes care of the children until they're 18-20 year old, and then, there's no money, no more school, and we're told to manage by ourselves. This is why, sometimes, I say, we'd better not bear them. It's hard.
Medical information was not considered very important for decision making by these interviewees. Neither was it for the interviewees who said they would have taken the opposite decision.
Terminating the pregnancy: When the lack of social support turns maternal parenting into too heavy a burden
Eight parents noted that had they known about their child's syndrome at a prenatal stage, they would have terminated the pregnancy. In six of these interviews, the common justification was that the essential support, needed by the parents to bring up the child with intellectual disability, was not appropriate; the type of support was different from one interview to another, and two interviewees provided no justification. Six interviewees were not concerned about the foetus as an individual child but worried deeply about their ability to cope as parents and the social environment of the child with respect to the capacity of this environment to support the child after their death. Some female interviewees connected their ability to cope as parents to the support available from the child's father, although the seven female interviewees who brought up their child alone did not all say that they would have preferred terminating their pregnancy because this support was not available. In two interviews, however, this was the case, as a woman (Belgium) whose younger son, aged 27, has fragile X syndrome explained: Ah, I wouldn't have kept him. Maybe you can't see it now, but well, I had to cope with quite a lot of things all alone. It's not … I don't know if it's actually much easier, people say it is not much easier when people cope as a couple, because when one says black, the other says white. Maybe. But in a couple, you're two people, and you can help each other, carry each other, when there's two of you …
The interviewees shared an anxiety about what their children's life will be like as adults and especially after their parents' death. An appropriate, supportive, social environment is a key concern for decision making. Two of the interviewees refer to this anxiety when making a case for pregnancy termination, such as this woman (France) These issues, coping as a couple and a supportive social environment, intertwine in more particular personal situations or views. One of the interviewed women (France) has to look after her ailing father as well as her sister who also has an intellectual disability. This interviewee explained that because of this family responsibility, she would have preferred to avoid having a son with a disability. Another woman (Germany) said she would have terminated the pregnancy, firstly, "for her" and, secondly, for her 34-year-old son, whom she considers a burden and whose life she views as not worthwhile. She was divorced at the time of the interview, and said her husband had always borne the bulk of the education of their two children. This view relates both to the ability to cope and to the social environment, as it is apparent in her explanations: I would have aborted. Well, I would say, for me. For me. But in the first place, for him, because, as I often say, what happens to him, when his parents are not there anymore? And his sister doesn't want to have anything to do with her brother. I always wonder, what happens to these children when the parents are not there anymore? What do they feel, if they feel anything? I don't know. We can express what we want to do. But they are always told to do this and that. What kind of life is it? Is it worth it? So, had I been able to, I would have aborted, full stop. These justifications did not take medical information, which may have been provided through prenatal diagnosis, into account. The questions that parents feel they are confronted with are different: they relate both to a supportive social environment, especially with respect to the child's future, and to their own capacity to bring up such a child.
Remaining irresolute: When the essential information for making a decision is not available
Lastly, 12 interviewees were unable to say whether they would have continued with the pregnancy or not. In these interviews (except for one, where the interviewee provided no justification), these parents' indecision resulted from their being acutely aware that the information they considered as essential to make this decision was not available.
The first fundamental element of missing information is, according to two interviewees, the personality and the individual abilities of the child. At the time of the interviews, these parents had learnt a lot about their child, and not only that he or she had a particular syndrome, but also that they had often had the opportunity to notice that two persons with the same syndrome were very different and had different abilities. As a woman (France) A second crucial element, mentioned by seven interviewees, related to the parents' ability for parenting. Had they known about the syndrome during pregnancy, they believe they would have been plagued by fear, considering themselves either unable to meet the child's emotional, affective, and educational needs, or to face this unknown situation. The interviewees reported that this fear, which they somehow experienced when the child was diagnosed, disappeared as the child grew and as they realised they were actually able to cope. However, this information was not available for a decision at the time of pregnancy. For instance, the concern for what life would be like with such a child was an issue for a man (Belgium) whose 16-year-old son has Down syndrome: Imagine we're back to the starting point, and we don't know what we know now, I think we would not have been through. Now, with what we have experienced, finally, I would say it is not as complicated as one could have thought. And I think there would have been no worries The ability to cope was also a major question for this woman (France) whose 17-yearold son has Down syndrome: I did not know, and I did without knowing. Besides, I'm sure there are lots of things that we don't know. We don't know what we are up to, how resourceful we actually are … I think, these nine months, I don't know … Maybe I would have reacted well, maybe I would have done things differently … But I had nine months of happiness, and Kevin had nine months of happiness too, nobody asked any questions. Because when I see … I see my little daughters-in-law and their pregnancies, it was really medicalised. And I see some of my colleagues who had tests, they suspected there was something wrong, it is incredibly distressing. I think … It scares me. It's exaggerated.
Because I mean, anyway … I do not think this is denial, I mean, if you look at life with a certain perspective, everything is so risky that you no longer feel up to anything.
The contrast between the interviewees' ignorance about the essential elements for decision making at prenatal stage and the way these elements became naturally available over the years builds the core of the dilemma parents face with prenatal testing.
Anxieties regarding the quality and the availability of institutional support were quite present, as exemplified by this woman (France) whose 14-year-old daughter has fragile X syndrome: I don't know, I really don't know. [...] When you've got children, you look at how well they do at school. And then, they say 'I'd like to do this' or 'I'd prefer to do that job.' But I say, my daughter has no choice [she pauses]. People will decide for her.
They will impose decisions on her. [With tears in her eyes] I stopped my studies, because I didn't want to continue! But she cannot decide. She cannot. I'm sure that when she's 20, she'll have to go either in a home for disabled people or in a specific workplace. But you can't get a job without relationships, so, she'll have no choice, and she'll have to go in a home. There's no hope. No hope.
This highlights that prenatal decision making rests on a major contradiction. Over the years following the child's birth, parents have gained information that would have enabled them to decide, with confidence, about the pregnancy. However, the problem they face and ultimately cannot solve is the fact that they would actually have been forced to decide without this information.
Discussion
The findings are the result of a study that covers two syndromes in three countries, which ensured that the findings did not relate to only one particular syndrome or national context.
The interviews were performed at a time when the interviewees were no longer in a situation of distress resulting from diagnosis disclosure and could reflect upon their past experience.
The findings of this study support the insight that non-medical information is a decisive factor in decisions about the continuation or termination of a pregnancy after a positive diagnosis for a syndrome that predisposes to intellectual disability. The identified informational needs are consistent with the results of different qualitative studies about decision making at the prenatal testing stage. Some studies highlighted the influence of the future quality of life of the unborn child [START_REF] Ahmed | Decisions about testing and termination of pregnancy for different fetal conditions: A qualitative study of European white and Pakistani mothers of affected children[END_REF], which relates to the need for information about the child as an individual person, as identified in this study. Some researchers underlined the influence of the self-interested motives of the prospective mother [START_REF] Korenromp | Maternal decision to terminate pregnancy in case of Down syndrome[END_REF], which relates to the informational need about the prospective parents as parents identified here. Lastly, other studies highlighted the influence of the prospective parents' knowledge of relevant health services [START_REF] Roberts | The role of genetic counseling in the elective termination of pregnancies involving fetuses with disabilities[END_REF], which relates to the informational need about the available support for the future family as identified in this study. In addition, the views expressed by interviewees match the respective national statistics; the German interviewees said they would have continued the pregnancy more often than the French and the Belgian interviewees [START_REF] Ville | Parental decisions to abort or continue a pregnancy with an abnormal finding after an invasive prenatal test[END_REF].
The number of interviewees who said they would have continued the pregnancy (13) may seem high in comparison with the number of interviewees who said they would have terminated the pregnancy (eight). It is therefore necessary to investigate if a post hoc rationalisation could have taken place (i.e., if some parents who have and live with a child with a disability have tried to reduce a cognitive dissonance and provided a biased answer).
Fortunately, the data allowed us to compare the answers of parents who said they would have kept the child if they had known about the syndrome at the prenatal stage with the answers of parents who did know about the syndrome and kept the child. It appears that the typical justification tends to be the same: when the decision to have a child has been made, the characteristics of the child are not important. Thus, a father who had to make a decision at the prenatal stage explained: When I work with people, I don't work with percentage. I say: 'it is your decision, you have to put up with making a decision, and you have to make your own decision.
[…] No mathematics, no study, no scientific book will help you. It's your own decision. That's what my daughter taught me.
Similarly, a mother who did not have to make such a decision said: "Statistics are complex, and for the parents, it does not change much. An accident can always happen.
Those who don't accept risks should not have children!" The point in this study is not to know what people would actually have done, but to find out what they would base their decision on. The similar line of reasoning of these two categories of parents within the group of interviewees, who said they would have kept the child, is, in this perspective, reliable.
The importance of the three types of non-medical information has crucial theoretical implications for the decision-making process. This finding highlights that an appropriate recognition theory should consider as crucial the failure to identify the decisive role of the person who "recognises" and the social environment where the recognition takes places, and not so much the characteristics of the "recognised" person. This is consistent with previous studies, although here a different perspective was adopted with different implications. A number of disability studies have criticised the views that focus on individual medical characteristics and insist on the social dimensions of disability [START_REF] Asch | Prenatal diagnosis and selective abortion: A challenge to practice and policy[END_REF][START_REF] Parens | Disability rights critique of prenatal genetic testing: Reflections and recommendations[END_REF]. The main finding of this study goes further and suggests how important it is to describe and analyse the support needed by those who have to care for people with a disability. Empowering these caregivers is a stepping stone towards recognition of people with a disability, in addition to being a major practical concern. This point is particularly apparent in the group of interviewees who said they would have terminated the pregnancy; the majority of them referred to a lack of social support on which to ground this decision. The caregivers' needs are most important since traditional explanatory factors such as the parents' professional background, the mother's age or matrimonial status did not appear to play an important role in this study. The parents' needs seem to be related to very individual situations and individual biographies.
Ten interviewees reported that their main concern at the time of the disclosure of the diagnosis related to what their child would be able to do as an adult. This is the only piece of information they asked medical professionals about. Yet this, unfortunately, is a question that could not be answered. Other essential information, however, could be provided. This study supports the view that expectant parents need information regarding available social support as well as their own ability to cope. This has important practical implications. Meeting the prospective parents' informational needs requires data that cannot usually be provided by medical staff and is rarely available during pregnancy since it relates to a future state of the individual child and of the parents and, to a lesser extent, of society. This finding supports the view that it could be useful for professionals and lay people, who are familiar with the everyday life aspects of disability, to assist medical staff by bridging the traditional divide between perinatal medicine and disability policies [START_REF] Ville | Parental decisions to abort or continue a pregnancy with an abnormal finding after an invasive prenatal test[END_REF]. Prospective parents need to be provided with information about the life of a family with a child who has the corresponding syndrome in order to substantiate their own reflections about the child, themselves, and the available social support. Such information could be based on a range of testimonies covering different and contrasting experiences, as well as different aspects of daily life, including statistical data about the development of the child's abilities as they grow up. Today's digital technologies make access to a range of information possible, and users can choose the type and the amount of information they wish to have, according to their own needs.
my parents and Kevin's father, I have nobody. I said okay, maybe it is better not to keep the child. And I must say that I immensely regret it. I have regretted it to this day, because finally …Why [should I have not kept the child]? […] Of course, it would have been different. But Kevin would have had a brother! This would have been good. I have very often thought, my God, even with a second child, I'd have made it. Sure, I would have had to restrict myself. It would have been difficult, but he would have had a brother! And that, I really … Well … Yes. I regretted it.
, whose 12-year-old daughter has Down syndrome: I would abort. Sure. […] It wouldn't be because of the everyday difficulties, although they exist and we have a different life. But the biggest issue for me, it's what happens to these children after their parents' death, what their future will be like. That's what I'm afraid of. And that's why, if I had the choice now, and if I had had the choice, I would have aborted.
with a 32-year-old daughter with Down syndrome explained: Now, since Stephanie is actually what she is, I would tend to say, I did well [to continue the pregnancy]. But when you see other people with Down syndrome who have other problems, who are in much worse situations … I think … It's a difficult choice. […] Today, Stephanie is thirty-two, I can see how far she has gone in life, so I see things differently. A man (Belgium) whose 16-year-old daughter has Down syndrome similarly insisted on the importance of the child's personality: It's difficult to go back in time and say what I would have done. Now … I lived with my daughter, and I would not like a different life. I'm happy that I had Laura and not another child. […] Because I've changed. I see things differently now. I have lived with Laura, I have met people with a disability, so the way I see things has changed.
<Please insert Table 2 about here> <Please insert Table 3 about here> <Please insert Table 4 about here>
Table 1. Professional occupation

| | Number of interviewees | Percentage of interviewees |
|---|---|---|
Table 2. Age of the mother at the time of the birth and age of the child at the time of the inquiry

| Age of the mother at the time of the birth | Number of interviewees | Percentage of interviewees | Age of the child | Number of children | Percentage of children |
|---|---|---|---|---|---|
| 20-25 years | 5 | 16 | Less than 7 years | 4 | 11 |
| 26-30 years | 9 | 29 | 7-11 years | 7 | 19 |
| 31-35 years | 10 | 32 | 12-20 years | 19 | 51 |
| 36-40 years | 7 | 23 | Older than 20 | 7 | 19 |
| Total | 31 | 100 | Total | 37 | 100 |
Table 4. Rank of the child with a disability amongst siblings

| | Number of children | Percentage of children |
|---|---|---|
Acknowledgements
I am grateful to Nick Jones for proofreading this text, and to two anonymous reviewers, whose comments strongly contributed to improve the manuscript. I very warmly thank all the interviewees for their contribution.
Code de la Santé Publique, art. L 1122-1 (2004). Retrieved from http://www.legifrance.gouv.fr
Code Pénal, art. L 223-15-2 (2000). Retrieved from http://www.legifrance.gouv.fr
Corbin, J., & Strauss, A. (2008). Basics of qualitative research: Techniques and procedures for developing grounded theory (3rd ed.). Thousand Oaks, CA: Sage.
… is used when the interviewee did not finish the sentence or paused briefly.
Author note
This research project was supported by a Marie Curie Intra European Fellowship within the 7th European Framework Programme and by the Maison des Sciences de l'Homme and the Thyssen Stiftung through a joint Clemens Heller grant. The funding bodies have imposed no restriction on free access to or publication of the research data. The author declares no financial and nonfinancial conflict of interest. |
00176033 | en | [
"shs.eco"
] | 2024/03/05 22:32:13 | 2007 | https://shs.hal.science/halshs-00176033/file/Flachaire_Hollard_07c.pdf | Emmanuel Flachaire
Guillaume Hollard
Model Selection in Iterative Valuation Questions
Keywords: starting point bias, preference uncertainty, contingent valuation
JEL Classification: Q26, C81
The Range model outperforms other standard models and confirms that, when uncertain, respondents tend to accept proposed bids.
Introduction
The NOAA panel recommends the use of a dichotomous choice format in contingent valuation (CV) surveys [START_REF] Arrow | Report of the NOAA panel on contingent valuation[END_REF]. To improve the efficiency of dichotomous choice contingent valuation surveys, follow-up questions are frequently used. While these enhance the efficiency of dichotomous choice surveys, several studies have found that they yield willingness-to-pay estimates that are substantially different from estimates implied by the first question alone. This is the so-called starting point bias. 1 Many authors have proposed some specific models to handle this problem [START_REF] Herriges | Starting point bias in dichotomous choice valuation with follow-up questioning[END_REF][START_REF] Alberini | Modeling response incentive effects in dichotomous choice valuation data[END_REF][START_REF] Deshazo | Designing transactions without framing effects in iterative question formats[END_REF][START_REF] Whitehead | Incentive incompatibility and starting-point bias in iterative valuation questions[END_REF][START_REF] Cooper | One and one-half bids for contingent valuation[END_REF][START_REF] Lechner | A modelisation of the anchoring effect in closed-ended question with follow-up[END_REF][START_REF] Flachaire | Controlling starting-point bias in double-bounded contingent valuation surveys[END_REF].
In [START_REF] Flachaire | Starting-point bias and respondent uncertainty in dichotomous choice valuation surveys[END_REF], we proposed a model, called the Range model, in which individuals hold a range of acceptable values, rather than a precisely defined value of their willingness-to-pay. 2 In the Range model, starting point bias occurs as a result of respondent uncertainty when answering the first question, while existing models assume that starting point bias occurs while answering the second question. 3This paper proposes further tests of the Range model: (1) we test the Range model on another dataset and (2) we test the Range model against most existing models. An additional result of this paper is a clarification of the relation among existing models. It is shown that existing models can be derived from three general ones. In some favorable cases, this allows us to compare the performance of existing models.
The article is organized as follows. The following section presents the Range model. The subsequent sections present other standard models, the interrelation between all the models and an application. The final section concludes.
Range model
The Range model, developed in [START_REF] Flachaire | Starting-point bias and respondent uncertainty in dichotomous choice valuation surveys[END_REF], is a dichotomous choice model which explains starting point bias by respondent's uncertainty. It models the individual decision process, using the principle of "coherent arbitrariness" [START_REF] Ariely | Coherent arbitrariness: Stable demand curves without stable preferences[END_REF],4 and can be estimated from a bivariate probit model.
Decision process
In dichotomous choice contingent valuation with follow-up questions, two questions are presented to respondents. The first question is "Would you agree to pay x$?". The second, or follow-up, question is similar but asks for a higher bid offer if the initial answer is yes and a lower bid offer otherwise. The Range model is based on the following decision process :
1. Prior to a valuation question, the respondent holds a range of acceptable values:
$\mathrm{wtp}_i \in [\underline{W}_i, \overline{W}_i]$ with $\overline{W}_i - \underline{W}_i = \delta$   (1)
where $\overline{W}_i$ is the upper bound of the range and $\underline{W}_i$ its lower bound.
2. Confronted with a first valuation question, the respondent selects a value inside that range according to the following rule:
$W_i = \operatorname*{arg\,min}_{\mathrm{wtp}_i} |\mathrm{wtp}_i - b_{1i}|$ with $\mathrm{wtp}_i \in [\underline{W}_i, \overline{W}_i]$   (2)
A respondent selects a value so as to minimize the distance between his range of willingness-to-pay and the proposed bid b 1i . In other words, W i = b 1i if the bid falls within the WTP range, W i is equal to the upper bound of the range if b 1i is greater than the upper bound of the WTP range, and W i is equal to the lower bound of the range if b 1i is less than the lower bound of the WTP range.
3. The respondent answers the questions according to the selected value:
[Diagram: the bid axis with the WTP range $[\underline{W}_i, \overline{W}_i]$; a bid $b_{1i}$ below the range is accepted (YES), a bid above the range is refused (NO), and a bid inside the range may receive either answer (?).]
He will agree to pay any amount below $\underline{W}_i$ and refuse to pay any amount that exceeds $\overline{W}_i$. When the first bid falls within the WTP range, he can answer yes or no (?): we assume in such a case that a respondent answers yes to the first question with probability ξ and no with probability 1 − ξ.
If respondents always answer yes when the first bid belongs to the interval of acceptable values (ξ = 1), the model is called the Range yes model. If respondents always answer no when the first bid belongs to the interval of acceptable values (ξ = 0), the model is called the Range no model.
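To make the decision process concrete, the following C sketch simulates the two answers of one respondent under one reading of steps 1-3: the value selected at the first bid is reused to answer the follow-up question, in line with the "coherent arbitrariness" principle. The function name and the use of rand() for the ξ draw are illustrative assumptions, not part of the paper.

```c
#include <stdlib.h>

/* wlow/whigh: bounds of the range of acceptable values;
 * b1/b2: first and follow-up bids; xi: probability of answering yes
 * when the first bid falls inside the range; r1/r2: 0/1 answers. */
void range_answers(double wlow, double whigh, double b1, double b2,
                   double xi, int *r1, int *r2)
{
    /* Step 2: select the value of the range closest to the first bid. */
    double w = (b1 < wlow) ? wlow : (b1 > whigh) ? whigh : b1;

    /* Step 3: answer the first question. */
    if (b1 > whigh)      *r1 = 0;                           /* bid above range */
    else if (b1 < wlow)  *r1 = 1;                           /* bid below range */
    else                 *r1 = ((double)rand() / RAND_MAX) < xi;

    /* Follow-up bid (higher after a yes, lower after a no) is compared
     * with the selected value. */
    *r2 = (w > b2);
}
```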
Estimation
In [START_REF] Flachaire | Starting-point bias and respondent uncertainty in dichotomous choice valuation surveys[END_REF], we show that the Range model can be estimated from a more general random effect probit model, that also encompasses the Shift model proposed by [START_REF] Alberini | Modeling response incentive effects in dichotomous choice valuation data[END_REF]. If we use a linear model and if we assume that the distribution of WTP is Normal, the probability that the individual i answers yes to the j th question, j = 1, 2 equals to:
$M_1:\quad P(W_{ji} > b_{ji}) = \Phi\!\left(X_i\alpha - \tfrac{1}{\sigma}\,b_{ji} + \lambda_1 D_j r_{1i} + \lambda_2 D_j (1 - r_{1i})\right)$   (3)
where $r_{1i}$ is the response to the first payment question, $D_1 = 0$ and $D_2 = 1$, $\alpha = \beta/\sigma$, $\lambda_1 = \delta_1/\sigma$ and $\lambda_2 = \delta_2/\sigma$. Based on this equation, the parameters are interrelated according to:
$\beta = \alpha\sigma$, $\delta_1 = \lambda_1\sigma$ and $\delta_2 = \lambda_2\sigma$.   (4)
When we use just the responses to the initial payment question (j = 1), this equation simplifies to:
$P(\mathrm{yes}) = P(W_{1i} > b_{1i}) = \Phi\!\left(X_i\alpha - \tfrac{1}{\sigma}\,b_{1i}\right)$   (5)
Moreover, the probability that the individual i answers yes to the initial and the followup questions (r 1i = 1, j = 2) is equal to:
$P(\mathrm{yes},\mathrm{yes}) = \Phi\!\left(X_i\alpha - \tfrac{1}{\sigma}\,b_{2i} + \tfrac{\delta_1}{\sigma}\right)$   (6)
From the estimation based on M 1 , different models can be considered:
• δ₁ < 0 and δ₂ > 0 corresponds to the Range model (with δ₂ − δ₁ = δ).
• δ₁ < 0 and δ₂ = 0 corresponds to the Range yes model.
• δ₁ = 0 and δ₂ > 0 corresponds to the Range no model.
• δ₁ = δ₂ corresponds to the Shift model.
• δ₁ = δ₂ = 0 corresponds to the Double-bounded model.
It is clear that the Range model and the Shift model are non-nested (one model is not a special case of the other); they can be tested through M 1 .
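To make the estimation concrete, the sketch below computes the log-likelihood contribution of one respondent under $M_1$ (equation (3)). It is a simplification: the two answers are treated as independent probits given the index, whereas the paper's random-effect (bivariate) probit allows a correlation between the two error terms; the function and argument names are assumptions.

```c
#include <math.h>

/* Standard normal CDF. */
static double norm_cdf(double x) { return 0.5 * erfc(-x / sqrt(2.0)); }

/* xb: X_i*alpha (covariate index without the bid term); b1, b2: bids;
 * r1, r2: 0/1 answers; sigma: WTP dispersion;
 * l1 = delta_1/sigma, l2 = delta_2/sigma. */
double loglik_M1(double xb, double b1, double b2, int r1, int r2,
                 double sigma, double l1, double l2)
{
    double p1 = norm_cdf(xb - b1 / sigma);                   /* eq. (5) */
    double p2 = norm_cdf(xb - b2 / sigma + (r1 ? l1 : l2));  /* eq. (3), D_2 = 1 */
    return log(r1 ? p1 : 1.0 - p1) + log(r2 ? p2 : 1.0 - p2);
}
```

The Shift, Range yes, Range no and Double-bounded variants are then obtained by constraining l1 and l2 as listed above.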
Interpretation
Estimation of the Range model provides estimates of β, σ, δ 1 and δ 2 , from which we can estimate a mean of WTP µ ξ and a dispersion of WTP σ. This last mean of WTP would be similar to the mean of WTP estimated using the first questions only, that is, based on the single-bounded model. Additional information is obtained from the use of follow-up questions: estimates of δ 1 and δ 2 allow us to estimate a range of means of WTP:
$[\mu_0;\ \mu_1] = [\mu_\xi + \delta_1;\ \mu_\xi + \delta_2]$
with $\delta_1 \le 0$ and $\delta_2 \ge 0$.
The lower bound µ 0 corresponds to the case where respondents always answer no if the bid belongs to the range of acceptable values (ξ = 0). Conversely, the upper bound µ 1 corresponds to the case where respondents always answer yes if the bid belongs to the range of acceptable values (ξ = 1). How respondents answer the question when the bid belongs to the range of acceptable values can be tested as follows:
• respondents always answer no corresponds to the null hypothesis H 0 : δ 1 = 0
• respondents always answer yes corresponds to the null hypothesis H 0 : δ 2 = 0.
Interrelation with standard models
Different models are proposed in the literature to control for starting point bias: anchoring bias, structural shift effects and ascending/descending sequences. All these models assume that the second answer is sensitive to the first bid offer. They assume that a prior willingness-to-pay W i is used to answer the first bid offer, and an updated willingness-topay W ′ i is used by the respondents to answer the second bid. It follows that an individual answers yes to the first and to the second bids if:
$r_{1i} = 1 \Leftrightarrow W_i > b_{1i}$ and $r_{2i} = 1 \Leftrightarrow W'_i > b_{2i}$   (8)
Each model leads to a specific definition of W ′ i . In the following subsections, we briefly review some standard models, their estimation and the possible interrelations between them and the range model previously defined.
Models
Anchoring model: [START_REF] Herriges | Starting point bias in dichotomous choice valuation with follow-up questioning[END_REF] propose a model where the respondents combine their prior WTP with the value provided by the first bid as follows:
$W'_i = (1 - \gamma)\,W_i + \gamma\, b_{1i}$   (9)
The first bid offer plays the role of an anchor: it causes the WTP to come to it.
Shift model: [START_REF] Alberini | Modeling response incentive effects in dichotomous choice valuation data[END_REF] propose a model where the WTP systematically shifts between the two answers:
$W'_i = W_i + \delta$   (10)
The first bid offer is interpreted as providing information about the cost or the quality of the object. Indeed, a respondent can interpret a higher bid offer as paying more for the same object and a lower bid offer as paying less for a lower quality object.
Anchoring & Shift model: [START_REF] Whitehead | Incentive incompatibility and starting-point bias in iterative valuation questions[END_REF] proposes a model that combines anchoring and shift effects:
$W'_i = (1 - \gamma)\,W_i + \gamma\, b_{1i} + \delta$   (11)
In addition, see [START_REF] Aadland | Incentive incompatibility and starting-point bias in iterative valuation questions: comment[END_REF] and [START_REF] Whitehead | Incentive incompatibility and starting-point bias in iterative valuation questions: reply[END_REF] for estimation details.
Framing model: DeShazo ( 2002) proposes de-constructing iterative questions into their ascending and descending sequences. His results show that the answers that follow an initial yes cause most of the problems. He recommends using the decreasing follow-up questions only:
$W'_i = W_i$ if $r_{1i} = 0$   (12)
Using prospect theory [START_REF] Kahneman | Prospect theory: an analysis of decisions under risk[END_REF], Deshazo argues that the first bid offer is interpreted as a reference point if the answer to the first question is yes: the follow-up question is framed as a loss and the respondents are more likely to answer no to the second question.
Framing & Anchoring & Shift model: [START_REF] Flachaire | Controlling starting-point bias in double-bounded contingent valuation surveys[END_REF] propose applying anchoring and shift effects in ascending sequences only:
$W'_i = W_i + \gamma\,(b_{1i} - W_i)\, r_{1i} + \delta\, r_{1i}$   (13)
It takes into account questions that follow an initial yes. Empirical results suggest that gains in efficiency can be obtained compared to the Framing model. Note that this model is not based on the underlying decision process defined in section 2.
Estimation
Implementation of the Anchoring & Shift model can be based on a random effect probit model, with the probability that individual i answers yes to the j th question, j = 1, 2, equal to:
M 2 : P (W ji > b ji ) = Φ( X i α - (1/σ) b ji + θ (b 1i - b ji ) D j + λ D j ) (14)
where D 1 = 0 and D 2 = 1, α = β/σ, θ = γ/(σ -γσ) and λ = δ/(σ -γσ). Based on this equation, the parameters are interrelated according to:
β = ασ, γ = θσ/(1 + θσ) and δ = λσ(1 -γ). (15)
Implementation of the Anchoring model and of the Shift model can be derived from this last probability, respectively with δ = 0 and γ = 0. The Double-bounded model corresponds to the case δ = γ = 0.
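As an illustration of equation (15), the structural parameters can be recovered from the reduced-form probit estimates as follows; this is a minimal sketch and the variable names are ours.

```python
# Recovery of (beta, gamma, delta) from the reduced-form coefficients
# (alpha, sigma, theta, lambda) of the random effect probit, equation (15).

def structural_parameters(alpha, sigma, theta, lam):
    beta = alpha * sigma                              # beta = alpha * sigma
    gamma = theta * sigma / (1.0 + theta * sigma)     # anchoring parameter
    delta = lam * sigma * (1.0 - gamma)               # shift parameter
    return beta, gamma, delta
```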
The Framing & Anchoring & Shift model differs from the previous model by the fact that anchoring and shift effects occur in ascending follow-up questions only. Its implementation can be based on a random effect probit model, with the probability that individual i answers yes to the j th question, j = 1, 2, equal to:
M 3 : P (W ji > b ji ) = Φ( X i α - (1/σ) b ji + θ (b 1i - b ji ) D j r 1i + λ D j r 1i ) (16)
where D 1 = 0 and D 2 = 1, α = β/σ, θ = γ/(σ -γσ) and λ = δ/(σ -γσ). Based on this equation, the parameters are interrelated according to (15).
Interrelation between all the models
It can be helpful to see the interrelations between all the models. Indeed, some models are nested and thus, we can test a restricted model against an unrestricted model with standard inference based on a null hypothesis. Table 1 shows the restrictions to apply to the probabilities M 1 , M 2 and M 3 , defined in equations ( 3), ( 14), ( 16), in order to estimate the different models. For instance, it is clear that the Shift and the Range models are non-nested, but they are both special cases of M 1 . Thus, a Shift model can be selected against a Range model through the general form M 1 .
Model           | M 1              | M 2       | M 3
Double          | δ 1 = δ 2 = 0    | γ = δ = 0 | γ = δ = 0
Anchoring       |                  | δ = 0     |
Shift           | δ 1 = δ 2        | γ = 0     |
Anch-Shift      |                  | n. c.     |
Fram-Anch-Shift |                  |           | n. c.
Range           | δ 1 ≤ 0 ≤ δ 2    |           |
Range yes       | δ 1 ≤ 0, δ 2 = 0 |           | γ = 0
Application
In this application, we use a survey that involves a sample of users of the natural reserve of the Camargue, a major wetland in the south of France. The purpose of the contingent valuation survey was to evaluate how much individuals were willing to pay as an entrance fee to contribute to the preservation of the natural reserve. The survey was administered to 218 recreational visitors during spring 1997, using face to face interviews. Recreational visitors were selected randomly in seven sites all around the natural reserve. The WTP question used in the questionnaire was a dichotomous choice with follow-up.5 For a complete description of the contingent valuation survey, see [START_REF] Claeys-Mekdade | Quelle valeur attribuer à la Camargue? Une perspective interdisciplinaire économie et sociologie[END_REF]. Mean values of the WTP were estimated using a linear model [START_REF] Mcfadden | Issues in the contingent valuation of environmental goods: Methodologies for data collection and analysis[END_REF]. Indeed, [START_REF] Crooker | Parametric and semi-nonparametric estimation of willingness-to-pay in the dichotomous choice contingent valuation framework[END_REF] show that the simple linear probit model is often more robust in estimating the mean WTP than other parametric and semi-parametric models.
The mean WTP estimated from a single-bounded model is μ = 113.5, with a confidence interval of [98.1; 138.2], and the dispersion of WTP is estimated with a standard error of 17.9. The confidence interval of μ is obtained by simulation with the Krinsky and Robb procedure; see Haab and McConnell (2003, ch.4) for more details.
Let us consider the following standard models: double-bounded, anchoring, shift, and anchoring & shift models. These models can be estimated from M 2 , with or without some specific restrictions, see (14). Table 2 presents estimated means of WTP μ and dispersions of the WTP distribution σ; standard errors are given in italics and confidence intervals of μ, obtained by simulation with the Krinsky and Robb procedure, are presented in brackets. As expected, the confidence interval of the mean WTP and the standard error of the dispersion of the WTP decrease significantly when we use the usual double-bounded model (Double) instead of the previous single-bounded model. However, estimates of the mean WTP in the two models are very different (89.8 vs. 113.5). Such inconsistent results suggest a problem of starting-point bias. This leads us to consider the Anchoring & Shift model (Anch-Shift) to control for such effects, in which the Double, Anchoring and Shift models are nested. We can compute a likelihood-ratio (LR) statistic to test a restricted model against the Anchoring & Shift model. The LR statistic is twice the difference between the maximized values of the log-likelihood functions (given in the last column) and is asymptotically distributed as a Chi-squared distribution. We can test the Double model against the Anch-Shift model with the null hypothesis H 0 : γ = δ = 0, for which LR = 10.4. A P-value can be computed and is equal to P = 0.0055: we reject the null hypothesis and thus the Double model. We can test the Anchoring model against the Anch-Shift model (H 0 : δ = 0): we reject the null hypothesis (P = 0.0127). Finally, we can test the Shift model against the Anch-Shift model (H 0 : γ = 0): we do not reject the null (P = 0.1572). From this analysis, the Shift model is selected.
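For illustration, the likelihood-ratio tests reported above can be reproduced from the maximised log-likelihoods of Table 2; the short sketch below is ours and simply restates the computation.

```python
# LR tests between nested models, using the log-likelihoods reported in Table 2.
from scipy.stats import chi2

ll = {"Double": -177.3, "Anchoring": -175.2, "Shift": -173.1, "Anch-Shift": -172.1}

def lr_test(ll_restricted, ll_unrestricted, df):
    lr = 2.0 * (ll_unrestricted - ll_restricted)
    return lr, chi2.sf(lr, df)            # LR statistic and asymptotic P-value

print(lr_test(ll["Double"], ll["Anch-Shift"], df=2))     # ~ (10.4, 0.0055)
print(lr_test(ll["Anchoring"], ll["Anch-Shift"], df=1))  # ~ (6.2, 0.013)
print(lr_test(ll["Shift"], ll["Anch-Shift"], df=1))      # ~ (2.0, 0.157)
```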
It is interesting to observe that, when we compare the results between the Shift and the Single-bounded models, the confidence intervals and standard errors are not significantly different. This supports the conclusion of [START_REF] Herriges | Starting point bias in dichotomous choice valuation with follow-up questioning[END_REF]: they argue that once we have controlled for the starting-point effect, the efficiency gains from the follow-up questioning can be small.
To go further, we consider a model where anchoring and shift effects occur in ascending sequences, but not in descending sequences (Fra-Anc-Shi). The case with a shift effect in ascending sequences (no anchoring) corresponds to the Range model where respondents always answer yes if the initial bid belongs to their range of acceptable values. Thus, we call this last model Range yes rather than Fra-Shi in the table. The models can be estimated from M 3 , with or without some specific restrictions, see (16). Estimation results are given in Table 3. If we compute an LR statistic to test the Double model against the Fra-Anc-Shi model (H 0 : γ = δ = 0), we reject the null hypothesis (P = 0.0033). Conversely, if we test the Range yes model against the Fra-Anc-Shi model (H 0 : γ = 0), we do not reject the null hypothesis (P = 0.6547). From this analysis, the Range yes model is selected.
It is interesting to observe that the Range yes model provides efficiency gains compared to the single-bounded and Shift models: confidence intervals and standard errors of the mean and of the dispersion of WTP are smaller. However, the Shift model is selected from M 2 and the Range yes is selected from M 3 : these two models are non-nested and no inference is used to select one model.
Next, we consider the model developed in this article, which accounts for starting-point bias with respondent uncertainty. This model can be estimated from the more general model M 1 and corresponds to the case δ 1 ≤ 0 ≤ δ 2 , see (3). An interesting feature of M 1 is that the Double and the Shift models are special cases, respectively with the restrictions δ 1 = δ 2 = 0 and δ 1 = δ 2 . Thus, even if the Range and the Shift models are non-nested, we can test them through M 1 . Estimation results are given in Table 4. The estimation result, obtained with no restrictions, provides δ1 ≤ 0 ≤ δ2 . It corresponds to the case of the Range model and thus, estimation results with no constraints are presented in the line called Range. This result suggests that the Range model is more appropriate than the Shift model; otherwise we would have had δ 1 and δ 2 quite similar and with the same sign. This can be confirmed by testing the Shift model against the Range model (H 0 : δ 1 = δ 2 ); we reject the null hypothesis (P = 0.0736) at a nominal level of 0.1. In addition, we test the Range yes model against the Range model (H 0 : δ 2 = 0).
We do not reject the null hypothesis (P = 0.5270). From this analysis, the Range yes model is selected.
Finally, inference based on M 1 , M 2 and M 3 leads us to select a Range model, where the respondents answer yes if the initial bid belongs to their range of acceptable values. 6 This model gives an estimate of the mean WTP close to the single-bounded model (117.0 vs. 113.5), with a smaller confidence interval ([106.7;129.8] vs. [98.1;138.2]) and smaller standard errors (12.8 vs. 17.9). Table 5 presents the full econometric results of this model together with the single-bounded model. It is clear from this table that the standard errors in the Range yes model are always significantly reduced compared to the standard errors in the single-bounded model. In other words, the selected Range model provides both results that are consistent with the single-bounded model and efficiency gains. Furthermore, we can draw additional information from the Range model: from (7), it provides a range of values, rather than a unique WTP mean value.
From our results, we can make a final observation. Estimation of a random effect probit model with an estimated correlation coefficient ρ less than unity suggests that respondents use two different values of WTP to answer the first and the second questions. This is a common interpretation in empirical studies; see [START_REF] Alberini | Modeling response incentive effects in dichotomous choice valuation data[END_REF]. If we restrict our analysis to the standard models (Double, Anchoring, Shift and Anch-Shift), our results lead us to select the Shift model, for which ρ = 0.63 (significantly less than 1). However, if we consider a more general model M 1 that encompasses the Range and the Shift models, estimation results lead us to select the Range yes model, for which ρ = 1 (the estimation does not restrict ρ to be equal to one; this value is obtained from an unrestricted estimation). It suggests that respondents answer both questions according to the same value, contrary to the results obtained with the standard models.
Table 1: Nested models (n.c.: no constraints)
Table 2: Random effect probit models estimated from M 2
Model      | constraint | mean WTP µ [c.i.]   | disp. WTP σ (s.e.) | anchor γ (s.e.) | shift δ (s.e.) | corr. ρ (s.e.) | ℓ
Double     | γ = δ = 0  | 89.8 [84.4;96.5]    | 52.6 (10.0)        | -               | -              | 0.71 (0.16)    | -177.3
Anchoring  | δ = 0      | 133.8 [108.4;175.2] | 92.0 (44.5)        | 0.51 (0.23)     | -              | 0.78 (0.14)    | -175.2
Shift      | γ = 0      | 119.4 [105.7;139.7] | 69.0 (19.9)        | -               | -26.7 (9.1)    | 0.63 (0.17)    | -173.1
Anch-Shift | n. c.      | 158.5 [122.6;210.7] | 100.8 (53.5)       | 0.46 (0.29)     | -17.1 (13.9)   | 0.73 (0.16)    | -172.1
Model       | constraint | mean WTP µ [c.i.]   | disp. WTP σ (s.e.) | anchor γ (s.e.) | shift δ (s.e.) | corr. ρ (s.e.) | ℓ
Double      | γ = δ = 0  | 89.8 [84.4;96.5]    | 52.6 (10.0)        | -               | -              | 0.71 (0.16)    | -177.3
Range yes   | γ = 0      | 117.0 [106.7;129.8] | 65.0 (12.8)        | -               | -30.7 (13.2)   | 1              | -171.7
Fra-Anc-Shi | n. c.      | 116.4 [104.6;132.7] | 65.1 (12.8)        | -0.02 (0.41)    | -31.7 (21.2)   | 1              | -171.6
Table 3: Random effect probit models estimated from M 3
Table 4: Random effect probit models estimated from M 1
Model     | constraint       | mean WTP µ [c.i.]   | disp. WTP σ (s.e.) | shift δ 1 (s.e.) | shift δ 2 (s.e.) | corr. ρ (s.e.) | ℓ
Double    | δ 1 = δ 2 = 0    | 89.8 [84.4;96.5]    | 52.6 (10.0)        | -                | -                | 0.71 (0.16)    | -177.3
Shift     | δ 1 = δ 2        | 119.4 [105.7;139.7] | 69.0 (19.9)        | -26.7 (9.1)      | -26.7 (9.1)      | 0.63 (0.17)    | -173.1
Range     | n. c.            | 126.0 [110.7;147.3] | 73.5 (21.6)        | -43.7 (27.6)     | 6.5 (8.8)        | 1              | -171.5
Range yes | δ 1 ≤ 0, δ 2 = 0 | 117.0 [106.7;129.8] | 65.0 (12.8)        | -30.7 (13.2)     | -                | 1              | -171.7
Other response effects could explain the difference between estimates of mean WTP, such as framing, respondents' assumptions about the scope of the program and wastefulness of the government; see [START_REF] Alberini | Modeling response incentive effects in dichotomous choice valuation data[END_REF] for a discussion.
This is in line with studies putting forward that individuals are rather unsure of their own willingness-to-pay[START_REF] Li | Discrete choice under preference uncertainty: an improved structural model for contingent valuation[END_REF][START_REF] Ready | Contingent valuation when respondents are ambivalent[END_REF], 2001[START_REF] Welsh | Elicitation effects in contingent valuation: comparisons to a multiple bounded discrete choice approach[END_REF][START_REF] Van Kooten | Preference uncertainty in non-market valuation: a fuzzy approach[END_REF][START_REF] Hanley | What's it worth? Exploring value uncertainty using interval questions in contingent valuation[END_REF][START_REF] Alberini | Analysis of contingent valuation data with multiple bids and response options allowing respondents to express uncertainty[END_REF].
A notable exception is[START_REF] Lechner | A modelisation of the anchoring effect in closed-ended question with follow-up[END_REF]
These authors conducted a series of valuation experiments. They observed that "preferences are initially malleable but become imprinted (i.e. precisely defined and largely invariant) after the individual is called upon to make an initial decision".
The first bid b 1i is drawn randomly from {5, 10, 15, 20, 25, 30, 35, 40, 45, 50, 60, 70, 80, 90, 100}. If the answer to the first bid is no, a second bid b 2i < b 1i is drawn randomly. If the answer to the first bid is yes, a second bid b 2i > b 1i is drawn randomly. There was a high response rate (92.6%).
The Range yes model is empirically equivalent to a special case developed in[START_REF] Flachaire | Controlling starting-point bias in double-bounded contingent valuation surveys[END_REF], with an anchoring parameter equal to zero. In this last article, the results suggested a specific behavior (shift effect) in ascending sequences only. This interpretation was based on empirical results only, with an unknown underlying decision process. Here, we obtain similar empirical results, but the interpretation of the response behavior is very different.
Conclusion
In this article, we propose a unified framework that accommodates many of the existing models for dichotomous choice contingent valuation with follow-up and allows us to discriminate between them by simple parametric hypothesis tests. We further test the Range model, developed in [START_REF] Flachaire | Starting-point bias and respondent uncertainty in dichotomous choice valuation surveys[END_REF], against several other standard models. Our empirical results show that the Range model outperforms the other standard models and that, when uncertain, respondents tend to accept proposed bids. This confirms that respondent uncertainty is a valid explanation of various anomalies arising in contingent valuation surveys. |
01760338 | en | [
"shs.sport.ps"
] | 2024/03/05 22:32:13 | 2003 | https://insep.hal.science//hal-01760338/file/149-%20Bernard-Hausswirth_CyclingBJSP-2003-37-2-154-9.pdf | Thierry Bernard
Fabrice Vercruyssen
F Grego
Christophe Hausswirth
R Lepers
Jean-Marc Vallier
Jeanick Brisswalter
email: brisswalter@univ-tln.fr
Effect of cycling cadence on subsequent 3 km running performance in well trained triathletes
During the last decade, numerous studies have investigated the effects of the cycle-run transition on subsequent running adaptation in triathletes. [START_REF] Millet | Physiological and biomechanical adaptations to the cycle to run transition in Olympic triathlon: review and practical recommendations for training[END_REF] Compared with an isolated run, the first few minutes of triathlon running have been reported to induce an increase in oxygen uptake (V ~O2 ) and heart rate (HR), 2-4 an alteration in ventilatory efficiency (V ~E), [START_REF] Hue | The influence of prior cycling on biomechanical and cardiorespiratory response profiles during running in triathletes[END_REF] and haemodynamic modifications, that is, changes in muscle blood flow. [START_REF] Kreider | Cardiovascular and thermal response of triathlon performance[END_REF] Moreover, changes in running pattern have been observed after cycling, such as an increase in stride rate 3 6 and modifications in trunk gradient, knee angle in the non-support phase, and knee extension during the stance phase. [START_REF] Hausswirth | Relationships between mechanics and energy cost of running at the end of a triathlon and a marathon[END_REF] These changes are generally related to the appearance of leg muscle fatigue characterised by perturbation of the electromyographic activity of different muscle groups. [START_REF] Witt | Coordination of leg muscles during cycling and running in triathlon[END_REF] Recently, from a laboratory study, Vercruyssen et al [START_REF] Vercruyssen | Influence of cycling cadences on subsequent running performance in triathlon[END_REF] reported that it is possible for triathletes to improve the adaptation from cycling to running at an intensity corresponding to Olympic distance competition pace (80-85% of maximal oxygen uptake (V ~O2 MAX)). They showed a lower metabolic load during a running session after the adoption of the energetically optimal cadence (73 rpm), calculated from the V ~O2 -cadence relation, [START_REF] Brisswalter | Energetically optimal cadence vs. freely chosen cadence during cycling: effect of exercise duration[END_REF][START_REF] Coast | Linear increase in optimal pedal rate with increased power output in cycle ergometry[END_REF][START_REF] Marsh | The association between cycling experience and preferred and most economical cadences[END_REF][START_REF] Marsh | Effect of cycling experience, aerobic power and power output on preferred and most economical cycling cadences[END_REF] compared with the freely chosen cadence (81 rpm) or the theoretical mechanical optimal cadence (90 rpm). [START_REF] Neptune | A theorical analysis of preferred pedaling rate selection in endurance cycling[END_REF]
Furthermore, Lepers et al [START_REF] Lepers | Effect of cycling cadence on contractile and neural properties of knee extensors[END_REF] indicated that, after cycling, neuromuscular factors may be affected by exercise duration or by the choice of pedalling cadence. They observed, on the one hand, the appearance of neuromuscular fatigue after 30 minutes of cycling at 80% of maximal aerobic power and, on the other hand, that the use of a low (69 rpm) or high (103 rpm) cycling cadence induced a specific neuromuscular adaptation, assessed by the variation in the RMS/M wave ratio, interpreted as a change in central neural input.
From a short distance triathlon race perspective characterised by high cycling or running intensities, these observations raise a major question about the effect of neuromuscular fatigue and/or metabolic load induced by a prior cycling event on subsequent running performance. To the best of our knowledge, few studies have examined the effect of cycling task characteristics on subsequent running performance. [START_REF] Garside | Effects of bicycle frame ergonomics on triathlon 10-km running performance[END_REF][START_REF] Hausswirth | Effects of cycling alone or in a sheltered position on subsequent running performance during a triathlon[END_REF][START_REF] Hausswirth | Effect of two drafting modalities in cycling on running performance[END_REF] Hausswirth et al 15 16 indicated that riding in a continuous drafting position, compared with the no draft modality, significantly reduced oxygen uptake during cycling and improved the performance of a 5000 m run in elite triathletes. In addition, Garside and Doran [START_REF] Garside | Effects of bicycle frame ergonomics on triathlon 10-km running performance[END_REF] showed in recreational triathletes an effect of cycle frame ergonomics: when the seat-tube angle was changed from 73° to 81°, the performance of the subsequent 10 000 m run was improved, that is, there was a reduction in race time.
Therefore, the aim of this study was to examine in outdoor conditions the effects of different pedalling cadences (within the range 60-100 rpm) on the performance of a subsequent 3000 m track run, the latter depending mainly on both metabolic and neuromuscular factors. 17 18
METHODS
Participants
Nine well motivated male triathletes currently competing at the national level participated in the study. They had been training regularly and competing in triathlons for at least four years. For all subjects, triathlon was their primary activity; their mean (SD) times for Olympic distance and sprint distance triathlons were 120 minutes 37 seconds (3.2) and 59 minutes 52 seconds (3.4) respectively. Mean (SD) training distances a week were 9.1 (1.9) km for swimming, 220.5 (57.1) km for cycling, and 51.1 (8.9) km for running. The mean (SD) age of the subjects was 24.9 (4.0) years. Their mean (SD) body weight and height were 70.8 (3.8) kg and 179 (3.9) cm respectively. The subjects were asked to abstain from exhaustive training throughout the experiment. Finally, they were fully informed of the content of the experiment, and written consent was obtained before all testing, according to local ethical committee guidelines.
Maximal cycling test
Subjects first performed a maximal test to determine V ~O2 MAX and the ventilatory threshold. This test was carried out on an electromagnetically braked ergocycle (SRM; Jülich, Welldorf, Germany), 19 20 on which the handle bars and racing seat are fully adjustable both vertically and horizontally to reproduce the positions of each subject's bicycle. No incremental running test was performed in this study, as previous investigations indicated similar V ~O2 MAX values whatever the locomotion mode in triathletes who began the triathlon as their first sport. 21 22 This incremental session began with a warm up of 100 W for six minutes, after which the power output was increased by 30 W a minute until volitional exhaustion. During this protocol, V ~O2 , V ~E, respiratory exchange ratio, and HR were continuously recorded every 15 seconds using a telemetric system collecting gas exchanges (Cosmed K4, Rome, Italy) previously validated by Hausswirth et al. [START_REF] Hausswirth | The cosmed K4 telemetry system as an accurate device for oxygen uptake measurements during exercise[END_REF] V ~O2 MAX was determined according to the criteria described by Howley et al, [START_REF] Howley | Criteria for maximal oxygen uptake: review and commentary[END_REF] that is, a plateau in V ~O2 despite an increase in power output, a respiratory exchange ratio value of 1.15, or an HR over 90% of the predicted maximal HR (table 1). The maximal power output reached during this test was the mean value of the last minute. Moreover, the ventilatory threshold was calculated during the cycling test using the criterion of an increase in V ~E/V ~O2 with no concomitant increase in V ~E/V ~CO2 . [START_REF] Wasserman | Anaerobic threshold and respiratory gas exchange during exercise[END_REF]
Cycle-run performance sessions
All experiments took place in April on an outdoor track. Outside temperature ranged from 22 to 25°C, and there was no appreciable wind during the experimental period. Each athlete completed in random order three cycle-run sessions (20 minutes of cycling and a 3000 m run) and one isolated run (3000 m). These tests were separated by a 48 hour rest period. Before the cycle-run sessions, subjects performed a 10 minute warm up at 33% of maximal power. [START_REF] Lepers | Effect of cycling cadence on contractile and neural properties of knee extensors[END_REF] During the cycling bout of the cycle-run sessions, subjects had to maintain one of three pedalling cadences corresponding to 60, 80, or 100 rpm. These cycling cadences were representative of the range of cadences selected by triathletes in competition. 15 26 Indeed, it was recently reported that, on a flat road at 40 km/h, cycling cadences could range from 67 rpm with a 53:11 gear ratio to 103 rpm with a 53:17 gear ratio. [START_REF] Lepers | Effect of pedalling rates on physiological response during an endurance cycling exercise[END_REF] However, 60 rpm is close to the range of energetically optimal cadence values, [START_REF] Marsh | Effect of cycling experience, aerobic power and power output on preferred and most economical cycling cadences[END_REF] 80 rpm is near the freely chosen cadence, 6 8 and 100 rpm is close to the cadence used in a drafting situation. 15 16 According to previous studies of the effect of a cycling event on running adaptation, 2 5 the cycling bouts were performed at an intensity above the ventilatory threshold corresponding to 70% of maximal power output (80% V ~O2 MAX) and were representative of a sprint distance simulation. 15 16 The three cycling bouts of the cycle-run sessions were conducted on the SRM system next to the running track. The SRM system allowed athletes to maintain constant power output independent of cycling cadence. In addition, feedback on selected cadence was available to the subjects via a screen placed directly in front of them.
After cycling, the subjects immediately performed the 3000 m run on a 400 m track. The mean (SD) transition time between the cycling and running events (40.4 (8.1) seconds) was the same as that within actual competition. [START_REF] Millet | Physiological and biomechanical adaptations to the cycle to run transition in Olympic triathlon: review and practical recommendations for training[END_REF] During the running bouts, race strategies were free, the only instruction given to the triathlete being to run as fast as possible over the whole 3000 m.
Measurement of physiological variables during the cycle-run sessions
V ~O2 , V ~E, and HR were recorded every 15 seconds with a K4 RQ . The physiological data were analysed during the cycling bouts at the following intervals: 5th-7th minute (5-7), 9th-11th minute (9-11), 13th-15th minute (13-15), 17th-19th minute (17-19), and every 500 m during the 3000 m run (fig 1).
Measurement of biomechanical variables during the cycle-run sessions
Power output and pedalling cadence were continuously recorded during cycling bout. During the run, kinematic data were analysed every 500 m using a 10 m optojump system (MicroGate, Timing and Sport, Bolzano, Italy). From this system, speed, contact, and fly time attained were recorded every 500 m over the whole 3000 m. The stride rate-stride length combination was calculated directly from these values. Thus the act of measuring the kinematic variables had no effect on the subjects' running patterns within each of the above 10 m optical bands.
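The exact processing of the optojump data is not detailed in the text; the sketch below is one plausible reconstruction of how the stride rate-stride length combination can be derived from speed, contact time, and fly time, with definitions and names that are our assumptions.

```python
# Hedged sketch: stride rate and stride length from optojump measurements.
def stride_parameters(speed_m_s, contact_s, fly_s):
    cycle_time = contact_s + fly_s           # assumed: one contact + one flight per cycle
    stride_rate_hz = 1.0 / cycle_time        # assumed definition of the reported stride rate
    stride_length_m = speed_m_s * cycle_time # distance covered per cycle
    return stride_rate_hz, stride_length_m

# Example: 4.8 m/s (about 17.3 km/h) with contact + fly time of 0.675 s gives
# about 1.48 Hz, of the same order as the stride rates reported in table 2.
```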
Blood sampling
Capillary blood samples were collected from ear lobes. Blood lactate was analysed using the Lactate Pro system previously validated by Pyne et al. [START_REF] Pyne | Evaluation of the lactate pro blood lactate analyser[END_REF] Four blood samples were collected: before the cycle-run sessions (at rest), at 10 and 20 minutes during the cycling bouts, and at the end of the 3000 m run.
Statistical analysis
All data are expressed as mean (SD). The stability of the running pattern was described using the coefficient of variation ((SD/mean) × 100) for each athlete. [START_REF] Maruyama | Temporal variability in the phase durations during treadmill walking[END_REF] A two way analysis of variance (cadence × time period) for repeated measures was performed to analyse the effects of time and cycling cadence using V ~O2 , V ~E, HR, running velocity, stride variability, speed variability, stride length, and stride rate as dependent variables.
For this analysis, the stride and speed variability (in %) were analysed after an arcsine transformation. A Newman-Keuls post hoc test was used to determine differences among all cycling cadences and periods during exercise. In all statistical tests, the level of significance was set at p<0.05.
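As an illustrative sketch only (these are not the original analysis scripts), the descriptive quantities defined above could be computed as follows; function names are ours and the arcsine transform is assumed to take the usual arcsin(sqrt(p)) form.

```python
# Coefficient of variation and arcsine transformation used before the ANOVA.
import numpy as np

def coefficient_of_variation(x):
    x = np.asarray(x, dtype=float)
    return 100.0 * x.std(ddof=1) / x.mean()        # (SD/mean) x 100

def arcsine_transform(p_percent):
    p = np.asarray(p_percent, dtype=float) / 100.0  # proportion in [0, 1]
    return np.arcsin(np.sqrt(p))                    # assumed form of the transform
```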
RESULTS
3000 m performances
In this study, the performance of the isolated run was significantly better than the run performed after cycling (583.0 (28.3) and 631.1 (47.6) seconds for the isolated run and the mean of the cycle-run sessions respectively). No significant effect of cycling cadence was observed on subsequent 3000 m running performance. Running times were 625.7 (40.1), 630.0 (44.8), and 637.7 (57.9) seconds for the 60, 80, and 100 rpm run sessions respectively (table 2). The mean running speed during the first 500 m (fig 2) was significantly lower after the 60 rpm ride than after the 80 and 100 rpm cycling bouts (17.5 (1.1), 18.3 (1.1), and 18.3 (1.2) km/h respectively). In addition, the speed variability (from 500 to 2500 m) was significantly lower during the 60 rpm run session than for the other cycle-run conditions (2.18 (1.2)%, 4.12 (2.0)%, and 3.80 (1.8)% for the 60, 80, and 100 rpm runs respectively).
Running bouts of cycle-run sessions
Table 2 gives mean values for V ~O2 , V ~E, and HR for the running bouts. The statistical analysis indicated a significant interaction effect (time period × cycling cadence) on V ~O2 during subsequent running (p<0.05). V ~O2 values recorded during the run section of the 60 rpm session were significantly higher than during the 80 rpm or the 100 rpm sessions (p<0.05, table 2). These values represent respectively 92.3 (3.0)% (60 rpm run), 85.1 (0.6)% (80 rpm run), and 87.6 (1.2)% (100 rpm run) of cycle V ~O2 MAX, indicating a significantly higher fraction of V ~O2 MAX sustained by subjects during the 60 rpm run session from 1000 to 3000 m than under the other conditions (p<0.05, fig 3). Changes in stride rate within the first 500 m of the 3000 m run were significantly greater during the 80 and 100 rpm run sessions than during the 60 rpm run session (1.52 (0.05), 1.51 (0.05), and 1.48 (0.03) Hz respectively). No significant effect of cycling cadence was found on either stride variability during the run or blood lactate concentration at the end of the cycle-run sessions (table 2).
DISCUSSION
The main observations of this study confirm the negative effect of a cycling event on running performance when compared with an isolated run. However, we observed no effect of the particular choice of cycling cadence on the performance of a subsequent 3000 m run. Nevertheless, our results highlight an effect of the characteristics of the prior cycling event on metabolic responses and running pattern during the subsequent run.
Cycle-run sessions v isolated run and running performance
To our knowledge only one study has analysed the effect of cycling events on subsequent running performance when compared with an isolated run. [START_REF] Hausswirth | Effects of cycling alone or in a sheltered position on subsequent running performance during a triathlon[END_REF] The study showed, during a sprint distance triathlon (0.75 km swim, 20 km bike ride, 5 km run), a significant difference between a 5 km run after cycling (alone and in a sheltered position) and the run performed without a prior cycling event (isolated run). The cycling event caused an increase in mean 5 km race time (1014 seconds) and a decrease in mean running velocity (17.4 km/h) compared with the isolated run (980 seconds and 18.2 km/h).
Our results are in agreement, showing an impairment in running performance after the cycling event whatever the choice of pedalling cadence. There was an increase in mean running time (631 seconds) and a decrease in mean running velocity (17.2 km/h) compared with the performance in the isolated run (583 seconds and 18.5 km/h). Therefore, one finding of our study is that a prior cycling event can affect running performance over the 3 km as well as the 5 km and 10 km distances. 1 29 One hypothesis to explain the alteration in running performance after cycling could be the high metabolic load sustained by subjects at the end of cycling, characterised by an increase in blood lactate concentration (4-6 mmol/l) associated with a high fraction of V ~O2 MAX (81-83%) and HR max (88-92%). On the other hand, Lepers et al [START_REF] Lepers | Effect of cycling cadence on contractile and neural properties of knee extensors[END_REF] have recently shown in well trained triathletes a reduction in muscular force relating to both central and peripheral factors, that is, changes in M wave and EMG RMS, after 30 minutes of cycling performed at different pedalling cadences (69-103 rpm). We hypothesise that these modifications of neuromuscular factors, associated with the increasing metabolic load during cycling, could increase the development of fatigue just before running, whatever the choice of pedalling cadence.
Cycling cadences and physiological and biomechanical characteristics of running
Our results show no effect of different cycling cadences (60-100 rpm) commonly used by triathletes on subsequent running performance. A classical view is that performance in triathlon running depends on the characteristics of the preceding cycling event, such as power output, pedalling cadence, and metabolic load. 1 29 Previous investigations have shown a systematic improvement in running performance when the metabolic load of the cycling event was reduced either by drafting position [START_REF] Hausswirth | Effects of cycling alone or in a sheltered position on subsequent running performance during a triathlon[END_REF] or by racing on a bicycle with a steep seat-tube angle (81°). [START_REF] Garside | Effects of bicycle frame ergonomics on triathlon 10-km running performance[END_REF] Unlike a 3000 m run, which is characterised by neuromuscular and anaerobic factors, 17 18 the improvement in running performance in these previous studies was observed over a variety of long distances (5-10 km) where the performance depends mainly on the capacity of the subject to minimise energy expenditure over the whole race. 1 14 15 29 Therefore one explanation for our results is that minimisation of metabolic load through cadence choice during cycling has a significant effect on the running time mainly during events of long duration. Further research is needed into the effect of cadence choice on total performance for running distances close to those of Olympic and Ironman triathlon events.
However, despite the lack of cadence effect on 3000 m race time, our results indicate an effect of cadence choice (60-100 rpm) on the stride pattern or running technique during a 3000 m run. This difference was mainly related to the higher velocity preferred by subjects immediately after cycling at 80 and 100 rpm and to the lower velocity from 1500 to 2500 m after cycling at high cadences. These results may suggest that the use of a low pedalling cadence (close to 60 rpm) reduces variability in running velocity, that is, one of the factors of running technique, during a subsequent run.
For running speeds above 5 m/s (>18 km/h) and close to maximum values, the change in stride rate is one of the most important factors in increasing running velocity. In our study, the significant increase in running speed observed during the first 500 m of the 80 and 100 rpm run sessions was associated with a significantly higher stride rate (1.51-1.52 Hz) than in the 60 rpm run session (1.48 Hz). The relation between stride rate and cycling cadence has been reported by Hausswirth et al 16 in elite subjects participating in a sprint distance triathlon, indicating a significantly higher stride rate after cycling at 102 rpm (1.52 Hz) than after cycling at 85 rpm (1.42 Hz) for the first 500 m of the run.
These observations suggest that immediately after the cycle stage, triathletes spontaneously choose a race strategy directly related to the pedalling cadence, but this effect seems to be transitory, as no significant differences between conditions were reported after the first 500 m of running. This is in agreement with previous studies in which changes in stride pattern and running velocity were found to occur only during the first few minutes of the subsequent run. 1 3 5 6 Furthermore, the fact that triathletes prefer to run at a high pace after cycling at 80 and 100 rpm seems to confirm different anecdotal reports of triathletes. Most triathletes prefer to adopt a high pedalling cadence during the last few minutes of the cycle section of actual competition. Three strategies may be evoked to characterise the choice of cycling cadence: speeding up in the last part of the cycle stage in order to get out quickly on the run (when elite triathletes compete in draft legal events) [START_REF] Millet | Physiological and biomechanical adaptations to the cycle to run transition in Olympic triathlon: review and practical recommendations for training[END_REF] ; reducing power output and spin to minimise the effects of the bike-run transition; maintaining power output while increasing cadence. However, our results show that such a strategy is associated with higher metabolic cost during the cycling stage and greater instability in running pattern, suggesting that it is not physiologically beneficial for the athlete to adopt high pedalling cadences in triathlon competition.
During our study, cycling at 100 rpm was associated with an increase in metabolic cost as classically observed in previous studies for a high cadence such as an increase in V ~O2 , HR, V ~E, [START_REF] Hagan | Effect of pedal rate on cardiorespiratory responses during continuous exercise[END_REF] and blood lactate concentration. [START_REF] Brisswalter | Energetically optimal cadence vs. freely chosen cadence during cycling: effect of exercise duration[END_REF] At the end of the 100 rpm cycling task, mean blood lactate concentration was 7.0 (2.0) mmol/l, suggesting a high contribution of anaerobic metabolism, 8 whereas it was 4.6 (2.1) mmol/l after cycling at 60 rpm. The effect of pedalling rate on physiological adaptation during prolonged cycling has recently been investigated. 8 13 32 Brisswalter et al [START_REF] Brisswalter | Energetically optimal cadence vs. freely chosen cadence during cycling: effect of exercise duration[END_REF] indicated that cycling at a cadence higher than 95 rpm induces a significant increase in V ~O2 , V ~E, and lactate concentration after 30 minutes of exercise in triathletes.
Moreover, our results show an effect of cycling cadence on aerobic contribution during maximal running performance. The subjects were able to sustain a higher fraction of V ~O2 MAX during the 60 rpm run session-that is, 92%-than during the 80 and 100 rpm run sessions-84% and 87% of V ~O2 MAX respectively-(fig 3). These results suggest that the contribution of the anaerobic pathway 17 is more important after the higher cycling rates (80 and 100 rpm) than after the 60 rpm ride and could lead during a prolonged running exercise to earlier appearance of fatigue caused by metabolic acidosis. 33 34 In conclusion, our results confirm the alteration in running performance after a cycling event compared with an isolated run. The principal aim of our investigation was to evaluate the impact of different pedalling rates on subsequent running performance. No significant effect of cycling cadence was found on 3000 m running performance, despite some changes in running strategies, stride rate, and metabolic contributions. We chose a running distance of 3000 m to analyse the possible effect of neuromuscular fatigue-previously reported after a 30 minute cycling exercise at the same intensity [START_REF] Lepers | Effect of cycling cadence on contractile and neural properties of knee extensors[END_REF] -on running performance when neuromuscular and anaerobic factors make important contributions. 17 18 As the effect observed was not significant, the choice of cadence within the usual range does not seem to influence the performance of a middle distance run. One limiting factor of this study may be the choice of a short exercise duration because an effect of metabolic load reduction during the cycling stage on running performance was previously observed for a run longer than 5000 m. For multidisciplinary activities such as triathlon and duathlon, further applied research on the relation between cycling cadence and performance of the subsequent run is required to evaluate the influence of the practical conditions and constraints of actual competition.
Take home message
Compared with an isolated run, completion of a cycling event impairs the performance of a subsequent run independently of the pedalling cadence. However, running strategy, stride rate, and metabolic contribution seem to be improved by the use of a low pedalling cadence (60 rpm). The choice of cycling cadence may have an effect on the running adaptation during a sprint or short distance triathlon. Much research has been conducted on the effects of cycling on physiological variables measured during subsequent running in triathletes. Few authors, however, have examined the effect of variation in cycling task characteristics on either such variables or overall run performance. This study, examining the effect of different pedalling cadences during a cycle at about 80% V ~O2 MAX on performance within a succeeding 3 km run by well trained male triathletes, adds to the published work in this area.
V Vleck
Chair, Medical and Research Committee of the European Triathlon Union and Senior Lecturer, School of Chemical and Life Sciences, University of Greenwich, London, UK Veronica@vleck.fsnet.co.uk
Figure 1: Representation of the three cycle-run sessions. TR, cycle-run transition; BS, blood samples taken; M 1 -M 4 , measurement intervals during cycling at 5-7, 9-11, 13-15, and 17-19 minutes; M 5 -M 10 , measurement intervals during running at 500, 1000, 1500, 2500, and 3000 m; WU, warm up for each condition.
Figure 2: Race strategies expressed as the evolution in running velocity during the run bouts (60, 80, 100 rpm). Figure 3: Changes in the fraction of V ~O2 MAX sustained by subjects during the running bouts (60, 80, and 100 rpm).
Ergonomie et performance sportive, UFR STAPS, Université de Toulon-Var, France C Hausswirth, Laboratoire de physiologie et biomécanique, INSEP, Paris, France R Lepers, Groupe analyse du mouvement, UFR STAPS, Université de Bourgogne, France
................. COMMENTARY ..................
Table 1: Physiological characteristics of the subjects obtained during a maximal cycling test. Values are expressed as mean (SD). V ~O2 MAX, maximal oxygen uptake (ml/min/kg); V ~EMAX, maximal ventilation (litres/min); HR max , maximal heart rate (beats/min); VT, ventilatory threshold; MAP, maximal power output (W).
Table 2: Mean values for power output and speed, oxygen uptake, expiratory flow, heart rate, blood lactate, and running performance obtained during the cycle-run sessions

Parameter                      | Cycle 60 rpm | Run          | Cycle 80 rpm | Run          | Cycle 100 rpm | Run
Power output (W)/speed (km/h)  | 275.4 (19.4) | 17.3 (1.1)   | 277.1 (18.6) | 17.2 (1.2)   | 277.2 (17.2)  | 17.1 (1.5)
Oxygen uptake (ml/min/kg)      | 55.6 (4.6)   | 62.8 (7.3)*  | 55.3 (4.0)   | 57.9 (4.1)   | 56.5 (4.3)    | 59.7 (5.6)
Expiratory flow (litres/min)   | 94.8 (12.2)  | 141.9 (15.9) | 98.2 (9.2)   | 140.5 (14.6) | 107.2 (13.0)* | 140.5 (21.8)
Heart rate (beats/min)         | 163.5 (9.5)  | 184.2 (4.6)  | 166.1 (10.4) | 185.8 (3.1)  | 170.7 (4.7)*  | 182.6 (5.0)
Lactataemia (mmol/l)           | 4.6 (2.1)    | 9.0 (1.9)    | 5.1 (2.1)    | 9.2 (1.2)    | 7.0 (2.0)*    | 9.9 (1.8)
Stride rate (Hz)               | -            | 1.48 (0.01)  | -            | 1.49 (0.01)  | -             | 1.48 (0.02)
Running performance (s)        | -            | 625.7 (40.1) | -            | 630.0 (44.8) | -             | 637.6 (57.9)
*Significantly different from the other cycle-run sessions, p<0.05.
Cycling bouts of cycle-run sessions
During the 20 minutes at 60, 80, and 100 rpm cycling bouts, average cadences were 61.6 (2.6), 82.7 (4.3), and 98.2 (1.7) rpm respectively. Mean HR and V ~E recorded during the 100 rpm cycling bout were significantly higher than in the other cycling conditions. Furthermore, blood lactate concentrations were significantly higher at the end of the 100 rpm bout than after the 60 and 80 rpm cycling bouts (7.0 (2.0), 4.6 (2.1) and 5.1 (2.1) mmol/l respectively, p<0.05). Conversely, no effect of either pedalling rate or exercise duration was found on V ~O2 (table 2, p>0.05).
(Figure 2) *Significantly different from the running velocity during the 60 rpm run session, p<0.05. (Figure 3) *Significantly different from the initial period, p<0.05; †significantly different from the other conditions, p<0.05. |
01760346 | en | [
"spi.nrj"
] | 2024/03/05 22:32:13 | 2018 | https://hal.science/hal-01760346/file/L2EP_2018_TPWRD_GRUSON.pdf | Student Member, IEEE Julian Freytes
Gilbert Bergna
Member, IEEE Jon Are Suul
Salvatore Are
François D'arco
Frederic Gruson
Hani Colas
Xavier Saad
Guillaud
Salvatore D ' Arco
François Gruson
Frédéric Colas
Member, IEEE Hani Saad
Member, IEEE Xavier Guillaud
Improving Small-Signal Stability of an MMC With CCSC by Control of the Internally Stored Energy
Keywords: HVDC Transmission, Modular Multilevel Converter, State-Space Modeling, Small-Signal Stability Analysis
I. INTRODUCTION
The Modular Multilevel Converter (MMC) is currently the most promising topology for High Voltage DC transmission systems (HVDC) [START_REF] Lesnicar | An innovative modular multilevel converter topology suitable for a wide power range[END_REF]. Several advantages of the MMC compared to other topologies may be enumerated, such as lower losses, modularity, scalability and low harmonic content in the output AC voltage [START_REF] Lesnicar | An innovative modular multilevel converter topology suitable for a wide power range[END_REF], [START_REF] Antonopoulos | On dynamics and voltage control of the modular multilevel converter[END_REF]. Nevertheless, the internal dynamics of the MMC topology makes the modeling, control and stability studies of this converter highly challenging [START_REF] Debnath | Operation, control, and applications of the modular multilevel converter: A review[END_REF].
Without control of the internal dynamics, a three-phase MMC will experience large second harmonic currents circulating between the different phases, and potential resonances between the internal equivalent capacitance and the filter inductor [START_REF] Ilves | Steady-state analysis of interaction between harmonic components of arm and line quantities of modular multilevel converters[END_REF]. Thus, a Circulating Current Suppression Controller (CCSC) is commonly applied for eliminating the double frequency circulating currents [START_REF] Tu | Circulating current suppressing controller in modular multilevel converter[END_REF]. However, recent studies have demonstrated that poorly damped oscillations or even instability associated with the DC-side current can occur for MMCs with conventional CCSC-based control [START_REF] Jamshidifar | Small signal dynamic dq model of modular multilevel converter for system studies[END_REF]- [START_REF] Freytes | Small-signal model analysis of droop-controlled modular multilevel converters with circulating current suppressing controller[END_REF].
Due to the multiple frequency components naturally appearing in the internal circulating currents and the arm capacitor voltages of an MMC [START_REF] Harnefors | Dynamic analysis of modular multilevel converters[END_REF], traditional power-system-oriented approaches for state-space modeling, linearization and eigenvalue analysis cannot be directly applied. Thus, in [START_REF] Chaudhuri | Stability analysis of vector-controlled modular multilevel converters in linear time-periodic framework[END_REF], the stability of an MMC was studied by application of time-periodic system theory (Poincaré multipliers), to demonstrate how the double frequency dq current control loops of the CCSC can make the system unstable. Similar conclusions were obtained in [START_REF] Jamshidifar | Small signal dynamic dq model of modular multilevel converter for system studies[END_REF], [START_REF] Li | Stability of a modular multilevel converter based hvdc system considering dc side connection[END_REF], [START_REF] Li | Harmonic instability in mmc-hvdc converters resulting from internal dynamics[END_REF], by eigenvalue analysis within a modeling framework based on dynamic phasors and harmonic superposition, for separately representing the different frequency components of the internal MMC variables. However, recent modeling efforts have led to the development of state-space models that avoid the approximation of harmonic superposition. Indeed, the MMC model from [START_REF] Bergna | State-space modelling of modular multilevel converters for constant variables in steady-state[END_REF] is first expressed in a form that separates the variables into groups with only one steady-state oscillation frequency, before the variables are transformed into their associated Synchronously Rotating Reference Frames (SRRFs). The resulting model can be linearized for application of traditional eigenvalue analysis, as demonstrated in [START_REF] Freytes | Small-signal model analysis of droop-controlled modular multilevel converters with circulating current suppressing controller[END_REF] where only the Classical CCSC strategy was considered.
This paper extends the MMC model from [START_REF] Bergna | State-space modelling of modular multilevel converters for constant variables in steady-state[END_REF] by including a simplified representation of the DC bus voltage dynamics in a Multi-Terminal DC grid (MTDC), represented by an equivalent capacitance and a power source. A DC voltage droop control, as expected for HVDC operation in MTDC grids [START_REF] Beerten | Identification and small-signal analysis of interaction modes in vsc mtdc systems[END_REF], [START_REF] Freytes | Dynamic impact of mmc controllers on dc voltage droop controlled mtdc grids[END_REF], is also included in the model.
Furthermore, the adapted state-space model is linearized, and the impact of the equivalent DC-side capacitance and of the droop gain on the poorly damped oscillation modes that occur with the Classical CCSC is investigated by eigenvalue analysis. Participation factor analysis is applied to identify the source of these oscillations, which is shown to be mainly the interaction between the uncontrolled DC-side current and the voltage or the energy of the internal capacitance of the MMC. This indicates that the stability and control system performance can be improved by introducing closed loop control of the DC-side current and the total stored energy within the MMC. By introducing such Energy-based control, with a structure simplified from [START_REF] Freytes | Dynamic impact of mmc controllers on dc voltage droop controlled mtdc grids[END_REF], the stability problems are avoided and good performance of the control system can be ensured for the full range of expected operating conditions. The obtained improvement in the small-signal stability is demonstrated by time-domain simulations as well as by eigenvalue analysis.
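Although not part of the original analysis code, the following sketch outlines how the eigenvalues and participation factors used in such small-signal studies can be computed once a linearised state matrix A is available; the function name and normalisation are ours.

```python
# Illustrative eigenvalue and participation factor computation for a
# linearised state-space model dx/dt = A x.
import numpy as np

def modes_and_participation(A):
    eigvals, right = np.linalg.eig(A)        # right eigenvectors as columns
    left = np.linalg.inv(right).T            # left eigenvectors (rows of inv(right))
    # Participation of state k in mode i: |v_ki * w_ik|, normalised per mode.
    P = np.abs(right * left)
    P = P / P.sum(axis=0, keepdims=True)
    return eigvals, P
```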
II. AVERAGED MMC MODELS FOR MATHEMATICAL ANALYSIS
In this section, the Arm Averaged Model (AAM) from [START_REF] Saad | Modular multilevel converter models for electromagnetic transients[END_REF], [START_REF] Saad | Mmc capacitor voltage decoupling and balancing controls[END_REF] is first presented to describe the basic mathematical relations of an MMC. Then, a time-invariant model that can be linearized for small-signal eigenvalue analysis is developed according to the approach presented in [START_REF] Bergna | State-space modelling of modular multilevel converters for constant variables in steady-state[END_REF].
A. Continuous time-periodic Arm Averaged Model
The Arm Averaged Model (AAM) of the MMC is recalled in Fig. 1. The model presents for each phase j (j = a, b, c), a leg consisting of an upper and a lower arm. Each arm includes an inductance L arm , an equivalent resistance R arm and an aggregated capacitance C arm [START_REF] Saad | Modular multilevel converter models for electromagnetic transients[END_REF].
Figure 1: Schematic of the reference configuration with the MMC connected to a DC bus capacitor.
A simplified model is assumed for the DC bus dynamics, consisting of an equivalent capacitor C dc which emulates the capacitance of the DC cables and of the other converter stations connected to the grid. In parallel with C dc , a controlled current source i l , whose output power is P l , provides an equivalent model of the power exchanged in the HVDC system.
The modulated voltages v U mj and v L mj , as well as the currents i U mj and i L mj of each arm j are described as follows:
v U mj = m U j v U Cj , v L mj = m L j v L Cj (1) i U mj = m U j i U j , i L mj = m L j i L j (2)
where m U j (m L j ) is the corresponding insertion index, and v U Cj (v L Cj ) is the voltage across the upper (lower) equivalent arm capacitance C arm . The voltage and current of the equivalent capacitor are related by the following equation:
C arm dv U Cj dt = i U mj , C arm dv L Cj dt = i L mj . (3)
For deriving the current dynamics of the AAM, the modulation indexes m ∆ j and m Σ j as well as modulated voltages v ∆ mj and v Σ mj are introduced [START_REF] Saad | Mmc capacitor voltage decoupling and balancing controls[END_REF]:
m ∆ j def = m U j -m L j , m Σ j def = m U j + m L j (4) v ∆ mj def = (-v U mj + v L mj )/2, v Σ mj def = (v U mj + v L mj )/2 (5)
The MMC currents can be expressed as:
i ∆ j def = i U j -i L j , i Σ j def = (i U j + i L j )/2 (6)
where i ∆ j corresponds to the AC grid current, and i Σ j is the common-mode current flowing through the upper and lower arm. The current i Σ j is commonly referred as "circulating current" or "differential current" [START_REF] Saad | Mmc capacitor voltage decoupling and balancing controls[END_REF]. However, the more general term "common-mode current" is preferred in this paper, since i Σ j is calculated as a sum of two currents. The AC grid current dynamics are expressed as:
L ac eq di ∆ j /dt = v ∆ mj - v G j - R ac eq i ∆ j (7)
where R ac eq def = (R arm + 2R f )/2 and L ac eq def = (L arm + 2L f )/2. The common-mode arm current dynamics are given by:
L arm di Σ j /dt = v dc /2 - v Σ mj - R arm i Σ j (8)
Finally, the addition and difference of the terms in (3) yield:
2C arm dv Σ Cj /dt = m ∆ j i ∆ j /2 + m Σ j i Σ j (9)
2C arm dv ∆ Cj /dt = m Σ j i ∆ j /2 + m ∆ j i Σ j (10)
where v ∆ Cj def = (v U Cj - v L Cj )/2 and v Σ Cj def = (v U Cj + v L Cj )/2.
With the new definitions, the modulated voltages v ∆ mj and v Σ mj can be expressed as follows:
v ∆ mj = -(1/2)(m ∆ j v Σ Cj + m Σ j v ∆ Cj ) (11), v Σ mj = (1/2)(m Σ j v Σ Cj + m ∆ j v ∆ Cj ) (12)
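As a compact illustration of equations (7)-(12), the per-phase AAM dynamics can be written as a single right-hand-side function suitable for numerical integration. The sketch below is ours (variable and parameter names are not from the paper) and deliberately omits the three-phase coupling through the grid and DC bus.

```python
# Hedged sketch of one phase j of the AAM, equations (7)-(12).
def aam_phase_derivatives(x, m_d, m_s, v_g, v_dc, p):
    # State x = [i_delta, i_sigma, v_c_sigma, v_c_delta]; p holds R/L/C parameters.
    i_d, i_s, vc_s, vc_d = x
    v_m_delta = -0.5 * (m_d * vc_s + m_s * vc_d)                      # eq. (11)
    v_m_sigma = 0.5 * (m_s * vc_s + m_d * vc_d)                       # eq. (12)
    di_d = (v_m_delta - v_g - p["R_ac"] * i_d) / p["L_ac"]            # eq. (7)
    di_s = (v_dc / 2.0 - v_m_sigma - p["R_arm"] * i_s) / p["L_arm"]   # eq. (8)
    dvc_s = (m_d * i_d / 2.0 + m_s * i_s) / (2.0 * p["C_arm"])        # eq. (9)
    dvc_d = (m_s * i_d / 2.0 + m_d * i_s) / (2.0 * p["C_arm"])        # eq. (10)
    return [di_d, di_s, dvc_s, dvc_d]
```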
In steady state, "∆" variables are sinusoidal at the fundamental grid frequency ω, while "Σ" variables contain a sinusoidal oscillation at -2ω superimposed to a DC-component [START_REF] Bergna | State-space modelling of modular multilevel converters for constant variables in steady-state[END_REF]. Thus, the variables can be classified as summarized in Table I.
Table I: Classification of the MMC variables according to their steady-state oscillation frequency. Variables at the fundamental frequency ω: i ∆ j , v ∆ mj , m ∆ j , v ∆ Cj . Variables at -2ω (superimposed on a DC component): i Σ j , v Σ mj , m Σ j , v Σ Cj .
In the following section, a non-linear time-invariant model is obtained from the MMC model in "Σ-∆" representation given by (7)-(12).
B. Non-linear time-invariant model using voltage-based formulations in Σ-∆ representation
This section summarizes the time-invariant model of the MMC with voltage-based formulation as proposed in [START_REF] Bergna | State-space modelling of modular multilevel converters for constant variables in steady-state[END_REF]. To achieve a time-invariant model, it is necessary to refer the MMC variables to their corresponding SRRFs, following the frequency classification shown in Table I. For generic variables x Σ and x ∆ , time-invariant equivalents are obtained with the Park transformation defined in [START_REF] Beerten | Identification and small-signal analysis of interaction modes in vsc mtdc systems[END_REF] as (bold variables means matrix or vectors):
ω ⇒ x ∆ dqz def = [x ∆ d , x ∆ q , x ∆ z ] T = P ω [x ∆ a , x ∆ b , x ∆ c ] T ; -2ω ⇒ x Σ dqz def = [x Σ d , x Σ q , x Σ z ] T = P -2ω [x Σ a , x Σ b , x Σ c ] T (13)
The formulation of the MMC variables such that this initial separation of frequency components can be achieved constitutes the basis for the used modelling approach, as illustrated in Fig. 2. As shown in (7)-(12), the Σ and ∆ quantities are not fully decoupled. This results in time-periodic variables in the equations after applying the above transformations. For the Σ variables, time-periodic terms at 6ω are neglected without compromising the accuracy of the model. Conversely, the zero sequences of the vectors in "∆" variables present time-periodic terms at 3ω which have to be taken into account. This component was modeled by means of an auxiliary virtual variable, 90° shifted from the real one, and by using a transformation T 3ω at +3ω to achieve a model with a defined equilibrium point.
Figure 2: Mapping of the MMC internal state variables (v ∆ Cabc , v Σ Cabc , i ∆ abc , i Σ abc ) and control variables (m ∆ abc , m Σ abc ) to their associated SRRFs through P ω , P -2ω and T 3ω .
at +3ω:  x^Δ_Z ≜ [x^Δ_Zd , x^Δ_Zq]^T = T_3ω [x^Δ_z , x^Δ90°_z]^T ,   with  T_3ω = [ cos(3ωt)  sin(3ωt) ; sin(3ωt)  -cos(3ωt) ]
Using the above definitions, the MMC dynamics in their "Σ-∆" representation can be reformulated as a state-space model where all states reach constant values in steady-state [START_REF] Bergna | State-space modelling of modular multilevel converters for constant variables in steady-state[END_REF]. The resulting system is recalled in the following.
1) AC grid currents: Applying the Park transformation at ω to (7), the SRRF representation of the AC side current dynamics i^Δ_dq is given as:
L^ac_eq · d i^Δ_dq / dt = -v^G_dq + v^Δ_mdq - R^ac_eq i^Δ_dq - J_ω L^ac_eq i^Δ_dq        (14)
where v G dq is the grid voltage at the point of interconnection and J ω is the cross-coupling matrix at the fundamental frequency, as defined in [START_REF] Saad | Modular multilevel converter models for electromagnetic transients[END_REF],
J_ω ≜ [ 0  ω ; -ω  0 ]        (15)
The AC-side modulated voltage v ∆ mdq is defined in ( 16) as a function of the modulation indexes m ∆ dq and m Σ dqz ,
v^Δ_mdq = (1/4) V^Δ [m^Δ_dq , m^Σ_dq , m^Σ_z]        (16)
V ∆ is defined as the following 2 × 5 voltage matrix, where all elements are represented in their associated SRRFs:
V^Δ ≜ [ -2v^Σ_Cz - v^Σ_Cd ,  v^Σ_Cq ,  -v^Δ_Cd - v^Δ_CZd ,  v^Δ_Cq + v^Δ_CZq ,  -2v^Δ_Cd ;
         v^Σ_Cq ,  v^Σ_Cd - 2v^Σ_Cz ,  v^Δ_Cq - v^Δ_CZq ,  v^Δ_Cd - v^Δ_CZd ,  -2v^Δ_Cq ]        (17)
2) Common-mode arm currents: Similarly, applying the Park transformation at -2ω to (8), the dynamics of the commonmode arm currents in their time invariant representation i Σ dq and i Σ z are obtained, shown in (18a).
L_arm · d i^Σ_dq / dt = -v^Σ_mdq - R_arm i^Σ_dq - 2 J_ω L_arm i^Σ_dq        (18a)
L_arm · d i^Σ_z / dt  = -v^Σ_mz - R_arm i^Σ_z + v_dc/2        (18b)
with v dc representing the voltage at the MMC DC terminals.
The modulated voltages driving the currents i Σ dq and i Σ z are v Σ mdq and v Σ mz . These voltages are defined in [START_REF] Freytes | Smallsignal state-space modeling of an hvdc link with modular multilevel converters[END_REF], as a function of the modulation indexes:
v^Σ_mdqz = (1/4) V^Σ [m^Δ_dq , m^Σ_dq , m^Σ_z]        (19)
V Σ corresponds to the following 3 × 5 voltage matrix:
V^Σ ≜ [ v^Δ_Cd + v^Δ_CZd ,  -v^Δ_Cq + v^Δ_CZq ,  2v^Σ_Cz ,  0 ,  2v^Σ_Cd ;
        -v^Δ_Cq - v^Δ_CZq ,  -v^Δ_Cd + v^Δ_CZd ,  0 ,  2v^Σ_Cz ,  2v^Σ_Cq ;
         v^Δ_Cd ,  v^Δ_Cq ,  v^Σ_Cd ,  v^Σ_Cq ,  2v^Σ_Cz ]        (20)
3) Arm capacitor voltages sum: Applying the Park transformation at -2ω to (9), the time invariant dynamics of the voltage sum vector v Σ Cdqz can be expressed by (21).
C_arm · d v^Σ_Cdqz / dt = i^Σ_mdqz - [ -2J_ω  0_2×1 ; 0_1×2  0 ] C_arm v^Σ_Cdqz        (21)
with i Σ mdqz representing the modulated current as defined in (22), as a function of the modulation indexes,
i^Σ_mdqz = (1/8) I^Σ [m^Δ_dq , m^Σ_dq , m^Σ_z]        (22)
I Σ is the following 3 × 5 time invariant current matrix:
I^Σ ≜ [ i^Δ_d ,  -i^Δ_q ,  4i^Σ_z ,  0 ,  4i^Σ_d ;
        -i^Δ_q ,  -i^Δ_d ,  0 ,  4i^Σ_z ,  4i^Σ_q ;
         i^Δ_d ,  i^Δ_q ,  2i^Σ_d ,  2i^Σ_q ,  4i^Σ_z ]        (23)
4) Arm capacitor voltages difference: Finally, the steady-state time-invariant dynamics of the voltage difference vectors v^Δ_Cdq and v^Δ_CZ are now recalled. Results are obtained by applying the Park transformation at ω and 3ω to (10). For the sake of compactness, the voltage difference vector is defined as v^Δ_CdqZ ≜ [v^Δ_Cd , v^Δ_Cq , v^Δ_CZd , v^Δ_CZq]^T.

C_arm · d v^Δ_CdqZ / dt = i^Δ_mdqZ - [ J_ω  0_2×2 ; 0_2×2  3J_ω ] C_arm v^Δ_CdqZ        (24)

with i^Δ_mdqZ representing the modulated current as defined in (25), as a function of the modulation indexes,
i^Δ_mdqZ = (1/16) I^Δ [m^Δ_dq , m^Σ_dq , m^Σ_z]        (25)
I ∆ is the following 4 × 5 time-invariant current matrix:
I^Δ ≜ [ 2i^Σ_d + 4i^Σ_z ,  -2i^Σ_q ,  i^Δ_d ,  -i^Δ_q ,  2i^Δ_d ;
        -2i^Σ_q ,  -2i^Σ_d + 4i^Σ_z ,  -i^Δ_q ,  -i^Δ_d ,  2i^Δ_q ;
         2i^Σ_d ,  2i^Σ_q ,  i^Δ_d ,  i^Δ_q ,  0 ;
        -2i^Σ_q ,  2i^Σ_d ,  i^Δ_q ,  -i^Δ_d ,  0 ]        (26)
5) DC bus dynamics:
The DC bus dynamics are modelled by (27), where C dc is the cable model terminal capacitance and P l represents the power injection as seen from the MMC station.
C_dc · d v_dc / dt = P_l / v_dc - 3 i^Σ_z        (27)
An overview of the model structure corresponding to the MMC and DC bus equations is shown in Fig. 3.
This section presents the control scheme for an MMC as shown in Fig. 4. For the AC-side, the MMC control strategy is based on a classical scheme with two cascaded loops. The outer loop controls the active power P_ac following a P_ac-v_dc droop characteristic with gain k_d [START_REF] Freytes | Dynamic impact of mmc controllers on dc voltage droop controlled mtdc grids[END_REF], and the reactive power Q_ac. Furthermore, v^Σ*_md and v^Σ*_mq are the outputs of the CCSC [START_REF] Tu | Circulating current suppressing controller in modular multilevel converter[END_REF], forcing the circulating currents i^Σ_dq to zero. The gains of the controllers are calculated to achieve a response time τ^Σ_i of 5 ms and ζ^Σ_i of 0.7. The output DC current of the converter is left uncontrolled with this strategy, and it is naturally adjusted to balance the AC and DC power flow.
The MMC insertion indexes are calculated directly from the output of the control loops for the ac-side and CCSC as shown in (28).
m^Σ_d = 2 v^Σ*_md / v_dc ,   m^Σ_q = 2 v^Σ*_mq / v_dc ,   m^Σ_z = 2 v^Σ*_mz / v_dc        (28a)
m^Δ_d = -2 v^Δ*_md / v_dc ,   m^Δ_q = -2 v^Δ*_mq / v_dc        (28b)
where, for this controller, m Σ z is equal to 1 and v dc is the measured DC voltage. Instead of v dc , a fixed value (i.e. the nominal voltage) can be used, but this choice can reduce the stability region of the MMC [START_REF] Freytes | Small-signal model analysis of droop-controlled modular multilevel converters with circulating current suppressing controller[END_REF]. Calculation of the insertion indexes in this way will be referred to as the "Un-Compensated Modulation" (UCM) [START_REF] Diaz | Small-signal state-space modeling of modular multilevel converters for system stability analysis[END_REF], since there is no compensation for the impact of the oscillations in the equivalent arm capacitor voltages on the modulated output voltages.
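For clarity, the sketch below (Python, our own illustration, not from the paper) restates (28) as a function; the arguments are the voltage references and the measured DC voltage, with no compensation of the capacitor-voltage ripple, which is exactly what Un-Compensated Modulation means.

    def insertion_indexes_ucm(v_md_delta, v_mq_delta, v_md_sigma, v_mq_sigma, v_mz_sigma, v_dc_meas):
        # Un-Compensated Modulation, eq. (28): references divided by the measured DC voltage
        m_sigma = (2*v_md_sigma/v_dc_meas, 2*v_mq_sigma/v_dc_meas, 2*v_mz_sigma/v_dc_meas)   # (28a)
        m_delta = (-2*v_md_delta/v_dc_meas, -2*v_mq_delta/v_dc_meas)                         # (28b)
        return m_delta, m_sigma

    # with v_mz_sigma* = v_dc/2 the zero-sequence index m_sigma_z equals 1, as stated above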
IV. SMALL SIGNAL STABILITY ANALYSIS OF AN MMC WITH CLASSICAL CCSC
A. Model linearization and time domain validation
The non-linear time-invariant model presented in Section II and the control from Section III are connected as shown in Fig. 5. This interconnected model is represented by a set of differential equations f as expressed in (29), where x represents the states of the system as in (30) and u the inputs as in (31).
ẋ(t) = f(x(t), u(t))        (29)
x = [ ξ_i^Δ_dq , ξ_i^Σ_dq (controllers) ;  i^Δ_dq , i^Σ_dqz , v^Σ_Cdqz , v^Δ_CdqZ (MMC) ;  v_dc (DC bus) ]        (30)
u = [ v*_dc , P*_ac0 , Q*_ac , i^Σ*_d , i^Σ*_q (controllers) ;  v^G_d , v^G_q (AC grid) ]        (31)
The non-linear model from (29) can be linearized around a steady-state operating point by means of the Jacobian linearization method, resulting in a Linear Time-Invariant (LTI) representation as expressed in (32) [START_REF] Wang | Modern power systems analysis[END_REF]. It is recalled that each element A ij and B ij of the matrices A and B are related to the equations f as shown in (33).
∆ẋ(t) = A(x_0, u_0) ∆x(t) + B(x_0, u_0) ∆u(t)        (32)
A_ij = ∂f_i(x, u)/∂x_j |_(x_0; u_0) ,   B_ij = ∂f_i(x, u)/∂u_j |_(x_0; u_0)        (33)
The equilibrium point, defined by (x 0 , u 0 ), is calculated from the direct resolution of the system equations from (29) by setting ẋ(t) equal to zero. The obtained LTI model is used for evaluating small-signal dynamics and stability by eigenvalue analysis.
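A minimal numerical recipe for (32)-(33) is sketched below (Python/NumPy, our own illustration): the Jacobians are approximated by central finite differences around the equilibrium and the eigenvalues of A decide small-signal stability. The names f_mmc, x_eq and u_eq are placeholders for the right-hand side assembled from (14)-(27) plus the controller equations and for the computed operating point.

    import numpy as np

    def numerical_jacobians(f, x0, u0, eps=1e-6):
        # finite-difference approximation of A = df/dx and B = df/du at (x0, u0), cf. (33)
        n, m = len(x0), len(u0)
        A, B = np.zeros((n, n)), np.zeros((n, m))
        for j in range(n):
            dx = np.zeros(n); dx[j] = eps
            A[:, j] = (f(x0 + dx, u0) - f(x0 - dx, u0)) / (2 * eps)
        for j in range(m):
            du = np.zeros(m); du[j] = eps
            B[:, j] = (f(x0, u0 + du) - f(x0, u0 - du)) / (2 * eps)
        return A, B

    # A, B = numerical_jacobians(f_mmc, x_eq, u_eq)
    # stable = np.all(np.linalg.eigvals(A).real < 0)   # small-signal stability check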
To validate the developed small-signal model of the MMC with Classical CCSC, results from a time-domain simulation of two different models will be shown and discussed in the following:
1) EMT: The system from Fig. 1 implemented in EMTP-RV with 400 submodules. The MMC is modeled with the so-called "Model # 2: Equivalent Circuit-Based Model" from [START_REF] Saad | Modular multilevel converter models for electromagnetic transients[END_REF]. It is worth noticing that the modulation indexes are transformed from the "Σ-∆" and dq frame to the "Upper-Lower" and abc frame.
2) LTI: This model represents the linearized time-invariant model of the interconnected system from Fig. 5, implemented in Matlab/Simulink. The operating point corresponds to the nominal ratings.
The main system parameters are listed in Table II. Starting with a DC power transfer of 1 pu (from DC to AC), a step is applied on P l of -0.1 pu at 0.05 s. The reactive power is controlled to zero during the event. Simulation results are gathered in Fig. 6.
The dynamic response of the DC power P dc is shown in Fig. 6(a) (i.e. measured power in the reference model and the calculated power for the linearized model as P dc = 3i Σ z v dc ). The results of the common-mode currents are shown in Fig. 6(b) (only dq components). The EMT model presents oscillations at 6ω in steady state. These oscillations were neglected during the development of the time-invariant model [START_REF] Bergna | State-space modelling of modular multilevel converters for constant variables in steady-state[END_REF]. As seen in the comparisons from Fig. 6(b), the model captures the average dynamics with reasonable accuracy even if the 6 th harmonic components are ignored (notice the scale). For all other variables, there are negligible differences between the different models.
The step applied on P l produces power imbalance in the DC bus, and the MMC reacts with the droop controller and its internal energy to achieve the new equilibrium point. The DC voltage results are shown in Fig. 6(c). Due to the DC voltage droop control (proportional controller with gain k d ), a steady-state error is obtained after the transient. The internal energy of the MMC participates in the dynamics of the DC voltage regulation by discharging its internal capacitors into the DC bus during the transients, as seen in the voltage v Σ Cz from Fig. 6(d). The behavior of v Σ Cz is similar to the DC bus voltage as expected from the discussion in [START_REF] Freytes | Dynamic impact of mmc controllers on dc voltage droop controlled mtdc grids[END_REF].
B. Stability analysis
Since the linearized model from Fig. 5 has been validated, it can be used for small signal stability analysis. The impact of three main parameters influencing the DC voltage dynamics is evaluated: the DC capacitance, the droop gain k_d [START_REF] Freytes | Dynamic impact of mmc controllers on dc voltage droop controlled mtdc grids[END_REF] and the response time of the current loops.
1) Influence of the DC capacitance: In MTDC systems, the value of the equivalent DC capacitance depends on the number of MMC stations connected to the grid as well as the cable lengths [START_REF] Freytes | Dynamic impact of mmc controllers on dc voltage droop controlled mtdc grids[END_REF]. This value may vary because some converters could be disconnected or the grid topology reconfigured. For this reason, the MMC should be able to operate under different situations that can result from changes of the DC grid topology and parameters. For evaluating the impact of the DC side capacitance on the small-signal stability, a parametric sweep is performed. The electrostatic constant H_dc is defined as,
H_dc = (1/2) C_dc v_dcn^2 / P_n        (34)
The value of H_dc is varied from 40 ms down to 5 ms. This last value represents a small capacitance of the DC bus (24.4 µF << 6 × C_arm), which could represent the DC capacitance of a short cable. The first results consider a power direction from DC to AC side with 1 GW (1 pu) of power transfer. Results are shown in Fig. 7(a). In this case, for the selected values the system remains stable. It is known that the converter dynamics depend on the operating point [START_REF] Freytes | Smallsignal state-space modeling of an hvdc link with modular multilevel converters[END_REF]. The same parametric sweep as in the previous example is performed with the opposite power transfer direction (i.e. from AC to DC side). The results are shown in Fig. 7(b), demonstrating that the system becomes unstable when the equivalent DC capacitance decreases.
2) Influence of the droop parameter: In this case, the droop parameter k_d is varied from 0.2 pu down to 0.05 pu. The considered power direction is from AC to DC since it is the worst case from the previous section. Results are shown in Fig. 8. When lower values of droop are used, the eigenvalues λ1,2 shift to the right-hand plane (RHP), resulting in unstable behavior.
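As a quick numerical check of (34) (our own, not taken from the paper): with P_n = 1 GW as stated above and assuming a rated DC voltage v_dcn = 640 kV (the actual value is listed in Table II of the paper, which is not reproduced in this excerpt), the two extremes of the sweep translate into the following capacitances, the smaller one matching the 24.4 µF quoted above.

    P_n, v_dcn = 1e9, 640e3          # 1 GW; v_dcn is an assumed rated DC voltage
    for H_dc in (40e-3, 5e-3):       # electrostatic constants of the parametric sweep
        C_dc = 2 * H_dc * P_n / v_dcn**2          # eq. (34) solved for C_dc
        print(f"H_dc = {H_dc*1e3:.0f} ms  ->  C_dc = {C_dc*1e6:.1f} uF")
    # prints roughly 195.3 uF and 24.4 uF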
C. Identification of unstable eigenvalues
As observed in the previous results (Figs. 7, 8 and 9), the system may become unstable due to the same pair of eigenvalues for all cases (λ 1,2 ). For understanding the origin of these eigenvalues, participation factor analysis is performed for the case from the previous sub-section and the results are shown in Fig. 10, where the considered parameters and operating point are given in Table II. Results from Fig. 10 indicate that the states with the highest participation in the critical mode are i Σ z (i.e. the DC current), v Σ Cz (the state of the MMC which represents the internally stored energy) and v dc (DC voltage). It shows also that the internal circulating currents i Σ dq do not have significant influence on these eigenvalues and neither do the integral part of the controllers (with the chosen bandwidths).
The impact of the proportional gains of the controllers is evaluated by calculating the participation factors for each point from Fig. 9, and the results are shown in Fig. 11. In Fig. 11(a), a similar pattern is observed for the participation factors as in Fig. 10. For fast response times of the CCSC, the dq components of v^Σ_C participate more in the studied eigenvalues, but the system is unstable as shown in Fig. 9(b). Nevertheless, for realistic values of response times, the most important states are i^Σ_z, v^Σ_Cz and v_dc, which corresponds to the results in Fig. 10. Since the instabilities identified in the studied cases are due mainly to the uncontrolled output current i^Σ_z, the natural further step is the explicit control of this current for improving the behavior of the system.
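The participation factors used above follow the standard definition p_ki = |phi_ki psi_ik|, built from the right and left eigenvectors of A. A short sketch (Python/NumPy, our own illustration):

    import numpy as np

    def participation_factors(A):
        # P[k, i] = |phi_ki * psi_ik|, phi = right eigenvectors, psi = inv(phi) (left eigenvectors)
        eigvals, phi = np.linalg.eig(A)
        psi = np.linalg.inv(phi)
        P = np.abs(phi * psi.T)
        P /= P.sum(axis=0, keepdims=True)   # normalize each mode (column) to 1
        return eigvals, P

    # eigvals, P = participation_factors(A)
    # i_crit = np.argmax(eigvals.real)              # e.g. the least-damped mode
    # ranking = np.argsort(P[:, i_crit])[::-1]      # states ordered by their participation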
A. Energy-based controller
The control strategy considered in the previous section controls two out of the three common-mode currents i^Σ. The uncontrolled zero-sequence component of i^Σ may cause interactions with the DC bus and the internal capacitor voltages, and can potentially make the system unstable. To improve the stability of the studied system, it is proposed to add a DC current control loop (or, what is the same, a controller for i^Σ_z). In the Classical CCSC strategy from the last section, the energy naturally follows the DC bus voltage. The DC current adjusts itself to achieve an implicit balance of energy and of AC and DC power in steady state. When controlling the DC current, this natural balance is lost, so the DC current has to be determined explicitly to regulate the internally stored energy and balance the AC and DC power flow.
1) Inner control loop - Z-sequence Σ current: The design of the controller for i^Σ_z is based on (18b), and a simple PI can be deduced as shown in Fig. 12. For tuning purposes, v^Σ_mz is supposed to be equal to v^Σ*_mz.
Figure 12. Block diagram of the i^Σ_z current control
2) Outer control loop - Energy controller: For generating the current reference i^Σ*_z an outer loop is needed. The proposed strategy is based on the explicit control of the total stored energy W^Σ_z in the MMC capacitors C_arm, given by the power balance between AC and DC sides [START_REF] Saad | Mmc capacitor voltage decoupling and balancing controls[END_REF]. For designing this controller, a model with the explicit relation between the DC current i_dc and the total stored energy W^Σ_z is needed. Assuming P*_ac ≈ P_ac, a simplified expression of the sum energy dynamics can be defined as [START_REF] Saad | Mmc capacitor voltage decoupling and balancing controls[END_REF]:
dW^Σ_z/dt ≈ P_dc - P_ac ≈ v_dc (3 i^Σ_z) - P*_ac        (35)
where 3 i^Σ_z corresponds to the DC current i_dc.
The deduced controller structure is shown in Fig. 13. For tuning purposes, the inner i^Σ_z current controller is considered as a unity gain. Finally, the complete control structure is shown in Fig. 14, where the response time for the total energy τ^Σ_W is set to 50 ms (i.e. 10 times slower than the inner Σ current loop). The energy reference W^Σ*_z is set to 1 pu in this paper, for maintaining a constant level of stored energy (corresponding to the rated capacitor voltages). As explained in [START_REF] Freytes | State-space modelling with steady-state time invariant representation of energy based controllers for modular multilevel converters[END_REF], the energy
W^Σ_z is calculated from the dqz components as in (36):
W^Σ_z ≈ C_arm [ (v^Σ_Cd)^2/2 + (v^Σ_Cq)^2/2 + (v^Σ_Cz)^2 ] + C_arm Σ_{k∈{d,q,Zd,Zq}} (v^Δ_Ck)^2/2        (36)
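Equation (36) translates directly into code; the sketch below (Python, our own illustration) evaluates the total stored energy from the dqz components of the arm capacitor voltages.

    def total_stored_energy(v_sigma_dqz, v_delta_dqZ, C_arm):
        # eq. (36): v_sigma_dqz = (vCd_S, vCq_S, vCz_S); v_delta_dqZ = (vCd_D, vCq_D, vCZd_D, vCZq_D)
        vsd, vsq, vsz = v_sigma_dqz
        W = C_arm * (vsd**2 / 2 + vsq**2 / 2 + vsz**2)
        W += C_arm * sum(v**2 / 2 for v in v_delta_dqZ)
        return W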
B. Model linearization and time domain validation
In a similar way as in Section IV, the system comprising the non-linear model from Fig. 3 and the controller from Fig. 14 is connected as shown in Fig. 15. The state variables of the new controllers ξ_i^Σ_z and ξ_W^Σ_z, which correspond to the integral part of the PI controllers, are concatenated to the system states x. Also, the energy reference W^Σ*_z is now included in the inputs vector u. The new interconnected model can be linearized around any operating point for stability analysis. To validate the developed small-signal model of the MMC with Energy-based control, results from time domain simulations are shown in Fig. 16. The event and parameters are the same as for Section IV, but it can be observed that the transient behavior of the DC power and voltage is well controlled, contrary to the oscillatory behavior of the Classical CCSC strategy (Fig. 6). As shown in Section IV-B, when the Classical CCSC from Fig. 4 was considered, some instabilities were observed with low values of droop gain k_d or low equivalent capacitance on the DC side (i.e. low values of H_dc). For demonstrating the stability improvement with the Energy-based controller from Fig. 14, the same parametric sweep is performed as for Fig. 7 and Fig. 8. The results are gathered in Fig. 17. For both situations, only the case where the power flow is from the DC side to the AC side is shown, since it was the case where the instabilities occurred in Section IV-B. However, with this controller, the system presents similar behavior for both power directions.
For the variation of H_dc in Fig. 17(a), as well as for the variation of k_d in Fig. 17(b), it can be clearly observed that stability is guaranteed for the studied cases.
VI. STABILITY COMPARISON
A. DC Capacitance
For comparing the stability improvement, the eigenvalues from Fig. 7(b) and Fig. 17(a) are plotted together in Fig. 18. The stability improvements with the Energy-based controller are highlighted by a time domain simulation. The operating point is the same as for Fig. 18, and results are shown in Fig. 19. Since, for this set of parameters, the configuration of the MMC with Classical CCSC is unstable, the simulation is started with an extra capacitor connected in parallel with C_dc for stabilizing the system, which is disconnected at t = 0 s. The frequency of the oscillations corresponds to the frequency of the unstable eigenvalues from Fig. 18.
B. Active power reversal
In this section, an active power reversal from 1 pu to -1 pu is considered. Fixing a DC capacitance with H dc equal to 10ms, and a droop parameter k d equal to 0.1 pu, a parametric sweep for both control strategies is performed and the results are gathered in Fig. 20. For the Classical CCSC in Fig. 20(a) the system is unstable for DC power values lower than -0.15 pu approximately, whereas in Fig. 20(b), i.e. the case with Energy based controller, the system remains stable for all the power range.
VII. CONCLUSIONS
This paper has identified the source of the poorly damped and even possibly unstable oscillations that can appear for MMCs relying on a CCSC without control of the internally stored energy. It has been demonstrated by participation factor analysis that the source of these potential stability problems is that the classical CCSC leaves the DC-side current uncontrolled, which leads to potential interaction with the DC bus voltage and the internally stored energy of the MMC. To avoid these interactions, a control loop for regulating the energy stored in the MMC is introduced. The output of this control loop is the reference for controlling the DC-side current, which corresponds to the zero-sequence component resulting from the dq-transformation used for the implementation of the classical CCSC. The stability improvement due to the additional control loops is clearly demonstrated by small-signal eigenvalue analysis based on linearization of the presented state-space model, which accurately represents the internal dynamics of the MMC. The applied state-space modeling approach and the improved dynamic response obtained with the added energy- and DC-side current controllers are validated by time-domain simulations in comparison to a detailed EMT model of an MMC with 400 sub-modules per arm. Further developments should consider the dynamic analysis of a more advanced management of the energy in the MMC, i.e. where the energy sum and difference are explicitly controlled.
Figure 2. The used modelling approach based on three Park transformations to achieve SSTI state variables
Figure 3. MMC and DC bus equations resume
Figure 4. MMC droop-controlled with Classical CCSC
Figure 5. Interconnection of the non-linear time-invariant MMC model (Fig. 3) with the Classical CCSC control (Fig. 4)
Figure 6. Time domain validation, Classical CCSC - Step applied on P_l of 0.1 pu - EMT: EMTP-RV simulation with detailed converter, LTI: Linear time-invariant state-space model in Simulink
Figure 7. Parametric sweep of DC capacitor H_dc - DC Operating point v_dc0 = 1 pu - k_d = 0.1 pu - Classical CCSC
Figure 8. Parametric sweep of Droop parameter k_d - DC Operating point v_dc0 = 1 pu, P_dc0 = -1 pu - Classical CCSC
Figure 10. Results from participation factor analysis - Eigenvalues λ1,2 - τ^Δ_i = 10 ms; τ^Σ_i = 5 ms; k_d = 0.1 pu; H_dc = 40 ms
Figure 11. Participation factors for different response times - Dashed lines correspond to the states that are not listed in the legend - DC Operating point v_dc0 = 1 pu, P_dc0 = -1 pu
Figure 13. Controller design for W^Σ_z
Figure 14. Complete Energy based control - Current and Energy controllers calculated from the dqz components as in (36)
Figure 15. Interconnection of the non-linear MMC model (Fig. 3) with the Figure 14 controller
Figure 16. Time domain validation, Energy based controller - Step applied on P_l of 0.1 pu - EMT: EMTP-RV simulation with detailed converter, LTI: Linear time-invariant state-space model in Simulink
Figure 17. Parametric sweeps for Energy-based control - v_dc0 = 1 pu, P_dc0 = -1 pu, k_d = 0.1 pu - Power flow: DC ⇐ AC
Figure 18. Eigenvalues comparison, Energy-based and Classical CCSC control - DC Operating point v_dc0 = 1 pu, P_dc0 = -1 pu, Power flow: DC ⇐ AC, k_d = 0.1 pu
Figure 19. Time domain comparison in EMTP-RV with detailed converters with Classical CCSC and Energy based strategies - Step applied on P_l of 0.1 pu - H_dc = 14.2 ms
Figure 20. Parametric sweep of DC power reversal with Classical CCSC and Energy based strategies - H_dc = 10 ms - k_d = 0.1 pu
Results from Fig. 20 are validated in time domain simulations as shown in Fig. 21. The AC power reference P*_ac0 (Fig. 4) is also ramped during the transient with the same rate, for taking into account the voltage deviation. This represents the case where the power reversal signal is coordinated by a dedicated
Figure 21. Time domain comparison in EMTP-RV with detailed converters with Classical CCSC and Energy based strategies - H_dc = 10 ms - k_d = 0.1 pu
Modular Multilevel Converters connected to Multi-Terminal DC grids in Ecole Centrale de Lille, France. His main research interests include the modeling and control of power electronics for HVDC systems.
Gilbert Bergna received his electrical power engineering degree from the "Simón Bolívar University", in Caracas, Venezuela, in 2008; his "Research Master" from SUPÉLEC (École Supérieure d'Électricité), in Paris, France, in 2010; and a joint PhD degree between SUPÉLEC and the Norwegian University of Science and Technology (NTNU) in 2015. In march 2014 he joined SINTEF Energy Research as a research scientist, and since august 2016 started a post-doctoral fellowship at NTNU working on control and modelling of power electronic systems. Xavier Guillaud (M'04) has been professor in L2EP -Lille since 2002. First, he worked on the modeling and control of power electronic systems. Then, he studied the integration of distributed generation and especially renewable energy in power systems. Nowadays, he is studying the high voltage power electronic converters in transmission system. He is leading the development of an experimental facility composed of actual power electronic converters interacting with virtual grids modelled in real-time simulator. He is involved on several projects about power electronic on the grid within European projects and different projects with French companies. He is member of the Technical Program Committee of Power System Computation Conference (PSCC) and associated editor of Sustainable Energy, Grids and Networks (SEGAN). |
01760353 | en | [
"info.info-ro"
] | 2024/03/05 22:32:13 | 2017 | https://hal.science/hal-01760353/file/ifacTechReport.pdf | Oscar Tellez
email: oscar.tellez@insa-lyon.fr
Samuel Vercraene
email: samuel.vercraene@insa-lyon.fr
Fabien Lehuédé
email: fabien.lehuede@imt-atlantique.fr
Olivier Péton
email: olivier.peton@imt-atlantique.fr
Thibaud Monteiro
email: thibaud.monteiro@insa-lyon.fr
Dial-a-ride problem for disabled people using vehicles with reconfigurable capacity
Keywords: Transportation logistics, optimization, dial-a-ride problem, large neighborhood search metaheuristic, set covering problem
The aim of this paper is to address the dial-a-ride problem with heterogeneous users in which the vehicle capacity can be modified en route by reconfiguring its internal layout. The work is motivated by the daily transport of children with disabilities performed by a private company based in Lyon Métropole, France. Every day, a fleet of configurable vehicles is available to transport children to medico-social establishments. The objective of this work is to help route planners with the fleet dimensioning and to take reconfiguration opportunities into consideration in the design of routes. Due to the number of passengers and vehicles, real-size instances are intractable for mixed-integer programming solvers and exact solution methods. Thus, a large neighborhood search metaheuristic combined with a set covering component is proposed. The resulting framework is evaluated on real-life instances from the transport company.
INTRODUCTION
The standard Dial-a-Ride Problem (DARP) consists in designing vehicle routes in order to serve transportation demands scattered through a geographic area. The global objective is to minimize the transportation cost while satisfying demands.
In contrast to the Pickup and Delivery Problem (PDP), DARP applications concern the transportation of persons. Hence constraints or objectives related to the quality of service should be taken into consideration. In the context of door-to-door transportation of elderly and disabled people, the number of applications has considerably grown recently. Population is aging in developed countries, and many people with disabilities cannot use public transport. As a result, new transport modes, public and private, arise to satisfy their transportation needs. Demands from people with disabilities differ in their need for special equipment such as wheelchair spaces or stretchers, thus obliging the use of adapted vehicles. [START_REF] Parragh | Introducing heterogeneous users and vehicles into models and algorithms for the dial-a-ride problem[END_REF] introduced the DARP with heterogeneous users and vehicles, and solved instances with up to 96 user requests. [START_REF] Qu | The heterogeneous pickup and delivery problem with configurable vehicle capacity[END_REF] extended this problem by considering vehicles with configurable capacity. Different categories of users express special needs such as regular seats or wheelchair spaces. These demands are served by configurable vehicles. The goal is to find the most convenient vehicle configuration for each route.
In this paper we present a generalization of the PDP with configurable vehicle capacity. Contrary to the work by [START_REF] Qu | The heterogeneous pickup and delivery problem with configurable vehicle capacity[END_REF], we allow vehicles to be reconfigured en route. The other difference is that we determine the fleet dimension instead of starting with a limited fleet. We call this variant the dial-a-ride problem with reconfigurable vehicle capacity (DARP-RC). Note that it is not a heterogeneous variant because only one vehicle type is considered.
The use of hybrid methods or matheuristics has become quite popular in recent years for routing problems. Our solution method has been inspired by the framework of [START_REF] Grangier | A matheuristic based on large neighborhood search for the vehicle routing problem with crossdocking[END_REF] to solve the vehicle routing problem with cross-docking. We combine a Large Neighborhood Search (LNS) metaheuristic with a Set Covering Problem (SCP).
The contribution of this paper is therefore to introduce and solve the DARP-RC. Moreover, we compare the results of the combined LNS-SCP approach with pure LNS and adaptive large neighborhood search (ALNS).
INDUSTRIAL CONTEXT
This work is motivated by the daily transport of people with disabilities at Lyon Métropole. One segment of this service is operated by the GIHP company on a regular basis. Every day, a fleet of configurable vehicles transports around 500 children from and to Medico-Social Establishments (MSE) for rehabilitative treatment. GIHP serves around 60 MSEs with around 180 adapted vehicles. One of the particularities of the vehicle fleet is the capacity to reconfigure its internal layout to trade off seats for wheelchair spaces as per convenience. For MSEs, transportation is often considered the second biggest expense after wages. As a consequence, optimizing the transport becomes a priority. Every year, the company makes strategic choices in the definition and constitution of this fleet. Then, routing decisions are re-evaluated daily by route planners.
These decisions are often taken without the help of decision-making tools such as vehicle routing software. This is why route planners rely on suboptimal assumptions, such as designing separate routes for each MSE or ignoring vehicle reconfiguration possibilities. Such measures can reduce pooling gains and increase operating cost.
PROBLEM DEFINITION
In the classic DARP, a homogeneous vehicle fleet is assumed. All vehicles have the same single capacity type and are located at a single depot [START_REF] Cordeau | A tabu search heuristic for the static multi-vehicle dial-a-ride problem[END_REF]. The proposed DARP-RC constitutes an extension of the DARP considering more realistic assumptions such as heterogeneous users (e.g. seats, wheelchairs, stretchers) and vehicles with reconfigurable capacity.
Reconfigurable vehicles
Vehicles with configurable capacity were introduced in [START_REF] Qu | A Branch-and-Price-and-Cut Algorithm for Heterogeneous Pickup and Delivery Problems with Configurable Vehicle Capacity[END_REF]. In their problem, vehicles were not allowed to change configurations along the route. This assumption allows configurations to be treated as vehicle types with some extra dimensioning constraints. The DARP-RC instead, by allowing reconfigurations, introduces the challenge of tracking each configuration at every visited node. Vehicles can have one or several configurations. Each configuration is characterized by the capacity for each user type. Consider for example the vehicle in Fig. 1.
In the first configuration it can handle 7 seated people and 1 wheelchair; in the second one, 6 seated people and 2 wheelchairs; in the third one, 4 seated people and 3 wheelchairs. Note that there is no linear relationship between the capacity in the two types of users (one wheelchair cannot be simply converted into one or two seat spaces). Also that unused chairs are folded and not removed from the vehicle.
Example.
The following example illustrates how vehicles with reconfigurable capacity can reduce operating cost. Consider a vehicle with two configurations {c1, c2} as shown in Fig. 2. The first configuration consists of 2 seats and 1 wheelchair space. The second configuration has 4 seats only. Users a and c go to destination M1 while b, d, e and f go to M2. In order to satisfy all user demands, the reconfigurable vehicle can follow the route D → a → b → c → M1 using configuration c1, and d → e → f → M2 → D with configuration c2.
Performing the same route without reconfiguring capacity would imply making an extra detour d → M2 → d using configuration c1 only (see dotted line), therefore increasing transportation costs.
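The feasibility logic behind this example can be made explicit in a few lines. The sketch below (Python, our own illustration; the configuration names and the assumption that the four passengers carried after M1 are all seated are ours, not from the paper) checks which configurations can carry a given on-board load.

    CONFIGS = {
        "c1": {"seat": 2, "wheelchair": 1},   # first configuration of Fig. 2
        "c2": {"seat": 4, "wheelchair": 0},   # second configuration of Fig. 2
    }

    def feasible_configs(load, configs=CONFIGS):
        # configurations whose capacity covers the current on-board load
        return [c for c, cap in configs.items()
                if all(load.get(u, 0) <= cap.get(u, 0) for u in cap)]

    # after the drop-off at M1 and the pickups d, e, f the vehicle carries 4 seated users:
    print(feasible_configs({"seat": 4, "wheelchair": 0}))   # -> ['c2'], so a reconfiguration at M1 is needed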
Problem Description
The DARP-RC is defined on a network composed of a set V of vertices containing the set O+ of vehicle starting depots, the set O- of vehicle arrival depots, the set P of pickup locations and the set D of delivery locations. Without loss of generality, we address the case of morning routes, where the set P corresponds to people's homes or any nearby address, and the set D corresponds to MSEs.
The set of users is partitioned into several categories u ∈ U, i.e. seats and wheelchairs. A user request r ∈ R is composed of a pickup at location p_r ∈ P, a delivery at location d_r ∈ D and the number q_ru of persons of each type u ∈ U to be transported. Moreover, each request has a maximum ride time T_r that guarantees a given quality of service.
The fleet of vehicles is homogeneous. Each vehicle has the set C of possible configurations that provides different capacity vectors. Hence, the vehicle capacity is expressed by quantities Q c u representing the maximal number of users type u ∈ U that can be transported at a time by the same vehicle when using configuration c ∈ C. Without loss of generality, we consider that if i ∈ P represents a pickup node, then the node i+n is the delivery node associated with the same request. It is assumed that different nodes can share the same geographical location.
Consider the directed graph G = (A, V) shown in Fig. 3, where V = P ∪ D ∪ O corresponds to the set of nodes and A is the set of arcs connecting them. Every arc (i, j) in the graph represents the shortest path in time between nodes i and j. Its travel time is denoted t_ij while its length is denoted d_ij.
Note that reconfiguration time is not considered because it is negligible compared with service times and it can be performed at MSEs while massive drop-offs take place.
The objective function is to minimize the total transportation cost. Three main costs are considered: a fixed cost associated with each vehicle (amortization cost), a time related cost proportional to the total route duration (driver cost) and a distance related cost (vehicle fuel and maintenance costs).
RESOLUTION METHOD
An exact method has been proposed in [START_REF] Qu | A Branch-and-Price-and-Cut Algorithm for Heterogeneous Pickup and Delivery Problems with Configurable Vehicle Capacity[END_REF] to solve the heterogeneous PDP with configurable vehicle capacity for up to 50 requests. For large scale applications usually found in real-life situations, metaheuristics provide a good alternative. We propose a matheuristic that combines the metaheuristic LNS with SCP. It will be denoted as LNS-SCP.
Matheuristic framework (LNS-SCP)
The matheuristic framework consists of an LNS with a nested SCP solved periodically. LNS was first proposed by [START_REF] Shaw | Using constraint programming and local search methods to solve vehicle routing problems[END_REF] and introduced under the name ruin and recreate in [START_REF] Schrimpf | Record breaking optimization results using the ruin and recreate principle[END_REF]. In LNS the current solution is improved following an iterative process of destroying (i.e. removing parts of the solution) and repairing the current solution. This process is repeated until a stopping criterion is reached. In our case the stopping criterion is a maximal number of iterations or a maximum computational time.
The potential of the approach was revealed by Ropke and Pisinger [START_REF] Ropke | An Adaptive Large Neighborhood Search Heuristic for the Pickup and Delivery Problem with Time Windows[END_REF], who proposed an ALNS consisting of multiple search operators adaptively selected according to their past performance in the algorithm.
Algorithm 1 shows the general structure of the LNS-SCP.

Algorithm 1. (The LNS-SCP framework).
Input: Σ-: set of destroy operators, Σ+: set of repair operators, η: nb. of iterations between two calls to the SCP.
Output: best solution found s*.
-Begin-
Pool of routes: Ω ← ∅
Request bank: B ← ∅
s, s': current and temporary solutions
it ← 0
While (termination criterion not met) {
    s' ← s
    Destroy quantity: randomly select a value Φ
    Operator selection: select σ- ∈ Σ- and σ+ ∈ Σ+
    Destroy: apply σ- to remove Φ requests from s'
    Copy the Φ requests into B
    Repair: apply σ+ to insert requests from B into s'
    Ω ← Ω ∪ routes(s')   /* add routes into the pool */
    If (acceptance criterion is met)
        s ← s'
    If (cost of s' is better than cost of s*)
        s* ← s'
    If (it modulo η = 0) {   /* set covering component */
        s' ← solve set covering problem(Ω)
        Update s* ← s' if s' is cheaper than s*
        Update s ← s' if s' is cheaper than s
        Perform pool management
    }
    it ← it + 1
}
Return s*
-End-
LNS manages 3 solutions at a time: the current solution s, the temporary solution s generated after destroying and repairing a copy of s, and the best solution found so far s * . LNS consists in 3 fundamental steps:
(1) Determine the number of requests to remove Φ. We first randomly select the percentage of requests to be removed in the interval [α, β].
(2) Destroy and repair the current solution with a randomly selected operator σ -∈ Σ -and σ + ∈ Σ + respectively. This step results in a new temporary solution.
(3) Accept or reject the new temporary solution s', according to the record-to-record criterion of [START_REF] Dueck | New optimization heuristics[END_REF]: if objective(s') ≤ (1+χ) · objective(s*), where χ is a small positive value, s' is accepted as the new current solution (a short sketch of steps (1) and (3) is given below).
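A minimal sketch of steps (1) and (3) (Python, our own illustration; the parameter values are the ones reported in the Results section):

    import random

    def destroy_quantity(n_requests, alpha=0.10, beta=0.40):
        # step (1): number of requests to remove, drawn as a share in [alpha, beta]
        return max(1, round(random.uniform(alpha, beta) * n_requests))

    def accept(cost_temporary, cost_best, chi=0.05):
        # step (3): record-to-record acceptance criterion
        return cost_temporary <= (1 + chi) * cost_best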
The SCP component is performed every η iterations. The purpose of the SCP component is to correct the LNS bias of discarding good routes that are part of costly solutions. Every new route is a candidate to be stored into a pool Ω of routes. Implementation details are presented in Section 4.3.
LNS operators
A key aspect of the LNS are the set of destroy and repair operators. With the goal of keeping the framework as simple as possible, we try to reduce the number of used operators without sacrificing the solution quality. At the end, only 2 destroy and 2 repair operators were kept in the framework based on the performance obtained in benchmark instances in Section 5.
Destroy operators determine the set of requests to be removed from the solution according to a certain criterion. In the framework of [START_REF] Pisinger | A general heuristic for vehicle routing problems[END_REF], 7 destroy operators are proposed. After testing our framework on literature instances we found that random removal and historical node-pair removal were sufficient to obtain competitive results. For details about the implementation of these operators, please refer to [START_REF] Pisinger | A general heuristic for vehicle routing problems[END_REF].
Repair operators rebuild a partially destroyed solution in order to restore a complete feasible solution. This operation consists in reinserting nodes one by one from the request bank into the solution according to a specific criterion. Every insertion must satisfy all problem constraints. In our case time windows, ride times and capacity constraints have to be respected. If the operator does not fully repair the solution due to feasibility requirements, the objective function is strongly penalized by multiplying it by a big constant value. The two most common repair operators for the DARP are the cheapest insertion and the k-regret heuristics, both employed in the LNS-SCP. We implemented the k-regret heuristics with values of k varying from 2 to 4.
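A possible implementation of the request-selection rule of the k-regret heuristic is sketched below (Python, our own illustration; insertion_cost is a placeholder that should return the cheapest feasible insertion cost of a request in a route, or infinity when no feasible insertion exists).

    def k_regret_choice(request_bank, routes, k, insertion_cost):
        INF, BIG = float("inf"), 1e9    # BIG stands in for an infeasible i-th best insertion
        best_req, best_regret = None, -1.0
        for r in request_bank:
            costs = sorted(insertion_cost(r, rt) for rt in routes)
            if not costs or costs[0] == INF:
                continue                 # r has no feasible insertion in this pass
            regret = sum(min(c, BIG) - costs[0] for c in costs[1:k])
            if regret > best_regret:
                best_req, best_regret = r, regret
        return best_req                  # then inserted at its cheapest feasible position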
Set Covering Problem (SCP)
In LNS, a solution is rejected solely based on its cost with respect to the best solution so far. A rejected solution may contain some good routes, which are also removed. This issue is addressed by storing the routes found by LNS and using them in a SCP to find new solutions. In the following lines, we present the mathematical model and key components for its implementation.
Let Ω be the set of routes in the pool collected through the LNS iterations, and V ω ∈ R + the cost of route ω ∈ Ω. To describe the itinerary followed by each route, we define the values R rω , which are set at 1 if request r ∈ R is served by route ω ∈ Ω, and 0 otherwise.
The set covering problem aims at determining the value of binary variables y ω , where y ω = 1 if route ω ∈ Ω is part of the solution and 0 otherwise. The set covering problem is defined by the following model.
min   Σ_{ω∈Ω} V_ω y_ω        (1)
s.t.  Σ_{ω∈Ω} R_rω y_ω ≥ 1   ∀r ∈ R        (2)
      y_ω ∈ {0, 1}   ∀ω ∈ Ω        (3)
In every iteration, after calling the repair operator, the current routes are memorized in the pool Ω. In order to reduce the number of variables for the SCP, only nondominated routes are saved according to Proposition 1. Proposition 1. (Route dominance). Route ω 1 dominates route ω 2 , if ω 1 visits the same set of nodes as ω 2 at a lower cost.
The SCP is solved with a MILP solver every η iterations.
The solver is initialized with the best known solution and run within a given time limit T_limit. This implies that the SCP is not always solved to optimality. If the obtained solution is better than the current solution s, then the current solution is updated. Otherwise s remains unchanged.
Similarly, the best solution s * is updated if a better solution is found. As constraint (2) of the set covering model allows request duplicates in the output solution, all duplicates are removed to obtain a consistent solution.
The cheapest one is always conserved.
If the solver fails to find an optimal solution within the time limit T_limit, the pool is cleared and filled again with the routes of the best known solution. This step is referred to as pool management in Alg. 1.
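The set covering step (1)-(3) can be prototyped with any MILP modeller. The sketch below (Python, our own illustration) uses the open-source PuLP package as a stand-in for the CPLEX 12.6 interface used in the paper; warm-starting with the best known solution and the duplicate-removal post-processing are omitted.

    import pulp

    def solve_set_covering(pool, requests, time_limit=3):
        # pool: list of (cost, set_of_served_requests); requests: iterable of request ids
        prob = pulp.LpProblem("SCP", pulp.LpMinimize)
        y = [pulp.LpVariable(f"y_{w}", cat="Binary") for w in range(len(pool))]
        prob += pulp.lpSum(cost * y[w] for w, (cost, _) in enumerate(pool))           # eq. (1)
        for r in requests:                                                            # eq. (2)
            prob += pulp.lpSum(y[w] for w, (_, served) in enumerate(pool) if r in served) >= 1
        prob.solve(pulp.PULP_CBC_CMD(timeLimit=time_limit, msg=False))
        return [w for w in range(len(pool)) if (y[w].value() or 0) > 0.5]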
RESULTS
In order to determine the added value of the set covering component, we implemented three LNS variants: a classic LNS (using operators: k-regret, random removal, historical node-pair removal, worst removal, time-related removal and distance-related removal), an Adaptive LNS using the same set of operators, and the proposed LNS-SCP (using only best insertion, k-regret, random removal and historical node-pair removal). In all cases k = 2, 3, 4 regret was used. The MILP solver for solving the SCP is IBM Ilog Cplex 12.6, running on a single thread. The SCP is solved every η = 1000 iterations with a time limit of T_limit = 3 seconds. The acceptance criterion is set to χ = 5% and the percentages used in destroy operators are α = 10% and β = 40% as in [START_REF] Pisinger | A general heuristic for vehicle routing problems[END_REF].
In order to test the framework, a set of 9 instances of different sizes is proposed. The first number in the instance name refers to the number of considered requests. The service time for seated passengers was set to 2 minutes for pickup and 1 minute for delivery, while the service time for wheelchairs was 2 minutes for pickup and 5 minutes for delivery. There is no time window at pickup locations.
There is a single depot for vehicles with infinite capacity. Vehicles are all of the type illustrated in Fig 1.
Two experiments were completed: one limiting the number of iterations (Table 1) and the other one limiting the computation time (Table 2). In both cases the LNS, ALNS, and LNS-SCP metaheuristics are compared. 5 runs per instance type were considered. The average gap (Gap_avg) was computed with respect to the best known solution (BKS) found throughout both experiments.
In Table 1 it can be observed that in most cases the SCP component improves the objective function on average by 2.96% (= 3.99% - 1.03%) with regard to the LNS and by 1.21% (= 2.24% - 1.03%) with respect to the ALNS version of the algorithm. By limiting the computational time to 30 minutes, a similar behavior can be observed, as shown in Table 2, however with a higher gap for the last instances.
Looking at very large instances, in both benchmarks, ALNS outperforms the other methods, which may indicate that an ALNS-SCP may be relevant for evaluation. Ongoing experimentation also aims to establish the best value of parameter T_limit. In particular, a greater value for this parameter may be beneficial for large-size instances. Table 3 shows detailed information on the best solution found for each instance: the computational time (t), the best cost among the five runs (obj), the number of reconfigurations (rec), the number of routes in the solution (routes) and the average number of requests per route (requests/route). The maximum number of reconfigurations was 1 for most instances. This result is influenced by the vehicle fixed cost and the maximum ride time constraints. Nevertheless, a deeper study should be done to determine precisely the key factors for reconfiguration. Finally, we can observe that the average number of requests per route is 7.31, which is higher than the vehicle capacity.
CONCLUSION
Through the paper we have described the dial-a-ride problem using vehicles with en-route reconfigurable capacity and a solution procedure based on LNS and SCP. By analyzing the performance of the LNS-SCP on real life instances, we could observe some significant gains when compared with LNS and ALNS.
We also observed that best LNS-SCP solutions reconfigure en-route at most once for most of the instances. The perspectives are therefore to extend the study with heterogeneous vehicles and to make an exhaustive evaluation to characterize the key factors of vehicle reconfiguration.
From the solution method side, we aim to establish the best value for the SCP time limit in particular for large size instances and include the ALNS-SCP for evaluation.
Fig. 1. Example of reconfigurable vehicle (source: www.handynamic.fr)
Fig. 2. Comparison of routes with and without capacity reconfiguration
Table 2. Performance comparison of metaheuristics in 30 minutes over 5 runs
Table 3. Best results
01760362 | en | [
"phys.qphy"
] | 2024/03/05 22:32:13 | 2018 | https://pastel.hal.science/tel-01760362/file/82038_PINNA_2018_archivage.pdf | M Lorenzo Pinna
On the controllability of the quantum dynamics of closed and open systems
First I would like to thank my Italian supervisor, Gianluca Panati, who introduced me to Quantum Mechanics when I was a master student and guided me during these three years. Thanks for every little lesson you gave me in
Résumé
Quantum mechanics has profoundly changed our understanding of physical phenomena at atomic scales. The first quantum revolution introduced a theoretical, mathematically rich framework that still produces interesting results and new insights in many fields of physics.
Nowadays, with the advancement of research and new technologies, quantum mechanics is revolutionizing (or is about to revolutionize) our daily lives. As the effective capability of manipulating matter and light at atomic scales is steadily growing, in many experimental contexts new technological tools allow the state of a quantum system to be manipulated during its evolution with great precision. Consequently, a large number of possibilities are available for practical everyday applications.
This has led to the analysis of quantum systems from a new point of view: controllability. In this regard, the interest is to develop techniques, both experimental and theoretical, to efficiently drive the evolution of a quantum system. In the last decades this approach has produced a large number of practical applications, for example laser-driven molecular reactions, stabilization of optical systems, Josephson junctions, ion traps, etc.
For the purpose of controllability, quantum systems are usually considered closed. This means that the description of a quantum system is given by a self-adjoint operator on a Hilbert space and its evolution is governed by the Schrödinger equation. Theoretically, numerous approaches have been developed to control the Schrödinger dynamics. We remark in particular: abstract controllability criteria from geometric control theory, Lyapunov techniques, optimal control methods, spectral analysis of resonances, and adiabatic control techniques.
Newer perspectives of application for quantum systems are related to information theory. The goal is to understand whether it is possible to build a new generation of computers based on quantum phenomena. To accomplish this task we must have the ability to use quantum systems to store, manipulate and retrieve information, and this requires a degree of control and stability that still exceeds our possibilities. Therefore, the real challenge in quantum control is open systems.
Open quantum systems allow one to describe a wide variety of phenomena related to stability. In this framework a quantum system is seen as interacting with a larger one, representing the environment, of which only qualitative parameters are known, by means of the Lindblad equation on states. The environment in this approach can be seen as the ensemble of factors that cause the loss of the quantum behaviour of the system. Finding strategies to avoid this process is indispensable to ensure the usability of quantum systems in information theory.
In this thesis we make some contributions to control problems for quantum systems in both the closed and the open setting. The structure of the manuscript is the following.
In Chapter 2 we review the fundamental theory of geometric control and its application to quantum control. A detailed explanation of the various controllability notions for quantum systems is presented, underlining the problems related to the infinite dimension of the spaces used in quantum mechanics.
In Chapters 5 and 6 we focus on the interesting class of spin-boson systems, which describe the interaction between a two-level quantum system and a finite number of distinguished modes of a bosonic field. We consider two prototypical examples, the Rabi model and the Jaynes-Cummings model, which are still very popular in several fields of quantum physics. Notably, in the context of Cavity Quantum Electro Dynamics (C-QED), they provide an accurate description of the dynamics of a two-level atom in a resonant microwave cavity, as in the recent experiments of S. Haroche. In Chapter 5 we study the controllability properties of these models with two different types of control operators acting on the bosonic part, corresponding in the C-QED application to an external electric and magnetic field, respectively. We review some recent results and prove the approximate controllability of the Jaynes-Cummings model with these controls. This result is based on a spectral analysis exploiting the non-resonances of the spectrum.
In Chapter 6 we present a partial derivation of the Jaynes-Cummings model from the Rabi model with adiabatic techniques. We formulate the problem as an adiabatic limit in which the detuning frequency and the interaction strength parameter go to zero, a case known as the weak-coupling regime. We prove that, under certain hypotheses on the ratio between the detuning and the coupling, the Jaynes-Cummings and Rabi dynamics exhibit the same behaviour; more precisely, the evolution operators they generate are close in norm. This is a preliminary result that we expect to improve. Although this might seem unrelated to control theory, the approximation leading from the Rabi model to the Jaynes-Cummings model, commonly called the rotating wave approximation, is widely used in the control community for a large number of models. The difficulty here is that we are in an infinite-dimensional context, where this type of approximation has no rigorous mathematical justification.
In Chapter 3 we review a formalism that describes the evolution of open quantum systems, namely the Lindblad equation and the classification of quantum dynamical semigroups. This is the framework that we use in Chapter 4 to study the possibility of adiabatically controlling infinite-dimensional open systems. In that chapter we examine the controllability of the Lindblad equation, for which we consider a control acting adiabatically on the internal part of the system, seen as a degree of freedom that can be used to contrast the action of the environment. The adiabatic action of the control is chosen to produce a robust transition. In particular, we treat the case of a two-level open system. Inspired by adiabatic control methods for closed systems, we show that some features of those results can still be recovered, but on a smaller subset of the state space. We prove, in the prototypical case of a two-level system, that the system approaches a set of equilibrium points determined by the environment, more precisely by the parameters that specify the Lindblad operator. On this set, the system can be adiabatically steered by choosing an appropriate control. The analysis is based on the application of singular geometric perturbation methods.
Chapter 1
Introduction
Quantum mechanics deeply changed our understanding of physical phenomena at atomic scales. The first quantum revolution introduced a theoretical, and mathematically rich, framework that still produces interesting results and new insights in many fields of physics.
Nowadays, with the advancement of research and new technologies, quantum mechanics is revolutionizing (or is about to revolutionize) our daily lives. As the effective capability of manipulating matter and light at atomic scales steadily grows, in many experimental contexts new technological tools allow one to manipulate the state of a quantum system during its evolution with great precision. Therefore, a large number of possibilities are available for everyday applications.
This led to the analysis of quantum systems from a new point of view: controllability. In this regard, the interest is to develop techniques, both experimental and theoretical, to efficiently drive the evolution of a quantum system. In the last decade this approach has produced a large number of practical applications, e.g. laser-driven molecular reactions, pulse-sequence design in nuclear magnetic resonance, stabilization of optical systems, Josephson junctions, ion traps, etc.
For the purposes of controllability, quantum systems are usually considered closed. This means that the description of a quantum system is given by a self-adjoint operator on a Hilbert space and its evolution is governed by the Schrödinger equation. Theoretically, numerous approaches have been developed to control the Schrödinger dynamics. We mention in particular: abstract controllability criteria of geometric control theory, Lyapunov-based techniques, optimal control methods, spectral analysis of resonances, and adiabatic control techniques.
Newer perspectives of application for quantum systems are related to the theory of information. The goal is to understand whether it is possible to build a new generation of computers based on quantum phenomena. To achieve this task we must possess the ability to employ quantum systems to store, manipulate and retrieve information, and this requires an unprecedented degree of control and stability. Therefore the real challenge in quantum control is open systems.
Open quantum systems allow one to describe a wide variety of phenomena related to stability. The environment in this approach can be seen as the ensemble of factors that, by interacting with a system, cause the loss of its unitary behaviour. Finding strategies to avoid this process is indispensable to ensure the usability of quantum systems in information theory.
In this thesis we make some contributions in the control of both closed and open systems.
In Chapter 2 we review basic geometric control theory and its application to quantum control. A detailed explanation of the various controllability notions for quantum systems is presented, underlining the problems related to the infinite dimensionality of the spaces used in quantum mechanics.
In Chapters 5 and 6 we treat an interesting class of closed systems, namely the spin-boson models. These types of systems are of great interest because they describe the interaction between matter and light. In Chapter 5 our analysis is dedicated to showing the controllability of a fundamental spin-boson model, the Jaynes-Cummings model. This result is based on a spectral analysis exploiting the non-resonances of the spectrum. In Chapter 6 we present a partial derivation of the Jaynes-Cummings model from the Rabi model with adiabatic techniques. This is a preliminary result that we intend to improve. Although it might not seem related to control theory, the approximation leading from the Rabi to the Jaynes-Cummings model, commonly known as the rotating wave approximation, is widely used in the control community for a large number of models. The difficulty here is that we are in an infinite-dimensional context, where this type of approximation has no rigorous mathematical justification.
In Chapter 3 we review a formalism that describes the evolution of open quantum systems, namely the Lindblad equation and the classification of quantum dynamical semigroups. This is the framework that we will use in Chapter 4 to study the possibility of adiabatically controlling finite-dimensional open systems. In particular we will treat the case of a two-level open system. Inspired by the adiabatic control methods for closed systems, we will show that we can still recover some features of these results, but on a smaller subset of the state space.
Chapter 2
Controllability of dynamical systems
This chapter wants to be a brief introduction to standard control theory for dynamical systems, as well as a review of recent applications of control theory to quantum systems, which are the objects of our study. For an extended exposition one can consult the monographs [Jur], [D'Al], which will be our main references.
In particular in the first sections we will introduce basic concepts and recall some classic results about controllability of generic nonlinear systems. A special attention will be given to affine systems, which are often of great interest in the quantum framework.
When we introduce quantum systems we will be forced to consider state spaces of infinite dimension. In this setting controllability concepts must be redefined, mainly due to the impossibility of having strong forms of control. We will define the notion of approximate controllability and present a spectral criterion to determine controllability.
The last part of the chapter will be devoted to adiabatic control methods for quantum systems.
Standard geometric control theory
2.1.1 General framework
Consider the system ẋ = F (x, u), x ∈ R n , u ∈ U ⊂ R m (2.1)
where F is assumed to be a smooth function of its arguments. The variable u is called control and its domain U is called control set. The family of vector fields generated by F is
F = {F (•, u) | u ∈ U }.
(2.2)
Here and hereafter we assume that every element X of F is a complete vector field, i.e. X generates a one-parameter family of diffeomorphisms {exp tX | t ∈ R}.
Remark 2.1. In this discussion we consider systems on R n . However, one can assume to deal with a generic smooth manifold M and a smooth field F : M × U → T M . All statements of theorems that we will illustrate can be reformulated in this more general framework.
A continuous curve x : [0, T] → R^n is called an integral curve of F if there exist a partition 0 = t_0 < t_1 < ··· < t_m = T and vector fields X_1, ..., X_m in F such that, with u(t) = u_i, x(t) is differentiable and dx/dt(t) = X_i(x(t)) = F(x(t), u_i) for all t ∈ [t_{i-1}, t_i], for each i = 1, ..., m. Hence x(t) is the solution of (2.1) where F(x, u(t)) is a time-varying vector field given by the piecewise-constant control function u(t).
A basic question in control problems is the following: given a finite time T > 0, an initial state x 0 ∈ R n and final state x f ∈ R n find a control function u : [0, T ] → R m such that x(t; u(•)) the solution of (2.1) with input control u(t) satisfies x(0; u(•)) = x 0 and x(T ; u(•)) = x f . Definition 2.2.
(i) For each T > 0 and each x 0 ∈ R n , the set of points reachable from x 0 at time T , denoted by A(x 0 , T ), is equal to the set of the terminal points x(T ) of integral curves of F that originates at x 0 .
(ii) The union
A(x 0 , ≤ T ) = ∪ t∈[0,T ] A(x 0 , t) (2.3) is called set reachable from x 0 in time T . (iii) The union A(x 0 ) = ∪ t∈[0,∞) A(x 0 , t) (2.4) is called set reachable from x 0 .
There is no particular a priori reason to restrict the class of control function to piecewise-constant functions. However, such a class is rich enough to characterize controllability properties of system (2.1) through the vector field family F. Consider the group of diffeomorphism
G(F) := {Φ = e^{t_k X_k} e^{t_{k-1} X_{k-1}} ··· e^{t_1 X_1} | k ∈ N, t_1, ..., t_k ∈ R, X_1, ..., X_k ∈ F}.
(2.5) The action of G on R n partitions it into orbits. For each x 0 ∈ R n the sets of reachable point of Def.2.2 are obtained by the action of a particular subgroup or subset of G(F) on x 0 :
A(x_0) = G^+(F).x_0 = {e^{t_k X_k} ··· e^{t_1 X_1} x_0 | k ∈ N, t_1, ..., t_k ≥ 0, X_1, ..., X_k ∈ F};
A(x_0, T) = G_T(F).x_0 = {e^{t_k X_k} ··· e^{t_1 X_1} x_0 | k ∈ N, t_1, ..., t_k ≥ 0, Σ_{i=1}^k t_i = T, X_1, ..., X_k ∈ F};
while we will refer to G(F).x_0 as the orbit of x_0. Whenever we want to emphasize the dependence of A(x_0, ≤ T), A(x_0), A(x_0, T) on F we will add the subscript F, writing A_F(x_0, ≤ T), A_F(x_0), A_F(x_0, T) respectively.
Given this basic formulation, one can define the notions of controllability as follow:
Definition 2.3.
(i) The system (2.1) is said to be small time locally controllable at x 0 if x 0 belongs to the interior of A(x 0 , ≤ T ) for every T > 0.
(ii) The system (2.1) is said to be completely controllable if A(x 0 ) = R n for every x 0 ∈ R n .
(iii) The system (2.1) is said to be strongly controllable if A(x 0 , ≤ T ) = R n for every x 0 ∈ R n and T > 0.
We shall proceed to analyse these different types of controllability in order of strength.
The small times local controllability (STLC) is at first sight a condition on the dimension of sets A(x 0 , ≤ T ) as subsets of R n but, if analysed in detail, gives a lot of information about local properties of the trajectories realizable by means of controls. Roughly speaking STLC at a point x 0 guarantees the possibility of steering the system in any direction starting from x 0 , with a velocity that generally depends on the direction as well as the initial point x 0 . We will not discuss sufficient or necessary conditions for STLC but we will treat a weaker condition that is accessibility. For an extended discussion one can consult [Sus] and reference therein.
Orbits and accessibility
The first step to obtain sufficient conditions that guarantee controllability of system (2.1) is to study topological properties of its orbits. In particular, our main goal is to show that local orbit structure is determined by local properties of the family F. The results in this section descend from the so called orbit theorem and the Frobenius theorem, however a complete explanation of those topics is beyond the scope of our discussion and we refer to [START_REF] Jurdjevic | Geometric Control Theory[END_REF]Chap. 2] for a detailed exposition.
The crucial object in the study of the controllability of (2.1) is the Lie algebra generated by its family of vector fields F.
Denote by F^∞(R^n) the space of smooth vector fields on R^n. F^∞(R^n) is obviously a vector space over R under the pointwise addition of vectors. For any smooth vector fields X, Y on R^n, their Lie bracket is defined by
[X, Y ](x) = DY (x)X(x) -DX(x)Y (x) (2.6)
where
DX(x) = [(∂ X i /∂x j )(x)] n i,j=1 . Notice that [•, •] is linear in each variable, antisymmetric i. e. [X, Y ] = -[Y, X], and satisfies the Jacobi identity [X, [Y, Z]] + [Y, [Z, X]] + [Z, [X, Y ]] = 0.
The vector space F^∞(R^n) is an algebra with the product given by the bracket (2.6). Moreover, since the bracket is bilinear, antisymmetric and satisfies the Jacobi identity, it is a Lie algebra.
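As a concrete check of formula (2.6), the following minimal sketch (the matrices A and B are hypothetical choices, not taken from the text) evaluates the bracket for the linear vector fields X(x) = Ax and Y(x) = Bx, for which DX(x) = A and DY(x) = B, so that [X, Y](x) = (BA − AB)x.

```python
import numpy as np

# Hypothetical linear vector fields X(x) = A x and Y(x) = B x on R^2.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0, 0.0],
              [1.0, 0.0]])

def lie_bracket_linear(A, B, x):
    """[X, Y](x) = DY(x) X(x) - DX(x) Y(x), with DX = A and DY = B for linear fields."""
    return B @ (A @ x) - A @ (B @ x)

x = np.array([1.0, 2.0])
direct = lie_bracket_linear(A, B, x)
via_commutator = (B @ A - A @ B) @ x   # same value, written as a matrix commutator
assert np.allclose(direct, via_commutator)
print(direct)
```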
Definition 2.4. For any family of vector fields F ⊂ F ∞ (R n ) we denote Lie(F) the Lie algebra generated by F, the smallest vector subspace S of F ∞ (R n ) that also satisfies [X, S] ⊂ S for any X ∈ F.
Remark 2.5. The algebra Lie(F) introduced in the latter definition can be shown to be equal to
Lie(F) = span{[X_1, [X_2, [..., [X_{k-2}, [X_{k-1}, X_k]]...]]] | k ∈ N, X_1, ..., X_k ∈ F}.
As a vector space, Lie_x(F) := {X(x) | X ∈ Lie(F)}, where x ∈ R^n, has a dimension.
Definition 2.6. We say that the family F is bracket-generating at the point x if the dimension of Lie_x(F) is equal to n. We say that the family F is bracket-generating if this condition is verified for every x ∈ R^n.
The following results state that dim Lie x (F) determines the dimension of orbits of (2.1) as submanifolds of R n .
Theorem 2.7 ( [START_REF] Jurdjevic | Geometric Control Theory[END_REF]Thm.2.3] ). Suppose that F is bracket-generating at point x ∈ R n . Then the orbit G(F).x is open in R n . In addition if F is bracket-generating, then there exists only one orbit of F equal to R n .
Theorem 2.8 (the orbit theorem, [START_REF] Jurdjevic | Geometric Control Theory[END_REF]Corollary of Thm.2.5] ). Let F be a family of smooth vector fields such that the dimension of each vector space Lie x (F) is constant as x varies in R n . Then for each x ∈ R n , the tangent space at x of orbit G(F).x coincide with Lie x (F). Consequently, each orbit of F is a k-dimensional submanifold of R n .
Remark 2.9. Previous theorems hold for families of smooth vector fields on a general smooth manifold M . Extensions also hold if the manifold and the vector families are analytic and for Lie groups (see [START_REF] Jurdjevic | Geometric Control Theory[END_REF]Sect.2.3]).
Theorems 2.7 and 2.8 imply that, under the assumption that dim Lie_x(F) is constant as x varies in R^n, the bracket-generating condition is necessary for the controllability of F. However it is not sufficient, as we will see from topological properties of reachable sets. We will denote by cl(•) and int(•) the topological closure and interior of a set.
Theorem 2.10. Suppose that F is a smooth family of vector fields on R n and F is bracket-generating at x ∈ R n . Then for each T > 0 and ε > 0,
(a) A(x, ≤ T ) contains non-empty open sets of R n , (b) A(x, ≤ T ) ⊂ cl(intA(x, ≤ T )), (c) int(cl A(x, ≤ T )) ⊂ intA(x, ≤ T + ε), (d) int(cl A(x)) = intA(x).
Property (a) in Theorem 2.10 is called accessibility. It means that the trajectories starting from a point can reach (in an arbitrarily small time) a set having non-empty interior. While accessibility guarantees the existence of open reachable sets from any x initial point, it does not say anything on x belonging to it.
The previous result is crucial for a characterization of orbits that derives from it. Notice that we can regard a reachable set of type A F (x) or A F (x, ≤ T ), as a set with a topology inherited from R n (the entire space), or inherited by the submanifold of the orbit G(F).x, at least under hypothesis of Theorem 2.8. This motivates next definition.
Definition 2.11. A smooth family of vector fields F is Lie-determined if the tangent space of each point x in an orbit of F coincides with Lie x (F).
Obviously, bracket-generating systems are Lie-determined and, by Theorem 2.8, every system F such that Lie x (F) has constant dimension for every x ∈ R n is Lie-determined.
Corollary 2.12. For Lie-determined systems, the reachable sets A F (x) cannot be dense in an orbit of F without being equal to the entire orbit.
For such systems, each reachable set A(x, ≤ T ) has a non-void interior in the topology of the orbit manifold, and the set of interior points grows regularly with T. This is equivalent to saying that the system is accessible in each of its orbits.
Thus, the essential property of a Lie-determined system is the one we claimed at the beginning of the section, its orbit structure is determined by the local properties of the elements of F and their Lie derivatives.
Moreover, this suggest that different families of vector fields could generate the same orbits if they differ by 'inessential' directions. More precisely, the closure of reachable sets could be taken as invariant to classify families of vector fields that generate the same orbits.
Compatible vector fields and completions
Definition 2.13. A vector field Y is said to be compatible with the family F if, defining F′ = F ∪ {Y}, we have the following: for every x_0, the reachable set
A_{F′}(x_0) ⊂ cl A_F(x_0).
Clearly this defines an equivalence relation between families of vector fields: F, F′ are equivalent if
A_{F∪F′}(x_0) ⊂ cl A_F(x_0),   A_{F∪F′}(x_0) ⊂ cl A_{F′}(x_0),
for all x_0 ∈ R^n. A simple reformulation of Corollary 2.12 states that compatible fields do not change the orbits of systems.
Proposition 2.14. If F is a bracket-generating family of vector fields, Y is compatible with F and F ∪ {Y } is controllable, then F is controllable as well.
Therefore, it is interesting to understand under which operation on a family F, the closure of reachable sets remains invariant. Natural candidates are topological operation on F as subset of the vector space F ∞ (R n ). Denote as cl(F) the topological closure of the set F in F ∞ (R n ). Define also co(F), the convex hull of F, as the set
co(F) := {Σ_{i=1}^m λ_i X_i | m ∈ N, λ_1, ..., λ_m ≥ 0, Σ_{i=1}^m λ_i = 1, X_1, ..., X_m ∈ F}. (2.7)
Notice that the zero vector field is always compatible with any family F. Hence one can consider also co(F ∪ {0}), the positive convex 'semi-cone' through co(F) and the positive convex cone
cone(F) := {Σ_{i=1}^m λ_i X_i | m ∈ N, λ_1, ..., λ_m ≥ 0, X_1, ..., X_m ∈ F}.
(2.8)
Theorem 2.15. Let F be a family of smooth vector fields on R n . For any x ∈ R n and T > 0
(a) A cl(F ) (x, T ) ⊂ cl(A F (x, T )), (b) cl(A F (x, T )) ⊂ cl(A co(F ) (x, T )), (c) A co(F ∪{0}) (x, ≤ T ) ⊂ cl(A F (x, ≤ T )), (d) A cone(F ) (x) ⊂ cl(A F (x)).
Given the equivalence relation defined in Definition 2.13 and the completion operations of the latter theorem, each family of smooth vector field F has a largest extension which is represented by the maximal element of its equivalent class.
Definition 2.16. Let F be a Lie-determined family of vector fields.
(i) The strong Lie saturate of F, denoted LS_s(F), is the largest subset F′ of Lie(F) such that cl(A_{F′}(x, ≤ T)) = cl(A_F(x, ≤ T))
for each x ∈ R^n and T > 0.
(ii) The Lie saturate of F, denoted LS(F), is the largest subset F′ of Lie(F) such that cl(A_{F′}(x)) = cl(A_F(x))
for each x ∈ R n .
For such families it is possible to state an abstract criterion of controllability.
Theorem 2.17 ([Jur, Thm. 2.12]). Suppose that F is a Lie-determined family of vector fields. Then F is strongly controllable if and only if LS_s(F) = Lie(F) and Lie(F) is bracket-generating. F is controllable if and only if LS(F) = Lie(F) and Lie(F) is bracket-generating.
As elegant as the latter criterion is, its applicability is nevertheless restricted to situations in which there are further symmetries that allow for explicit calculations of the Lie saturate. For that reason we will illustrate an application of this theorem in the next subsection, where our objects will be affine systems.
A notable case in which symmetries of the systems implies controllability is pointed out in the following theorem.
Theorem 2.18 (Chow). Suppose that F is bracket-generating and for each X ∈ F then -X ∈ F. Then A(x 0 ) = R n for every x 0 ∈ R n .
Affine systems
An affine system is a differential system on R n of the form
ẋ = f_0(x) + Σ_{i=1}^m u_i f_i(x),   x ∈ R^n,   u = (u_1, ..., u_m) ∈ U ⊂ R^m   (2.9)
with f 0 , ..., f m smooth vector fields on R n , and functions u 1 , ..., u m that are the controls. The vector field f 0 is called the drift, and the remaining vector fields f 1 , ..., f m are called controlled vector fields. A control affine system (2.9) defines the family of vector fields
F(U) = {f_0 + Σ_{i=1}^m u_i f_i | u = (u_1, ..., u_m) ∈ U}.
(2.10)
It will be convenient to consider only the constraint subsets U of R m that contain m linearly independent points of R m , in that case the Lie algebra generated by F(U ) is independent of U and is generated by the vector fields f 0 , ..., f m , i. e.
Lie(F(U )) = Lie{f 0 , ..., f m }.
Classification of control affine systems is based on properties of the drift.
Definition 2.19. A control affine system is called driftless if f 0 = 0.
Driftless systems are immediately treated with the theory developed in the previous section.
Theorem 2.20. Assume that the system (2.9) is driftless and bracket-generating, i. e. dim Lie x {f 1 , ..., f m } = n for all x ∈ R n . Then, (a) whenever there are no restrictions on the size of controls the corresponding control affine system is strongly controllable, (b) in the presence of constraints U ⊂ R m , (2.9) remains controllable (but not necessarily strongly controllable) if int(co(U )) ⊂ R m contains the origin.
Close to driftless systems are those systems in which the drift generates a dynamics that have almost closed orbits.
Definition 2.21. A complete vector field f on R n is said to be recurrent if for every point x 0 ∈ R n , every neighborhood V of x 0 and every time t > 0 there exists t * > t such that e t * f (x 0 ) ∈ V .
Notice that every field f that generates periodic trajectories is recurrent. The following lemma is the key to investigate controllability of affine systems with recurrent drift.
Lemma 2.22. If f is recurrent and compatible with F, then -f is also compatible with F.
This leads to the following
Theorem 2.23. Assume that the system (2.9) has recurrent drift, is bracket-generating and int(co(U )) contains the origin of R m . Then the system is controllable.
When the drift has no particular properties, it is in general an obstacle to controllability. In that case the control fields should be strong enough to counteract the effect of the drift and steer the system in every direction.
Theorem 2.24. Assume that the system (2.9) has unbounded controls, i. e. U = R m . If G = {f 1 , ..., f m } is bracket-generating then the system is controllable.
Controllability of quantum systems
Let H be an Hilbert space. A quantum system on H is given by a self-adjoint operator H 0 which governs its dynamics, namely the evolution of the wavefunctions ψ ∈ H, by means of the Schrödinger equation
i d dt ψ = H 0 ψ, ψ ∈ H. (2.11)
Let H 1 , ..., H m be linear operators on H, we will call them control operators, and u 1 , ..., u m piecewise-constant control functions with the constraint u = (u 1 , ..., u m ) ∈ U ⊂ R m . A bilinear Schrödinger equation (sometimes called bilinear systems) is an affine control system of the form (2.9) that reads
i (d/dt)ψ = H(u)ψ = (H_0 + Σ_{k=1}^m u_k H_k) ψ,   ψ ∈ H,   u ∈ U   (2.12)
Remark 2.25. For the moment we will not assume anything on the operators H_1, ..., H_m, but we recall that the operator H_0 + Σ_k u_k H_k must be self-adjoint to represent the Hamiltonian of a quantum system. To ensure this, different assumptions are needed depending on whether the dimension of H is finite or infinite. In the following discussion we will clarify these hypotheses.
Finite dimensional quantum systems
Suppose that dim C H = N < ∞, then H 0 , ..., H m belongs to the set of linear bounded operators on H, denoted B(H). Assume moreover that H 0 , ..., H m are Hermitian operators, i. e. H † k = H k , k = 0, ..., m, hence every real linear combination of them is Hermitian. Therefore, for every u 1 , ..., u m piecewise-constant control functions (assume them right-continuous) the solution of (2.12) with initial datum ψ 0 reads
U_u(t)ψ_0 with
U_u(t) := e^{-i(t - t_j)(H_0 + Σ_k u_k(t_j)H_k)/ℏ} e^{-i(t_j - t_{j-1})(H_0 + Σ_k u_k(t_{j-1})H_k)/ℏ} ··· e^{-i t_1 (H_0 + Σ_k u_k(0)H_k)/ℏ}   (2.13)
where t ∈ [t_j, t_{j+1}] and the sequence of times 0 = t_0 < t_1 < ··· < t_n < ··· is taken such that u_k(t) is constant on [t_j, t_{j+1}] for all k = 1, ..., m and j ∈ N.
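The expression (2.13) is straightforward to evaluate numerically. The following minimal sketch (the Hamiltonians, switching times and control values are hypothetical, and ℏ is set to 1) builds U_u(t) as an ordered product of matrix exponentials and checks its unitarity.

```python
import numpy as np
from scipy.linalg import expm

hbar = 1.0

def propagator(t, H0, Hs, times, controls):
    """Compute U_u(t) of (2.13) for piecewise-constant controls.

    times    : switching instants 0 = t_0 < t_1 < ... (hypothetical grid)
    controls : controls[j][k] is the value of u_k on [t_j, t_{j+1})
    """
    U = np.eye(H0.shape[0], dtype=complex)
    for j in range(len(times) - 1):
        t0, t1 = times[j], min(times[j + 1], t)
        if t1 <= t0:
            break
        H = H0 + sum(u * Hk for u, Hk in zip(controls[j], Hs))
        U = expm(-1j * (t1 - t0) * H / hbar) @ U
    return U

# Hypothetical two-level example: H0 = sigma_z / 2, one control operator H1 = sigma_x.
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
U = propagator(2.0, 0.5 * sz, [sx], times=[0.0, 1.0, 2.0], controls=[[0.3], [-0.3]])
print(np.allclose(U.conj().T @ U, np.eye(2)))  # unitarity check
```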
Identifying the Hilbert space H with R^{2N}, the whole theory of controlled affine systems developed in the previous section can be applied in this case. However this approach is misleading from a physical viewpoint. In fact, a (pure) state of a quantum system H is an equivalence class in the complex projective space on H (see Sect. 3.3). So, if two vectors ψ_1, ψ_2 ∈ H differ by a non-zero complex number, namely ψ_1 = zψ_2, z ∈ C*, they represent the same state. Moreover, since -iH_0 - i Σ_k u_k H_k is skew-adjoint, the propagator U_u(t) defined in (2.13) is a unitary operator (see Stone's theorem [RS 1]), hence the norm of vectors ψ ∈ H is preserved during the evolution.
For those reasons it is useful to state a notion of controllability specific to quantum systems.
Definition 2.26 (Equivalent state controllability). The quantum system (2.12) is equivalent state controllable if for every pair ψ_0, ψ_1 ∈ H with ‖ψ_0‖ = ‖ψ_1‖ = 1, there exist T > 0 and piecewise-constant functions u_k : [0, T] → R, k = 1, ..., m, such that the solution ψ(t) of (2.12) satisfies ψ(0) = ψ_0 and ψ(T) = e^{iθ} ψ_1 for some θ ∈ [0, 2π).
Nevertheless, we can usefully exploit the control theory of affine systems by considering system (2.12) at the propagator level, i.e. as an equation for U_u(t). Set A_k = -iH_k/ℏ, k = 0, ..., m; then the matrices A_k are skew-Hermitian, i.e. A_k† = -A_k. The set of skew-Hermitian N × N matrices, denoted u(N), is a Lie algebra of dimension N² with respect to the usual bracket [A, B] = AB - BA. The subset of skew-Hermitian N × N matrices with zero trace, denoted su(N), is a Lie subalgebra of u(N) of dimension N² - 1. With those notations the equation for U reads
(d/dt)U = (A_0 + Σ_{k=1}^m u_k(t) A_k) U,   U(0) = 1.   (2.14)
This system is an affine control system on the Lie group of N × N unitary matrices, denoted U(N); if tr A_k = 0 for all k = 0, ..., m, then U has determinant 1 and belongs to the Lie subgroup SU(N).
Definition 2.27 (Operator controllability). The quantum system (2.12) is operator controllable if A_G(1) = U(N), where G = {A_0, A_1, ..., A_m}. In case tr A_k = 0 for every k = 0, ..., m, it is operator controllable if A_G(1) = SU(N).
Criteria of controllability for affine systems on compact Lie groups are in some sense analogs to the ones we stated for R n . In particular, the hypothesis of being bracket-generating must be reformulated accordingly to the characterization of tangent space of a Lie group, for a complete treatment see [D'Al]. However, the controllability of the system (2.14) still relies on the fact that G must generate the whole tangent space.
Theorem 2.28 (Lie algebra rank condition). The system (2.12) is operator controllable if and only if Lie{A_0, ..., A_m} is equal to u(N) or, respectively, su(N) in case tr A_k = 0 for every k = 0, ..., m.
Remark 2.29. In the special case of system (2.14) the bracket-generating condition is also known as the Lie algebra rank condition; the theorem rephrases this condition for affine systems on U(N) or SU(N). The crucial point in the proof of the latter theorem is that there exists a one-to-one correspondence between Lie subalgebras of u(N) and connected Lie subgroups of U(N). The theorem is based on the fact that A_G(1) is the Lie group corresponding to the subalgebra Lie{A_0, ..., A_m}.
Remark 2.30. The notion of operator controllability is obviously stronger than the notion of equivalent state controllability. Results on compact and effective Lie groups acting transitively on S_C^{N-1}, the unit sphere of C^N, and also on the complex projective space, allow one to characterize equivalent state controllability in terms of Lie(G) [AD'A].
We conclude this section with an explicit example.
Example 2.31. Consider the Lie algebra su(2) with basis {iσ 1 , iσ 2 , iσ 3 } where
σ_1 = [[0, 1], [1, 0]],   σ_2 = [[0, -i], [i, 0]],   σ_3 = [[1, 0], [0, -1]]   (2.15)
are the Pauli matrices. A natural choice for the (uncontrolled) Hamiltonian of a two-level quantum system is
H_0 = (ω_eg/2) σ_3.   (2.16)
Consider the control operator H_1 = σ_1; then (2.12) reads
i (d/dt)ψ = ((ω_eg/2) σ_3 + (u(t)/2) σ_1) ψ,
where ψ ∈ C². If ω_eg ≠ 0 one immediately obtains [iσ_3, iσ_1] = -2iσ_2, hence Lie{iσ_3, iσ_1} = su(2) and by Theorem 2.28 the system is controllable.
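The Lie algebra rank condition of Theorem 2.28 can also be checked numerically by iterating commutators until the real span of the generated matrices stabilises. The following minimal sketch (hypothetical normalisation, with ℏ = 1 and ω_eg = 1) confirms that the generators of Example 2.31 span the full su(2), of real dimension 3.

```python
import numpy as np
from itertools import combinations

# Pauli matrices (2.15) and the generators of Example 2.31 (hypothetical: hbar = 1, omega_eg = 1).
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
A0, A1 = -0.5j * s3, -0.5j * s1          # A_k = -i H_k

def lie_closure_dim(gens, tol=1e-10):
    """Iteratively add commutators until the real span of the generators stabilises."""
    def rank(mats):
        M = np.array([np.concatenate([m.real.ravel(), m.imag.ravel()]) for m in mats])
        return np.linalg.matrix_rank(M, tol)
    basis = list(gens)
    while True:
        new = [x @ y - y @ x for x, y in combinations(basis, 2)]
        if rank(basis + new) == rank(basis):
            return rank(basis)
        basis += new

print(lie_closure_dim([A0, A1]))   # prints 3 = dim su(2), so the system is controllable
```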
Infinite dimensional quantum systems
We now introduce the controllability problem for an affine system in a general infinite-dimensional setting. Let H be a separable Hilbert space endowed with a Hermitian product ⟨•, •⟩, let Φ_I be a Hilbert basis for H and consider the equation
(d/dt)ψ = (A + uB)ψ,   ψ ∈ H   (2.17)
where A, B are skew-adjoint linear operators on H with domains D(A) and D(B) respectively, and u is a time-dependent function with values in U ⊂ R.
Assumption 2.32. The system (A, B, U, Φ I ) is such that:
(A 1 ) Φ I = {φ k } k∈I is a Hilbert basis of eigenvectors for A associated to the eigenvalues {iλ k } k∈I ;
(A 2 ) φ k ∈ D(B) for every k ∈ I;
(A_3) A + wB : Span_{k∈I}{φ_k} → H is essentially skew-adjoint for every w ∈ U;
(A_4) if j ≠ k and λ_j = λ_k, then ⟨φ_j, Bφ_k⟩ = 0.
Under these assumptions A + wB generates a unitary group t → e (A+wB)t for every constant w ∈ U . Hence we can define for every piecewise constant function
u(t) = Σ_{i≥1} u_i χ_{[t_{i-1}, t_i)}(t), with 0 = t_0 < t_1 < ... < t_n < ..., the propagator
Υ_u(t) := e^{(t - t_j)(A + u_{j+1}B)} ∘ e^{(t_j - t_{j-1})(A + u_j B)} ∘ ··· ∘ e^{t_1(A + u_1 B)}   for t_j < t ≤ t_{j+1}.   (2.18)
The solution of (2.17) with initial datum ψ(0) = ψ_0 ∈ H is ψ(t) = Υ_u(t)(ψ_0).
Although in this framework it is possible to define a notion of operator controllability, since we have already seen that the propagator of the dynamics is well defined under Assumption 2.32, this type of controllability is in general too strong for bilinear infinite-dimensional systems. This issue with the infinite-dimensional case is well known and was studied in a more general context in a series of papers by Ball, Marsden and Slemrod [BMS]. Applications of this result to quantum systems were given in [Tur], and reviewed in [ILT], [BCS].
Proposition 2.33 ([Tur, Theorem 1]). Let (A, B, U, Φ_I) satisfy Assumption 2.32 and let B be bounded. Then for every r > 1 and for all ψ_0 ∈ D(A), the set of reachable states from ψ_0 with control functions in L^r, {Υ_u(t)ψ_0 | u ∈ L^r(R, R)}, is a countable union of closed sets with empty interior in D(A). In particular this attainable set has empty interior in D(A).
Remark 2.34. The proposition implies that, under its hypotheses, the set of attainable states cannot be the whole domain D(A); therefore we cannot have equivalent state controllability or operator controllability for the system (A, B, U, Φ_I). This does not mean that we cannot recover strong forms of controllability under different hypotheses, but clearly a general equivalent state controllability criterion for systems of type (A, B, U, Φ_I) cannot exist under Assumption 2.32.
Example 2.35. A notable non-controllable quantum system is the harmonic oscillator. In [MiRo], Mirrahimi and Rouchon proved the non-controllability of the system
i (d/dt)ψ = ½(P² + X²)ψ - u(t)Xψ,   ψ ∈ L²(R)   (2.19)
where the position operator X : D(X) ⊂ L²(R) → L²(R) is defined by
(Xψ)(x) := xψ(x)   (2.20)
on the domain
D(X) = {ψ ∈ L²(R) | ∫_R |xψ(x)|² dx < ∞},   (2.21)
and the momentum operator P : D(P) ⊂ L²(R) → L²(R) is the derivative
(Pψ)(x) := -i (∂/∂x)ψ(x).   (2.22)
i (d/dt)φ = ½(P̃² + Z²)φ + ½(⟨X⟩² - ⟨P⟩² - 2u⟨X⟩)φ
where Z is the multiplication operator by z and P̃ = -i ∂/∂z. Applying another unitary transformation
φ(t, z) = e^{-i ∫_0^t (⟨X⟩² - ⟨P⟩² - 2u⟨X⟩) ds} ϕ(t, z), one obtains
i (d/dt)ϕ = ½(P̃² + Z²)ϕ.   (2.24)
From equation (2.24) together with (2.23) one sees that (2.19) decomposes in two independent parts. One part is controllable while the other one is not because it does not depend on the control function u. Therefore the quantum harmonic oscillator is non-controllable.
As explained before, we need to introduce a weaker notion of controllability for infinite dimensional quantum systems. The natural way to proceed is to ask that reachable sets must be dense subsets of the state space.
Definition 2.36. Let (A, B, U, Φ_I) satisfy Assumption 2.32. We say that (2.17) is approximately controllable if for every ψ_0, ψ_1 ∈ H with ‖ψ_0‖ = ‖ψ_1‖ = 1 and for every ε > 0 there exist a finite T_ε > 0 and a piecewise-constant control function u : [0, T_ε] → U such that ‖ψ_1 - Υ_u(T_ε)(ψ_0)‖ < ε.
Remark 2.37. The definition says that to have approximate controllability the set of attainable states from ψ_0, namely A(ψ_0) = {Υ_u(t)ψ_0 | u piecewise-constant function}, must be dense in the unit sphere of H for every ψ_0 ∈ H.
The notion of approximate controllability is clearly weaker than exact controllability, but one may search for conditions under which the two coincide. We recall that an analogous result holds on orbits of Lie-determined systems (see Corollary 2.12), where reachable sets cannot be dense in an orbit without being equal to the entire orbit. For finite-dimensional systems the two notions coincide.
Theorem 2.38 ([Boscain et al., Theorem 17]). Suppose dim H = N < ∞. System (2.12) is approximately controllable if and only if it is exactly controllable.
Notable works on approximately controllable quantum systems concern spin-boson systems. In particular we mention the paper of Puel [ErPu], inspired by the work of Eberly and Law [EbLa].
An alternative to the introduction of a weak notion of controllability as in Definition (2.36) is to investigate controllability of infinite dimensional systems on smaller functional spaces. More precisely, consider a system (A, B, U, Φ I ) satisfying Assumption 2.32. The idea is to choose a functional space S contained in D(A) such that B is unbounded on S and prove exact controllability on this space. That method is carried out in [Be], [BeC], [BeL].
A spectral condition for controllability
In this section we present a criterion for approximate controllability which will be useful later in this thesis. This general result gives a sufficient condition for approximate controllability based on the spectrum of A and the action of the control operator B. More precisely, if σ(A) has a sufficiently large number of non-resonant transitions, i. e. pairs of levels (i, j) such that their energy difference |λ i -λ j | is not replicated by any other pair, and B is able to activate these transitions, then the system is approximately controllable.
This idea is made precise in the following definition.
Definition 2.39. Let (A, B, U, Φ_I) satisfy Assumption 2.32. A subset S of I² connects a pair (j, k) ∈ I² if there exists a finite sequence s_0, ..., s_p such that
(i) s_0 = j and s_p = k;
(ii) (s_i, s_{i+1}) ∈ S for every 0 ≤ i ≤ p - 1;
(iii) ⟨φ_{s_i}, Bφ_{s_{i+1}}⟩ ≠ 0 for every 0 ≤ i ≤ p - 1.
S is called a chain of connectedness for (A, B, U, Φ_I) if it connects every pair in I². A chain of connectedness is called non-resonant if for every (s_1, s_2) ∈ S it holds that |λ_{s_1} - λ_{s_2}| ≠ |λ_{t_1} - λ_{t_2}| for every (t_1, t_2) ∈ I² \ {(s_1, s_2), (s_2, s_1)} such that ⟨φ_{t_1}, Bφ_{t_2}⟩ ≠ 0.
Intuitively, if two levels of the spectrum are non-resonant and the control operator B couples them, one can tune the control function u in such a way to arrange arbitrarily the wavefunction's components on these levels, without modifying any other component. Therefore, having a non-resonant connectedness chain allow us to reach the target state by sequentially modifying the wavefunction. This idea is crucial to the proof of the following criterion by Boscain et al.
Theorem 2.40 ([Boscain et al., Theorem 2.6]). Let c > 0 and let (A, B, [0, c], Φ_I) satisfy Assumption 2.32. If there exists a non-resonant chain of connectedness for (A, B, [0, c], Φ_I), then the system (2.17) is approximately controllable.
Theorem 2.40 gives also an estimate on the norm of control functions.
Proposition 2.41 ([Boscain et al., Proposition 2.8]). Let c > 0. Let (A, B, [0, c], Φ_I) satisfy Assumption 2.32 and let S be a non-resonant chain of connectedness. Then for every ε > 0 and (j, k) ∈ S there exist a piecewise-constant control function u : [0, T_u] → [0, δ] and θ ∈ R such that ‖Υ_u(T_u)φ_j - e^{iθ}φ_k‖ < ε and
‖u‖_{L¹} ≤ 5π / (4 |⟨φ_k, Bφ_j⟩|).
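In finite-dimensional truncations, the existence of a non-resonant chain of connectedness in the sense of Definition 2.39 can be tested directly. The following minimal sketch (the four-level spectrum and the coupling matrix B are hypothetical) lists the transitions activated by B whose gaps are not replicated by any other activated pair.

```python
import numpy as np
from itertools import combinations

def non_resonant_transitions(lams, B, tol=1e-9):
    """Pairs (j, k) with <phi_j, B phi_k> != 0 whose gap |lam_j - lam_k| is unique among activated pairs."""
    activated = [(j, k) for j, k in combinations(range(len(lams)), 2) if abs(B[j, k]) > tol]
    out = []
    for j, k in activated:
        gap = abs(lams[j] - lams[k])
        clashes = [p for p in activated
                   if p != (j, k) and abs(abs(lams[p[0]] - lams[p[1]]) - gap) < tol]
        if not clashes:
            out.append((j, k))
    return out

# Hypothetical 4-level truncation: anharmonic spectrum and a tridiagonal coupling operator B.
lams = np.array([0.0, 1.0, 2.1, 3.3])
B = np.diag([1.0, 1.0, 1.0], k=1); B = B + B.T
print(non_resonant_transitions(lams, B))   # [(0, 1), (1, 2), (2, 3)]: a non-resonant chain
```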
Adiabatic control of quantum systems
Our previous analysis of the controllability of quantum systems is entirely non-constructive, in the sense that we studied criteria to determine abstractly the controllability of the system without producing control functions. In this section we present a constructive method that relies on adiabatic theory. Consider the general Schrödinger equation
i (d/dt)ψ = H(u(t))ψ,   ψ ∈ H,   (2.25)
where u : [0, T ] → U ⊂ R m . The starting point is a spectral analysis of the operator family H(u).
Assumption 2.42. Assume that {H(u) | u ∈ U } is a family of compact resolvent operators and that eigenfunctions and eigenvalues of the family are analytic functions of the variable u.
Assumption 2.42 holds for a wide range of quantum systems, e. g. bilinear systems (2.12) on a finite dimensional Hilbert space, bilinear system on L 2 (R n ) with
H 0 = -∆ and H i = V i with V i ∈ L 2 + L ∞ or any system (A, B, U, Φ I ) satisfying Assumption 2.32.
For the sake of clarity we will consider from here throughout the section a bilinear system (2.12) on H with two controls
i d dt ψ = H(u(t))ψ = (H 0 + u 1 (t)H 1 + u 2 (t)H 2 ) ψ (2.26)
where u = (u_1, u_2) ∈ U ⊂ R² and U is assumed connected. However, what we explain can be generalized to the case U ⊂ R^m, m ≥ 2.
Assumption 2.43. Suppose that Σ(u) = {λ_1(u), ..., λ_n(u)}, with λ_1(u) ≤ λ_2(u) ≤ ... ≤ λ_n(u), is a portion of the spectrum of H(u) isolated from the rest, i.e. there exists C > 0 such that
inf_{u∈U} inf_{λ∈σ(H(u))\Σ(u)} dist(λ, Σ(u)) > C.   (2.27)
Moreover, suppose that the λ_i are non-degenerate and that Φ(u) = {φ_1(u), ..., φ_n(u)} are the corresponding eigenvectors.
Definition 2.44. We will call ū ∈ U a conical intersection between the eigenvalues λ_j and λ_{j+1} if λ_j(ū) = λ_{j+1}(ū) has multiplicity two and there exists c > 0 such that
|λ_j(ū + tv) - λ_{j+1}(ū + tv)| > ct for every unit vector v and every t > 0 small enough.
We will say that Σ(u) is conically connected if for every j = 1, ..., n -1 there exists ūj conical intersection between λ j and λ j+1 .
Remark 2.45. Under Assumption 2.42 the intersections between the eigenvalues of H(u) are generically conical if m = 2, 3 [BGRS].
In this framework the adiabatic theorem gives a lot of qualitative informations about the dynamics of the system:
a) Suppose that u = (u 1 , u 2 ) : [0, 1] → U is a path such that λ j (u(t)) is simple for every t ∈ [0, 1].
For every ε > 0 let us consider the reparametrization
u ε = (u ε 1 , u ε 2 ) : [0, 1/ε] → U defined as (u ε 1 (t), u ε 2 (t)) := (u 1 (εt), u 2 (εt)).
Then, the solution ψ^ε of the equation
i (d/dt)ψ = H(u^ε(t))ψ = (H_0 + u_1^ε(t)H_1 + u_2^ε(t)H_2) ψ   (2.28)
with initial state ψ^ε(0) = φ_j(u(0)) satisfies ‖ψ^ε(1/ε) - e^{iθ} φ_j(u^ε(1/ε))‖ < C_j ε.
So, if the control u is slowly varying, the system follows the eigenvector φ_j(u) of eigenvalue λ_j(u) with an error of order ε; a numerical illustration of this behaviour is sketched after item b) below. This is a classical result in quantum mechanics that goes back to Kato [Ka], [Teu]. The constant C_j depends on the distance between λ_j and the closest of the remaining eigenvalues.
b) Now suppose we design a path u = (u_1, u_2) : [0, 1] → U passing once through a conical intersection ū_j = u(t*). The qualitative behaviour of the system undergoing this dynamics is the following: starting from the eigenspace of eigenvalue λ_j and traversing the path adiabatically, the system follows the eigenstate φ_j(u) until time t*. During the passage through the conical intersection there is a non-zero probability that the system jumps into the eigenspace of eigenvalue λ_{j+1}, i.e. a population transfer between two energy levels of the system is realized. So, conical intersections can be used as "stairs" to move population between energy levels of the spectrum. In particular the population transferred to the higher level depends on the angle between the ingoing and outgoing velocity vectors of the path at the point of conical intersection [BCMS].
Passing through the intersection with zero angle causes a complete transfer of population to the higher level, i.e. if u ∈ C¹, for every ε > 0 the solution ψ^ε of equation (2.28) with initial state ψ^ε(0) = φ_j(u(0)) satisfies
‖ψ^ε(1/ε) - e^{iθ} φ_{j+1}(u^ε(1/ε))‖ < C_j √ε.
Choosing special paths we can improve the latter estimate, obtaining an error of higher order, C_j ε.
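As anticipated in item a), the following minimal numerical sketch (the Hamiltonian H(u) = u σ_3 + Δ σ_1, the path and the step size are hypothetical choices with a single control) illustrates adiabatic following: integrating (2.28) with a slowly varying control, the state stays close to the instantaneous ground state, with an error that decreases as ε → 0.

```python
import numpy as np
from scipy.linalg import expm, eigh

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def H(u, delta=0.2):                      # hypothetical avoided crossing, minimal gap 2*delta at u = 0
    return u * sz + delta * sx

def adiabatic_error(eps, dt=0.01):
    """Drive u from -1 to 1 on [0, 1/eps]; return 1 - |<ground(u_final), psi>|."""
    u = lambda t: -1.0 + 2.0 * eps * t    # reparametrised path u^eps(t) = u(eps t)
    psi = eigh(H(u(0.0)))[1][:, 0]        # start in the instantaneous ground state
    t, T = 0.0, 1.0 / eps
    while t < T:
        psi = expm(-1j * dt * H(u(t))) @ psi
        t += dt
    ground = eigh(H(u(T)))[1][:, 0]
    return 1.0 - abs(np.vdot(ground, psi))

for eps in (0.2, 0.05, 0.01):
    print(eps, adiabatic_error(eps))      # the error decreases as the sweep gets slower
```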
Combining together these different behaviours we are able to steer the state of a conically connected closed quantum system from an eigenstate of eigenvalue λ 0 to an eigenstate of eigenvalue λ n with an error of order ε. More generally, we are able to steer the dynamics from an eigenstate of eigenvalue λ j to an arbitrary superposition of eigenstates. The main theorem is the following.
Theorem 2.46 ([BCMS]). Let Σ(u) = {λ_1(u), ..., λ_n(u)} be an isolated portion of the spectrum of H(u). For every j = 1, ..., n - 1 let ū_j ∈ U be a conical intersection between λ_j and λ_{j+1}, and assume λ_j(u) simple for u ≠ ū_{j-1}, ū_j. Given u_0, u_1 ∈ U such that Σ(u_0), Σ(u_1) are non-degenerate, ψ_0 ∈ Φ(u_0) and ψ_1 = Σ_{j=1}^n p_j φ_j(u_1) with ‖ψ_1‖ = 1, then there exist C > 0 and a continuous path γ : [0, 1] → U with γ(0) = u_0, γ(1) = u_1 such that for every ε > 0
‖ψ^ε(1/ε) - Σ_{j=1}^n p_j e^{iθ_j} φ_j(u_1)‖ < Cε
where ψ^ε is the solution of (2.28) with u^ε(t) = γ(εt) and initial state ψ^ε(0) = ψ_0.
The latter result implies exact controllability in the case of a finite-dimensional Hilbert space and approximate controllability otherwise (see Lemmas 9 and 14 in [BGRS]).
Chapter 3
General theory of open quantum systems
This chapter is an introduction to the formalism that describes the evolution of open quantum systems. The crucial point is to view the dynamics at the level of states, which are a particular class of observables. After a brief review of the Heisenberg picture of quantum mechanics, we will discuss the fundamental properties of maps between state spaces. This will finally lead to the definition of quantum dynamical semigroups, whose generators are the objects that we will use to treat open quantum systems.
One-parameter groups on Hilbert spaces
The Schrödinger equation (2.11), as we have seen in the previous chapter, determines the evolution of the wavefunctions ψ ∈ H. The operator-valued function U : t → e^{iHt/ℏ} is a strongly continuous one-parameter group of unitary transformations according to the following definition.
Definition 3.1. If B is a Banach space, a one-parameter group on B is a family {T_t}_{t∈R} of bounded linear operators on B satisfying T_0 = 1 and T_t T_s = T_{t+s} for every t, s ∈ R. The group is called strongly continuous if
lim_{t→0} ‖T_t φ - φ‖ = 0, ∀φ ∈ D, with D a dense linear subspace of B.
Given a strongly continuous group T t the generator of the group is defined as
Aψ = lim t→0 t -1 (T t ψ -ψ) (3.1)
and the domain D(A) of A is the set of all ψ ∈ B such that the above limit exists.
The strongly continuous property implies that A is a densely defined closed linear operator.
In Hilbert spaces there exists a correspondence between self-adjoint operators and strongly continuous one-parameter unitary group. This correspondence is given by the functional calculus and the Stone's theorem [RS 1 , Chap.VIII]. Let us recall briefly the main results.
Theorem 3.2 ( [RS 1 ] Thm VIII.7 ). Let H be a self-adjoint operator on H and define U (t) = e itH by means of the functional calculus [RS 1 , Thm.VIII.5]. Then U (t) is a strongly continuous one-parameter unitary group. Moreover,
a) For ψ ∈ D(H), t^{-1}(U(t)ψ - ψ) → iHψ as t → 0. b) If lim_{t→0} t^{-1}(U(t)ψ - ψ) exists then ψ ∈ D(H).
The converse result is the following.
Theorem 3.3 ( [RS 1 ] Thm VIII.8 ).
Let U (t) be a strongly continuous one-parameter unitary group on a Hilbert space H. Then, there is a self-adjoint operator H on H so that U (t) = e itH .
Observables and Heisenberg picture
Let A be a self-adjoint operator on a Hilbert space H endowed with a Hermitian product ⟨•, •⟩. We will refer to such an operator as an observable. The spectrum σ(A) represents the values that the operator can assume. Given a Hamiltonian H on H (also an operator itself) and the evolution generated by the Schrödinger equation, the expected value of the observable A at time t ∈ R is defined as the real value
⟨A⟩_t = ⟨ψ(t), Aψ(t)⟩ = ⟨e^{-itH/ℏ}ψ_0, A e^{-itH/ℏ}ψ_0⟩,   (3.2)
where ψ_0 is the initial datum. Consider now a family of self-adjoint operators A(t), t ∈ R (to avoid technicalities we refer to [Ka] for the minimal assumptions that one should impose on an operator-valued function A : t ↦ A(t)). Let us define the operator
A_H(t) := e^{itH/ℏ} A(t) e^{-itH/ℏ}   (3.3)
to which we will refer as Heisenberg representation of A at time t. It immediately follows that
⟨A(t)⟩_t = ⟨A_H(t)⟩_0.
This suggests that instead of considering the evolution of wavefunctions one can alternatively consider the evolution of operators. In fact, at least formally, the operator A H (t) satisfies
(d/dt)A_H(t) = (∂A/∂t)_H + (1/iℏ)[A_H(t), H_H(t)],   (3.4)
known as the Heisenberg equation [Co 2], [Hall].
Observe that if the operator A does not depend on t, the previous eq.( 3.4) reads
(d/dt)A = (1/iℏ)[A, H].   (3.5)
It is easy to verify that the solution to the previous equation is
A(t) = e^{itH/ℏ} A e^{-itH/ℏ}.   (3.6)
States
A special class of observables is of particular importance. Let us recall some preliminary definitions in order to introduce this class.
Let A be a positive linear operator on a separable Hilbert space H. We define tr(A) as the possibly infinite series Σ_n ⟨φ_n, Aφ_n⟩, where {φ_n} is any orthonormal basis of H (the value does not depend on the chosen basis). An operator A is called trace-class if tr|A| < ∞, and the set of trace-class operators is denoted T(H). The set T(H) is a subset of the set of compact operators Com(H) and is a Banach space with the norm
‖A‖_1 = tr|A|,   (3.8)
which satisfies the relation ‖A‖ ≤ ‖A‖_1 [RS 1, Thm. VI.20]. We recall that a sequence {A_n} ⊂ B(H) is said to be weakly convergent to A if lim_{n→∞} ⟨φ, A_n ψ⟩ = ⟨φ, Aψ⟩ for all φ, ψ ∈ H, and we will denote the weak limit w.lim_{n→∞} A_n = A. Similarly, we say that {A_n} ⊂ B(H) converges ultraweakly to A if lim_{n→∞} tr(A_n ρ) = tr(Aρ) for every ρ ∈ T(H). Ultraweak convergence implies weak convergence, but on norm-bounded sequences they coincide. In particular they coincide if A_n → A ∈ B(H).
As a vector space, T(H) is stable under composition with bounded linear operators and under adjunction, i.e. if A ∈ T(H), B ∈ B(H) then AB, BA, A* ∈ T(H); namely, T(H) is a *-ideal of B(H). Moreover, ‖AB‖_1 ≤ ‖A‖_1 ‖B‖ and ‖BA‖_1 ≤ ‖B‖ ‖A‖_1.
This allows one to define for every A ∈ T (H) a map
φ A : B → tr(AB), (3.9)
which is a bounded linear functional on B(H). Analogously, for every B ∈ B(H) the map
ℓ_B : A → tr(AB)   (3.10)
is a bounded linear functional on T(H). These types of maps represent special classes of functionals, as stated by the next theorem.
Theorem 3.5 ([RS 1, Thm. VIII.26]). The map φ : A ↦ φ_A is an isometric isomorphism of T(H) onto Com(H)*. The map ℓ : B ↦ ℓ_B is an isometric isomorphism of B(H) onto T(H)*.
It is also possible to define subspaces of B(H), T(H) and Com(H) between which the above dualities still hold. We will denote by B_s(H), T_s(H) and Com_s(H) the sets of bounded, trace-class and compact self-adjoint operators. It is still true that T_s(H) = Com_s(H)* and B_s(H) = T_s(H)*. We will denote by B_s(H)^+, T_s(H)^+ the sets of non-negative self-adjoint bounded and trace-class operators. Observe that
A ∈ T s (H) + ⇔ tr(AB) ≥ 0 ∀B ∈ B s (H) + , and conversely B ∈ B s (H) + ⇔ tr(AB) ≥ 0 ∀A ∈ T s (H) + . (3.11)
Definition 3.6. The state space of a Hilbert space H is defined as the Banach space T_s(H) with the norm ‖•‖_1. The states are defined as self-adjoint non-negative trace-class operators of trace one. States are also called mixed states or density matrices.
By the spectral theorem each state ρ admits a decomposition
ρ = Σ_{n∈N} λ_n |ψ_n⟩⟨ψ_n|,   λ_n ≥ 0 for all n ∈ N,   Σ_{n∈N} λ_n = 1,   (3.12)
where the ψ_n ∈ H have norm one (it is always possible to find a decomposition in which {ψ_n}_{n∈N} is an orthonormal basis).¹ A state is said to be pure if tr(ρ) = tr(ρ²), hence if and only if ρ = |ψ⟩⟨ψ| for some ψ ∈ H.
¹ We make use hereafter of the standard Dirac notation for elements of a Hilbert space H endowed with a Hermitian product S : H × H → C [Hall, Sect. 3.12], [Co1, Sect. II.B]. We denote an element ψ ∈ H by the symbol |ψ⟩ (ket). We denote by the symbol ⟨φ| (bra) the element of the dual space H* defined by ⟨φ| : H → C, ψ ↦ S(φ, ψ), namely the dual element of |φ⟩ through the natural isomorphism between H and its dual space H*. Therefore we adopt the notation ⟨•|•⟩ for the Hermitian product. In this way, given two elements |ψ⟩, |φ⟩ ∈ H, the complex number S(φ, ψ) is obtained by evaluating the functional ⟨φ| on |ψ⟩, i.e. ⟨φ|ψ⟩ = S(φ, ψ). Conversely, the "exterior product" |ψ⟩⟨φ| denotes the linear operator |ψ⟩⟨φ| : H → H, |ξ⟩ ↦ |ψ⟩⟨φ|ξ⟩.
The definition of state generalizes the concept of wavefunction in the following sense. For each initial state ψ_0 ∈ H its evolution is given by the solution of the
Schrödinger equation, i.e. by the action of the one-parameter group e^{-itH/ℏ} on ψ_0. If we consider the pure state ρ_0 = |ψ_0⟩⟨ψ_0|, it evolves according to eq. (3.5) as
ρ(t) = e^{-iHt/ℏ} ρ_0 e^{iHt/ℏ} = e^{-iHt/ℏ} |ψ_0⟩⟨ψ_0| e^{iHt/ℏ} = |e^{-iHt/ℏ}ψ_0⟩⟨e^{-iHt/ℏ}ψ_0|   (3.13)
which is exactly the pure state associated with ψ(t) = e -iHt/ ψ 0 . Moreover, each expected value of an observable A can be computed by means of ρ(t)
⟨A⟩_t = ⟨ψ(t), Aψ(t)⟩ = tr(ρ(t)A).   (3.14)
Instead, if the system is prepared in a statistical mixture of states ψ_k ∈ H, each one with probability p_k ∈ [0, 1] such that Σ_k p_k = 1, its initial state is described by
ρ_0 = Σ_k p_k |ψ_k⟩⟨ψ_k|.   (3.15)
In fact the expected value of the observable
P_k := |ψ_k⟩⟨ψ_k| on ρ_0 is ⟨P_k⟩_0 = tr(ρ_0 P_k) = p_k,
which means that the system has probability p_k of being in the state ψ_k. Even if this state is not pure, its evolution is still given by eq. (3.6) and the expectation of observables by
⟨A⟩_t = tr(ρ(t)A) = Σ_k p_k tr(P_k(t)A) = Σ_k p_k tr(e^{-iHt/ℏ} |ψ_k⟩⟨ψ_k| e^{iHt/ℏ} A).   (3.16)
Thus the expectation of an observable A on the state ρ(t) is an average of the expectations of A on the wavefunctions ψ k (t), weighted as the initial mixture was.
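For a finite-dimensional illustration of (3.14)-(3.16), the following minimal sketch (the two vectors and the weights are hypothetical) builds the density matrix of a statistical mixture and checks that tr(ρA) coincides with the weighted average of the expectations on the individual wavefunctions.

```python
import numpy as np

# Hypothetical statistical mixture of two (non-orthogonal) unit vectors in C^2.
psi1 = np.array([1.0, 0.0], dtype=complex)
psi2 = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2)
p = [0.7, 0.3]

rho = sum(pk * np.outer(psi, psi.conj()) for pk, psi in zip(p, (psi1, psi2)))   # eq. (3.15)

A = np.array([[0, 1], [1, 0]], dtype=complex)           # an observable (sigma_x)

expect_from_rho = np.trace(rho @ A).real                 # tr(rho A), eq. (3.14)
expect_weighted = sum(pk * np.vdot(psi, A @ psi).real    # weighted average of <psi_k, A psi_k>
                      for pk, psi in zip(p, (psi1, psi2)))

print(np.isclose(expect_from_rho, expect_weighted))      # True
print(np.trace(rho).real, np.allclose(rho, rho.conj().T))  # trace one, self-adjoint
```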
Operations on state spaces
As pointed out in the previous section, the evolution of the state space of an Hilbert space H is sufficient to recover all the information about observable quantities, whatever the initial state of the system is. For this reason we are interested in studying general maps between state spaces, or elsewhere known as operations.
The map T_t : ρ → e^{-iHt/ℏ} ρ e^{iHt/ℏ},   (3.17)
which we have already encountered, is a bounded linear map on T_s(H) which is also trace preserving, i.e. tr(T_t(ρ)) = tr(ρ) (by the cyclicity of the trace), and positive according to the following definition.
Definition 3.7. Let S : B(H 2 ) → B(H 1 ) be a linear map. We will say that S is
positive if A ≥ 0 implies S(A) ≥ 0. Moreover, a positive linear map is called normal if for each sequence {A n } n∈N ⊂ B(H 2 ) such that w. lim n→∞ A n = A ∈ B(H 2 ) then w. lim n→∞ S(A n ) = S(A).
Moreover, a series of measurements on a quantum system with Hilbert space H can be seen as a positive bounded linear map T on T_s(H) (see [Da, Sect. 2.2.1]) which generally satisfies 0 ≤ tr(T(ρ)) ≤ tr(ρ), ∀ρ ∈ T_s(H)^+.
(3.18)
For these reasons we are interested in classifying bounded linear maps on state spaces which also satisfy (3.18). The following results go in that direction.
Lemma 3.8 ( [Da, Lemma 2.2.2] ). If T : T s (H 1 ) → T s (H 2 ) is a positive linear map then the adjoint T * : B s (H 2 ) → B s (H 1 ) is a positive linear normal map. Moreover, 0 ≤ T * (1) ≤ 1 (3.19) if and only if 0 ≤ tr(T (ρ)) ≤ tr(ρ), ∀ρ ∈ T s (H 1 ) + .
Every normal positive linear map S :
B_s(H_2) → B_s(H_1) is the adjoint of a unique positive linear map T : T_s(H_1) → T_s(H_2).
Proof. If A ∈ B_s(H_2) and ρ ∈ T_s(H_1), by Thm. 3.5, T*(A) is the unique element of B_s(H_1) representing the functional ρ → tr(AT(ρ)), i.e. such that tr(T*(A)ρ) = tr(AT(ρ)) for all A ∈ B_s(H_2), ρ ∈ T_s(H_1). Moreover, since tr(T*(A)ρ) = tr(AT(ρ)) ≥ 0 for all A ∈ B_s(H_2)^+, ρ ∈ T_s(H_1)^+, then T*(A) ≥ 0 (by (3.11)). If {A_n}_{n∈N} ⊂ B(H) converges to A ∈ B(H) weakly, then A_n converges to A ultraweakly, so tr(T*(A_n)ρ) = tr(A_n T(ρ)) → tr(AT(ρ)) = tr(T*(A)ρ), which means that T*(A_n) converges to T*(A) ultraweakly; then T*(A_n) → T*(A) weakly and T* is normal. This proves one implication of the one-to-one correspondence stated above. The converse descends again from Thm. 3.5.
Remark 3.9. The condition (3.19) must be understood in the following sense. The operation T : T s (H 1 ) → T s (H 2 ) acts on density matrices ρ which represent ensembles according to (3.15). So T (ρ) contains information about the new distribution of the mixture as well as the form of the new state. In fact, from (3.15) we get
T(ρ) = Σ_k p_k T(|ψ_k⟩⟨ψ_k|),
and, normalizing each state of the new mixture,
T(ρ) = Σ_k p_k tr(T(|ψ_k⟩⟨ψ_k|)) · T(|ψ_k⟩⟨ψ_k|) / tr(T(|ψ_k⟩⟨ψ_k|)),
from which we see that
tr(T(ρ)) = Σ_k p_k tr(T(|ψ_k⟩⟨ψ_k|) 1) = Σ_k p_k tr(|ψ_k⟩⟨ψ_k| T*(1)).
Thus the map T*(1), commonly called the effect, determines the probability of transmission of a given state but not its form, which is given by T. Since 0 ≤ T*(1) ≤ 1, then 0 ≤ p_k tr(T(|ψ_k⟩⟨ψ_k|)) ≤ p_k ‖T*(1)‖ ≤ p_k.
An interesting case is when a linear positive map transforms pure states in pure states.
Definition 3.10. Let T : T s (H 1 ) → T s (H 2 ) be a positive linear map. We say that
T is pure if T (ρ) ∈ T s (H 2 ) + is a pure element whenever ρ ∈ T s (H 1 ) + is pure.
Those type of maps admit a simple classification.
Theorem 3.11 ( [Da, Thm 3.1] ). Every pure positive linear map T :
T s (H 1 ) → T s (H 2 ) is of one of the following form: (i) T (ρ) = BρB * (3.20)
where B : H 1 → H 2 is bounded and linear;
(ii)
T (ρ) = Bρ * B * (3.21)
where B : H 1 → H 2 is bounded and conjugate linear;
(iii)
T (ρ) = tr(ρB) |ψ ψ| (3.22)
where B ∈ B(H 1 ) + and ψ ∈ H 2 .
In cases (i) and (ii) the operator B is uniquely determined up to a constant of modulus one.
This results gives a lot of information about invertible linear positive maps. In fact, if T :
T s (H 1 ) → T s (H 2 ) has a positive inverse T -1 , then both T ,T -1 are pure. Moreover since T is pure T (ρ) = T (ρ) * .
Thus the following corollary is immediate.
Corollary 3.12. Let T : T s (H) → T s (H) a positive linear map with positive inverse. Let also T be trace preserving. Then there exists a unitary or antiunitary map U on H such that
T (ρ) = U ρU * .
By duality a similar results holds for linear positive maps S : B s (H) → B s (H).
Corollary 3.13. Let S : B s (H) → B s (H) a positive linear map with positive inverse.
Let also S be unital, i. e. S(1) = 1. Then there exists a unitary or antiunitary map
U on H such that S(A) = U AU * , A ∈ B s (H).
There is a complex linear extension of S to B(H) which is either an algebra automorphism or an algebra antiautomorphism.
To conclude the section we recall that the map (3.17) happens to be a linear positive map with positive inverse, so {T t } t∈R is a strongly continuous one-parameter group of those maps. The following theorem states that there are no other type of such groups.
Theorem 3.14. Let T_t : T_s(H) → T_s(H) be a strongly continuous one-parameter group of positive linear maps such that
tr(T t (ρ)) = tr(ρ), ∀ρ ∈ T s (H).
Then there exists a self-adjoint operator H on H such that
T t (ρ) = e -iHt/ ρe iHt/ , ∀t ∈ R.
Dynamical semigroups
The assumption of having a closed system, namely a Hilbert space H with a state dynamics T_t : T_s(H) → T_s(H) that possesses the group properties and moreover preserves probabilities, leads us to evolutions of Hamiltonian type (Thm. 3.14). That means that the self-adjoint operator H on H, which determines the state evolution through (3.17), is completely determined by the dynamics T_t and does not depend on any parameter or external factor.
For these reasons, one is led to ask whether an evolution on the state space can in principle be related to a Hamiltonian on a larger Hilbert space. This Hilbert space should take into account all the parts of the environment that affect the dynamics of the system, as an ideal quantization of the external world. We will call such a type of system an open quantum system.
Let H be the Hilbert space of the system, H ε the Hilbert space of the environment and consider the tensor product H ⊗ H ε , the total space. Let H be a self-adjoint operator on H⊗H ε and suppose that the initial state of the total system is factorized, namely ρ ⊗ ρ ε ∈ T (H) ⊗ T (H ε ). Therefore the evolution of the total system is
e -iHt/ (ρ ⊗ ρ ε )e iHt/ ,
and generally this is no longer a factorized state (except in the absence of interaction, which is a trivial case). However, there exists a map on T(H ⊗ H_ε) that acts as a projection onto T(H), which means that it gives a sort of reduced state in the following sense. Let us define a map tr_ε : T(H ⊗ H_ε) → T(H) such that on factorized states tr_ε(ρ ⊗ σ) = tr_{H_ε}(σ)ρ; then
tr_H(A tr_ε(ρ ⊗ σ)) = tr_{H⊗H_ε}[(A ⊗ 1)(ρ ⊗ σ)],
(3.23) for each A ∈ B(H), where tr H is the trace on the Hilbert space H. Observe that eq. (3.23) shows that tr ε acts naturally on factorized states, reducing a state on tr H⊗Hε to a partial one which is compatible with the composition with tr H . That map could be extended by duality. The application
A → A ⊗ 1 is a positive normal linear map from B(H) to B(H ⊗ H ε ), then by Lemma 3.8 tr H (A tr ε (ρ)) = tr H⊗Hε [(A ⊗ 1)ρ]
defines a positive normal linear map for every ρ ∈ T(H ⊗ H_ε), which coincides with the one defined above on factorized states. We will call tr_ε the partial trace over H_ε. According to the argument above, we set
Λ t (ρ) := tr ε e -iHt/ (ρ ⊗ ρ ε )e iHt/ (3.24)
which is a map on T (H) that describes the partial evolution of the subsystem H given an initial environment state ρ ε .
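As a concrete finite-dimensional illustration (a minimal numerical sketch, not part of the analysis above; the dimensions, the operator H and the environment state below are arbitrary choices), the partial trace and the reduced dynamics (3.24) can be realized as follows.

```python
import numpy as np
from scipy.linalg import expm

def partial_trace_env(rho, dim_s, dim_e):
    # trace out the second (environment) tensor factor of a (dim_s*dim_e)-dim density matrix
    r = rho.reshape(dim_s, dim_e, dim_s, dim_e)
    return np.einsum('aibi->ab', r)

rng = np.random.default_rng(0)
dim_s, dim_e = 2, 3                                   # toy system and environment
A = rng.standard_normal((dim_s * dim_e,) * 2) + 1j * rng.standard_normal((dim_s * dim_e,) * 2)
H = A + A.conj().T                                    # a self-adjoint operator on H ⊗ H_eps

rho_s = np.diag([0.7, 0.3]).astype(complex)           # system state
v = rng.standard_normal(dim_e) + 1j * rng.standard_normal(dim_e)
v /= np.linalg.norm(v)
rho_e = np.outer(v, v.conj())                         # pure environment state

def Lambda_t(rho, t):
    # eq. (3.24): reduced evolution of the factorized initial state rho ⊗ rho_e
    U = expm(-1j * H * t)
    total = U @ np.kron(rho, rho_e) @ U.conj().T
    return partial_trace_env(total, dim_s, dim_e)

out = Lambda_t(rho_s, 0.7)
print(np.trace(out).real)                 # trace is preserved (= 1, since tr rho_e = 1)
print(np.allclose(out, out.conj().T))     # and the output is self-adjoint
```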
The concept of partial trace can be translated into a similar one for observables. Given an environment state
ρ ε the map ρ → ρ ⊗ ρ ε is positive linear from T (H) into T (H ⊗ H ε ). Therefore by Lemma 3.8 tr H (E ρε (B)ρ) = tr H⊗Hε [B(ρ ⊗ ρ ε )] , ρ ∈ T (H) (3.25)
defines a linear map E ρε : B(H ⊗ H ε ) → B(H) which is positive, normal and such that E ρε (A ⊗ 1) = A tr(ρ ε ).
We are now able to define the evolution of observables X ∈ B(H) given the initial environment state ρ_ε, Λ*_t(X) := E_ρε(e^{iHt/ℏ}(X ⊗ 1)e^{-iHt/ℏ}).
(3.26)
Remark 3.15. The maps Λ_t : T(H) → T(H) and Λ*_t : B(H) → B(H) defined in (3.24) and (3.26) are dual to each other. In fact
tr_H(E_ρε(e^{iHt/ℏ}(X ⊗ 1)e^{-iHt/ℏ}) ρ) = tr_{H⊗H_ε}(e^{iHt/ℏ}(X ⊗ 1)e^{-iHt/ℏ}(ρ ⊗ ρ_ε)) = tr_{H⊗H_ε}((X ⊗ 1)e^{-iHt/ℏ}(ρ ⊗ ρ_ε)e^{iHt/ℏ}) = tr_H(X tr_ε(e^{-iHt/ℏ}(ρ ⊗ ρ_ε)e^{iHt/ℏ})) = tr_H(X Λ_t(ρ)).
The map Λ_t is trace preserving if and only if tr ρ_ε = 1. The map Λ*_t is unital if and only if tr ρ_ε = 1; in fact Λ*_t(1) = 1 tr ρ_ε (as Lemma 3.8 stated).
In conclusion, given a Hilbert space H ⊗ H_ε and a self-adjoint operator H acting on this space, there exists a way to define the state and observable evolutions of H coherently, which means by maps that: reduce the total dynamics on T(H ⊗ H_ε) and B(H ⊗ H_ε) to T(H) and B(H) preserving fundamental properties (positivity, normality, etc.); are compatible with the composition with (3.9),(3.10). These reduction maps suggest that the dynamics we defined cannot be reversible.
Those reasons lead us to the following Definition 3.16. Given a Hilbert space H we define a dynamical semigroup to be a one-parameter family of linear operators T_t : T(H) → T(H) for each t ≥ 0, satisfying (i) T_t positive for each t ≥ 0;
(ii) T t trace preserving for each t ≥ 0;
(iii) T 0 = 1 T (H) , T s T t ρ = T t+s ρ for all s, t ≥ 0;
(iv) lim t→0 T t ρ -ρ 1 = 0 for all ρ ∈ T (H).
Remark 3.17. The Hamiltonian evolution law (3.17) is a dynamical semigroup.
The one-parameter family Λ_t defined in (3.24) satisfies (i),(ii) and is continuous, but it is not generally a semigroup (see [Da, Sect. 10.4]).
It is useful to state an equivalent definition for the dual semigroup. Definition 3.18 ([Li], [Pa, Sect. III.30]). Given a Hilbert space H we will call a dynamical semigroup (in the Heisenberg picture) a one-parameter family of linear operators S_t : B(H) → B(H), t ≥ 0, satisfying (a) S_t(X) ≥ 0 for all X ≥ 0 and t ≥ 0;
(b) for all t ≥ 0 S t (1) = 1, S t (X) ≤ X , S t (X) * = S t (X * ) and w. lim n→∞ S t (X n ) = S t (X) whenever w. lim n→∞ X n = X;
(c) S 0 = 1 B(H) , S t S s = S t+s for all t, s ≥ 0;
(d) lim t→0 S t (X) -X 1 = 0 for all X ∈ B(H);
We will call the dynamical semigroup uniformly continuous if
lim t→0 sup X ≤1 S t (X) -X = 0 for all X ∈ B(H).
For a dynamical semigroup there exists a (generically unbounded) linear operator L : B(H) → B(H) defined on an ultraweakly dense domain D(L) such that lim_{t→0} ‖L(X) − t^{−1}(S_t(X) − X)‖ = 0, X ∈ D(L).
L is called the generator of the semigroup. If the dynamical semigroup is uniformly continuous, the generator L is a bounded linear operator on B(H) and lim_{t→0} ‖L − t^{−1}(S_t − 1)‖ = 0. The map (3.24) satisfies (a),(b),(d) but is generally not a semigroup. For other examples we refer to [Pa]. Nevertheless, one could try to understand under which conditions a dynamical semigroup S on B(H) admits a representation of type (3.26). It turns out that a condition stronger than positivity is needed, as will be discussed in the next Section.
Complete positivity
Consider a quantum system, whose pure states are described by H, in a well defined region of space. Suppose that there exists a particle with n degrees of freedom localized very far away from the first system, such that they have no interaction with each other. The Hilbert space of the total system is H ⊗ C n and if S is an operation on the system H that does not affect the distant particle, on factorized observables one has
S (n) (A ⊗ B) = S(A) ⊗ B, A ∈ B(H), B ∈ B(C n ) (3.27)
and S^{(n)} extends to a linear map on B(H ⊗ C^n). The physical meaning of this map is clear: it is the trivial extension of an operation S on H to any larger system that includes H, but in which it is still isolated. This inclusion must not change the physical meaning of our description, so S^{(n)} should be a positive linear map on B(H ⊗ C^n) if S is positive linear on B(H). However, this is not true: if S is a positive linear map, S^{(n)} is not necessarily positive.
Example 3.20. Consider an Hamiltonian H 0 on H, the one-parameter unitary group e iH 0 t/ and the dynamical semigroup S t (A) = e iH 0 t/ Ae -iH 0 t/ . Then, for every n ∈ N the map
S (n) t on B(H ⊗ C n ) defined on factorized states S (n) t (A ⊗ B) = S t (A) ⊗ B = e i(H 0 ⊗1)t/ (A ⊗ B)e -i(H 0 ⊗1)t/ , extends to the map S (n) t (X) = e i(H 0 ⊗1)t/ Xe -i(H 0 ⊗1)t/ . Therefore S (n) t
is positive for all n ∈ N.
To see that not every positive map is completely positive, consider the transposition map X → X^T acting on B(C^n): it is positive, but its trivial extension to B(C^n ⊗ C^n) (the partial transposition) maps the projection onto a maximally entangled vector to an operator with negative eigenvalues, hence it is not positive.
The above considerations should convince us that the positivity is not enough for an operation to have physical meaning. Definition 3.21. We will call a positive linear map S : B(H) → B(H) completely positive if for every n ∈ N the map
S (n) : B(H) ⊗ B(C n ) → B(H) ⊗ B(C n ) defined by (X i,j ) n i,j=1 → (S(X i,j )) n i,j=1 , X i,j ∈ B(H), (3.28)
is positive.
Remark 3.22. The definition of S (n) given in (3.27),(3.28) are equivalent given the isomorphism
B(H ⊗ C n ) ∼ = B(H) ⊗ B(C n ).
Two fundamental results about completely positive maps are crucial to answer the question we asked at the end of previous section.
Theorem 3.23 (Stinespring, [Pa] Thm. III.29.6). Let H_1, H_2 be Hilbert spaces and let S : B(H_2) → B(H_1) be a linear operator satisfying the following conditions:
(a) S is complete positive; (b) S(1) = 1, S(X) * = S(X * ), S(X) ≤ X and w. lim n→∞ S(X n ) = S(X) whenever w. lim n→∞ X n = X;
Then there exists a Hilbert space K, an isometry V :
H 1 → H 2 ⊗ K such that: (•) S(X) = V * (X ⊗ 1)V for all X ∈ B(H 2 ); (••) {(X ⊗ 1)V ψ | X ∈ B(H 2 ), ψ ∈ H 1 } is dense in H 2 ⊗ K;
Conversely, if V : H_1 → H_2 ⊗ K is an isometry, where K is any Hilbert space, then the map S : B(H_2) → B(H_1) defined by S(X) = V*(X ⊗ 1)V satisfies conditions (a),(b).
Theorem 3.24 (Kraus, [Pa] Thm. III.29.6). An operator S : B(H_2) → B(H_1) satisfies conditions (a),(b) of Thm 3.23 if and only if there exist operators L_j : H_1 → H_2, j = 1, 2, ..., such that Σ_j L*_j L_j = 1 is a strongly convergent sum and
S(X) = Σ_j L*_j X L_j, for all X ∈ B(H_2). (3.29)
If dim H_j = n_j < ∞, j = 1, 2, then the number of L_j's can be restricted to be less than or equal to n_1 n_2.
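The following finite-dimensional sketch (purely illustrative; the operators below are randomly generated and then normalized so that Σ_j L*_j L_j = 1, they do not come from the text) shows a map in Kraus form and checks that it is trace preserving on states and completely positive, the latter via positivity of its Choi matrix.

```python
import numpy as np
from scipy.linalg import sqrtm, inv

rng = np.random.default_rng(1)
d, n_kraus = 3, 4
M = [rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d)) for _ in range(n_kraus)]
S = sum(m.conj().T @ m for m in M)
L = [m @ inv(sqrtm(S)) for m in M]                    # now sum_j L_j^* L_j = 1
assert np.allclose(sum(l.conj().T @ l for l in L), np.eye(d))

def T_state(rho):
    # Schrodinger-picture (dual) map: rho -> sum_j L_j rho L_j^*
    return sum(l @ rho @ l.conj().T for l in L)

rho = np.diag(rng.random(d)); rho = rho / np.trace(rho)
print(np.trace(T_state(rho)).real)                    # = 1: trace preservation

# Choi matrix: sum_{ij} E_ij ⊗ T(E_ij); it is PSD iff the map is completely positive
choi = np.zeros((d * d, d * d), dtype=complex)
for i in range(d):
    for j in range(d):
        E = np.zeros((d, d)); E[i, j] = 1.0
        choi[i * d:(i + 1) * d, j * d:(j + 1) * d] = T_state(E)
print(np.linalg.eigvalsh((choi + choi.conj().T) / 2).min() >= -1e-10)   # True
```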
Example 3.25. Consider the map Λ t defined in (3.24) and let
ρ ε = |Ω Ω|, Ω ∈ H ε . Given {φ n } n∈N basis of H ε define the maps E n = 1 ⊗ |Ω φ n |. Notice that n∈N E n ρE * n = tr ε (ρ) ⊗ |Ω Ω| (3.30)
for each ρ = ρ ⊗ σ ∈ T (H) ⊗ T (H ε ), therefore it extends, by a density argument, to all T (H ⊗ H ε ). Observe that the linear map π :
H → H ⊗ H ε , π(ψ) = ψ ⊗ Ω has as dual map π * : H ⊗ H ε → H, π * (ψ ⊗ φ)
= Ω , φ ψ and they are such that
π * π = 1, πρπ * = ρ ⊗ |Ω Ω| (3.31)
for each ρ ∈ T (H). From (3.30),(3.31) we obtain that n∈N
E n ρE * n = tr ε (ρ) ⊗ |Ω Ω| = π tr ε (ρ)π * then tr ε (ρ) = π * n∈N E n ρE * n π.
By the latter equality we can write
Λ t (ρ) = tr ε e -iHt/ (ρ ⊗ |Ω Ω|)e iHt/ = π * n∈N E n e -iHt/ (ρ ⊗ |Ω Ω|)e iHt/ E * n π = π * n∈N E n e -iHt/ πρπ * e iHt/ E * n π = n∈N K n ρK * n
where K_n = π* E_n e^{−iHt/ℏ} π. Its dual map Λ*_t is obtained by duality,
Λ*_t(X) = Σ_{n∈N} K*_n X K_n.
It is easy to verify that Σ_{n∈N} K*_n K_n = 1 strongly, so by the Kraus theorem Λ*_t, and hence Λ_t, is a completely positive map in case ρ_ε = |Ω⟩⟨Ω| is a pure state.
Quantum dynamical semigroups
In this section we conclude the discussion begun in Sect. 3.5 about open quantum systems. We will see that an observable evolution of type (3.26) is peculiar to a special class of dynamical semigroups, namely those for which positivity is strengthened to (a') S_t is completely positive for all t ≥ 0. A dynamical semigroup satisfying (a') will be called a quantum dynamical semigroup.
As we noticed before, the uniform continuity of the semigroup implies the existence of the generator, which is a linear bounded operator on B(H). Then S t (X) = e tL (X) for all t ≥ 0. Our first goal in this section is to find a general form for the generators of uniformly continuous quantum dynamical semigroups. Then we will see that each one of them is of the form (3.26).
Generators of uniformly continuous quantum dynamical semigroups were classified by Lindblad and by Gorini, Kossakowski and Sudarshan in the '70s. Here we first state a more general formulation of these results.
Theorem 3.27 ( [Pa] Thm. III.30.12 ). An operator L on B(H) is the generator of a uniformly continuous quantum dynamical semigroup if and only if there exists: a Hilbert space K, a bounded linear operator V : H → H⊗K and a bounded self-adjoint linear operator H on H satisfying
(i) L(X) = i[H, X] + 1 2 {2V * (X ⊗ 1)V -V * V X -XV * V }; (ii) the set {(V X -(X ⊗ 1)V )ψ | X ∈ B(H), ψ ∈ H} is dense in H ⊗ K.
From this statement one could recover the results of [Li] and [GKS] in their original (and often more useful) form.
Theorem 3.28 (Lindblad, [Li], [Pa] Thm. III.30.16). An operator L on B(H) is the generator of a uniformly continuous quantum dynamical semigroup if and only if there exist a sequence {L_j} of bounded operators on H such that Σ_j L*_j L_j is strongly convergent and a bounded self-adjoint operator H on H satisfying
L(X) = i[H, X] + (1/2) Σ_j {2L*_j X L_j − L*_j L_j X − X L*_j L_j}. (3.32)
Moreover the sequence {L j } could be chosen such that
(i) The set {⊕_j [L_j, X]ψ | X ∈ B(H), ψ ∈ H} is dense in ⊕_j H;
(ii) tr(ρL j ) = 0 for each j, given a fixed state ρ ∈ T (H);
(iii) If Σ_j |c_j|² < ∞ and c_0 + Σ_j c_j L_j = 0 then c_j = 0 for each j.
Theorem 3.29 (Gorini, Kossakowski, Sudarshan [GKS]). An operator L* on B(C^N) is the generator of a continuous quantum dynamical semigroup if and only if it can be expressed in the following form
L*(ρ) = −i[H, ρ] + (1/2) Σ_{i,j=1}^{N²−1} c_{ij}{2L_i ρ L*_j − L*_j L_i ρ − ρ L*_j L_i} (3.33)
where: {L_j}, j = 1, ..., N²−1, is a sequence of bounded operators on H such that tr(L_j) = 0, tr(L*_i L_j) = δ_{ij}; H is a bounded self-adjoint operator on H such that tr(H) = 0; C = {c_{ij}}_{i,j=1}^{N²−1} is a positive semidefinite complex matrix.
Remark 3.30. Here we stated the result for the dual generator. By duality one can recover the same form of the dual generator from equation (3.32) (observe that just one * moved). In this statement the operators {L j } are required to form an orthonormal basis of the space of trace zero operators in B(C N ).
Remark 3.31. The result of Gorini, Kossakowski and Sudarshan is the finite dimensional equivalent of Lindblad theorem. Given a unitary matrix
U = {u ij } N 2 -1 i,j=1
consider the base change
L j = (U L) j = N 2 -1 k=1 u jk L k , then -i[H, ρ] + 1 2 N 2 -1 i,j=1 c ij {2L i ρL * j -L * j L i ρ -ρL * j L i } = = -i[H, ρ] + 1 2 i,j c ij {2( k U * ik L k )ρ( h U * jh L h ) * -( h u hj L h ) * ( k u ki L k )ρ -ρ( h u hj L * h )( k u ki L k )} = -i[H, ρ] + 1 2 k,h i,j u ki c ij u hj {2L k ρL * h -L * h L k ρ -ρL * h L k } = -i[H, ρ] + 1 2 k,h j (U C) kj u hj {2L k ρL * h -L * h L k ρ -ρL * h L k } = -i[H, ρ] + 1 2 k,h (U CU T ) kh {2L k ρL * h -L * h L k ρ -ρL * h L k } = -i[H, ρ] + 1 2 k,h (U CU * ) kh {2L k ρL * h -L * h L k ρ -ρL * h L k }
By the positivity of the matrix C it is always possible to find a unitary matrix U such that U CU * is a diagonal matrix
D = {d_{ii}}_{i=1}^{N²−1} and every coefficient d_{ii} is non-negative. Of course, in this case Σ_j (√(d_{jj}) L_j)*(√(d_{jj}) L_j) is strongly convergent and one obtains the Lindblad form of the generator.
Conversely, Theorem (3.28) says that it is possible to choose the operators L_j with trace zero (choosing ρ = 1/N in (ii)). Condition (iii) guarantees that at most N²−1 operators L_j are different from zero and that they are linearly independent. Then, normalizing the L_j's, i.e. setting L̃_j = tr(L*_j L_j)^{−1/2} L_j so that tr(L̃*_i L̃_j) = δ_{ij}, we recover the GKS form of the generator with the diagonal matrix {tr(L*_j L_j)}_{j=1}^{N²−1} (as usual, by adding a constant to H one can obtain a traceless and equivalent Hamiltonian).
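The reduction described in this remark can be checked numerically. The sketch below (illustrative only; the operators F_i, the positive matrix C and the test state are arbitrary random objects) builds the GKS generator with a non-diagonal C, diagonalizes C, forms the corresponding diagonal Lindblad operators and verifies that the two expressions of the generator agree.

```python
import numpy as np

rng = np.random.default_rng(3)
d, r = 3, 4                                  # Hilbert space dimension, number of operators F_i
F = [rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d)) for _ in range(r)]
B = rng.standard_normal((r, r)) + 1j * rng.standard_normal((r, r))
C = B @ B.conj().T                           # arbitrary positive semidefinite GKS matrix

def gks_generator(rho):
    # sum_{ij} c_ij ( F_i rho F_j^* - (1/2){F_j^* F_i, rho} ), cf. (3.33) with H = 0
    out = np.zeros_like(rho)
    for i in range(r):
        for j in range(r):
            FjFi = F[j].conj().T @ F[i]
            out += C[i, j] * (F[i] @ rho @ F[j].conj().T - 0.5 * (FjFi @ rho + rho @ FjFi))
    return out

dvals, V = np.linalg.eigh(C)                 # C = V diag(d) V^*
K = [np.sqrt(max(dm, 0.0)) * sum(V[i, m] * F[i] for i in range(r)) for m, dm in enumerate(dvals)]

def lindblad_generator(rho):
    out = np.zeros_like(rho)
    for L in K:
        LdL = L.conj().T @ L
        out += L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL)
    return out

rho = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
rho = rho @ rho.conj().T
rho /= np.trace(rho)
print(np.allclose(gks_generator(rho), lindblad_generator(rho)))   # True
```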
The evolution equation associated with the generator of a quantum dynamical semigroup is called the Lindblad equation and reads
Ẋ = L(X) = i[H, X] + (1/2) Σ_j {2L*_j X L_j − L*_j L_j X − X L*_j L_j} (3.34)
for observables X ∈ B(H), and
ρ̇ = −i[H, ρ] + (1/2) Σ_j {2L_j ρ L*_j − L*_j L_j ρ − ρ L*_j L_j} (3.35)
for states ρ ∈ T(H).
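As a minimal numerical sketch (illustrative only; the operator H and the jump operators below are arbitrary matrices), the right-hand side of (3.35) can be coded directly and checked to be traceless and self-adjoint, which is what makes the semigroup trace preserving and positivity compatible.

```python
import numpy as np

def lindblad_rhs(rho, H, Ls):
    # state-picture generator (3.35)
    out = -1j * (H @ rho - rho @ H)
    for L in Ls:
        LdL = L.conj().T @ L
        out += L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL)
    return out

rng = np.random.default_rng(2)
d = 4
A = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
H = A + A.conj().T
Ls = [rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d)) for _ in range(2)]
rho = np.eye(d, dtype=complex) / d

drho = lindblad_rhs(rho, H, Ls)
print(abs(np.trace(drho)) < 1e-12)            # tr(L(rho)) = 0
print(np.allclose(drho, drho.conj().T))       # L(rho) is self-adjoint
```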
We conclude with a theorem of Davies that classifies the form of finite dimensional uniformly continuous quantum dynamical semigroups.
Theorem 3.32 (Davies, [Da] Thm. 9.4.3 ). Let H be a finite-dimensional Hilbert space and S t a uniformly continuous quantum dynamical semigroup on B(H). Then there exist an Hilbert space K, a state ρ = |Ω Ω| on K and a strongly continuous one-parameter semigroup V t of isometries on H ⊗ K such that
S t (X) = E ρ (V * t X ⊗ 1V t )
for all X ∈ B(H) and all t ≥ 0.
Example 3.33 (Damped harmonic oscillator). As an example of an infinite dimensional quantum system we illustrate the damped harmonic oscillator, namely a quantum harmonic oscillator coupled with an environment that stabilizes the average number of excitations of the system. Consider the quantum harmonic oscillator H = ω(a†a + 1/2), where the annihilation operator a and the creation operator a† are defined as
a † = 1 √ 2 (X -iP ) a = 1 √ 2 (X + iP ). (3.36)
Consider the Lindblad operators
L_1 = (γ/2 (η + 1))^{1/2} a†,   L_2 = (γ/2 η)^{1/2} a,   where η = 1/(e^{ωβ} − 1).
A general density operator for this system reads
ρ = (1/2) Σ_{n≤m} (c_{nm} |n⟩⟨m| + c*_{nm} |m⟩⟨n|),   Σ_n c_{nn} = 1.
If ρ is a stationary state for the system it must satisfy 0 = ρ̇ = L(ρ), where L is given by (3.35). Notice that by going to the interaction picture, i.e. performing the coordinate change e^{iHt/ℏ}, we can take H = 0. Then
0 = (γ/2)(η + 1)[2aρa† − a†aρ − ρa†a] + (γ/2)η[2a†ρa − aa†ρ − ρaa†];
inserting the expansion of ρ in the number basis and using a|n⟩ = √n |n−1⟩, a†|n⟩ = √(n+1) |n+1⟩, the right-hand side can be rewritten as a combination of the operators |n⟩⟨m|, and all of its coefficients must vanish.
The solution of the previous equation can be found componentwise by induction on the indices m, n. More precisely, for each pair n, m we project the latter expression on the component along |n⟩⟨m| and impose that it is null, i.e.
⟨n| L(ρ) |m⟩ = 0.
We begin with n = m = 0. The coefficient of the term |0⟩⟨0| is zero if and only if
(η + 1)(c_{1,1} √(0+1) − 0) − (η/2)(c_{0,0} + c_{0,0}) = 0, which implies c_{1,1} = (η/(η + 1)) c_{0,0}.
For n = 0, m ≥ 1 we obtain the condition
(η + 1) c_{1,m+1} √(m+1) − (m(η + 1)/2) c_{0,m} − ((m + 2)η/2) c_{0,m} = 0,
which leads to the recursive relation
c_{1,m+1} = ((η + 1/2)m + η)/((η + 1)√(m+1)) c_{0,m}.
For n = m ≥ 1,
(η + 1)(c_{n+1,n+1}(n + 1) − n c_{n,n}) + η(c_{n−1,n−1} n − (n + 1) c_{n,n}) = 0,
which is
c_{n+1,n+1} = ((2η + 1)n + η)/((η + 1)(n + 1)) c_{n,n} − ηn/((η + 1)(n + 1)) c_{n−1,n−1}. (3.37)
It can easily be shown by induction that the latter equation is satisfied by
c_{n,n} = (η/(η + 1))^n c_{0,0}.
Then, given that Σ_n c_{n,n} = 1, one obtains
c_{0,0} = 1/(η + 1) = 1 − e^{−ωβ},   c_{n,n} = e^{−ωβn}(1 − e^{−ωβ}).
Thus, the population of each level, i. e. c n,n = T r(ρ |n n|), of a stationary state for the damped harmonic oscillator is given by the Boltzmann distribution.
The mean number of quanta in this state is the expectation of the number operator N = a†a: ⟨a†a⟩ = Σ_n n c_{n,n} = η, which is the thermal average. If we define N(t) := Tr(a†a ρ(t)), i.e. the average number of quanta in the state ρ, then it satisfies the equation
Ṅ(t) = Tr(a†a L(ρ(t))),
which reads Ṅ(t) = −γN(t) + γη.
The solution is
N(t) = e^{−γt} N(0) + e^{−γt} ∫_0^t ds e^{γs} γη = e^{−γt} N(0) + η(1 − e^{−γt}).
Thus the average number of quanta of the system approaches, for γt ≫ 1 (which means on the scale of the inverse damping rate), the thermal average η. This happens for every initial datum.
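The relaxation law above can be checked by integrating the master equation on a truncated Fock space. The sketch below is purely illustrative: the truncation dimension, γ, η and the initial Fock state are arbitrary, and the jump operators are taken here as √(γ(η+1)) a and √(γη) a†, a normalization chosen so that the rate equation Ṅ = −γN + γη holds exactly.

```python
import numpy as np
from scipy.integrate import solve_ivp

dim, gamma, eta = 30, 0.5, 1.2
a = np.diag(np.sqrt(np.arange(1, dim)), k=1)     # annihilation operator on the truncated Fock space
adag = a.T
N_op = adag @ a
L1 = np.sqrt(gamma * (eta + 1)) * a              # decay,   rate gamma*(eta+1)
L2 = np.sqrt(gamma * eta) * adag                 # pumping, rate gamma*eta

def rhs(t, y):
    rho = y.reshape(dim, dim)
    out = np.zeros_like(rho)
    for L in (L1, L2):
        LdL = L.T @ L
        out += L @ rho @ L.T - 0.5 * (LdL @ rho + rho @ LdL)
    return out.reshape(-1)

rho0 = np.zeros((dim, dim)); rho0[5, 5] = 1.0    # start in the Fock state |5>
T = 8.0
sol = solve_ivp(rhs, (0.0, T), rho0.reshape(-1), t_eval=np.linspace(0, T, 5), rtol=1e-8, atol=1e-10)

for t, y in zip(sol.t, sol.y.T):
    N_num = np.trace(N_op @ y.reshape(dim, dim))
    N_exact = np.exp(-gamma * t) * 5 + eta * (1 - np.exp(-gamma * t))
    print(f"t={t:4.1f}   <N>={N_num:8.5f}   analytic={N_exact:8.5f}")
```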
Chapter 4
Two-level closed and open quantum systems
In this chapter we will present our approach to the adiabatic controllability problem for an open quantum system. We will focus our attention to the case of a two-level system to illustrate the techniques used in our analysis. First we will introduce a set of coordinates for the state space of a finite dimensional quantum system. This allows to have a vectorial representation of the density matrix called vector of coherence. In these coordinates the Lindblad equation (an operator valued equation) translates into a set of ODEs that we can study as a classical dynamical system.
Vector of coherence
In this section we discuss the Bloch vector representation of density matrices for finite dimensional quantum systems. If H = C^N it is always possible to choose a basis {F_i}_{i=0}^{N²−1} of B(C^N) with F_0 = 1, tr(F_j) = 0 for j ≥ 1 and tr(F*_i F_j) = N δ_{ij} (the generalized Pauli basis). Then every state ρ on C^N admits the decomposition
ρ = (tr(ρ1)/tr(1)) 1 + Σ_{j=1}^{N²−1} (tr(F*_j ρ)/tr(F*_j F_j)) F_j = (1/N) 1 + Σ_{j=1}^{N²−1} (x_j/N) F_j,
where
x j := tr(F * j ρ), j = 0, ..., N 2 -1. (4.1)
With respect to the previous decomposition we will call the coordinate vector x = (x 0 , ..., x N 2 -1 ) the vector of coherence of ρ.
Then, by the spectral properties of density operators, namely σ(ρ) ⊂ [0, 1],
(1/N²) tr(1) + Σ_{j=1}^{N²−1} (|x_j|²/N²) tr(F*_j F_j) = tr(ρ²) ≤ 1 ⟹ Σ_{j=1}^{N²−1} |x_j|² ≤ N(1 − 1/N) = N − 1.
The ball B_N = {x ∈ R^{N²−1} | ‖x‖² ≤ N − 1} is called in this context the Bloch ball.
We choose not to normalize the elements F j 's, but obviously by defining F j = F j / √ N the relation tr(F i * F j ) = δ ij holds, and
ρ = 1 N + N 2 -1 j=1 x j F j , x j = tr(F j * ρ), j = 0, ..., N 2 -1. Then 1 N 2 tr(1) + N 2 -1 j=1 x j 2 = tr(ρ 2 ) ≤ 1 ⇒ N 2 -1 j=1 x j 2 ≤ (1 - 1 N ).
For simplicity of notation, in the following we will make use of the non normalized base. Notice that in the case N = 2, B 2 is the unit ball.
Two-level closed systems
In this section we briefly discuss the evolution of the density operator for a closed two-level quantum system in the Bloch representation. Let H = C² and H ∈ B(C²) be a self-adjoint operator; then H can be decomposed on the basis {1, σ_1, σ_2, σ_3} as
H = (1/2)[tr(H) 1 + 2 Re(h_{01}) σ_1 − 2 Im(h_{01}) σ_2 + (h_{00} − h_{11}) σ_3]. (4.2)
Similarly, every state ρ can be written as
ρ = (1/2)[1 + 2 Re(ρ_{01}) σ_1 − 2 Im(ρ_{01}) σ_2 + (ρ_{00} − ρ_{11}) σ_3],
therefore its vector of coherence x = (x 0 , x 1 , x 2 , x 3 ) is defined as in (4.1), i. e.
x 0 = tr(1ρ) = 1 x 1 = tr(σ 1 ρ) = ρ 01 + ρ 10 x 2 = tr(σ 2 ρ) = -i(ρ 01 -ρ 10 ) x 3 = tr(σ 3 ρ) = ρ 00 -ρ 11 (4.3)
We observe that
x 2 1 + x 2 2 + x 2 3 = 4 Re(ρ 01 ) 2 + 4 Im(ρ 01 ) 2 + (ρ 00 -ρ 11 ) 2 = 4 |ρ 01 | 2 + (ρ 00 -ρ 11 ) 2 ≤ 4ρ 00 ρ 11 + (ρ 00 -ρ 11 ) 2 = (ρ 00 + ρ 11 ) 2 = 1,
and moreover tr(ρ 2 ) = 1 2 (1 + x 2 1 + x 2 2 + x 2 3 ), so tr(ρ) = tr(ρ 2 ) = 1 if and only if x 2 1 + x 2 2 + x 2 3 = 1, i. e. a state ρ is a pure state if and only if its vector of coherence is on the Bloch sphere of radius 1.
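A short sketch of the correspondence (4.3) between a 2×2 density matrix and its vector of coherence, together with the purity identity tr(ρ²) = (1 + ‖x‖²)/2 (purely illustrative; the state below is an arbitrary example):

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

def coherence_vector(rho):
    # x_j = tr(sigma_j rho), cf. (4.3)
    return np.real([np.trace(s @ rho) for s in (s1, s2, s3)])

def density_matrix(x):
    return 0.5 * (np.eye(2) + x[0] * s1 + x[1] * s2 + x[2] * s3)

rho = np.array([[0.7, 0.2 - 0.1j], [0.2 + 0.1j, 0.3]])    # an arbitrary state
x = coherence_vector(rho)
print(x, np.linalg.norm(x) <= 1)                          # inside the Bloch ball
print(np.allclose(rho, density_matrix(x)))                # the correspondence is invertible
print(np.isclose(np.trace(rho @ rho).real, 0.5 * (1 + np.dot(x, x))))   # purity identity
```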
In Bloch coordinates the Heisenberg equation in Hartree units (so that in particular ℏ = 1 hereafter), ρ̇ = −i[H, ρ], (4.4)
translates into the following system of ODEs
ẋ_1 = −(h_{00} − h_{11})x_2 − 2 Im(h_{01})x_3
ẋ_2 = (h_{00} − h_{11})x_1 − 2 Re(h_{01})x_3
ẋ_3 = 2 Im(h_{01})x_1 + 2 Re(h_{01})x_2, (4.5)
which shortly reads ẋ = A(h)x, where
h = (2 Re(h_{01}), −2 Im(h_{01}), h_{00} − h_{11}) (4.6)
is the vector of coordinates of H -tr(H) in the basis {σ 1 , σ 2 , σ 3 } (see (4.2)) and A : R 3 → B(R 3 ) is defined
A(u) = (  0    −u_3    u_2
         u_3     0    −u_1
        −u_2    u_1     0  ). (4.7)
Observe that ẋ = A(h)x = h ∧ x and the matrix A(h) is skew-symmetric, i.e. A(h)^T = −A(h), so the exponential matrix e^{tA(h)} corresponds to a rotation around a fixed axis. Then the trajectory of a vector of coherence x in the Bloch ball is a circle of fixed radius. Thus the purity of each state, namely tr(ρ²) = (1 + ‖x‖²)/2, remains invariant during the dynamics. This is in agreement with equation (3.17).
When we consider a controlled system, we assume that the Hamiltonian H = H(u) is affine with respect to the control variable u ∈ R m , m ≤ dim(H).
Assumption 4.1. Assume that the Hamiltonian of a two-level system has the form
H(u) = (1/2)(E σ_3 + u_1 σ_1 + u_2 σ_2), u = (u_1, u_2) ∈ R² (4.8)
where E > 0.
Remark 4.2. We choose to consider a system with a drift H(0) = (E/2)σ_3, because this uncontrolled Hamiltonian represents a standard two-level system with gap E between energy levels. The drift is chosen traceless; however this is not restrictive, since two Hamiltonians which differ by c1 generate the same state evolution, see eq. (3.17). We choose to have two control parameters to ensure that the system can be controlled by means of slowly varying controls. In fact, as seen in Example 2.31, one control is sufficient to achieve controllability, but in general it could have unbounded derivatives.
The Heisenberg equation in Bloch coordinates for this choice of H reads ẋ = A(u E )x (4.9)
where u E = (u 1 , u 2 , E).
Equation (4.9) generates a rotation around the axis u_E; therefore the system has a set of equilibrium points, namely {c u_E | c ∈ R}. Let us denote û_E := u_E/‖u_E‖. (4.10)
Slowly driven closed systems
The control law (4.8) allows to choose the rotation axis for the dynamics in Bloch coordinates. If we are able to change adiabatically the axis of rotation u_E, i.e. we can choose a control law such that ‖u̇_E‖ < ε ≪ 1, then we can ask if the equilibria of the system remain stable. More precisely, given a control function u : [0, 1] → R² and an initial state x_0 such that ‖x_0 − ⟨x_0, û_E(0)⟩ û_E(0)‖ < δ, we would like to prove that there exists ε small enough such that the solution x(t) of
ẋ = A(u_E(εt))x, t ∈ [0, 1/ε] (4.11)
with initial condition x(0) = x_0 satisfies ‖x(t) − ⟨x(t), û_E(εt)⟩ û_E(εt)‖ < δ, ∀t ∈ [0, 1/ε].
Notice that with two controls u_E cannot span every unit vector n ∈ S²_R. Assuming (u_1, u_2) ∈ R², i.e. unbounded controls, û_E ∈ S²_R ∩ {x_3 > 0}. However we remark again that we are interested in preserving the stability of equilibria.
To simplify the problem it is convenient to perform changes of coordinates on the system
iψ̇ = H(u(t))ψ, ψ ∈ C²
where H(u) has the form (4.8). Set
u_1(t) − iu_2(t) = v_1(t) e^{−2i(Et − ∫_0^t v_3(τ)dτ)}, v_1(t), v_3(t) ∈ R (4.12)
and consider the time dependent transformation
V(t) = diag( e^{2i(Et − ∫_0^t v_3(τ)dτ)}, e^{−2i(Et − ∫_0^t v_3(τ)dτ)} ). (4.13)
Then φ = V(t)^{−1}ψ satisfies
iφ̇ = [ V^{−1}(t)H(u(t))V(t) − iV^{−1}(t)(dV/dt)(t) ] φ = H_rw(v_1(t), 0, v_3(t)) φ
where the Hamiltonian H rw is defined
H_rw(v_1, v_2, v_3) = (1/2)(v_1 σ_1 + v_2 σ_2 + v_3 σ_3). (4.14)
In these new coordinates the system becomes driftless. Notice that if v_1(0) = 0, the initial point of the control u is u(0) = (0, 0), or u_E(0) = (0, 0, E). When viewed in Bloch coordinates, H_rw(v(t)) corresponds to the generator A(v(t)), where A(·) was defined above.
Example 4.3. Now choose the particular control
v(t) = (v 1 (t), 0, v 3 (t)) = (2 sin θ(t), 0, 2 cos θ(t)), t ∈ [0, 1] (4.15)
which means that the time dependent Hamiltonian of the system is
H_rw(v(t)) = sin θ(t) σ_1 + cos θ(t) σ_3. (4.16)
Passing to the frame that co-rotates with the control, i.e. conjugating by a rotation U(t) about the σ_2-axis through the angle θ(t), the effective Hamiltonian becomes
H̃_rw = σ_3 − (θ̇/2) σ_2.
Therefore the exponential of H̃_rw is computed by the use of
e^{i(γ/2)[v_1σ_1 + v_2σ_2 + v_3σ_3]} = cos(γ‖v‖/2) 1 + i (sin(γ‖v‖/2)/‖v‖)(v_1σ_1 + v_2σ_2 + v_3σ_3). (4.17)
Choosing θ(t) = εt, the solution φ(t) to the slowly driven Schrödinger equation
iφ̇ = H_rw(v(εt))φ = [sin(εt)σ_1 + cos(εt)σ_3] φ, t ∈ [0, ϑ/ε] (4.18)
with initial datum φ(0) = φ_0 and ϑ ∈ (0, 2π] is
φ(t) = U(t) e^{−iH̃_rw t} U(t)^{−1} φ_0 = U(t)[ cos(ω(ε)t) 1 − i (sin(ω(ε)t)/ω(ε)) σ_3 − i (ε sin(ω(ε)t)/(2ω(ε))) σ_2 ] φ_0,
where ω(ε) = √(1 + ε²/4). The Bloch coordinates x(t) = (x_1(t), x_2(t), x_3(t)) of ρ(t) = |φ(t)⟩⟨φ(t)| can now be computed by means of the last equation and formula (3.13).
Choosing as initial datum ρ_0 = (1 + σ_3)/2, i.e. x(0) = (0, 0, 1), after some straightforward computation one obtains
x 1 (t) = - εω 2 cos(εt) sin(2ωt) + 1 ω 2 - ε 2 8 ω 2 - 1 ω 2 sin(εt) + ε 2 8 ω 2 + 1 ω 2 sin(εt) cos(2ωt) x 2 (t) = ε sin(ωt) 2 x 3 (t) = - εω 2 sin(εt) sin(2ωt) + 1 ω 2 - ε 2 8 ω 2 - 1 ω 2 cos(εt) + ε 2 8 ω 2 + 1 ω 2 cos(εt) cos(2ωt)
which is immediately recognized as (v_1(εt), 0, v_3(εt)) + O(ε) (see Fig. 4.1). More generally, if the initial state's coordinates are x(0) = (x_1(0), x_2(0), x_3(0)) with ‖x(0)‖ = 1 and ‖(0, 0, 1) − x(0)‖ < δ, we can compute its evolution using the above argument. We arrive at
x 1 (t) =x 1 (0) - εω 2 sin(εt) sin(2ωt) - 1 ω 2 + ε 2 8 ω 2 - 1 ω 2 cos(εt) cos(2ωt) + ε 4 32 1+ 1 ω 2 cos(εt) +x 2 (0) 1 ω cos(εt) sin(2ωt) + ε sin(εt) sin(ωt) 2 +x 3 (0) - εω 2 cos(εt) sin(2ωt) + 1 ω 2 - ε 2 8 ω 2 - 1 ω 2 sin(εt) + ε 2 8 ω 2 + 1 ω 2 sin(εt) cos(2ωt) x 2 (t) = x 1 (0) 1 ω sin(2ωt) + x 2 (0) ε 2 8 ω 2 - 1 ω 2 + x 2 (0) 1 ω 2 - ε 4 32 1+ 1 ω 2 cos(2ωt) + x 3 (0)ε sin(ωt) 2 x 3 (t) =x 1 (0) - εω 2 cos(εt) sin(2ωt) + 1 ω 2 + ε 2 8 ω 2 - 1 ω 2 sin(εt) cos(2ωt) - ε 4 32 1+ 1 ω 2 sin(εt) +x 2 (0) - 1 ω sin(εt) sin(2ωt) + ε cos(εt) sin(ωt) 2 +x 3 (0) - εω 2 sin(εt) sin(2ωt) + 1 ω 2 - ε 2 8 ω 2 - 1 ω 2 cos(εt) + ε 2 8 ω 2 + 1 ω 2 cos(εt) cos(2ωt)
from which we can see that
‖(v_1(εt), 0, v_3(εt)) − x(t)‖ ≤ ‖(0, 0, 1) − x(0)‖ + O(ε) < δ, t ∈ [0, ϑ/ε]
for 0 < ε small enough.
It is possible to generalize the result of the previous example to control functions
v : [0, 1] → R³ \ {(0, 0, 0)}.
Theorem 4.4. Let v : [0, 1] → R³, v ∈ C²([0, 1]), be a control function satisfying min_{t∈[0,1]} ‖v(t)‖ > 0. Let x(0) ∈ R³ \ {(0, 0, 0)} be such that ‖x(0) − ⟨x(0), v̂(0)⟩ v̂(0)‖ < δ, where v̂ := v/‖v‖. There exists ε > 0 such that the solution x(t) of the equation
ẋ = A(v(εt))x, t ∈ [0, 1/ε]
satisfies ‖x(t) − ⟨x(t), v̂(εt)⟩ v̂(εt)‖ < δ, ∀t ∈ [0, 1/ε].
Remark 4.5 (Sketch of the proof of Thm. 4.4). The previous result follows from a generalization to evolutions on Banach spaces of standard quantum adiabatic results [AvFG]. In the hypothesis of the theorem, 0 is an eigenvalue of A(v(t)) for each t and is protected by a gap. Therefore, the projection P on the instantaneous eigenvector of eigenvalue 0 is well defined and ker A(v) ⊕ Ran A(v) = R³. Notice that P(t)x = ⟨x, v̂(t)⟩ v̂(t). We define the propagator by parallel transport¹ as the collection of maps {T(s, s′)}_{s,s′∈R} ⊂ Aut(R³), which are solutions to the equation
∂/∂s T(s, s′) = [Ṗ(s), P(s)] T(s, s′), T(s′, s′) = 1. (4.19)
¹The definition above has a geometric counterpart in terms of an Ehresmann connection on the trivial vector bundle over R with fiber R³. The Ehresmann connection is a generalization of the Levi-Civita connection in Riemannian geometry, and defines parallel transport on general vector bundles [vW, Part IV][Spi, Chap. 8].
In particular, they satisfy the intertwining property P(s)T(s, s′) = T(s, s′)P(s′). By [AvFG, Theorem 9],
‖(1 − P(t))x(t)‖ ≤ ‖(1 − P(t))(x(t) − T(t, 0)x(0))‖ + ‖(1 − P(t))T(t, 0)x(0)‖
= ‖(1 − P(t))x(t) − T(t, 0)(1 − P(0))x(0)‖ + ‖T(t, 0)(1 − P(0))x(0)‖
≤ Cε‖x(0)‖ + ‖(1 − P(0))x(0)‖,
which is the claim of the theorem.
In the case of system (4.8) we can state a corollary of the previous theorem.
Corollary 4.6. Let u : [0, 1] → R², u ∈ C²([0, 1]), be a control function for the Hamiltonian (4.8). Let x(0) ∈ R³ \ {(0, 0, 0)} be such that ‖x(0) − ⟨x(0), û_E(0)⟩ û_E(0)‖ < δ. There exists ε > 0 such that the solution x(t) of the equation
ẋ = A(u_E(εt))x, t ∈ [0, 1/ε]
satisfies ‖x(t) − ⟨x(t), û_E(εt)⟩ û_E(εt)‖ < δ, ∀t ∈ [0, 1/ε].
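The statement can be illustrated numerically. The sketch below (an arbitrary control path and initial datum, chosen only for illustration) integrates ẋ = A(v(εt))x and monitors the distance of x(t) from the instantaneous rotation axis, which remains of the order of its initial value, as predicted by Theorem 4.4.

```python
import numpy as np
from scipy.integrate import solve_ivp

def A(u):
    # skew-symmetric matrix of (4.7): A(u)x = u ∧ x
    return np.array([[0, -u[2], u[1]],
                     [u[2], 0, -u[0]],
                     [-u[1], u[0], 0]])

def v(s):
    # a slowly varying control axis with ||v(s)|| = 1 on [0, 1]
    return np.array([np.sin(s), 0.0, np.cos(s)])

eps = 0.01
x0 = np.array([0.05, 0.05, 1.0])          # starts close to the axis v(0) = (0, 0, 1)

sol = solve_ivp(lambda t, x: A(v(eps * t)) @ x, (0, 1 / eps), x0,
                t_eval=np.linspace(0, 1 / eps, 6), rtol=1e-9, atol=1e-12)

for t, x in zip(sol.t, sol.y.T):
    n = v(eps * t)                        # already unit length
    transverse = np.linalg.norm(x - np.dot(x, n) * n)
    print(f"t={t:6.1f}   distance from the instantaneous axis = {transverse:.4f}")
```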
Two-level open systems
We will now obtain the general form of the equation ρ̇ = L*(ρ) in the Bloch coordinates, where L* is the generator in the form (3.33). Given a generic matrix C = {c_{ij}} which is assumed positive semidefinite (hence Hermitian), and the standard Pauli basis {1, σ_1, σ_2, σ_3}, (3.33) reads
L * (ρ) = -i[H, ρ] + 1 2 3 i,j=1 c ij [2σ i ρσ * j -ρσ * j σ i -σ * j σ i ρ] = -i[H, ρ] + 1 2 3 j=1 c jj [2σ j ρσ j -ρσ j σ j -(1)ρ] + 1 2 3 i,j=1,i =j c ij [2σ i ρσ j -ρσ j σ i -σ j σ i ρ].
From the elementary relation
σ_i σ_j = δ_{ij} 1 + i ε_{ijk} σ_k (4.20)
where ε_{ijk} is the completely antisymmetric symbol (ε_{ijk} is 1 if (i, j, k) is an even permutation of (1, 2, 3), −1 if it is an odd permutation, and 0 if any index is repeated), one obtains that if k, h ∈ {1, 2, 3} and k ≠ h then σ_k σ_h σ_k = −σ_h. This implies
2σ_j ρ σ_j − 2ρ = −2 Σ_i (1 − δ_{ji}) x_i σ_i,
and so
L * (ρ) = -i[H, ρ] + 3 i,j=1 c jj (δ ji -1)x i σ i + 1 2 3 i,j=1,i =j c ij [2σ i ρσ j -ρ(i jik σ k ) -(i jik σ k )ρ].
Another elementary calculation based on (4.20) gives us
σ i ρ + ρσ i = σ i + x i 1, i = 1, 2, 3, from which L * (ρ) = -i[H, ρ] + 3 i,j=1 c jj (δ ji -1)x i σ i + 1 2 3 i,j=1,i =j c ij [2σ i ρσ j + i ijk (σ k + x k 1)].
Finally using
σ i ρσ j = 1 2 (i ijk σ k + x i σ j + x j σ i + ix k ikj ) = 1 2 i ijk (σ k -x k 1) + 1 2 (x i σ j + x j σ i ),
we arrive at
L*(ρ) = −i[H, ρ] + Σ_{i,j=1}^3 c_{jj}(δ_{ji} − 1) x_i σ_i + (1/2) Σ_{i,j=1, i≠j}^3 c_{ij}[(x_i σ_j + x_j σ_i) + i ε_{ijk}(σ_k − x_k 1) + i ε_{ijk}(σ_k + x_k 1)]
= −i[H, ρ] + Σ_{i,j=1}^3 c_{jj}(δ_{ji} − 1) x_i σ_i + (1/2) Σ_{i,j=1, i≠j}^3 c_{ij}[(x_i σ_j + x_j σ_i) + 2i ε_{ijk} σ_k].
Explicitly, the equations are
ẋ_1 = −2(c_{22} + c_{33})x_1 − (h_{00} − h_{11})x_2 + 2 Re(c_{12})x_2 − 2 Im(h_{01})x_3 + 2 Re(c_{13})x_3 − 4 Im(c_{23})
ẋ_2 = (h_{00} − h_{11})x_1 + 2 Re(c_{12})x_1 − 2(c_{11} + c_{33})x_2 − 2 Re(h_{01})x_3 + 2 Re(c_{23})x_3 + 4 Im(c_{13})
ẋ_3 = 2 Im(h_{01})x_1 + 2 Re(c_{13})x_1 + 2 Re(h_{01})x_2 + 2 Re(c_{23})x_2 − 2(c_{22} + c_{33})x_3 − 4 Im(c_{12}).
In compact form these equations read ẋ = (A(h) + Γ)x + k, (4.21) where Γ collects the dissipative coefficients of the linear part and k = (−4 Im(c_{23}), 4 Im(c_{13}), −4 Im(c_{12})). (4.23)
Observe that
Γ = (C + C T ) -2tr(C)1 = 2 Re(C) -2tr(C)1.
Thus, the positivity of C translates to a compatibility condition between Γ and k.
In fact, let us write C as C = Re(C) + i Im(C); then the condition ⟨z, Cz⟩ ≥ 0 reads
⟨z, (Re(C) + i Im(C))z⟩ = ⟨z, Re(C)z⟩ + (i/4)⟨z, k ∧ z⟩ ≥ 0 ∀ z ∈ C³. (4.24)
Given that tr(Γ) = 2tr(C) − 2tr(C)tr(1) = −4tr(C), in terms of Γ and k condition (4.24) reads
⟨z, Γz⟩ − (1/2)tr(Γ)‖z‖² + (i/2)⟨z, k ∧ z⟩ ≥ 0 ∀ z ∈ C³, (4.25)
which we observe is invariant under rotations
z , Γz - 1 2 tr(Γ) z 2 + i 2 z , k ∧ z = = Rz , RΓ(R T R)z - 1 2 tr(R T RΓ) Rz 2 + i 2 Rz , R(k ∧ z) = Rz , (RΓR T )Rz - 1 2 tr(RΓR T ) Rz 2 + i 2 Rz , Rk ∧ Rz .
Remark 4.7. The map
C = Re(C) + i Im(C) → (Γ, k)
is invertible from the subset of positive semidefinite matrices into the subset of elements (Γ, k) ∈ M 3×3 (R) × R 3 that satisfy (4.25) and such that Γ ≤ 0.
In fact C ≥ 0 implies that Re(C) is real symmetric and Im(C) is real antisymmetric as one can see from
Re(C) = 1 2 (C + C) = 1 2 (C + C T ), Im(C) = 1 2i (C -C) = 1 2i (C -C T ).
So first one notices from (4.23) that the map Im(C) → k is one to one. Then observe that Γ_{ij} = 2 Re(c_{ij}) for i < j and Γ_{ii} = −2 Σ_{j≠i} c_{jj}, so 2c_{ii} = 2tr(C) + Γ_{ii} = −(1/2)tr(Γ) + Γ_{ii}.
From the discussion above we conclude that Re(C) is a symmetric non-negative matrix, hence it diagonalizes (the fact that Γ is symmetric for each C is true only in dimension 2; in general Γ has a mixed symmetry [Alicki-Lendi, Sect. 2.4]). Moreover, it diagonalizes simultaneously with Γ. Performing the change of coordinates y = Rx that diagonalizes Γ, equation (4.21) becomes ẏ = R(A(h) + Γ)R^T y + Rk, where RA(h)R^T is skew-symmetric, RΓR^T is diagonal and the pair (RΓR^T, Rk) satisfies (4.25). So, without loss of generality we can always assume Re(C) diagonal.
In conclusion, the evolution of a generic two-level open system is completely determined by a positive matrix C that we can assume in the form
C = (  c_{11}      −i k_3/4     i k_2/4
       i k_3/4      c_{22}     −i k_1/4
      −i k_2/4      i k_1/4     c_{33}  ) (4.26)
and a skew-symmetric matrix A(u),
A(u) = (  0    −u_3    u_2
         u_3     0    −u_1
        −u_2    u_1     0  ), (4.27)
which corresponds to the generic (traceless) Hamiltonian H(u) = (1/2)(u_1σ_1 + u_2σ_2 + u_3σ_3). Given C and A(u) in this form, the Bloch equation takes the form
ẋ = (A(u) + Γ)x + k, (4.28)
with Γ = diag(−γ_1, −γ_2, −γ_3), γ_i ≥ 0. (4.29)
It is important to notice that, with these choices, equation (4.25) translates into the following set of inequalities (see [Alicki-Lendi, Sect. 2.3.1]):
γ_1 + γ_2 ≥ γ_3,  γ_2 + γ_3 ≥ γ_1,  γ_3 + γ_1 ≥ γ_2,
γ_i ≥ |k_i|, i = 1, 2, 3,
γ_1² − (γ_2 − γ_3)² ≥ 4k_3²,  γ_2² − (γ_1 − γ_3)² ≥ 4k_2²,  γ_3² − (γ_1 − γ_2)² ≥ 4k_1².
In particular these inequalities imply that if γ 1 γ 2 γ 3 = 0 then k = 0. This gives us a classification of open systems in two types. We will characterize each subcase in the following sections.
Remark 4.8 (Equilibrium points of the control system). We consider the system (4.28) where u is treated as a control parameter. Let E be the set of points x such that 0 = ⟨x, ẋ⟩, i.e.
E := {x | ⟨x, Γx + k⟩ = 0}. (4.30)
Case 1: γ_1γ_2γ_3 ≠ 0.
The set E is an ellipsoid. In fact
0 = ⟨x, Γx + k⟩ = ⟨x, −Γ(x + Γ^{−1}k)⟩,
which can be written as
‖√(−Γ) x + (1/2)(−Γ)^{−1/2} k‖² = ‖(1/2)(−Γ)^{−1/2} k‖², (4.31)
where √(−Γ) = diag(√γ_1, √γ_2, √γ_3). The origin always belongs to E. Moreover,
if x ∈ E \ {0} then there exists u_x ∈ R³ such that (A(u_x) + Γ)x + k = 0. The last equation is in fact equivalent to
⟨y, (A(u_x) + Γ)x⟩ = ⟨y, −k⟩ ∀y ∈ R³, (4.32)
but it is enough to satisfy (4.32) for y = e_i, i = 1, 2, 3, where {e_1, e_2, e_3} is an orthonormal basis. If we choose e_3 = Γx/‖Γx‖ and e_1, e_2 such that e_1 ∧ e_2 = e_3, we obtain the following equations for u_x:
⟨Γx/‖Γx‖, A(u_x)x⟩ = ⟨Γx/‖Γx‖, −k⟩ − ‖Γx‖,
⟨e_1, A(u_x)x⟩ = ⟨e_1, −k⟩,
⟨e_2, A(u_x)x⟩ = ⟨e_2, −k⟩,
which always admit at least one solution.
Case 2 : γ 1 γ 2 γ 3 = 0
The set E is a line. Assume without loss of generality that γ 3 = 0 and
γ := γ 1 = γ 2 (if γ i = γ j = 0 then γ k = 0 for every triple of different indexes i, j, k). 0 = x , Γx = -γx 2 1 -γx 2 2 ⇔ x = (0, 0, x 3 ).
Remark 4.9 (Invariance of the Bloch ball). If γ_1γ_2γ_3 = 0 then
⟨x, ẋ⟩ = ⟨x, (A(u) + Γ)x + k⟩ = ⟨x, Γx⟩ ≤ 0.
On the other hand, if γ_1γ_2γ_3 ≠ 0, from the previous Remark we can see that 0 ≤ ⟨x, ẋ⟩ = ⟨x, Γx + k⟩ if and only if
‖√(−Γ) x + (1/2)(−Γ)^{−1/2} k‖² ≤ ‖(1/2)(−Γ)^{−1/2} k‖²,
which means for x inside the ellipsoid E. Therefore we will show that if x is such that ⟨x, ẋ⟩ = 0 then ‖x‖ ≤ 1; this implies E ⊂ B_1(0) and that the ball is invariant. Observe that from the inequality (4.25), choosing z = e_1(x) + ie_2(x), where e_1(x), e_2(x) ∈ R³ are orthonormal vectors such that x/‖x‖ = e_1(x) ∧ e_2(x), we get
e 1 (x) , Γ - 1 2 trΓ e 1 (x) + e 2 (x) , Γ - 1 2 trΓ e 2 (x) + e 1 (x) ∧ e 2 (x) , k ≥ 0 -trΓ + x x , k ≥ e 1 (x) , -Γe 1 (x) + e 2 (x) , -Γe 2 (x) .
Similarly we could choose z = e 2 (x) + ie 1 (x) and this leads to
-trΓ - x x , k ≥ e 1 (x) , -Γe 1 (x) + e 2 (x) , -Γe 2 (x) . Since x , Γx + k = 0 -trΓ - 1 x x , -Γx ≥ e 1 (x) , -Γe 1 (x) + e 2 (x) , -Γe 2 (x) -trΓ ≥ e 1 (x) , -Γe 1 (x) + e 2 (x) , -Γe 2 (x) + 1 x x , -Γx -trΓ ≥ -trΓ - x x , -Γ x x + 1 x x , -Γx therefore x x , -Γ x x ≥ x x , -Γ x x x ⇔ 1 ≥ x .
Remark 4.10 (Bloch equations). Following [GKS], we want to show that there exists an abstract characterization of the vectors k compatible with each matrix Γ in the form (4.29).
Let k = -(A(u) + Γ)k 0 where k 0 ∈ K Γ , a subset of R 3 defined as follows
K_Γ = { y ∈ R³ | inf_{‖x‖=1} ⟨x, −Γ(x − y) + y ∧ u⟩ ≥ 0 }. Then equation (4.28) reads
ẋ = (A(u) + Γ)x + k = (A(u) + Γ)(x − k_0) = u ∧ (x − k_0) + Γ(x − k_0),
which is commonly known as Bloch equation. One observes immediately that if k 0 ∈ K Γ then the unit ball is invariant under the dynamics, in fact
0 ≤ inf_{‖x‖=1} ⟨x, −Γ(x − k_0) + k_0 ∧ u⟩ = inf_{‖x‖=1} −⟨x, (A(u) + Γ)(x − k_0)⟩ = inf_{‖x‖=1} −⟨x, ẋ⟩.
Moreover, if k ∈ K Γ then (Γ, k) satisfy the inequality (4.25).
To conclude this section we illustrate some examples of open systems which are physically interesting.
Example 4.11 (Lindblad eq. rotationally symmetric around the axis of the magnetic field). Let us consider the equation (3.35) with
H(0) = (1/2)Eσ_3, L_1 = √(b_+) σ_+, L_2 = √(b_-) σ_-, L_3 = √a σ_3, a, b_+, b_- ≥ 0, where σ_± = (1/2)(σ_1 ± iσ_2). Then (writing (x, y, z) for (x_1, x_2, x_3))
ρ̇ = (1/2)E(−yσ_1 + xσ_2) + a(−xσ_1 − yσ_2) + b_+(σ_3 − (1/2)xσ_1 − (1/2)yσ_2 − zσ_3) + b_-(−σ_3 − (1/2)xσ_1 − (1/2)yσ_2 − zσ_3),
which in Bloch coordinates reads
ẋ_1 = −(4a + b_+ + b_-)x_1 − Ex_2
ẋ_2 = Ex_1 − (4a + b_+ + b_-)x_2
ẋ_3 = −2(b_+ + b_-)x_3 + 2(b_+ − b_-) (4.33)
and corresponds to
C = (  b_+ + b_-        −i(b_+ − b_-)/2     0
       i(b_+ − b_-)/2     b_+ + b_-          0
       0                  0                 4a  ).
Define Γ = 4a + b_+ + b_-, γ_+ = 2(b_+ + b_-), γ_- = 2(b_+ − b_-); then 2Γ ≥ γ_+ ≥ |γ_-|. Observe that if γ_+ ≠ 0,
ẋ_3 = −γ_+( x_3 − γ_-/γ_+ ),
so the system has a unique fixed point x_c = (0, 0, γ_-/γ_+) with γ_-/γ_+ ∈ [−1, 1] (see Fig. 4.2a). Otherwise, if γ_+ = 0, the entire x_3-axis consists of fixed points. The term L_3 causes a contraction on the x_1x_2-plane, the term L_1 induces a transition toward the point (0, 0, 1), i.e. if b_+ > 0 and b_- = 0 then γ_-/γ_+ = 1. Conversely, L_2 induces a transition toward the point (0, 0, −1). If we add to the Hamiltonian the control operators σ_1, σ_2,
H(u_1, u_2) = (1/2)(Eσ_3 + u_1σ_1 + u_2σ_2),
the equations become
ẋ_1 = −(4a + b_+ + b_-)x_1 − Ex_2 + u_2x_3
ẋ_2 = Ex_1 − (4a + b_+ + b_-)x_2 − u_1x_3
ẋ_3 = −u_2x_1 + u_1x_2 − 2(b_+ + b_-)x_3 + 2(b_+ − b_-). (4.34)
The coordinates of the fixed point are
x_1 = γ_- c_{E,Γ}(Eu_1 + Γu_2) / (γ_+ + Γ c_{E,Γ}(u_1² + u_2²)),
x_2 = γ_- c_{E,Γ}(Eu_2 − Γu_1) / (γ_+ + Γ c_{E,Γ}(u_1² + u_2²)), (4.35)
x_3 = γ_- / (γ_+ + Γ c_{E,Γ}(u_1² + u_2²)),
where c_{E,Γ} = 1/(E² + Γ²), which corresponds to a point of the surface
Γ(x_1² + x_2²) + γ_+ (x_3 − γ_-/(2γ_+))² = γ_-²/(4γ_+), (4.36)
see Fig. 4.2a, 4.2b. Notice that the drift term does not affect the set of equilibrium points of the system. With this notation, the model of a two-level system spontaneously decaying from the excited state |1⟩ to the ground state |0⟩ while emitting a photon is given by
H(0) = (1/2)Eσ_3, L_1 = √(b_-) σ_-.
In fact, the evolution of |1⟩⟨1|, which corresponds to (0, 0, 1) in Bloch coordinates, is
x_1(t) = x_2(t) = 0, x_3(t) = 2e^{−2b_- t} − 1.
As noticed before, every initial state converges toward |0⟩⟨0| (see Fig. 4.3).
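The controlled equations (4.34) and the fixed point (4.35) can be cross-checked numerically. The sketch below uses arbitrary illustrative values of E, a, b_±, u_1, u_2 (none taken from the text) and verifies that the trajectory converges to the point given by (4.35), which lies inside the Bloch ball.

```python
import numpy as np
from scipy.integrate import solve_ivp

E, a, bp, bm = 1.0, 0.05, 0.3, 0.1         # arbitrary parameters
u1, u2 = 0.4, -0.2                         # constant controls
G  = 4 * a + bp + bm                       # Gamma
gp = 2 * (bp + bm)                         # gamma_+
gm = 2 * (bp - bm)                         # gamma_-

def rhs(t, x):
    # equations (4.34)
    return np.array([-G * x[0] - E * x[1] + u2 * x[2],
                      E * x[0] - G * x[1] - u1 * x[2],
                     -u2 * x[0] + u1 * x[1] - gp * x[2] + gm])

c = 1.0 / (E**2 + G**2)
D = gp + G * c * (u1**2 + u2**2)
x_fix = np.array([gm * c * (E * u1 + G * u2) / D,
                  gm * c * (E * u2 - G * u1) / D,
                  gm / D])                 # the equilibrium (4.35)

sol = solve_ivp(rhs, (0, 80), np.array([0.0, 0.0, 1.0]), rtol=1e-10, atol=1e-12)
print(np.allclose(sol.y[:, -1], x_fix, atol=1e-6))    # trajectory converges to (4.35)
print(np.linalg.norm(x_fix) <= 1)                     # the fixed point lies in the Bloch ball
```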
Results of geometric control theory for open quantum systems
Geometric control tools for affine systems, presented in Section 2.1.4, were applied to finite dimensional open quantum systems in the vector of coherence formulation.
In particular in the work of Altafini [Alt], the case of two-level quantum system is considered in detail. We state here the main result of that paper.
Theorem 4.12 ([Alt, Thm. 5]). Assume that the system
i (d/dt) ψ = H(u)ψ = (H(0) + Σ_{k=1}^m u_k H_k) ψ, ψ ∈ C²,
with u ∈ U ⊂ R^m and −iH_k ∈ su(2), is controllable.
Then for a two-level system (4.28) we have:
(a) the system (4.28) is accessible in B 1 (0), (b) the system (4.28) is never small-time nor finite-time controllable in B 1 (0) for Γ = 0.
Under Assumption 4.1 the system
i d dt ψ = H(u)ψ = 1 2 (Eσ 3 + u 1 σ 1 + u 2 σ 2 )ψ
is controllable by Theorem 2.24, so the previous result applies to our analysis. The result is partially negative, because controllability cannot be achieved on the whole Bloch ball. However, as in the case of closed systems, we want to see if adiabatic theory can be useful to produce robust controllability techniques (on a smaller set).
Geometric Singular Perturbation
In this section we will recall briefly the main technique used in our analysis.
For closed systems the dynamics preserves the purity of the states, which also means that the equilibrium points of the system are stable but not asymptotically stable and every state follows a periodic orbit. In fact, by choosing a suitable control, one is able to modify these orbits in order to steer the system between states with equal purity. Similarly, for open systems, we saw in Remark 4.8 that there exists a set of equilibrium points which depends on the Hamiltonian H(u), and so on the control u. However, in contrast with the closed systems, we will show that every state converges towards the equilibrium set. Moreover, if we control the system adiabatically, once the state approaches the surface (4.30) it will follow the instantaneous critical point. We will show this by means of geometric singular perturbation techniques.
Consider the control equation
ẏ = g(y, u) y ∈ R n , u ∈ R m (4.37)
and suppose that there exists a submanifold E of R n such that 0 = g(y, u) ∀y ∈ E.
If rank( ∂g ∂y ) = n and rank( ∂g ∂u ) = k ≤ m the manifold E admits a parametrization in terms of the control variable u given by the Implicit Function Theorem. We will denote this parametrization h = h(u), so
0 = g(h(u), u) ∀u ∈ R m .
Now choose a path in the control space u* : [0, 1] → R^m and consider the slowed system
y′ = g(y, u*(ετ)), ε ≪ 1. (4.38)
This time dependent problem can be seen as a multiscale system in which the time plays the role of the slow variable.
Introducing the variable x := ετ, we can rewrite (4.38) as
x′ = ε, x(0) = 0,
y′ = g(y, u*(x)), y(0) = y_0; (4.39)
we should also consider the time scaling t = ετ and the system
ẋ = 1, x(0) = 0,
ε ẏ = g(y, u*(x)), y(0) = y_0, (4.40)
where we denoted by ˙ the derivative with respect to t and by ′ the derivative with respect to τ.
In the following we will use the notation (4.39.ε_0), (4.40.ε_0) to denote (4.39), (4.40) for fixed ε = ε_0. We also define y_c(x) := h(u*(x)) to lighten the notation.
Remark 4.13. Observe that the manifold E = x, h(u * (x)) , x ∈ [0, 1] consists entirely of critical points for the system (4.39.0), while (4.39.ε) has no critical point for ε = 0. This singularity in the nature of the dynamics of (4.39) is the characterizing feature of the singular perturbations problems. Instead, E is the support of (t,
y c (t)) = t, h(u * (t)) t ∈ [0, 1],
which is the unique solution of (4.40.0) with initial condition y 0 = h(u * (0)) = y c (0).
We want to show that for ε sufficiently small and for y_0 ∈ B_ρ(y_c(0)), with ρ small enough, the solution y(t, ε) of (4.40.ε) with initial condition y_0 is eventually close to the solution y_c(t), i.e.
‖y(t, ε) − y_c(t)‖ = O(ε), t ∈ [t_b, 1], with 0 < t_b < 1.
It is more convenient to study the system after the change of coordinates η = y -h(u * (x)).
x = ε x(0) = 0 η = g(η + h(u * (x)), u * (x)) -ε ∂h ∂u (u * (x))u * (x) η(0) = y 0 -h(u * (0)) (4.41) ẋ = 1 x(0) = 0 ε η = g(η + h(u * (x)), u * (x)) -ε ∂h ∂u (u * (x)) u * (x) η(0) = y 0 -h(u * (0)) (4.42)
where u̇* and u*′ denote the same function, namely the time derivative of u*. The path of critical points is now {(x, 0), x ∈ [0, 1]}.
Theorem 4.14 (Tychonoff ). Consider the singular perturbation problem
ẋ = f (t, x, y, ε), x(t 0 ) = µ(ε) (4.43) ε ẏ = g(t, x, y, ε), y(t 0 ) = ν(ε) (4.44)
and let y = h(t, x) be an isolated root of 0 = g(t, x, y, 0). Assume that the following conditions are satisfied for all
(t, x, y -h(t, x), ε) ∈ [0, t 1 ] × D x × D y × [0, ε 0 ]
for some domains D x ⊂ R q and D y ⊂ R n , in which D x is convex and contains the origin:
i) The functions f, g, ∂f /∂(x, y, ε), ∂g/∂(t, x, y, ε) are continuous; the functions h(t, x) and ∂g(t, x, y, 0)/∂y have continuous first partial derivatives with respect to their arguments; the initial data µ(ε) and ν(ε) are smooth functions of ε.
ii) The reduced problem
ẋ = f (t, x, h(t, x), 0), x(t 0 ) = µ(0) (4.45)
has a unique solution x(t) ∈ S, for t ∈ [t 0 , t 1 ], where S is a compact subset of
D x .
iii) The origin is an exponentially stable equilibrium point of the boundary-layer model
∂η ∂τ = g( t, x, η + h( t, x), 0), ( t, x) ∈ [0, t 1 ] × D x (4.46)
uniformly in ( t, x); let R y ⊂ D y be the region of attraction of
∂η ∂τ = g(t 0 , µ(0), η + h(t 0 , µ(0)), 0), η(0) = ν(0) -h(t 0 , µ(0)) (4.47)
and Ω y ⊂ R y a compact set.
There exists a positive constant ε * such that for all ν(0) -h(t 0 , µ(0)) ∈ Ω y and 0 < ε < ε * , the singular perturbation problem (4.43) has a unique solution x(t, ε), y(t, ε) on [t 0 , t 1 ], and
x(t, ε) -x(t) = O(ε) y(t, ε) -h(t, x(t)) -η(t/ε) = O(ε) hold uniformly for t ∈ [t 0 , t 1 ],
where η is the solution of the boundary-layer model (4.46). Moreover, given any t b > t 0 , there is ε * * ≤ ε * such that
y(t, ε) -h(t, x(t)) = O(ε)
holds uniformly for t ∈ [t b , t 1 ] whenever ε < ε * * .
Slowly driven two-level open systems
In this Section we apply the geometric singular perturbation theory to the specific case of a two-level quantum open system, obtaining a result (Prop. 4.15) in the generic case γ_1γ_2γ_3 ≠ 0. The idea is to obtain a result analogous to Theorem 4.4 or Corollary 4.6 for open systems. Consider equation (4.28) where u : [0, 1] → R³ is a fixed time dependent control function which varies slowly, i.e.
y′ = (A(u(ετ)) + Γ)y + k, τ ∈ [0, 1/ε], ε ≪ 1. (4.48)
As we saw before in Section 4.3, this equation describes the dynamics of a two-level open quantum system which is slowly driven. Equivalently, we can consider the equation
ε ẏ = (A(u(t)) + Γ)y + k, t ∈ [0, 1], ε ≪ 1. (4.49)
In the degenerate case γ_1 = γ_2 =: γ, γ_3 = 0 (and hence k = 0) we can prove that the set of reachable points for t → ∞ coincides with a compact subinterval of the segment {(0, 0, s) | s ∈ [0, 1]}. Indeed, assume (without loss of generality) that the initial state of the system lies in the plane {y_2 = 0}, namely y_0 = (y_1(0), 0, y_3(0)); then we choose to apply a control of the form u = (0, u_2, 0). Therefore the system reduces to
ẏ_1 = −γy_1 + 2u_2y_3, ẏ_2 = 0, ẏ_3 = −2u_2y_1.
Solving the equations for y_1, y_3 under the constraint 4|u_2| > γ, we obtain
( y_1(t) )     =  e^{−γt/2} (  cos(ωt) + (γ/(2ω))sin(ωt)    −(2u_2/ω)sin(ωt)  ) ( y_1(0) )
( y_3(t) )                  (  (2u_2/ω)sin(ωt)               cos(ωt) − (γ/(2ω))sin(ωt) ) ( y_3(0) )   (4.50)
where the angular velocity ω := √(4u_2² − γ²/4). So, one can see that
z_+ := sup_{t≥0} y_3(t) = sup_{t≥0} e^{−γt/2}[ (2u_2/ω) sin(ωt) y_1(0) + cos(ωt) y_3(0) − (γ/(2ω)) sin(ωt) y_3(0) ] (4.51)
and
z_- := inf_{t≥0} y_3(t) = inf_{t≥0} e^{−γt/2}[ (2u_2/ω) sin(ωt) y_1(0) + cos(ωt) y_3(0) − (γ/(2ω)) sin(ωt) y_3(0) ] (4.52)
are attained for some finite values of t, namely t_+ and t_- (the function in the above brackets is periodic, so y_3(t) attains its maximum and minimum values in [0, 2π/ω]). Observe that for γ = 0 one has ‖y_0‖ = sup_{t≥0} y_3(t) = −inf_{t≥0} y_3(t). Thus for every value y_3^f ∈ [z_-, z_+] there exists a time t_f ∈ [0, max{t_-, t_+}] such that y_3(t_f) = y_3^f if u(t) = (0, u_2, 0) for t ∈ [0, t_f]. Now defining u(t) = (0, 0, E) for t > t_f, the system converges exponentially fast to (0, 0, y_3^f).
The result we obtained is partially satisfying. However, it could be a first step to study more general models of open quantum system. In particular, the main issue of our approach is the effectiveness of the Lindblad equation in the description of adiabatic open quantum systems. As we have seen in our analysis, the dynamics described by (4.28) is almost always a motion that converges exponentially fast to a unique equilibrium. In this framework the adiabatic theory cannot be effective because the convergence rate is accelerated when the time is slowed.
In the literature, among the approaches to adiabatic open quantum systems, we are interested in the works of Lidar et al. [Lid 1], [Lid 2].
In those papers the authors develop a physical model where any variation of the Hamiltonian corresponds to a variation of the Lindblad operators. This is due to the fact that the dissipation/decoherence of the system occurs in the instantaneous energy eigenbasis.
Chapter 5
Controllability of spin-boson models 5.1 Introduction
In quantum mechanics one calls spin-boson model a Hamiltonian that describes the interaction of a finite dimensional system, usually called spin, with one bosonic mode of a field. These types of models arise in many different physical contexts, such as cavity QED, quantum optics and magnetic resonance. Two important spin-boson models are the Rabi model and the Jaynes-Cummings model, which are also among the first ever introduced
[Ra 1 ][Ra 2 ][JaCu].
In the field of quantum control the study of these models has recently begun. Their interest lies in the fact that they are among the simplest infinite dimensional systems. More precisely, their simplicity can be seen at the level of symmetries. In general, symmetries are an obstacle to controllability, because they imply the existence of invariant subspaces for the system dynamics. Therefore, the external control must necessarily break all the symmetries of the unperturbed system in order to achieve controllability. There are highly symmetric systems that cannot be controlled, e.g. the harmonic oscillator, which was proved to be uncontrollable by Mirrahimi and Rouchon [MiRo]. On the other hand, controllability was proved for the trapped ion model [EbLa][ErPu] and more recently for the Rabi Hamiltonian [BMPS]. So, one may wonder whether more symmetric models are controllable. The Jaynes-Cummings model in particular has an additional conserved quantity with respect to the Rabi model, namely the total number of excitations, and its controllability is an interesting matter. The question was posed by Rouchon [Ro] some years ago.
In this chapter we provide an answer to this question. In Theorem 5.1 we prove that the Jaynes-Cummings model is controllable for almost every value of the interaction parameter, i.e. up to a set S of measure zero. Then, in Theorem 5.2 we characterize the points of S as solutions to explicit equations. Our technique exploits three ingredients: the integrability of the model [JaCu]; a study of the resonances of the spectrum, which allows us to invoke the controllability criterion 2.40 presented in Sect. 2.2.3; a detailed analysis of the resonance condition.
As for future perspectives, an interesting task would be to provide a constructive control method for the Jaynes-Cummings model. In this chapter, we make an explicit construction of a non-resonant chain of connectedness (see Def. 2.39). However, as far as we know, this fact implies the approximate controllability of the system only via a theorem [BCCS] whose proof is not constructive. As for a related problem, it is known that the Jaynes-Cummings Hamiltonian (JCH) can be seen as an approximation of the Rabi Hamiltonian in an appropriate regime, as discussed in [Ro]. A rigorous mathematical proof of the latter claim could provide a deeper understanding of these two models.
The Jaynes-Cummings model
Definition of the model
In the Hilbert space H = L 2 (R) ⊗ C 2 we consider the Schrödinger equation
i ∂ t ψ = H JC ψ
with Hamiltonian operator (JC Hamiltonian)
H_JC ≡ H_JC(g) = (ω/2)(X² + P²) ⊗ 1 + (Ω/2) 1 ⊗ σ_z + (g/√2)(X ⊗ σ_x − P ⊗ σ_y) (5.1)
where ω, Ω ∈ R + and g ∈ R are constants, X is the position operator, i. e. Xψ(x) = xψ(x), and P = -i∂ x . The operators σ x , σ y , σ z acting on C 2 are given by the Pauli matrices
σ_x = ( 0 1
        1 0 ),   σ_y = ( 0 −i
                         i  0 ),   σ_z = ( 1  0
                                           0 −1 ).
The quantity ∆ := Ω -ω is called detuning and measures the difference between the energy quanta of the two subsystems corresponding to the factorization of the Hilbert space. By introducing the creation and annihilation operators for the harmonic oscillator, defined as usual by
a † = 1 √ 2 (X -iP ) a = 1 √ 2 (X + iP ), (5.2)
and the lowering and raising operators
σ = (1/2)(σ_x − iσ_y) = ( 0 0
                          1 0 ),   σ† = (1/2)(σ_x + iσ_y) = ( 0 1
                                                              0 0 ), (5.3)
the JC Hamiltonian (omitting tensors) reads
H JC = ω a † a + 1 2 + Ω 2 σ z + g 2 aσ † + a † σ .
The popularity of this model relies on the fact that it is presumably the simplest model describing a two-level system interacting with a distinguished mode of a quantized bosonic field (the harmonic oscillator). It was introduced by Jaynes and Cummings in 1963 as an approximation to the Rabi Hamiltonian
H R = H R (g) = ω a † a + 1 2 + Ω 2 σ z + g 2 (a + a † )(σ + σ † ).
(5.4)
The latter traces back to the early works of Rabi on spin-boson interactions [Ra 1 , Ra 2 ], while in [JaCu] Jaynes and Cummings derived both (5.1) and (5.4) from a more fundamental model of non-relativistic Quantum Electro Dynamics (QED). Nowadays, both Hamiltonians (5.1) and (5.4) are widely used in several fields of physics. Among them, one of the most interesting is cavity QED. In typical cavity QED experiments, atoms move across a cavity that stores a mode of a quantized electromagnetic field. During their passage in the cavity the atoms interact with the field: the Hamiltonians (5.1) and (5.4) aim to describe the interaction between the atom and the cavity, in different regimes [BRH, HaRa]. More precisely, (5.1) and (5.4) can be heuristically derived from a mathematical model of non-relativistic QED, the Pauli-Fierz model [Sp]; we refer to [Co 1 ] and the more recent [BMPS] for a discussion of this derivation.
The approximation consisting in replacing (5.4) with (5.1) is commonly known as the rotating wave approximation (or secular approximation), and is valid under the assumptions [Ro]
|∆| ≪ ω, Ω,   g ≪ ω, Ω, (5.5)
which mean that the harmonic oscillator and the two-level system are almost in resonance and the coupling strength is small compared to the typical energy scale. Heuristically, in this regime the probability of creating or destroying two excitations is negligible, thus one can remove the so-called counter-rotating terms a†σ† and aσ in (5.4) to obtain (5.1). More precisely, the justification of this approximation relies on a separation of time scales, a well-known phenomenon in several areas of physics [PST 1, PST 2, PSpT]. Indeed, by rewriting the dynamics generated by (5.4) in the interaction picture with respect to
H 0 := H JC (0) = H R (0) = ω a † a + 1 2 + Ω 2 σ z , (5.6)
one gets
e iH 0 t/ (H R -H 0 )e -iH 0 t/ = g 2 e -i(Ω-ω)t a † σ + e i(Ω-ω)t aσ † + g 2 e -i(Ω+ω)t aσ + e i(Ω+ω)t a † σ † .
(5.7)
One notices that the terms a † σ, aσ † oscillate with frequency |ω -Ω|, while a † σ † , aσ oscillate on the faster scale ω + Ω, so that the latter average to zero on the long time scale |ω -Ω| -1 . While the physical principles leading from (5.4) to (5.1) are clear, as we mentioned in the Introduction a rigorous mathematical justification for this approximation seems absent from the literature, as recently remarked in [Ro].
We use hereafter Hartree units, so that in particular ℏ = 1.
Spectrum of the JC Hamiltonian
While apparently similar, the JC Hamiltonian (5.1) and the Rabi Hamiltonian (5.4) are considerably different from the viewpoint of symmetries.
As operators, they are both infinitesimally small perturbations, in the sense of Kato [Ka], of the free Hamiltonian H_0 (defined in (5.6)), which has compact resolvent. Eigenvalues and eigenvectors of H_0 are easily obtained by tensorization, starting from the eigenvectors {e_1, e_{−1}} of σ_z and the standard basis of L²(R) given by real eigenfunctions of a†a, namely the Hermite functions
|n⟩ = (2^n n! √π)^{−1/2} h_n(x) e^{−x²/2}, n ∈ N, (5.8)
where h n is the n-th Hermite polynomial. As well known, they satisfy
a † a |n = n |n , a † |n = √ n + 1 |n + 1 , a |n = √ n |n -1 .
(5.9)
Then H 0 |n ⊗ e 1 = E 0 (n,1) |n ⊗ e 1 , H 0 |n ⊗ e -1 = E 0 (n,-1) |n ⊗ e -1 with E 0 (n,s) = ω(n + 1 2 ) + s Ω 2 , n ∈ N, s ∈ {-1, 1}.
Since (a + a†)σ_x and (aσ† + a†σ) are infinitesimally H_0-bounded, by standard perturbation theory {H_JC(g)}_{g∈C} and {H_R(g)}_{g∈C} are analytic families (of type A) of operators with compact resolvent [Ka, Section VII.2]. Therefore, by the Kato-Rellich theorem, the eigenvalues and eigenvectors of H_JC(g) and H_R(g) are analytic functions of the parameter g. Coefficients of the series expansion of eigenvalues and eigenvectors can be explicitly computed [RS 4].
From the viewpoint of symmetries, it is crucial to notice that, as compared to the Rabi Hamiltonian, the JC Hamiltonian has an additional conserved quantity, namely the total number of excitations, represented by the operator C = a † a + σ † σ. As a consequence, the JC Hamiltonian reduces to the invariant subspaces
H n = Span{|n ⊗ e 1 , |n + 1 ⊗ e -1 } n ≥ 0, H -1 = Span{|0 ⊗ e -1 }, (5.10)
which are the subspaces corresponding to a fixed number of total excitations, i. e. C Hn = n + 1. Indeed, H JC restricted to these subspaces reads
H_n(g) := H_JC(g)|_{H_n} =
( E⁰_(n,1)     g√(n+1)     )
( g√(n+1)     E⁰_(n+1,-1)  )
= ω(n+1)·1 +
( ∆/2         g√(n+1) )
( g√(n+1)     -∆/2    ).   (5.11)
Eigenvalues and eigenvectors of H n are easily computed to be
H JC (g) |n, ν = E (n,ν) |n, ν , n ∈ N, ν ∈ {-, +} (5.12)
where
E_(n,ν)(g) = ω(n+1) + ν (1/2)√(∆² + 4g²(n+1))   (5.13)
|n, +⟩(g) = cos(θ_n/2) |n⟩⊗e_1 + sin(θ_n/2) |n+1⟩⊗e_{-1}   (5.14)
|n, -⟩(g) = -sin(θ_n/2) |n⟩⊗e_1 + cos(θ_n/2) |n+1⟩⊗e_{-1}   (5.15)
and the mixing angle
θ_n(g) ∈ [-π/2, π/2] is defined through the relation tan θ_n := 2g√(n+1)/(ω - Ω).
(5.16)
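As a quick sanity check of (5.11)-(5.13), one can diagonalize the 2x2 block numerically and compare with the closed-form eigenvalues ω(n+1) ± f_n(g); the sketch below uses arbitrary illustrative parameter values.

```python
# Numerical cross-check of the block diagonalization (5.11)-(5.13); values are illustrative.
import numpy as np

omega, Omega, g, n = 1.0, 0.97, 0.05, 3
Delta = Omega - omega                                  # detuning

Hn = np.array([[omega*(n+1) + Delta/2, g*np.sqrt(n+1)],
               [g*np.sqrt(n+1),        omega*(n+1) - Delta/2]])

f_n = 0.5*np.sqrt(Delta**2 + 4*g**2*(n+1))             # f_n(g) as in (5.17)
closed_form = np.array([omega*(n+1) - f_n, omega*(n+1) + f_n])

print(np.allclose(np.sort(np.linalg.eigvalsh(Hn)), closed_form))   # True
```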
Hereafter, we will omit the g-dependence of the eigenvectors |n, ν⟩ for the sake of a lighter notation. Observe that in the resonant case, i.e. ∆ = 0, equation (5.16) implies |θ_n| = π/2 for every n ∈ N, hence the eigenvectors |n, ν⟩ are independent of g, while the eigenvalues still depend on it. Moreover, depending on the sign of ∆, one has
E_(n,+)(0) = E⁰_(n,1),   E_(n,-)(0) = E⁰_(n+1,-1),   for ∆ > 0;
E_(n,+)(0) = E⁰_(n+1,-1),   E_(n,-)(0) = E⁰_(n,1),   for ∆ < 0;
E_(n,ν)(0) = E⁰_(n+1,-1) = E⁰_(n,1),   for ∆ = 0.
As we mentioned before, in view of the Kato-Rellich theorem, the eigenvalues of H_JC(g) are analytic in g if a convenient labeling is chosen. The table above shows which function, among g ↦ E_(n,+)(g) and g ↦ E_(n,-)(g), provides the analytic continuation of the spectrum at the points E⁰_(n,1) or E⁰_(n+1,-1). When ∆ = 0, in order to have analytic eigenvalues and eigenfunctions we must choose E_(n,ν) = ω(n+1) + ν√(n+1)·g. The spurious eigenvector |0⟩⊗e_{-1} with eigenvalue E⁰_(0,-1) = ∆/2 completes the spectrum of the JC Hamiltonian. Let us define
δ ≡ δ(∆) := { + if ∆ ≥ 0 ;  - if ∆ < 0 }.
Throughout the paper we will use the notation |-1, δ := |0 ⊗ e -1 and E (-1,δ) := E 0 (0,-1) . We will denote a pair (n, ν) with a bold letter n, meaning that the first component of n is the same not-bold letter while the second component is the corresponding Greek letter, namely
n = (n, ν), n(1) = n, n(2) = ν.
Let us also define
f_n(g) := (1/2)√(∆² + 4g²(n+1)).
(5.17)
With this notation, we can write the spectrum of the JC Hamiltonian in a synthetic way as
σ H JC (g) = {E n } n∈N , E n (g) = ω(n + 1) + νf n (g) (5.18)
where
N := (N × {-, +}) ∪ {(-1, δ(∆))}. (5.19)
Notice that the notation is coherent since
E_(-1,δ) = δ(∆) f_{-1}(g) = δ(∆) |∆|/2 = ∆/2 = E⁰_(0,-1),
in agreement with the definition above. It will also be useful to introduce the following sets
N_± := N ∪ {∓δ(∆)1},   (5.20)
which are copies of the natural numbers with {-1} added to the set with the index δ(∆).
General setting and main result
In most of the physically relevant applications, the external control does not act on the spin part [BMPS, Sp]. Hence, we consider the JC dynamics with two different control terms acting only on the bosonic part, namely
H_1 = X ⊗ 1,   H_2 = P ⊗ 1.
(5.21)
To motivate our choice, we notice that - for example - in the cavity QED context the experimenters can only act on the electromagnetic field stored in the cavity. In this context the control terms H_1, H_2 correspond, respectively, to an external electric field and a magnetic field in the dipole approximation, see [BMPS, Section I.A], and the control functions u_1(t), u_2(t) model the amplitudes of these external fields. With the previous choice, the complete controlled Schrödinger dynamics reads
i ∂_t ψ = ( H_JC(g) + u_1(t) H_1 + u_2(t) H_2 ) ψ,
ψ(0) = Ψ_in ∈ H,   Ψ_fin ∈ H  s.t.  ‖Ψ_in‖ = ‖Ψ_fin‖,
u_1, u_2 ∈ [0, c],   ω, Ω > 0,   |∆| ≪ ω, Ω.   (5.22)
Notice that the control functions u_1, u_2 are independent of each other so, as subcases, one can consider the system in which just one control is active. Obviously, controllability of the system in one of these two subcases implies controllability in the general case. This is exactly what we are going to prove. We consider the system (5.22) in the subcases u_1 ≡ 0 or u_2 ≡ 0 and we prove that in each subcase the system is approximately controllable.
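To make the controlled dynamics (5.22) concrete, the following sketch propagates the subcase u_2 ≡ 0 on a truncated Fock space with an arbitrarily chosen piecewise-constant control; the truncation and all numerical values are illustrative assumptions and play no role in the controllability proof.

```python
# A minimal propagation of (5.22) with u_2 = 0 on a truncated Fock space.
# The truncation, the control samples and all parameter values are illustrative assumptions.
import numpy as np
from scipy.linalg import expm

omega, Omega, g, N = 1.0, 0.97, 0.05, 30
a  = np.diag(np.sqrt(np.arange(1, N)), 1)              # annihilation operator (truncated)
sz = np.diag([1.0, -1.0])
sp = np.array([[0.0, 1.0], [0.0, 0.0]])                # sigma^dagger (raises e_{-1} to e_1)
sm = sp.T                                              # sigma (lowering operator)

H_JC = (omega*np.kron(a.T @ a + 0.5*np.eye(N), np.eye(2))
        + 0.5*Omega*np.kron(np.eye(N), sz)
        + g*(np.kron(a.T, sm) + np.kron(a, sp)))       # Jaynes-Cummings Hamiltonian (5.1)
H1   = np.kron((a + a.T)/np.sqrt(2), np.eye(2))        # H_1 = X (x) 1

psi = np.kron(np.eye(N)[0], [0.0, 1.0]).astype(complex)   # initial state |0> (x) e_{-1}
dt, u = 0.5, 0.1*np.sin(0.05*np.arange(120))               # arbitrary piecewise-constant control
for uk in u:
    psi = expm(-1j*(H_JC + uk*H1)*dt) @ psi

print(np.linalg.norm(psi))                              # ~1: the controlled evolution is unitary
```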
The following theorems are the main results of the paper.
Theorem 5.1 (Approximate controllability of JC dynamics). The system (5.22) with u 1 ≡ 0 or u 2 ≡ 0 is approximately controllable for every g ∈ R \ S * where S * is a countable set.
Theorem 5.2 (Characterization of the singular set). The set S * , mentioned in Theorem 5.1, consists of the value g = 0 and those g ∈ R that satisfy one of the following equations:
E_(n+1,-)(g) = E_(n,ν)(g),   (n, ν) ∈ N   (5.23)
2ω = f_{m+1}(g) + f_m(g) - f_{n+1}(g) + f_n(g),   n, m ∈ N_+   (5.24)
2ω = f_{m+1}(g) - f_m(g) - f_{n+1}(g) + f_n(g),   n, m ∈ N_-, m < n   (5.25)
2ω = f_{m+1}(g) + f_m(g) - f_{n+1}(g) - f_n(g),   n, m ∈ N_-, m > n   (5.26)
where N , N ± and f n are defined in (5.19),(5.20) and (5.17), respectively.
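As an illustration of how the singular set looks in practice, the sketch below locates the positive solution of (5.24) for one particular choice of indices by bracketing, using that the right-hand side equals |∆| at g = 0 and increases with |g|; the parameter values and the indices are arbitrary.

```python
# A numerical illustration (not part of the proof): locating the positive solution of (5.24)
# for one choice of indices by bracketing.
import numpy as np
from scipy.optimize import brentq

omega, Omega = 1.0, 0.97
Delta = Omega - omega
f = lambda n, g: 0.5*np.sqrt(Delta**2 + 4*g**2*(n + 1))   # f_n(g) of (5.17)

n, m = 0, 3
rhs = lambda g: f(m+1, g) + f(m, g) - f(n+1, g) + f(n, g)  # r.h.s. of (5.24)

g_res = brentq(lambda g: rhs(g) - 2*omega, 0.0, 10.0)      # the resonant coupling strength
print(g_res, rhs(g_res))                                   # rhs(g_res) equals 2*omega
```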
The proof of Theorem 5.1 follows two main steps: we introduce a Hilbert basis of eigenvectors of H_JC, namely {|n⟩}_{n∈N}, and analyze the action of the control operators on it in order to show that all levels are coupled for every value of the parameter g except a countable set (see Section 5.4.1). We then construct a subset C_0 of N² and prove that it is a non-resonant chain of connectedness (see Section 5.4.2). The claim then follows from the application of the general result by Boscain et al., namely Theorem 2.40. To prove Theorem 5.2 we carefully analyze the resonances of the system, which are solutions to the forthcoming equation (5.33). By proving that the latter has a countable number of solutions, we conclude that the relevant pairs of energies are not resonant for every g ∈ R except the values in a countable set, which will be characterized in the proof.
5.4 Proof of Theorem 5.1
Preliminaries
Preliminarily, we have to show that Assumption 2.32 is satisfied by
(iH JC (g), iH j , R, {|n } n∈N ), for g ∈ R \ S 0 , j ∈ { 1, 2 } ,
where S 0 is a countable set. Notice that the index set N plays the role of the countable set I in Definition 2.39.
We have already shown that {|n⟩}_{n∈N} is a Hilbert basis of eigenfunctions for H_JC(g). Since H_j is infinitesimally H_0-bounded (H_j ≪ H_0 for short) for j ∈ {1, 2}, then H_j ≪ H_JC(g) for j ∈ {1, 2} (see [RS 4, Exercise XII.11]). Hence (A 2) holds.
Moreover, this implies that H_JC(g) + wH_j is self-adjoint on D(H_JC) = D(H_0) for every w ∈ R (see [RS 2, Theorem X.12]) and so (A 3) is satisfied. As for assumption (A 4), we observe that, in view of the analyticity of the eigenvalues, there are just countably many values of g which correspond to eigenvalue intersections. With the only exception of these values, the eigenvalues are simple, so (A 4) and Assumption 2.32 hold automatically for every g ∈ R \ S_0, where S_0 is the countable set corresponding to the eigenvalue intersections.
On the other hand, we can further restrict the set of singular points from S_0 to S_1 ⊂ S_0. Indeed, if two eigenvalues intersect at a point g*, say E_n(g*) = E_m(g*), property (A 4) is still satisfied (by the same orthonormal system) provided that ⟨m|H_j|n⟩(g*) = 0, j ∈ {1, 2}.
Observe that, given n ∈ N,
|m - n| > 2  ⇒  ⟨m|H_j|n⟩(g) = 0   ∀ g ∈ R, j ∈ {1, 2}.
(5.27)
Hence, a priori the only possibly problematic points are solutions to the following equations
E_m(g) = E_n(g),   m, n ∈ N, m ≠ n, |m - n| ≤ 2.
(5.28)
By direct investigation, and using (5.18), one notices that there are solutions only in the following cases (for n = (n, ν) ∈ N ):
E_n(g) = E_(n+1,-)(g), which is satisfied if and only if   (5.29)
|g| = G^(1)_n := √( ω²(2n+3) - ν√(4ω⁴(n²+3n+2) + ω²∆²) );
E_n(g) = E_(n+2,-)(g), which is satisfied if and only if   (5.30)
|g| = G^(2)_n := √( 2ω²(n+2) - ν√(4ω⁴(n²+4n+3) + ω²∆²) );
E_(n,+)(g) = E_(n,-)(g), which is satisfied if and only if   (5.31)
g = 0 and ∆ = 0.
We will establish a posteriori whether we have indeed to exclude those points by the analysis in the next subsection, after looking at the action of the control operators.
Coupling of energy levels
To apply Theorem 2.40 to our case we need to build a non resonant chain of connectedness. As observed before in (5.27), the control operators do not couple most of the pairs. The coupling between remaining pairs is easily checked by using (5.3), (5.9), (5.14), and (5.15). For the sake of a shorter notation, we set c n := cos(θ n /2) and s n := sin(θ n /2). Some straightforward calculations for H 1 yield the following result:
⟨n,-|H_1|n,+⟩ = 0
⟨n+1,+|H_1|n,+⟩ = (1/√2)(√(n+1) c_n c_{n+1} + √(n+2) s_n s_{n+1}) ≠ 0
⟨n+2,+|H_1|n,+⟩ = 0
⟨n+1,-|H_1|n,-⟩ = (1/√2)(√(n+1) c_n c_{n+1} + √(n+2) s_n s_{n+1}) ≠ 0
⟨n+2,-|H_1|n,-⟩ = 0
⟨n+1,-|H_1|n,+⟩ = (1/√2)(√(n+2) s_n c_{n+1} - √(n+1) c_n s_{n+1}) = 0 ⇔ g = 0
⟨n+2,-|H_1|n,+⟩ = 0
⟨n+1,+|H_1|n,-⟩ = (1/√2)(√(n+2) c_n s_{n+1} - √(n+1) s_n c_{n+1}) = 0 ⇔ g = 0
⟨n+2,+|H_1|n,-⟩ = 0
⟨0,-|H_1|-1,δ⟩ = c_0/√2 ≥ 1/2
⟨0,+|H_1|-1,δ⟩ = s_0/√2 = 0 ⇔ g = 0.
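These selection rules can be checked directly on a truncated Fock space; the following sketch is such a consistency check (the truncation and all parameter values are illustrative assumptions), with the dressed vectors built literally from (5.14)-(5.16) and X = (a + a†)/√2.

```python
# Consistency check of the selection rules above on a truncated Fock space; here
# X = (a + a^dagger)/sqrt(2) and the dressed vectors are built from (5.14)-(5.16).
import numpy as np

omega, Omega, g, N = 1.0, 0.97, 0.05, 40        # illustrative values; N = truncation
a  = np.diag(np.sqrt(np.arange(1, N)), 1)       # a|n> = sqrt(n)|n-1>
H1 = np.kron((a + a.T)/np.sqrt(2), np.eye(2))   # H_1 = X (x) 1 on L^2 (x) C^2 (truncated)

e1, em1, fock = np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.eye(N)
theta = lambda k: np.arctan(2*g*np.sqrt(k+1)/(omega - Omega))   # mixing angle (5.16)
c = lambda k: np.cos(theta(k)/2)
s = lambda k: np.sin(theta(k)/2)

def dressed_plus(n):                            # |n,+> as defined in (5.14)
    return c(n)*np.kron(fock[n], e1) + s(n)*np.kron(fock[n+1], em1)

n = 5
lhs = dressed_plus(n+1) @ H1 @ dressed_plus(n)
rhs = (np.sqrt(n+1)*c(n)*c(n+1) + np.sqrt(n+2)*s(n)*s(n+1)) / np.sqrt(2)
print(np.isclose(lhs, rhs))                                     # True
print(np.isclose(dressed_plus(n+2) @ H1 @ dressed_plus(n), 0))  # selection rule: True
```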
From these computations we see that (compare with (5.29), (5.30)) in the points {G^(2)_n}_{n∈N} the system still satisfies Assumption 2.32, while in the points {G^(1)_n}_{n∈N} it does not. The point g = 0 is never a solution to (5.29) or (5.30) in view of the assumption |∆| ≪ ω. Moreover, since ⟨n,-|H_1|n,+⟩ = 0 for every g ∈ R, the system still satisfies Assumption 2.32 for g = 0, notwithstanding (5.31).
The same results hold for H_2. Moreover, in each of the previous cases one has
⟨m|H_2|n⟩ = i ⟨m|H_1|n⟩.
We conclude that Assumption (A 4) is satisfied for every g ∈ R \ S_1, where S_1 := {G^(1)_n}_{n∈N}.
Non-resonances of relevant pairs
Knowing exactly the pairs of levels coupled by the control terms, we claim that the set (illustrated in Figure 5.1)
C_0 = { [(n+1,+), (n,+)], [(n+1,+), (n,-)] : n ∈ N } ∪ { [(0,+), (-1,δ)] }   (5.32)
is a non-resonant chain of connectedness for every g ∈ R \ S_2, where S_2 ⊂ R is a countable set. To prove this claim, we have to show that for every g ∈ R \ S_2 each pair of eigenstates in C_0 has no resonances with every other pair coupled by the control term. In view of the computation above, there are just four types of pairs coupled, as illustrated in Figures 5.1 and 5.2. So, we define S_2 as the set of the solutions g to the following equations:
|E_k(g) - E_l(g)| = |E_s(g) - E_t(g)|   (5.33)
where [k, l] ∈ C_0 and
[s, t] ∈ C_0 ∪ { [(n+1,-), (n,-)], [(n+1,-), (n,+)] : n ∈ N } ∪ { [(0,-), (-1,δ)] }.   (5.34)
It is enough to prove that the set of solutions to the latter equations is countable.
Observe that, by the analyticity of the functions g ↦ E_k(g) - E_l(g), equation (5.33) may have at most countably many solutions unless it is identically satisfied. Thus, we need to show that
E_k(g) - E_l(g) = ±(E_s(g) - E_t(g))
is not satisfied for some value of g or, equivalently, that the Taylor expansions of the r.h.s. and l.h.s. differ in at least one point. The same argument was used in [BMPS], where the authors computed the perturbative expansion of the eigenvalues of the Rabi Hamiltonian up to fourth order in g. In our case, the model is exactly solvable, so that we can compute the series expansions in g = 0 directly from expression (5.13).
An explicit computation yields the Taylor expansion:
E_n = ω(n+1) + ν( |Ω-ω|/2 + (n+1)/|Ω-ω| · g² - (n+1)²/|Ω-ω|³ · g⁴ + o(g⁴) )   for ∆ ≠ 0,
E_n = ω(n+1) + ν√(n+1)·g   for ∆ = 0.
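The expansion for ∆ ≠ 0 can be checked numerically against the exact expression (5.13); in the following sketch the parameter values are illustrative and chosen so that g ≪ |∆|.

```python
# A quick numerical check (illustrative values) that the expansion above reproduces the
# exact eigenvalue for small g when Delta != 0.
import numpy as np

omega, Omega, n, nu = 1.0, 0.97, 2, +1
Delta = Omega - omega
for g in [3e-3, 1e-3]:
    exact  = omega*(n+1) + nu*0.5*np.sqrt(Delta**2 + 4*g**2*(n+1))
    series = omega*(n+1) + nu*(abs(Omega - omega)/2
                               + (n+1)/abs(Omega - omega)*g**2
                               - (n+1)**2/abs(Omega - omega)**3*g**4)
    print(g, abs(exact - series))    # the discrepancy shrinks roughly like g**6
```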
It is now easy to check, mimicking Step 2 in the proof of [BMPS], that for every choice of the indices in equation (5.33) the r.h.s. and l.h.s. have different series expansions at g = 0. We are not going to detail this calculation, since in the next Section we analyze equations (5.33) in full detail, in order to characterize the set S_2. By setting S* = S_1 ∪ S_2, the proof of Theorem 5.1 is concluded.
5.5 Proof of Theorem 5.2
In this proof we will discuss the resonances of the pairs of eigenstates in the chain C_0, defined in (5.32). Our aim is to provide conditions to determine whether, for a particular value of g, resonances are present or not. This requires a direct investigation of equation (5.33). Since these computations are rather long, we prefer to collect them in this section, so as not to obscure the simplicity of the proof of Theorem 5.1 with the details of the characterization of the set S*.
Let us recall that the sets N_± defined in (5.20) both contain the natural numbers; -1 is added to the one, among N_+ and N_-, whose label equals δ(∆).
In each of the following subcases the existence of solutions to equation (5.33) is discussed, for every choice of the indices compatible with the constraints (5.34). The mathematical arguments are based on elementary properties of the functions f_n(g) = (1/2)√(∆² + 4g²(n+1)), which are summarized in Lemma 5.3 in the Appendix.
Case 1: Assume [k, l] = [(n+1,+), (n,+)], n ∈ N_+. This assumption corresponds to selecting the black arc labeled by 1 in the graph in Figure 5.2. We investigate the possible resonances between the selected arc and the other arcs, classified according to their qualitative type (see the labels in the left-hand panel of Figure 5.2). This analysis amounts to considering, with the help of Lemma 5.3 (properties (F.1)-(F.3)), the following subcases:
(1.A) [s, t] = [(m+1,+), (m,+)], m ∈ N_+, m ≠ n. Equation (5.33) reads
f_{n+1}(g) - f_n(g) = f_{m+1}(g) - f_m(g),
which is satisfied if and only if g = 0, because f_{n+1}(g) - f_n(g) is strictly decreasing in n for g ≠ 0 in view of (F.3).
(1.B) [s, t] = (m + 1, +), (m, -) , m ∈ N -. Equation (5.33) reads
f n+1 (g) -f n (g) = f m+1 (g) + f m (g)
which is satisfied if and only if g = 0 and ∆ = 0. Indeed, by (F.2) one has
f_{n+1}(g) - f_n(g) ≤ 2|g|(√(n+2) - √(n+1)) = 2|g| / (√(n+2) + √(n+1)).
For ∆ < 0, one has n ∈ N_+ = N and m ∈ N_- = N ∪ {-1}, as illustrated in Figure 5.1. Hence, for g ≠ 0,
f_{n+1}(g) - f_n(g) ≤ 2|g|/(√2 + 1) < |g| ≤ |g|(√(m+2) + √(m+1)) ≤ f_{m+1}(g) + f_m(g),
and the last inequality is strict whenever ∆ ≠ 0. Analogously, for ∆ > 0 one has n ∈ N_+ = N ∪ {-1} and m ∈ N_- = N. Hence, for g ≠ 0,
f_{n+1}(g) - f_n(g) ≤ 2|g| < |g|(√(m+2) + √(m+1)) ≤ f_{m+1}(g) + f_m(g).
As above, the last inequality is strict whenever ∆ ≠ 0.
(1.C) [s, t] = (m + 1, -), (m, +) , m ∈ N + . Equation (5.33) reads
ω + f n+1 (g) -f n (g) = |ω -f m+1 (g) -f m (g)| .
If |g| < G^(1)_{m,+} one has
f_{n+1}(g) - f_n(g) = -f_{m+1}(g) - f_m(g),
which implies g = 0 and ∆ = 0, because -f_{m+1}(g) - f_m(g) ≤ 0 ≤ f_{n+1}(g) - f_n(g), and the first inequality is strict whenever ∆ ≠ 0, while the second inequality is strict whenever g ≠ 0.
On the other hand, if |g| ≥ G^(1)_{m,+} the equation above reads
2ω = f_{m+1}(g) + f_m(g) - f_{n+1}(g) + f_n(g)   (5.24)
which has two solutions, because the r.h.s. is equal to |∆| at zero (and |∆| ≪ ω in view of (5.22)) and is strictly increasing in |g|. Indeed, one easily sees that
∂_g( f_{m+1}(g) + f_m(g) - f_{n+1}(g) + f_n(g) ) = g [ (m+2)/f_{m+1}(g) + (m+1)/f_m(g) - (n+2)/f_{n+1}(g) + (n+1)/f_n(g) ] =: g C_{m,n}(g),
where C_{m,n}(g) > 0 for every choice of indices n, m ∈ N_+ and ∆ ≠ 0. For ∆ = 0 the r.h.s. of (5.24) is |g|(√(m+2) + √(m+1) - √(n+2) + √(n+1)), which is clearly strictly increasing in |g|.
(1.D) [s, t] = [(m+1,-), (m,-)], m ∈ N_-. Equation (5.33) reads
ω + f_{n+1}(g) - f_n(g) = |ω - f_{m+1}(g) + f_m(g)|.
Then, if |g| < G^(1)_{m,-},
f_{n+1}(g) - f_n(g) = -f_{m+1}(g) + f_m(g),
which is satisfied if and only if g = 0, because f_{n+1}(g) - f_n(g) ≥ 0 ≥ -f_{m+1}(g) + f_m(g) and the inequalities are strict whenever g ≠ 0. If instead |g| ≥ G^(1)_{m,-}, the equation reads
2ω = f_{m+1}(g) - f_m(g) - f_{n+1}(g) + f_n(g)   (5.25)
which has two solutions if and only if m < n. Indeed, f n+1 (g) -f n (g) is decreasing in n in view of (F.3) and the derivative of the r.h.s. is
∂_g( f_{m+1}(g) - f_m(g) - f_{n+1}(g) + f_n(g) ) = g [ (m+2)/f_{m+1}(g) - (m+1)/f_m(g) - (n+2)/f_{n+1}(g) + (n+1)/f_n(g) ] =: g D_{m,n}(g).
The function D_{m,n}(g) is strictly positive for every ∆ ≠ 0 and m ∈ N_-, n ∈ N_+ with m < n, because (n+2)/f_{n+1}(g) - (n+1)/f_n(g) is strictly decreasing in n. For ∆ = 0 the r.h.s. of (5.25) is |g|(√(m+2) - √(m+1) - √(n+2) + √(n+1)), which is positive if and only if m < n, and is clearly strictly increasing in |g|.
In view of the above analysis, there exist non-trivial resonances (for g ≠ 0) in case (1.C), and in case (1.D) for m < n. In such circumstances, equations (5.24), (5.25) have two solutions each. As for the trivial value g = 0, the system exhibits multiple resonances, as noted in all previous cases. Hence, g = 0 has to be included in the set of resonant points.
Case 2: Assume [k, l] = [(n+1,+), (n,-)], n ∈ N_-. This assumption corresponds to selecting the black arc labeled by 2 in the graph in Figure 5.2. As before, we proceed by considering the following subcases:
(2.A) By symmetry, this case reduces to the subcase (1.B). As already noticed, a solution exists if and only if g = 0 and ∆ = 0.
(2.B) [s, t] = [(m+1,+), (m,-)], m ∈ N_-, m ≠ n. The corresponding equation reads
f_{n+1}(g) + f_n(g) = f_{m+1}(g) + f_m(g),
which has only the trivial solution g = 0.
(2.C) [s, t] = (m + 1, -), (m, +) , m ∈ N + . Equation (5.33) reads
ω + f n+1 (g) + f n (g) = |ω -f m+1 (g) -f m (g)| .
Then, if |g| < G^(1)_{m,+}, the equation above becomes
f n+1 (g) + f n (g) = -f m+1 (g) -f m (g)
which clearly implies that g = 0 and ∆ = 0. On the other hand, if |g| ≥ G^(1)_{m,+} the equation reads
2ω = f m+1 (g) + f m (g) -f n+1 (g) -f n (g) (5.26)
which has non-trivial solutions if and only if m > n, because f_n is increasing in n for g ≠ 0. Since the r.h.s. is strictly increasing in |g|, as one can see using an argument similar to case (1.C), the latter equation has two solutions if m > n.
(2.D) [s, t] = [(m+1,-), (m,-)], m ∈ N_-. Equation (5.33) reads
ω + f_{n+1}(g) + f_n(g) = |ω - f_{m+1}(g) + f_m(g)|.
If |g| < G^(1)_{m,-}, the above equation reads
f n+1 (g) + f n (g) = -f m+1 (g) + f m (g)
which is satisfied if and only if g = 0 and ∆ = 0, since -f_{m+1}(g) + f_m(g) ≤ 0 ≤ f_{n+1}(g) + f_n(g), and the first inequality is strict whenever g ≠ 0, while the second inequality is strict whenever ∆ ≠ 0. If, instead, |g| ≥ G^(1)_{m,-}, the above equation becomes 2ω = f_{m+1}(g) - f_m(g) - f_{n+1}(g) - f_n(g), which has no solution since the r.h.s. is non-positive for every g, ∆ ∈ R and n, m ∈ N_-.
In summary, as far as Case 2 is concerned, there exist non-trivial resonances (for g ≠ 0) only in case (2.C) for m > n, and in that case equation (5.26) has exactly two solutions.
Recalling the definition of C 0 (see (5.32)), one notices that every element of C 0 is non-resonant with every other element of C 0 except for the trivial value g = 0, in view of the analysis of the cases (1.A), (1.B), (2.A) and (2.B).
The proof above exhibits equations (5.24), (5.25) and (5.26) appearing in the statement of Theorem 5.2, as the equations which characterize the values of g in S 2 , namely those values such that an arc in C 0 is resonant with some arc (not in C 0 ) non-trivially coupled by the interaction. As we said before, the value g = 0 is included in S 2 . Finally, one has to include in S * those values of g for which some eigenspace has dimension 2 and the corresponding eigenvectors are coupled. These values, defining the set S 1 , have been already characterized by equation (5.23), whose solutions are exhibited in (5.29).
In view of Theorem 2.40, we conclude that the system is approximately controllable for every g ∈ R \ S * , where S * = S 1 ∪ S 2 is characterized by equations (5.23), (5.24), (5.25), and (5.26). This concludes the proof of Theorem 5.2.
Further improvements
We have seen that equations (5.24), (5.25), (5.26) specify points g ∈ R for which C_0 fails to be a non-resonant chain of connectedness. However, for each of such points we can try to suitably modify C_0, replacing the resonant arcs while preserving the connectedness of the chain. Notice that C_0 is composed of arcs of type A and B, thus to replace one of them in case of resonances we must use arcs of type C, D (see Fig. 5.2). Therefore it is necessary to check resonances between arcs of type C, D.
Let us illustrate the problem with an example. Denote by g_{2.n}(m) ∈ R the positive solution of equation (5.26), i.e. the value g ≥ 0 such that
2ω = f_{m+1}(g) + f_m(g) - f_{n+1}(g) - f_n(g),   n, m ∈ N_-, m > n,
is satisfied. Recall that g_{2.n}(m) is a resonant point between the arcs [(n,-), (n+1,+)] and [(m,+), (m+1,-)] (in red in Fig. 5.3). At this point we have to remove the arc [(n,-), (n+1,+)] from C_0 because it is resonant, and thus the node |n,-⟩ becomes disconnected.
We have three possible choices to restore the connectedness: add to the chain one arc among [(n-1,-), (n,-)], [(n,-), (n+1,-)], [(n-1,+), (n,-)] (in green in Fig. 5.3). One choice seems more natural: since, by eq. (5.26), [(n-1,+), (n,-)] is never resonant with [(n,-), (n+1,+)], we can consider
C_{2.n}(m) = C_0 ∪ { [(n-1,+), (n,-)] } \ { [(n,-), (n+1,+)] }.
It remains to verify that C_{2.n}(m) is non-resonant for g = g_{2.n}(m). This means checking that every arc of C_{2.n}(m) has no resonances at the value g_{2.n}(m). Therefore, to ensure that no arc of type A has resonances at g_{2.n}(m), this value must not be a solution of the following equations:
2ω = f_{t+1}(g) + f_t(g) - f_{s+1}(g) + f_s(g),   ∀ s, t ∈ N_+,
2ω = f_{t+1}(g) - f_t(g) - f_{s+1}(g) + f_s(g),   ∀ s, t ∈ N_-, t < s
(these are equations (5.24), (5.25) for each possible choice of the indices). Moreover, we have to check the resonances of [(n-1,+), (n,-)], which is an arc of type C.
At the moment, we do not have a general method to solve systems of equations of type (5.24), (5.25) or (5.26), and to show that they have no common solution. Notice that the analysis is complicated by the fact that we deal with general parameters ω, Ω under the sole assumption (5.5).
The results of this Chapter are collected in [PP].
Appendix 5.A
The following Lemma contains a list of useful elementary properties of the functions {f n } n∈N , which have been used in the proof of Theorem 5.2 (Section 5.5).
Lemma 5.3. Let f n , n ∈ N, be defined as in (5.17). Then
(F.1) f_m(g) - f_n(g) ≥ 0 if and only if m ≥ n;
(F.2) f_{n+1}(g) - f_n(g) ≤ 2|g|(√(n+2) - √(n+1));
(F.3) f_{n+1}(g) - f_n(g) is strictly increasing w.r.t. |g|, and strictly decreasing in n.
Proof. Property (F.1) follows from the monotonicity of the square root. As for (F.2), one notices that
f_{n+1}(g) - f_n(g) ≤ 2g² / √(∆² + 4g²(n+2)),
which is equivalent to
(∆² + 4g²(n+2)) - √(∆² + 4g²(n+2)) √(∆² + 4g²(n+1)) ≤ 4g²,
which follows from the fact that
(∆² + 4g²(n+1)) ≤ √(∆² + 4g²(n+2)) √(∆² + 4g²(n+1)).
Then,
f_{n+1}(g) - f_n(g) ≤ 2g² / √(∆² + 4g²(n+2)) ≤ 2g² / (2|g|√(n+2)) ≤ 2|g|(√(n+2) - √(n+1)).
Notice that the last inequality is strict whenever g ≠ 0.
As for (F.3), one sets F(x, y) := (1/2)√(∆² + 4x²y) and G(x, y) := y/√(∆² + 4x²y), so that f_n(g) = F(g, n+1) and ∂_g f_n(g) = 2g G(g, n+1). Observe that for y ≥ 0 one has
∂G/∂y = (1/√(∆² + 4x²y)) ( 1 - 2x²y/(∆² + 4x²y) ) > 0
and also
∂²G/∂y² = ( 4x²/(∆² + 4x²y)^(3/2) ) ( -1 + 3x²y/(∆² + 4x²y) ) ≤ 0,
and the latter is equal to 0 if and only if x = 0. Then, one has (with an innocent abuse of notation concerning ∂_n f_n(g))
∂_g(f_{n+1} - f_n)(g) = 2g ( G(g, n+2) - G(g, n+1) ), which is > 0 for g > 0 and < 0 for g < 0;
∂_n(f_{n+1} - f_n)(g) = ∂F/∂y(g, n+2) - ∂F/∂y(g, n+1) = g² ( 1/√(∆² + 4g²(n+2)) - 1/√(∆² + 4g²(n+1)) ) < 0.
The monotonicity properties claimed in the statement follow immediately.
Chapter 6
Towards an adiabatic derivation of the Jaynes-Cummings model
In this Chapter we present some preliminary results concerning the derivation of the Jaynes-Cummings Hamiltonian as an approximation of the Rabi Hamiltonian in a suitable regime. The limit that we investigate is the so-called rotating wave approximation, which we will see as an adiabatic limit. The precise meaning of the approximation is the usual one in quantum mechanics: closeness of the evolution operators in the norm of B(H).
Time scales identification
In the physics literature, as we briefly mentioned in Sect. 5.2, one finds applications of the Rabi and Jaynes-Cummings models in different fields. What is usually stated is that those Hamiltonians exhibit different behaviours depending on the range of the fundamental parameters ω, Ω, g. We are interested in the so-called weak-coupling limit, in which the assumptions are
|∆| ≪ ω, Ω,   g ≪ ω, Ω,   (6.1)
and the rotating wave approximation (RWA) is supposed to hold [Ro], [AGJ]. The heuristic explanation we gave in Sect. 5.2 is a way to introduce a time dependence in the Rabi Hamiltonian and identify the different time scales. Indeed, performing the time-dependent change of coordinates ψ̃ = e^{iH_0 t}ψ, the function ψ̃ satisfies the Schrödinger equation i∂_t ψ̃ = H̃_R(t)ψ̃,¹ where H̃_R was obtained in (5.7) and is
H̃_R(t) = e^{iH_0 t}(H_R - H_0)e^{-iH_0 t} = (g/2)( e^{-i(Ω-ω)t} a†σ + e^{i(Ω-ω)t} aσ† ) + (g/2)( e^{-i(Ω+ω)t} aσ + e^{i(Ω+ω)t} a†σ† ).   (6.2)
1 Throughout the chapter we will make use of Hartree units, so that in particular ℏ = 1.
From the previous Hamiltonian (H_R written in the interaction frame) one notices two different frequencies, namely |ω - Ω| and ω + Ω, which are of different orders given (6.1).
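The mechanism by which the fast terms become negligible is the usual oscillatory-integral estimate: integrating a smooth amplitude against a fast phase produces a contribution of the order of the small parameter. The following sketch (with an arbitrary smooth amplitude, not taken from the text) illustrates this numerically.

```python
# Illustrative check that a smooth amplitude integrated against a fast phase e^{i t/eps}
# gives a contribution of order eps, which is why the counter-rotating terms drop out.
import numpy as np

f = lambda t: np.exp(-t) * np.sin(3*t)       # an arbitrary smooth amplitude
t = np.linspace(0.0, 1.0, 400001)
for eps in [1e-1, 1e-2, 1e-3]:
    I = np.mean(np.exp(1j*t/eps) * f(t))     # ~ integral of e^{i t/eps} f(t) over [0, 1]
    print(eps, abs(I))                       # decays roughly linearly in eps
```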
To have a dimensionless parameter we choose
ε := |ω - Ω| / (ω + Ω)   (6.3)
instead of |∆| as the small parameter. Therefore, the terms a†σ, aσ† oscillate with frequency of O(1) as ε → 0² on the time-scale τ = εt, while a†σ†, aσ oscillate with frequency O(ε⁻¹) as ε → 0 on the time-scale τ, so they average to zero on time intervals of O(1) as ε → 0. Notice that apparently no relation is assumed between ε and g. In principle this should mean that the limits ε → 0 and g → 0 could be performed in any order and the approximation would hold. Moreover, from equation (6.2) we observe that the order of the oscillations does not depend on g, which seems an argument in favour of the independence of the two limits. However, observing that
H_R = (ω - Ω + Ω)(a†a + 1/2) + (Ω/2)σ_3 + g(a + a†)(σ + σ†),
we can choose to apply the transformation φ = e iΩt(a † a+1/2) e i(Ωt/2)σ 3 ψ (assume ω > Ω), to obtain another interaction frame
H̃_R(t) = (ω - Ω)(a†a + 1/2) + g(e^{-iΩt}a + e^{iΩt}a†)(e^{-iΩt}σ + e^{iΩt}σ†)
= ε(ω + Ω)(a†a + 1/2) + g(a†σ + aσ†) + g(e^{2iΩt}a†σ† + e^{-2iΩt}aσ)   (6.4)
= H_JC + g(e^{2iΩt}a†σ† + e^{-2iΩt}aσ),
where the time dependence is all contained in the term e^{2iΩt}a†σ† + e^{-2iΩt}aσ. Under our hypotheses the latter time-dependent term averages to zero on every scale τ = φ(ε, g)t where lim_{(ε,g)→(0,0)} φ(ε, g) = 0 (so ε or g are possible choices), but the order of the limits seems important. More precisely, the spectral properties of H̃_R(t) strongly depend on the relation between ε and g. For each ε > 0 the operator H̃_R(t) has pure point spectrum, while for ε = 0 it does not.
To state a rigorous result we must take into account this problem.
Statement of the result
We recall that H_JC leaves invariant the subspaces H_n defined in (5.10). Thus a general invariant subspace for H_JC reads ⊕_{n∈N} H_{i_n}, where {i_n}_{n∈N} ⊂ N. Denote by P_j the projection operator onto H_j, j ∈ N. Then for each E ∈ N we define Π_E as the sum
Π_E = Σ_{j=-1}^{E} P_j.   (6.5)
The result we claim is standard in the context of adiabatic theories (see [Teu]), in which one considers the difference between the evolution operators generated by the two Hamiltonians in question (in our case e^{-iH_R t/ε} and e^{-iH_JC t/ε}), and looks for an estimate in terms of the small parameter ε. The bounding term must vanish in the limit ε → 0 in order to affirm that the two dynamics exhibit the same behaviour in the adiabatic limit. In many cases this type of bound can be achieved only on a subspace of the state space (see for example the Born-Oppenheimer approximation [Teu, Sect. 1.2]), therefore it is necessary to evaluate the difference of the two evolutions when projected onto that subspace. This subspace is usually identified starting from physical considerations (for example, in the Born-Oppenheimer case, the kinetic energy of the nuclei must be bounded to have a uniform estimate). In our setting the subspace on which to evaluate the difference is ⊕_{n=-1}^{E} H_n, so that the corresponding projector is Π_E. In conclusion, the estimate that we want to prove is
‖ ( e^{-iH_R t/ε} - e^{-iH_JC t/ε} ) Π_E ‖ ≤ Σ_{j=1}^{M} C_j g^{α_j} ε^{β_j} (1 + t)   (6.6)
for all t ≥ 0 and for some α_j, β_j ∈ R, 0 < C_j < ∞, j = 1, ..., M. However, to understand whether such a bound is achievable, one must first ensure that the subspace Ran Π_E is approximately invariant for the total dynamics, which in our case is the Rabi dynamics. This estimate is preliminary to (6.6), and for this reason we prove in this Chapter the following
Theorem 6.1. For each E ∈ N there exists a constant C < ∞ such that for all t ≥ 0
‖ (1 - Π_E) e^{-iH_R t/ε} Π_E ‖ ≤ C (g/ε) t.   (6.7)
Remark 6.2. The previous estimate says that Ran Π_E is an approximately invariant subspace for the Rabi dynamics when the ratio g/ε → 0 as (ε, g) → (0, 0), i.e. when g = o(ε). We want to understand whether it is possible to improve the estimate (6.7) so as to have a weaker relation between ε and g.
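A rough numerical probe of the bound (6.7) can be obtained on a truncated Fock space; in the sketch below the truncation, the value of E and the choice g = o(ε) are illustrative assumptions, and the comparison is only qualitative.

```python
# Rough numerical probe (illustrative values) of the leakage bound (6.7): the norm of
# (1 - Pi_E) exp(-i H_R t/eps) Pi_E on a truncated space, compared with (g/eps)*t.
import numpy as np
from scipy.linalg import expm

omega, Omega, N, E = 1.0, 0.96, 25, 4
eps = abs(omega - Omega) / (omega + Omega)
g   = 0.1 * eps                                  # a regime with g = o(eps) in mind

a   = np.diag(np.sqrt(np.arange(1, N)), 1)
sz  = np.diag([1.0, -1.0])
sx  = np.array([[0.0, 1.0], [1.0, 0.0]])
H_R = (omega*np.kron(a.T @ a + 0.5*np.eye(N), np.eye(2))
       + 0.5*Omega*np.kron(np.eye(N), sz)
       + g*np.kron(a + a.T, sx))                 # Rabi Hamiltonian (5.4), truncated

# Pi_E projects onto H_{-1} + H_0 + ... + H_E, i.e. onto the span of
# |k> (x) e_1 with k <= E and |k> (x) e_{-1} with k <= E+1.
diag = [1.0 if (s == 0 and k <= E) or (s == 1 and k <= E + 1) else 0.0
        for k in range(N) for s in (0, 1)]
Pi_E = np.diag(diag)

for t in [0.5, 1.0, 2.0]:
    U = expm(-1j*H_R*t/eps)
    leak = np.linalg.norm((np.eye(2*N) - Pi_E) @ U @ Pi_E, 2)
    print(t, leak, (g/eps)*t)                    # leak stays below a modest multiple of (g/eps)*t
```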
6.2 Proof of Theorem 6.1
Preliminary estimates
With the notation of Chapter 5 (5.6) we write the Hamiltonians (5.4),(5.1) as
H_R(g) = H_0 + √2 g X ⊗ σ_1   (6.8)
H_JC(g) = H_0 + (g/√2)(X ⊗ σ_1 - P ⊗ σ_2).   (6.9)
We recall that
X ⊗ σ 1 , P ⊗ σ 2 << H 0 ,
so that H JC , H R are self-adjoint operators on D(H 0 ) and essentially self-adjoint on each core of H 0 (see Sect.5.4.0). More precisely for every α > 0 and ψ ∈ D(H 0 )
X ⊗ σ 1 ψ ≤ α 1 2 H 0 ψ + 2α + 2 α 1 2 ψ P ⊗ σ 2 ψ ≤ α 1 2 H 0 ψ + 2α + 2 α 1 2 ψ ,
from which the following estimates descend
H R ψ ≤ (1 + √ 2αg) H 0 ψ + 2g α + 1 α 1 2 ψ (6.10) H JC ψ ≤ (1 + √ 2αg) H 0 ψ + 2g α + 1 α 1 2 ψ (6.11) H R ψ ≤ 1 + √ 2αg 1 - √ 2αg H JC ψ + 2g 1 - √ 2αg α + 1 α 1 2
ψ (6.12)
H JC ψ ≤ 1 + √ 2αg 1 - √ 2αg H R ψ + 2g 1 - √ 2αg α + 1 α 1 2
ψ , (6.13) (α > 0 small enough) i. e. H JC and H R are bounded with respect to each other with relative bound 1. Moreover, notice that since D(H R ) = D(H JC ) = D(H 0 ) it is obvious that each eigenvector of H JC is in the domain of H R . In addition, as one can see from (5.8), |n, ν belongs to the Schwartz space S(R), namely the set of rapidly decreasing functions on R (see [RS 1 , Sect.V.3]). The Schwartz space S(R) is a core of H R and is invariant in the sense that H m R |n, ν ∈ S(R) for every m ∈ N, thus |n, ν ∈ D(H m R ) for every m, n ∈ N, ν = ±.
Step 1
We start the proof with a standard argument. Since Π E is invariant for H JC , i. e. Π E H JC = H JC Π E , it commutes with e -iH JC t ε . Then
(1 -Π E )e -iH R t ε Π E = (1 -Π E )(e -iH R t ε -e -iH JC t ε )Π E = (1 -Π E )e -iH JC t ε -i ε t 0 dτ e iH JC τ ε (H R -H JC ) e -iH R τ ε Π E = e -iH JC t ε -ig ε t 0 dτ (1 -Π E ) e iH JC τ ε a † σ † + aσ e -iH R τ ε Π E .
(6.14)
We want to exploit the oscillating terms that are contained in last integral. More precisely, the operator e iH JC τ ε in the basis of the eigenvectors of H JC is a multiplication by an oscillating phase e iEn τ ε , n ∈ N . Therefore our next task will be to rewrite e iH JC τ ε a † σ † + aσ on this basis. If f is sufficiently regular and has compact support in (0, t) we get the estimate
t 0 dτ e i(c+o(ε)) τ ε f (τ ) ≤ ε |c + o(ε)| f L 1 .
Moreover, if f has n derivatives we can iterate the trick to get
t 0 dτ e i(c+o(ε)) τ ε f (τ ) ≤ ε n |c + o(ε)| n f (n) L 1 .
For the sake of a lighter notation we denote Then the coefficient matrix of a † σ † + aσ in the basis {|n } ∈N appears as √ n + 2 e iE (n+2,+) τ ε sin θ n cos θ n+2 e iE (n+2,+) τ ε cos θ n cos θ n+2 -e iE n+2,-τ ε sin θ n sin θ n+2 -e iE n+2,-τ ε cos θ n sin θ n+2 (6.22)
θ n := θ n 2 (
0 0 A - 1 0 0 0 A - 2 A + -1 0 0 0 A + 0 0 0 A + 1 A - n-1 0 0 A - n 0 0 0 A - n+1 A + n-2 0 0 0 A + n-1 0 0 A + (6.
Γ - n = e iH n-2 τ ε A - n =
√ n e iE (n-2,+) τ ε cos θ n sin θ n-2 -e iE (n-2,+) τ ε sin θ n sin θ n-2 e iE (n-2,-) τ ε cos θ n cos θ n-2 -e iE (n-2,-) τ ε sin θ n cos θ n-2 .
(6.23)
By means of Γ ± n , the integrand of (6.14) reads In the following we will always omit the ε and g dependence of functions p n,m for the sake of a lighter notation.
(1 -Π E ) e iH JC τ ε a † σ † + aσ e -iH R τ ε Π E = = (1 -Π E ) n≥1 (Γ - n + Γ + n )P n + Γ + 0 P 0 + Γ + -1 P -1 e -iH R τ ε Π E = n>E+2 Γ - n P n + n>E-2 Γ + n P n e -iH
To summarize, to estimate (6.14) we decomposed the integrand on subspaces H n . Each subspace H m , m ≤ E evolves under the operator a † σ † + aσ e -iH R τ ε and gives a contribution on H n which is Each one of these eight terms is proportional to an oscillating phase produced by the action of Γ ± n on |n, ν . Consider for example, ν = + in the previous expression, then p n,+,m,µ (τ /ε)Γ - n |n, + m, µ| = = √ n p n,+,m,µ (τ /ε)e iE (n-2,+) τ ε cos θ n sin θ n-2 |n-2, + m, µ| . (6.28)
In the latter term one recognizes: the oscillating phase; the coefficient p which is ε-dependent; the term cos θ n sin θ n-2 which depend on ε and g through θ n , θ n-2 but are bounded; the rank one operator |n-2, + m, µ|. Each other term of (6.26),(6.27) has the same structure. Therefore to bound (6.14) by means of the Riemann-Lebesgue lemma we need information about the regularity and the summability of coefficients p n,m .
Regularity of p coefficients
From conserved quantities and expectations of certain observables we can recover regularity and summability properties of coefficients p n,m .
(i) The dynamics is unitary. Observe that for each m ∈ N where A, B are the constants that appear in the estimates (6.12),(6.13),
A = 1 + √ 2αg 1 - √ 2αg , B = 2g 1 - √ 2αg α + 1 α 1 2
.
We compute the following expectation which is the bound we stated. As anticipated, the other summands in (6.26) can be estimated by essentially the same argument, yielding the claim.
To end the proof we illustrate an attempt to improve the previous estimate exploiting the oscillating phase by means of the Riemann-Lebesgue lemma. Notice that the sequence Since E (n-2,+) ∼ n, to bound the last line is sufficient to prove that n α |p n,+,m (t)| 2 < ∞ for some α > 0, which is satisfied because √ n p n,+,m ∈ l 2 (N), and d dt (p n,+,m ) ∞ ≤ n -(1/2+δ) for some δ > 0.
However, one can perform another integration by parts to obtain 1 ε t 0 dτ p n,+,m (τ /ε)e iE (n-2,+) τ ε = p n,+,m (τ )e iE (n-2,+) τ iE (n-2,+) ) ∞ < ∞. Those are weaker conditions that are satisfied as we have seen in Sect.6.2.3. In conclusion for (6.37) we get an estimate for the first summand in (6.26), namely
C 1 g + C 2 g ε t ≤ max{C 1 ε, C 2 } g ε (1 + t).
The latter estimate is not better than (6.35) since the bound we obtained for
d 2
dt 2 (p n,+,m ) ∞ is O(1) as ε → 0 and O(1) as g → 0 (see (6.33)). Therefore, to render useful this last argument we need to understand if the bound on d 2 dt 2 (p n,+,m ) ∞ could be improved to O(g α ) or O(ε β ) for some α, β > 0. It is not clear if this kind of bound can be achieved on the second time derivative or on higher derivatives.
E n = ω(n + 1) + ν ω + Ω 2 ε + ν n + 1 ω + Ω g 2 ε + ε • o g 2 ε 2 (6.38) (b) If g = ε α with 0 ≤ α < 1, i. e. ε/g → 0 E n = ω(n + 1) + ν √ n + 1g + ν (ω + Ω) 2 8 √ n + 1 ε 2 g + g • o ε 2 g 2 (6.39)
Proof. E(x) = √ 1 + x = 1 + 1 2 x + ∞ n=2 c n x n for |x| ≤ 1 with c n = (-1) n-1 (2n)! (2n -1)4 n (n!) 2 = (-1) n-1 1 2
n 1 • 3 • 5 • • • (2n -3) n! .
Observe that c n+1 x n+1 = (-1) n+1-1 (2(n + 1))! (2(n + 1) -1)4 n+1 (n + 1)!(n + 1)!
x n+1
= (-1) (2n -1)2(n + 1)(2n + 1) (2n + 1)4(n + 1)(n + 1)
xc n x n so c n x n + c n+1 x n+1 = c n x n (1 - 2n -1 2n + 2 x) < 0 if c n < 0 ⇔ n = 2k > 0 if c n > 0 ⇔ n = 2k + 1 . Then E(x) = 1 + 2k+1 n=1 c n x n + ∞ m=2k+2 c m x m < 1 + 2k+1 n=1 c n x n , ∀k ∈ N, |x| ≤ 1 E(x) = 1 + 1 2 x + 2k n=2 c n x n + ∞ m=2k+1 c m x m > 1 + 1 2 x + 2k n=2 c n x n , ∀k ∈ N, |x| ≤ 1
tr(A) := n∈N ψ n , Aψ n (3.7) where {ψ n } n∈N is an orthonormal basis of H. The definition define a linear map tr : B(H) → R ∪ {+∞} and is well posed because does not depend on the chosen basis [RS 1 , Sect.V.I]. Definition 3.4. An operator A ∈ B(H) is called trace-class if and only if tr |A| < ∞. We will denote the set of trace-class operator by T (H) (also denoted T 1 ).
≥ 0 by (3.11), so T * is positive. Observe that tr(ρ) -tr(T (ρ)) = tr((1 -T * (1))ρ), ∀ρ ∈ T s (H 1 ) + from which we have the equivalence between (3.18) and (3.19
Remark 3.19. If S is identity preserving its dual semigroup S * conserves probability i. e. tr(S * t (ρ 0 )) = tr(S * t (ρ 0 )I) = tr(ρ 0 S t (I)) = tr(ρ 0 ) = 1. The definition of normal map (b) is slightly different because we are considering maps on B(H) instead of B s (H).
Definition 3. 26 .
26 Given an Hilbert space H we will call quantum dynamical semigroup (in the Heisenberg picture) a one-parameter family of linear operators S t : B(H) → B(H), t ≥ 0 satisfying (b),(c),(d) of Definition 3.18 and
Figure
Figure 4.1
c 22 + c 33 ) c 12 + c 21 c 13 + c 31 c 12 + c 21 -2(c 11 + c 33 ) c 32 + c 23 c 13 + c 31 c 32 + c 23 -2(c 11 + c 22 ) c 23 -c 32 , c 31 -c 13 , c 12 -c 21 ) T = 4(-Im(c 23 ), Im(c 13 ), -Im(c 12 )) T . (4.23)
this choice the Lindblad equation in Bloch coordinates reads ẋ = (A(u) + Γ)x + k (4.28) where k = (k 1 , k 2 , k 3 ) and Γ is the diagonal matrix c 22 + c 33 ) γ 2 := 2(c 11 + c 33 ) γ 3 := 2(c 11 + c 22 ).
Figure
Figure 4.2
Figure 4 . 3 :
43 Figure 4.3: Decaying system. Two different trajectories of system (4.33) with E = 1, a = 0, b + = 0, b -= 1. In transparency the ellipsoid (4.36).
Figure 4 . 4 :
44 Figure 4.4: Trajectory of system (4.34) (blue line) with control u 1 (t) = -t sin(t)/ϑ, u 2 (t) = -t sin(t)/ϑ. Parameters are a = 0, b + = 0, b -= 0.01, E = 2, ϑ = π and ε = 0.001. In green the trace of u E / u E . In red the instantaneous equilibrium y c (u E (t)) given by (4.35). Below the plot of the distance between the blue and red trajectories.
Figure 5 . 1 :
51 Figure 5.1: Schematic representation of the eigenstates of the JC Hamiltonian and the chain of connectedness C 0 , in the case δ(∆) = -. Thick black lines correspond to pairs of eigenstates in the chain C 0 . Gray dashed lines correspond instead to pairs of eigenstates coupled by the control which are not in C 0 .
Figure 5 . 2 :
52 Figure 5.2: Classification of the arcs representing pairs of eigenstates coupled by the control operator. On the left-hand panel, red arcs and labels show the classification of generic arcs in four different types { A, B, C, D }. On the right-hand panel, black arcs of the set C 0 are classified in two types { 1, 2 }. Labels correspond to the cases enumerated in the proof of Theorem 5.2.
Figure 5 . 3 :
53 Figure 5.3: Disconnection of the chain C 0 for the value g 2.n (m). In red resonant arcs. In green the possible choices to reconnect the node |n, -.
Remark 6.3 (Riemann-Lebesgue lemma). The usual method to estimate an integral that contains an oscillating term is through the Riemann-Lebesgue lemma.
ν,m,µ (τ /ε)Γ - n |n, ν m, µ| + ν=± n>E-2 µ=± p n,ν,m,µ (τ /ε)Γ + n |n, ν m, µ| .
+,m (τ /ε)Γ - n |n, + m| is Cauchy in C([0, t]; B(H)) and, being the integral continuous in that topology, we can move the integral inside the summation getting e -iH JC t ε term inside the absolute value can be integrated by parts obtaining1 ε t 0 dτ p n,+,m (τ /ε)e iE (n-2,+) τ ε =p n,+,m (τ )e iE (n-2,+) τ iE (n-2,+) e iE (n-2,+) τ .
Let n ∈ N and E n defined as in (5.13)E n (g) = ω(n + 1If g = o(ε), i. e. g/ε → 0
6.15) here and thereafter. We compute a † σ † + aσ on the basis of eigenvectors of H JC , namely {|n } ∈N (see (5.8)),(a † σ † + aσ) |n, + = = √ n cos θ n sin θ n-2 |n-2, + + cos θ n-2 |n-2, -+ √ n + 2 sin θ n cos θ n+2 |n + 2, + -sin θ n+2 |n + 2, -(a † σ † + aσ) |n, -= -√ n sin θ n |n -1 ⊗ e -1 + √ n + 2 cos θ n |n + 2 ⊗ e 1 = -√ n sin θ n sin θ n-2 |n-2, + + cos θ n-2 |n-2, -+ √ n + 2 cos θ n cos θ n+2 |n + 2, + -sin θ n+2 |n + 2, -.Notice that a † σ † , aσ act as raising and lowering operators (sometimes called ladder operators) on the subspaces H n Ran a † σ † Hn ⊂ H n+2 , Ran (aσ Hn ) ⊂ H n-2 ,
√ n cos θ n 2 |n -1 ⊗ e -1 + √ n + 2 sin θ n 2 |n ⊗ e 1
(6.16)
thus we denote
A +
n := a † σ † Hn = a † σ † P n ,
A - n := aσ Hn = aσP n . (6.17)
18)whereA ± n : H n → H n±2 acts as cos θ n+2 cos θ n cos θ n+2 -sin θ n sin θ n+2 -cos θ n sin θ n+2 (6.19) cos θ n -sin θ n-2 sin θ n cos θ n-2 cos θ n -cos θ n-2 sin θ n . (6.20) when seen as linear maps from C 2 with basis {|n, + , |n, -} onto C 2 with basis {|n ± 2, + , |n ± 2, -}. Observe that (A - n+2 ) † = A + n . The evolutor e iH JC t ε assumes in this basis the block form
e i -ε(Ω+ω)
A + n = sin θ n A -√ n + 2 n = √ n sin θ n-2
R τ ε Π E expanding P n = ν=± |n n| = ν=± |n, ν n, ν| the latter row is equal to We define p n,m (ε, g, t) := n| e -iH R t |m , (6.24)and rewriting the latter row once again we arrive at(1 -Π E ) e iH JC τ ε a † σ † + aσ e -iH R τ ε Π E =(6.25)
= Γ -n |n n| + Γ + n |n n| e -iH R τ ε |m m| .
n∈N n∈N m∈N
n>E+2 n>E-2 m≤E
= p n,m (ε, g, τ /ε)Γ -n |n m| (6.26)
n∈N m∈N
n>E+2 m≤E
+ p n,m (ε, g, τ /ε)Γ + n |n m| . (6.27)
n∈N m∈N
n>E-2 m≤E
e -iH R t |m = n∈N n| e -iH R t |m |n , and being the vector on the left of norm one1 = n∈N |p n,m (t)| 2 , (6.29) then |p n,m (t)| ≤ 1, i. e. p n,m ∈ L ∞ (R), and p n,m ∈ l 2 (N ) uniformly in t.(ii) The energy of the system is conserved, moreover H JC is finite on the evolution of each initial state |m , m ∈ N becausem| e iH R t H JC e -iH R t |m ≤ e -iH R t |m H JC e -iH R t |m ≤ A H R e -iH R t |m + B e -iH R t |m
≤ A 2 H JC |m + AB |m + B ≤ A 2 E m + AB + B = (1 + O(g))E m + O(g) < ∞
m| e iH R t H JC e -iH R t |m = n,n ∈N m| e iH R t |n n| H JC |n n | e -iH R t |m = n,n ∈N p m,n (t)E n δ n=n p n ,m (t) |p n,m (t)| 2 ≤ (1 + O(g))E m + O(g) < ∞ (6.30)Observe that E n is a function of g, but for every fixed values of ĝ only a finite number of terms in the sum (6.30) are negative. In fact, recall that Analogously, for the second power ofH JC holds n∈N E 2 n |p n,m (t)| 2 = m| e iH R t H 2 JC e -iH R t |m ≤ H JC e -iH R t |m H JC e -iH R t |m < ∞ which means that n p n,m ∈ l 2 (N ) uniformly in t because 2ω(n + 1) ≥E n,+ ≥ ω(n + 1) R e -iH R t |m m| e iH R t H R ∞ (R) and e -iH R t |m ∈ L 2 (R) for every t. Similarly d k p n,m dt k (t) = n| (-iH R ) m e -iH R t |m , and =T r H k R e -iH R t |m m| e iH R t H k R = m| H 2k R |mwhich is bounded because |m ∈ D(H k R ) for all k ∈ N, as we noticed in Sect.6.2.1. For the second power, k = 2, we can explicitly computeH R (H R |m ) =H R H JC |m + g(a † σ † + aσ) |m (6.33) =H R E m |m + gA + m |m + gA - m |m =E 2 m |m + gH JC A + m |m + gH JC A - m |m + E m g(A + m + A - m ) |m + g 2 (A + m+2 + A - m+2 )A + m |m + g 2 (A + m-2 + A - m-2 )A - m |m .One recognize from the latter thatg 2 A - m-2 A - m |m ∈ H m-4 gH JC A - m |m + gE m A - m |m ∈ H m-2 E 2 m |m + g 2 A - m+2 A + m |m + g 2 A + m-2 A - m |m ∈ H m gH JC A + m |m + gE m A + m |m ∈ H m+2 g 2 A + m+2 A + m |m ∈ H m+4and being all operators in the previous expression bounded, is clear that m| H 4 R |m < ∞. It has other three analogous terms that complete the integrand (6.25). We will estimate this sample term in details, since the others can be estimated in the same way.Expanding (6.34) as in (6.28) and integrating as in (6.14) we get (τ /ε)e iE (n-2,+) τ ε cos θ n sin θ n-2 |n-2, + m| (τ /ε)e iE (n-2,+) τ ε cos θ n sin θ n-2 |n-2, + m| (τ /ε)e iE (n-2,+) τ ε cos θ n sin θ n-2 |n-2, + m| (τ /ε)e iE (n-2,+) τ ε cos θ n sin θ n-2 |n-2, + |m
= thus by the previous estimate n∈N E n (g) = ω(n + 1) + ν E n |p n,m (t)| 2 1 2 ∆ 2 + 4g 2 (n + 1), then E n,+ ≥ ω(n + 1) and E n,-(g) = 0 ⇔ g = ω 2 (n + 1) -∆ 2 4(n + 1) → n→∞ thus n∈N d p n,m dt (t) = tr H 2 R |m m| = m| H 2 R |m < ∞ because of (6.12). So d pn,m dt ∈ L n d k p n,m dt k (t) In summary, for the second derivative k = 2, holds 6.2.4 Step 2 Consider the summand S 1 := n>E+2 m∈N m≤E p n,+,m (τ /ε)Γ -n |n, + m| ε g 2ε t 0 dτ n>E+2 m∈N m≤E √ np n,+,m ≤ g ε t 0 dτ n>E+2 m∈N m≤E √ np n,+,m ≤ g ε t 0 dτ m∈N m≤E n>E+2 √ np n,+,m ≤ g ε t 0 dτ m∈N m≤E n>E+2 √ np n,+,m ≤ g ε t 0 dτ m∈N m≤E n>E+2 n |p n,+,m (τ /ε)| 2 1 2 . Since we proved that bounded by m≤E 1 in equation (6.26). e -iH JC t g ε t 0 dτ m∈N n>E+2 n |p n,+,m (τ /ε)| 2 2 ≤ g ε Ct ∞. (6.31) (6.32) (6.34) (6.35)
n∈N E n ω(n + 1) ≥E n,-≥ ω(n + 1)[1 -O(ε) -O(g)]
hold definitively in n ∈ N (see Appendix 6.A).
(iii) We compute the time derivative of p n,m (t),
d p n,m dt (t) = n| (-iH R )e -iH R t |m , 2 = n∈N n| H R e -iH R t |m 2 = tr H 2 = n n| H k R e -iH R t |m 2 d 2 pn,m dt 2 (t) ∈ L ∞ (R), d 2 pn,m dt 2 (t) ∈ l 2 (N ). n n |p n,+,m (t)| 2 < ∞ uniformly t,
then the latter row is
Observe that to have summability is now sufficient that n α |p n,+,m (t)| 2 < ∞ for some α > 0 and d dt (p n,+,m ) ∞ , d 2 dt 2 (p n,+,m
Then (6.36) is bounded by
g n>E+2 √ E (n-2,+) n (|p n,+,m (t/ε)| + |p n,+,m (0)|)
m∈N
m≤E + √ E 2 (n-2,+) n d p n,+,m dt (t/ε) + d p n,+,m dt (0) + d 2 p n,+,m dt 2 ∞ t ε . (6.37)
t t
ε 0 + i d dτ (p n,+,m ) (τ )e iE (n-2,+) τ iE 2 (n-2,+) ε 0
- 1 (n-2,+) E 2 0 t ε dτ d 2 p n,+,m dτ 2 (τ )e iE (n-2,+) τ .
A skew-symmetric operator X on D(X) is essentially skew-adjoint if iX admits a unique selfadjoint extension[START_REF] Reed | Methods of modern mathematical physics. II: Fourier analysis, self-adjointness[END_REF] Chap.X]
Here and thereafter we make use of the Landau symbols of which we briefly recall the definition. Let f, φ be two functions: we say that f(x) = o(φ(x)) as x → x0 if limx→x 0 f (x)/φ(x) = 0; we say that f (x) = O(φ(x)) as x → x0 if there exist constants M, δ > 0 such that |f (x)| ≤ M |φ(x)| for each x s.t. 0 < |x -x0| < δ.
In the following we will omit the expression "as x → x0" after o(φ(x)) or O(φ(x)) when is clear which kind of limit we are considering. In particular, since we consider only limits of variables that go to zero, o(x α ) and O(x α ) with α ∈ R will denote respectively "o(x α ) as x → 0" and "O(x α ) as x → 0".
Acknowledgements
PhD project, was very enriching and stimulating. I thank the University of Rome "La Sapienza" for having funded me in this PhD program and also through the project "Avvio alla Ricerca 2016".
Case 1 : γ 1 γ 2 γ 3 = 0 Γ is negative definite and so is A(u) + Γ. In fact, y , (A(u) + Γ)y = y , Γy ≤ 0, then 0 / ∈ σ(A(u) + Γ). Therefore for each u ∈ R 3 exists a unique fixed point of the dynamics which is y c (u) = -(A(u) + Γ) -1 k. Linearizing around this point and performing the coordinates change η = y -y c (u) the system reads
We observe that the origin is uniformly exponentially stable for
In conclusion Tychonoff theorem holds for this system (see Fig. Case 2 : γ 1 γ 2 γ 3 = 0 Assume without loss of generality that γ 3 = 0 and γ := γ 1 = γ 2 (if γ i = γ j = 0 then γ k = 0 for every triple of different indexes i, j, k). We observe that (A(u) + Γ)y = 0 has only the trivial solution iff
Therefore if u 2 1 + u 2 2 = 0 the system has 0 as unique fixed point and σ(A(u) + Γ) ⊂ {z | Re(z) < 0}, then it converges toward the origin. If u 1 = u 2 = 0 the entire y 3 -axis consists of fixed points and the equation for y 3 decouples, so the system converges toward the point (0, 0, y 3 (0)). |